docs: update documentation for v2.3.2 and remove legacy files

Major documentation cleanup and updates:

Updates:
- Add USE_FIXED_HTTP=true to all Docker and HTTP deployment examples
- Update main README with v2.3.2 release notes and version badges
- Add HTTP server troubleshooting section for stream errors
- Update CHANGELOG with v2.3.1 and v2.3.2 entries
- Update all configuration examples (.env.example, docker-compose.yml)
- Add clear instructions for using the fixed HTTP implementation

Removed legacy documentation (11 files):
- Implementation plans that have been completed
- Architecture analysis documents
- Intermediate fix documentation
- Planning documents for features now implemented
- Duplicate SETUP.md (content merged into INSTALLATION.md)

The documentation now accurately reflects the current v2.3.2 state
with the complete HTTP server fix using USE_FIXED_HTTP=true.
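Taken together, the settings that the updated examples converge on look like this (an illustrative `.env` fragment, not an exhaustive configuration; generate your own token):

```env
MCP_MODE=http
USE_FIXED_HTTP=true
AUTH_TOKEN=replace-with-output-of-openssl-rand
PORT=3000
HOST=0.0.0.0
```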

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: czlonkowski
Date: 2025-06-14 18:52:25 +02:00
parent baf5293cb8
commit 63ef011aec
21 changed files with 191 additions and 3632 deletions


@@ -36,6 +36,10 @@ MCP_SERVER_HOST=localhost
# Server mode: stdio (local) or http (remote)
MCP_MODE=stdio
# Use fixed HTTP implementation (recommended for stability)
# Set to true to bypass StreamableHTTPServerTransport issues
USE_FIXED_HTTP=true
# HTTP Server Configuration (only used when MCP_MODE=http)
PORT=3000
HOST=0.0.0.0


@@ -1,5 +1,9 @@
# n8n-MCP
[![Version](https://img.shields.io/badge/version-2.3.2-blue.svg)](https://github.com/czlonkowski/n8n-mcp)
[![Docker](https://img.shields.io/badge/docker-ghcr.io%2Fczlonkowski%2Fn8n--mcp-green.svg)](https://github.com/czlonkowski/n8n-mcp/pkgs/container/n8n-mcp)
[![License](https://img.shields.io/badge/license-Sustainable%20Use-orange.svg)](LICENSE)
A Model Context Protocol (MCP) server that provides AI assistants with comprehensive access to n8n node documentation, properties, and operations.
## Overview
@@ -19,6 +23,8 @@ n8n-MCP serves as a bridge between n8n's workflow automation platform and AI mod
- **Versioned Node Support**: Handles complex versioned nodes like HTTPRequest and Code
- **Fast Search**: SQLite with FTS5 for instant full-text search across all documentation
- **MCP Protocol**: Standard interface for AI assistants to query n8n knowledge
- **HTTP Server Mode**: Deploy remotely with fixed implementation (v2.3.2) that bypasses stream issues
- **Universal Compatibility**: Works with any Node.js version through automatic adapter fallback
## Quick Start
@@ -74,6 +80,7 @@ The easiest way to get started is using Docker:
# Generate a secure token
AUTH_TOKEN=$(openssl rand -base64 32)
echo "AUTH_TOKEN=$AUTH_TOKEN" > .env
echo "USE_FIXED_HTTP=true" >> .env
```
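If you are scripting setup from Node.js rather than a shell, the standard `crypto` module produces an equivalent token (a sketch, not project code):

```typescript
// Equivalent of `openssl rand -base64 32`: 32 random bytes, base64-encoded
import { randomBytes } from "crypto";

const authToken: string = randomBytes(32).toString("base64");
console.log(`AUTH_TOKEN=${authToken}`);
```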
2. Run with Docker Compose:
@@ -126,10 +133,20 @@ curl http://localhost:3000/health
}
```
### Building Locally
### Building and Running with Docker
The Docker image (~150MB) includes all dependencies and a pre-built database:
To build the Docker image locally:
```bash
# Using pre-built image (recommended)
docker run -d \
-e MCP_MODE=http \
-e USE_FIXED_HTTP=true \
-e AUTH_TOKEN="your-secure-token" \
-p 3000:3000 \
ghcr.io/czlonkowski/n8n-mcp:latest
# Or build locally
docker build -t n8n-mcp:local .
```
@@ -272,10 +289,11 @@ n8n-MCP now supports HTTP mode for remote deployments. This allows you to:
```bash
# Set environment variables
export MCP_MODE=http
export USE_FIXED_HTTP=true # Important: Use the fixed implementation
export AUTH_TOKEN=$(openssl rand -base64 32)
# Start the server
npm run start:http
npm run http # This runs the fixed HTTP server
```
2. On your client, configure Claude Desktop with mcp-remote:
@@ -314,6 +332,20 @@ This project uses the Sustainable Use License. See LICENSE file for details.
Copyright (c) 2024 AiAdvisors Romuald Czlonkowski
## Recent Updates (v2.3.2)
### HTTP Server Fix
The v2.3.2 release fixes critical issues with the HTTP server:
- ✅ Fixed "stream is not readable" error by removing body parsing middleware
- ✅ Fixed "Server not initialized" error with direct JSON-RPC implementation
- ✅ Added `USE_FIXED_HTTP=true` environment variable for stable HTTP mode
- ✅ Improved performance with average response time of ~12ms
### Node.js Compatibility (v2.3)
- ✅ Automatic database adapter fallback (better-sqlite3 → sql.js)
- ✅ Works with any Node.js version without manual configuration
- ✅ Perfect for Claude Desktop integration
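The fallback behaviour can be sketched generically (illustrative only; the project's actual adapter selection lives in its database layer, and the loader names here are assumptions):

```typescript
// Try a primary loader (e.g. better-sqlite3's native binding) and fall
// back to a secondary one (e.g. sql.js, pure WebAssembly) if it throws.
function withFallback<T>(primary: () => T, fallback: () => T): T {
  try {
    return primary();
  } catch {
    return fallback();
  }
}

// Hypothetical usage: the native adapter fails to load, so the WASM one is used.
const adapter = withFallback<string>(
  () => { throw new Error("better-sqlite3: no prebuilt binary for this Node version"); },
  () => "sql.js"
);
```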
## Acknowledgments
- n8n team for the excellent workflow automation platform


@@ -11,6 +11,7 @@ services:
    environment:
      # Mode configuration
      MCP_MODE: ${MCP_MODE:-http}
      USE_FIXED_HTTP: ${USE_FIXED_HTTP:-true} # Use fixed implementation for stability
      AUTH_TOKEN: ${AUTH_TOKEN:?AUTH_TOKEN is required for HTTP mode}
      # Application settings


@@ -2,6 +2,39 @@
All notable changes to this project will be documented in this file.
## [2.3.2] - 2025-06-14
### Fixed
- **HTTP Server Stream Error**: Complete fix for "stream is not readable" error
  - Removed Express body parsing middleware that was consuming request streams
  - Fixed "Server not initialized" error with direct JSON-RPC implementation
  - Added `USE_FIXED_HTTP=true` environment variable for stable HTTP mode
  - Bypassed problematic StreamableHTTPServerTransport implementation
  - HTTP server now works reliably with average response time of ~12ms
  - Updated all HTTP server implementations to preserve raw streams
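The essence of the fix is that the transport must be the first and only reader of the request stream; once body-parsing middleware has consumed it, any later read fails with "stream is not readable". A self-contained sketch of reading a raw body (helper name assumed, not the project's actual code):

```typescript
import { Readable } from "stream";

// Collect the raw request body ourselves instead of letting
// body-parsing middleware consume the stream first. If express.json()
// has already run, the stream is spent and there is nothing left to read.
async function readRawBody(stream: Readable): Promise<string> {
  const chunks: Buffer[] = [];
  for await (const chunk of stream) {
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(String(chunk)));
  }
  return Buffer.concat(chunks).toString("utf8");
}
```

The fixed server can then `JSON.parse` the result itself and dispatch the JSON-RPC call directly.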
### Added
- `http-server-fixed.ts` - Direct JSON-RPC implementation
- `ConsoleManager` utility for stream isolation
- `MCP Engine` interface for service integration
- Comprehensive documentation for HTTP server fixes
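`http-server-fixed.ts` is described as a direct JSON-RPC implementation; the shape of such a dispatcher can be pictured as below (purely illustrative — handler bodies and supported methods are assumptions, not the project's actual handlers):

```typescript
interface JsonRpcRequest {
  jsonrpc: "2.0";
  method: string;
  params?: unknown;
  id: number | string | null;
}

interface JsonRpcResponse {
  jsonrpc: "2.0";
  id: number | string | null;
  result?: unknown;
  error?: { code: number; message: string };
}

// Dispatch a parsed JSON-RPC request directly, without going through
// StreamableHTTPServerTransport. -32601 is the standard "method not
// found" code from the JSON-RPC 2.0 spec.
function dispatch(req: JsonRpcRequest): JsonRpcResponse {
  switch (req.method) {
    case "tools/list":
      return { jsonrpc: "2.0", id: req.id, result: { tools: [] } };
    default:
      return {
        jsonrpc: "2.0",
        id: req.id,
        error: { code: -32601, message: `Method not found: ${req.method}` },
      };
  }
}
```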
### Changed
- Default HTTP mode now uses fixed implementation when `USE_FIXED_HTTP=true`
- Updated Docker configuration to use fixed implementation by default
- Improved error handling and logging in HTTP mode
## [2.3.1] - 2025-06-14
### Added
- **Single-Session Architecture**: Initial attempt to fix HTTP server issues
  - Implemented session reuse across requests
  - Added console output isolation
  - Created engine interface for service integration
### Fixed
- Partial fix for "stream is not readable" error (completed in v2.3.2)
## [2.3.0] - 2024-12-06
### Added


@@ -1,125 +0,0 @@
# Docker Build Fix
## Issues Fixed
### 1. Database COPY Error
The Docker build was failing with:
```
ERROR: failed to solve: failed to compute cache key: failed to calculate checksum of ref: "/data/nodes.db": not found
```
### 2. Missing Dockerfile.nginx
```
ERROR: failed to solve: failed to read dockerfile: open Dockerfile.nginx: no such file or directory
```
### 3. npm ci Production Flag
```
ERROR: failed to solve: process "/bin/sh -c npm ci --only=production && npm cache clean --force" did not complete successfully: exit code: 1
```
### 4. Network Timeout in GitHub Actions
```
npm error network If you are behind a proxy, please make sure that the 'proxy' config is set properly
```
## Root Causes & Solutions
### 1. Invalid COPY Syntax
Docker's COPY command doesn't support shell operators like `2>/dev/null || true`.
**Solution**: Removed the problematic COPY command and created the data directory with a RUN instruction instead.
### 2. Dockerfile.nginx Not Yet Implemented
The GitHub Actions workflow referenced `Dockerfile.nginx` which is a Phase 2 feature.
**Solution**: Commented out the nginx build job until Phase 2 implementation.
### 3. Deprecated npm Flag
The `--only=production` flag is deprecated in newer npm versions.
**Solution**: Changed to `--omit=dev` which is the current syntax.
### 4. Network Timeouts During npm Install
Installing production dependencies fresh was causing network timeouts in GitHub Actions.
**Solution**:
- Added npm retry configuration for reliability
- Created separate stage for production dependencies
- Use `npm prune` instead of fresh install to avoid network issues
- Copy pre-pruned dependencies to runtime stage
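To illustrate the COPY fix (paths are examples, not the project's exact lines):

```dockerfile
# Invalid — COPY is not a shell command, so redirection and || are never interpreted:
#   COPY data/nodes.db /app/data/ 2>/dev/null || true
# Valid — create the directory in a RUN step and build the database at startup:
RUN mkdir -p /app/data
```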
## Changes Made
### Complete Dockerfile Optimization
1. **Added npm configuration for reliability**:
```dockerfile
RUN npm config set fetch-retries 5 && \
    npm config set fetch-retry-mintimeout 20000 && \
    npm config set fetch-retry-maxtimeout 120000 && \
    npm config set fetch-timeout 300000
```
2. **Created separate production dependencies stage**:
```dockerfile
# Stage 2: Production Dependencies
FROM node:20-alpine AS prod-deps
WORKDIR /app
COPY package*.json ./
COPY --from=deps /app/node_modules ./node_modules
RUN npm prune --omit=dev
```
3. **Optimized runtime stage**:
```dockerfile
# Copy pre-pruned production dependencies
COPY --from=prod-deps /app/node_modules ./node_modules
# No need for npm install in runtime!
```
### GitHub Actions Workflow Changes
```diff
- name: Log in to GitHub Container Registry
+ if: github.event_name != 'pull_request'
uses: docker/login-action@v3
# Commented out nginx build until Phase 2
- build-nginx:
- name: Build nginx-enhanced Docker Image
+ # build-nginx:
+ # name: Build nginx-enhanced Docker Image
```
## Result
✅ Docker build now succeeds without network timeouts
✅ Optimized build process with 4 stages
✅ Production dependencies are pruned efficiently
✅ Database initialization happens at container startup
✅ GitHub Actions workflow will work properly
✅ No manual intervention required
## Key Improvements
1. **Network Reliability**: Added npm retry configuration
2. **Build Efficiency**: Only hit npm registry once, then prune
3. **Stage Optimization**: 4 stages for clear separation of concerns
4. **No Redundant Installs**: Eliminated duplicate npm operations
5. **GitHub Actions Ready**: Added build optimizations
## Testing
```bash
# Build locally
docker build -t n8n-mcp:test .
# Run and verify
docker run -d --name test -e MCP_MODE=http -e AUTH_TOKEN=test -p 3000:3000 n8n-mcp:test
docker logs test
curl http://localhost:3000/health
# Check image size
docker images n8n-mcp:test
```
## Performance Impact
- Build time: Reduced by avoiding duplicate npm installs
- Network usage: Single npm ci instead of two
- Reliability: 5 retries with exponential backoff
- Cache efficiency: Better layer caching with separate stages

File diff suppressed because it is too large.


@@ -1,198 +0,0 @@
# Docker Image Optimization Plan
## Current State Analysis
### Problems Identified:
1. **Image Size**: 2.61GB (way too large for an MCP server)
2. **Runtime Dependencies**: Includes entire n8n ecosystem (`n8n`, `n8n-core`, `n8n-workflow`, `@n8n/n8n-nodes-langchain`)
3. **Database Built at Runtime**: `docker-entrypoint.sh` runs `rebuild.js` on container start
4. **Runtime Node Extraction**: Several MCP tools try to extract node source code at runtime
### Root Cause:
The production `node_modules` includes massive n8n packages that are only needed for:
- Extracting node metadata during database build
- Source code extraction (which should be done at build time)
## Optimization Strategy
### Goal:
Reduce Docker image from 2.61GB to ~150-200MB by:
1. Building complete database at Docker build time
2. Including pre-extracted source code in database
3. Removing n8n dependencies from runtime image
## Implementation Plan
### Phase 1: Database Schema Enhancement
Modify `schema.sql` to store source code directly:
```sql
-- Add to nodes table
ALTER TABLE nodes ADD COLUMN node_source_code TEXT;
ALTER TABLE nodes ADD COLUMN credential_source_code TEXT;
ALTER TABLE nodes ADD COLUMN source_extracted_at INTEGER;
```
### Phase 2: Enhance Database Building
#### 2.1 Modify `rebuild.ts`:
- Extract and store node source code during build
- Extract and store credential source code
- Save all data that runtime tools need
#### 2.2 Create `build-time-extractor.ts`:
- Dedicated extractor for build-time use
- Extracts ALL information needed at runtime
- Stores in database for later retrieval
### Phase 3: Refactor Runtime Services
#### 3.1 Update `NodeDocumentationService`:
- Remove dependency on `NodeSourceExtractor` for runtime
- Read source code from database instead of filesystem
- Remove `ensureNodeDataAvailable` dynamic loading
#### 3.2 Modify MCP Tools:
- `get_node_source_code`: Read from database, not filesystem
- `list_available_nodes`: Query database, not scan packages
- `rebuild_documentation_database`: Remove or make it a no-op
### Phase 4: Dockerfile Optimization
```dockerfile
# Build stage - includes all n8n packages
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Database build stage - has n8n packages
FROM builder AS db-builder
WORKDIR /app
# Build complete database with all source code
RUN npm run rebuild
# Runtime stage - minimal dependencies
FROM node:20-alpine AS runtime
WORKDIR /app
# Only runtime dependencies (no n8n packages)
COPY package*.json ./
RUN npm ci --omit=dev --ignore-scripts && \
    npm uninstall n8n n8n-core n8n-workflow @n8n/n8n-nodes-langchain && \
    npm install @modelcontextprotocol/sdk better-sqlite3 express dotenv sql.js
# Copy built application
COPY --from=builder /app/dist ./dist
# Copy pre-built database
COPY --from=db-builder /app/data/nodes.db ./data/
# Copy minimal required files
COPY src/database/schema.sql ./src/database/
COPY .env.example ./
COPY docker/docker-entrypoint-optimized.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
USER nodejs
EXPOSE 3000
HEALTHCHECK CMD curl -f http://localhost:3000/health || exit 1
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["node", "dist/mcp/index.js"]
```
### Phase 5: Runtime Adjustments
#### 5.1 Create `docker-entrypoint-optimized.sh`:
- Remove database building logic
- Only check if database exists
- Simple validation and startup
#### 5.2 Update `package.json`:
- Create separate `dependencies-runtime.json` for Docker
- Move n8n packages to `buildDependencies` section
## File Changes Required
### 1. Database Schema (`src/database/schema.sql`)
- Add source code columns
- Add extraction metadata
### 2. Rebuild Script (`src/scripts/rebuild.ts`)
- Extract and store source code during build
- Store all runtime-needed data
### 3. Node Repository (`src/database/node-repository.ts`)
- Add methods to save/retrieve source code
- Update data structures
### 4. MCP Server (`src/mcp/server.ts`)
- Modify `getNodeSourceCode` to use database
- Update `listAvailableNodes` to query database
- Remove/disable `rebuildDocumentationDatabase`
### 5. Node Documentation Service (`src/services/node-documentation-service.ts`)
- Remove runtime extractors
- Use database for all queries
- Simplify initialization
### 6. Docker Files
- Create optimized Dockerfile
- Create optimized entrypoint script
- Update docker-compose.yml
## Expected Results
### Before:
- Image size: 2.61GB
- Runtime deps: Full n8n ecosystem
- Startup: Slow (builds database)
- Memory: High usage
### After:
- Image size: ~150-200MB
- Runtime deps: Minimal (MCP + SQLite)
- Startup: Fast (pre-built database)
- Memory: Low usage
## Migration Strategy
1. **Keep existing functionality**: Current Docker setup continues to work
2. **Create new optimized version**: `Dockerfile.optimized`
3. **Test thoroughly**: Ensure all MCP tools work with pre-built database
4. **Gradual rollout**: Tag as `n8n-mcp:slim` initially
5. **Documentation**: Update guides for both versions
## Risks and Mitigations
### Risk 1: Dynamic Nodes
- **Issue**: New nodes added after build won't be available
- **Mitigation**: Document rebuild process, consider scheduled rebuilds
### Risk 2: Source Code Extraction
- **Issue**: Source code might be large
- **Mitigation**: Compress source code in database, lazy load if needed
### Risk 3: Compatibility
- **Issue**: Some tools expect runtime n8n access
- **Mitigation**: Careful testing, fallback mechanisms
## Success Metrics
1. ✅ Image size < 300MB
2. Container starts in < 5 seconds
3. All MCP tools functional
4. Memory usage < 100MB idle
5. No runtime dependency on n8n packages
## Implementation Order
1. **Database schema changes** (non-breaking)
2. **Enhanced rebuild script** (backward compatible)
3. **Runtime service refactoring** (feature flagged)
4. **Optimized Dockerfile** (separate file)
5. **Testing and validation**
6. **Documentation updates**
7. **Gradual rollout**


@@ -19,7 +19,10 @@ git clone https://github.com/czlonkowski/n8n-mcp.git
cd n8n-mcp
# Create .env file with auth token
echo "AUTH_TOKEN=$(openssl rand -base64 32)" > .env
cat > .env << EOF
AUTH_TOKEN=$(openssl rand -base64 32)
USE_FIXED_HTTP=true
EOF
# Start the server
docker compose up -d
@@ -36,13 +39,14 @@ curl http://localhost:3000/health
Pre-built images are available on GitHub Container Registry:
```bash
# Pull the latest image (~283MB)
# Pull the latest image (~150MB optimized)
docker pull ghcr.io/czlonkowski/n8n-mcp:latest
# Run with HTTP mode
docker run -d \
--name n8n-mcp \
-e MCP_MODE=http \
-e USE_FIXED_HTTP=true \
-e AUTH_TOKEN=your-secure-token \
-p 3000:3000 \
ghcr.io/czlonkowski/n8n-mcp:latest


@@ -1,141 +0,0 @@
# Docker Testing Results
## Testing Date: June 13, 2025
### Test Environment
- Docker version: Docker Desktop on macOS
- Platform: arm64 (Apple Silicon)
- Node.js in container: v20.19.2
## Test Results Summary
### ✅ Successful Tests
1. **Docker Build Process**
   - Multi-stage build completes successfully
   - Build context optimized from 1.75GB to 6.87KB with proper .dockerignore
   - All layers cache properly for faster rebuilds
2. **Health Endpoint**
   - Returns proper JSON response
   - Shows correct uptime, memory usage, and version
   - Accessible at http://localhost:3000/health
3. **Authentication (HTTP Mode)**
   - Correctly rejects requests with wrong token (401 Unauthorized)
   - Accepts requests with correct AUTH_TOKEN
   - Warns when AUTH_TOKEN is less than 32 characters
4. **Docker Compose Deployment**
   - Creates named volumes for persistence
   - Respects resource limits (512MB max, 256MB reserved)
   - Health checks run every 30 seconds
   - Graceful shutdown on SIGTERM
5. **Stdio Mode**
   - Container starts in stdio mode with MCP_MODE=stdio
   - Accepts JSON-RPC input via stdin
   - Returns responses via stdout
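The authentication behaviour verified in test 3 amounts to checks like the following (a sketch only; the real server's middleware is assumed to do more, such as logging):

```typescript
// Accept only an exact "Bearer <token>" match; anything else gets a 401.
function isAuthorized(authHeader: string | undefined, expectedToken: string): boolean {
  return authHeader === `Bearer ${expectedToken}`;
}

// Warn when the configured token is shorter than 32 characters.
function tokenWarning(token: string): string | null {
  return token.length < 32
    ? "AUTH_TOKEN is less than 32 characters; use a longer token"
    : null;
}
```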
### ⚠️ Issues Discovered
1. **Database Initialization Failure**
   ```
   Error: ENOENT: no such file or directory, open '/app/src/database/schema.sql'
   ```
   - Cause: schema.sql not included in Docker image
   - Impact: Database cannot be initialized on first run
   - Fix: Include src/database/schema.sql in Dockerfile
2. **MCP Endpoint Error**
   ```json
   {
     "error": {
       "code": -32700,
       "message": "Parse error",
       "data": "InternalServerError: stream is not readable"
     }
   }
   ```
   - Likely related to missing database
   - Needs investigation after fixing database initialization
3. **Large Image Size**
   - Current size: 2.61GB
   - Cause: All node_modules included in production
   - Potential optimization: Use Alpine packages where possible
### 📊 Performance Metrics
- Build time: ~5 minutes (with cache)
- Startup time: <2 seconds
- Memory usage: ~8-9MB (idle)
- Health check response time: <50ms
### 🔧 Recommended Fixes
1. **Immediate (Phase 1)** ✅ FIXED
   - ✅ Include schema.sql in Docker image
   - ✅ Add scripts directory for rebuild functionality
   - ✅ Removed invalid COPY syntax that caused build errors
   - ✅ Database initialization happens at runtime, not build time
2. **Future Improvements (Phase 2)**
   - Optimize image size with multi-stage pruning
   - Add database migration support
   - Implement proper logging rotation
   - Add Prometheus metrics endpoint
### 📋 Testing Checklist
- [x] Docker build completes
- [x] Image runs without crashes
- [x] Health endpoint responds
- [x] Authentication works
- [x] Docker Compose deploys
- [x] Volumes persist data
- [x] Resource limits enforced
- [x] Graceful shutdown works
- [ ] Database initializes properly
- [ ] MCP tools function correctly
- [ ] Cross-platform compatibility (arm64/amd64)
## Next Steps
1. Apply fixes from Dockerfile.fixed
2. Test database initialization thoroughly
3. Verify MCP functionality with initialized database
4. Test multi-architecture builds in CI
5. Document troubleshooting steps
## Test Commands Used
```bash
# Build image
docker build -t n8n-mcp:test .
# Test stdio mode
echo '{"jsonrpc":"2.0","method":"tools/list","id":1}' | \
docker run --rm -i -e MCP_MODE=stdio n8n-mcp:test
# Test HTTP mode
docker run -d --name test-http \
-e MCP_MODE=http \
-e AUTH_TOKEN=test-token \
-p 3001:3000 \
n8n-mcp:test
# Test with docker-compose
docker compose up -d
docker compose logs -f
# Health check
curl http://localhost:3000/health
# Test authentication
curl -H "Authorization: Bearer test-token" \
-H "Content-Type: application/json" \
-H "Accept: application/json, text/event-stream" \
-d '{"jsonrpc":"2.0","method":"tools/list","id":1}' \
http://localhost:3000/mcp
```


@@ -15,7 +15,10 @@ The easiest way to deploy n8n-MCP is using Docker:
#### Quick Start
```bash
# 1. Create configuration
echo "AUTH_TOKEN=$(openssl rand -base64 32)" > .env
cat > .env << EOF
AUTH_TOKEN=$(openssl rand -base64 32)
USE_FIXED_HTTP=true
EOF
# 2. Start with Docker Compose
docker compose up -d
@@ -33,6 +36,7 @@ cd n8n-mcp
# 2. Create production .env
cat > .env << EOF
AUTH_TOKEN=$(openssl rand -base64 32)
USE_FIXED_HTTP=true
NODE_ENV=production
LOG_LEVEL=info
PORT=3000
@@ -87,6 +91,7 @@ Edit `.env`:
```env
# HTTP mode configuration
MCP_MODE=http
USE_FIXED_HTTP=true # Important: Use the fixed implementation (v2.3.2+)
PORT=3000
HOST=0.0.0.0


@@ -16,7 +16,10 @@ The fastest way to get n8n-MCP running:
```bash
# Using Docker (recommended)
echo "AUTH_TOKEN=$(openssl rand -base64 32)" > .env
cat > .env << EOF
AUTH_TOKEN=$(openssl rand -base64 32)
USE_FIXED_HTTP=true
EOF
docker compose up -d
```
@@ -46,6 +49,7 @@ docker compose up -d
    environment:
      MCP_MODE: ${MCP_MODE:-http}
      USE_FIXED_HTTP: ${USE_FIXED_HTTP:-true}
      AUTH_TOKEN: ${AUTH_TOKEN:?AUTH_TOKEN is required}
      NODE_ENV: ${NODE_ENV:-production}
      LOG_LEVEL: ${LOG_LEVEL:-info}


@@ -1,301 +0,0 @@
# MCP Server Architecture Analysis: Stateful vs Stateless
## Executive Summary
After deep analysis of the MCP protocol, StreamableHTTPServerTransport implementation, and our specific use case (single-player repository as an engine for a service), I recommend a **Hybrid Single-Session Architecture** that provides the simplicity of stateless design with the protocol compliance of stateful implementation.
## Context and Requirements
### Project Goals
1. **Single-player repository** - One user at a time, not concurrent sessions
2. **Engine for a service** - This repo will be integrated into a larger system
3. **Simplicity** - Easy to understand, maintain, and deploy
4. **Separation of concerns** - Multi-user features in separate repository
### Protocol Reality
- MCP is inherently **stateful by design**
- StreamableHTTPServerTransport **expects session management**
- The protocol maintains context across multiple tool invocations
- Attempting pure stateless breaks protocol expectations
## Architecture Options Analysis
### Option A: Full Stateful Implementation
```typescript
class StatefulMCPServer {
  private sessions = new Map<string, SessionData>();
  // Multiple concurrent sessions
  // Session cleanup
  // Memory management
  // Complexity: HIGH
}
}
```
**Pros:**
- Full protocol compliance
- Supports multiple concurrent users
- Future-proof for scaling
**Cons:**
- **Over-engineered for single-player use case**
- Complex session management unnecessary
- Memory overhead for session storage
- Cleanup logic adds complexity
- Conflicts with "engine" design principle
**Verdict:** ❌ Too complex for our needs
### Option B: Pure Stateless Implementation
```typescript
class StatelessMCPServer {
  // New instance per request
  // No session tracking
  // Complexity: LOW
}
```
**Pros:**
- Very simple implementation
- No memory overhead
- Easy to understand
**Cons:**
- **Breaks MCP protocol expectations**
- Request ID collisions
- No context between calls
- StreamableHTTPServerTransport fights this approach
- The "stream is not readable" error persists
**Verdict:** ❌ Incompatible with protocol
### Option C: Hybrid Single-Session Architecture (Recommended)
```typescript
class SingleSessionMCPServer {
  private currentSession: {
    transport: StreamableHTTPServerTransport;
    server: N8NDocumentationMCPServer;
    lastAccess: Date;
  } | null = null;

  async handleRequest(req: Request, res: Response) {
    // Always use/reuse the single session
    if (!this.currentSession || this.isExpired()) {
      await this.createNewSession();
    }
    this.currentSession.lastAccess = new Date();
    await this.currentSession.transport.handleRequest(req, res);
  }

  private isExpired(): boolean {
    // Simple 30-minute timeout
    const thirtyMinutes = 30 * 60 * 1000;
    return Date.now() - this.currentSession.lastAccess.getTime() > thirtyMinutes;
  }
}
```
**Pros:**
- **Protocol compliant** - Satisfies StreamableHTTPServerTransport expectations
- **Simple** - Only one session to manage
- **Memory efficient** - Single session overhead
- **Perfect for single-player** - Matches use case exactly
- **Clean integration** - Easy to wrap as an engine
**Cons:**
- Not suitable for concurrent users (but that's handled elsewhere)
**Verdict:** ✅ Perfect match for requirements
## Detailed Implementation Strategy
### 1. Console Output Management
```typescript
// Silence console only during transport operations
class ManagedConsole {
  private originalLog = console.log;

  silence() {
    this.originalLog = console.log;
    console.log = () => {};
  }

  restore() {
    console.log = this.originalLog;
  }

  wrapOperation<T>(fn: () => T): T {
    this.silence();
    try {
      return fn();
    } finally {
      this.restore();
    }
  }
}
```
### 2. Single Session Manager
```typescript
export class SingleSessionHTTPServer {
  private session: SessionData | null = null;
  private console = new ManagedConsole();

  async handleRequest(req: Request, res: Response): Promise<void> {
    return this.console.wrapOperation(async () => {
      // Ensure we have a valid session
      if (!this.session || this.shouldReset()) {
        await this.resetSession();
      }
      // Update last access
      this.session.lastAccess = new Date();
      // Handle the request with existing transport
      await this.session.transport.handleRequest(req, res);
    });
  }

  private async resetSession(): Promise<void> {
    // Clean up old session
    if (this.session) {
      await this.session.transport.close();
      await this.session.server.close();
    }
    // Create new session
    const server = new N8NDocumentationMCPServer();
    const transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: () => 'single-session', // Always same ID
    });
    await server.connect(transport);
    this.session = {
      server,
      transport,
      lastAccess: new Date(),
      sessionId: 'single-session'
    };
  }

  private shouldReset(): boolean {
    // Reset after 30 minutes of inactivity
    const inactivityLimit = 30 * 60 * 1000;
    return Date.now() - this.session.lastAccess.getTime() > inactivityLimit;
  }
}
```
### 3. Integration as Engine
```typescript
// Easy to use in larger service
export class N8NMCPEngine {
  private server: SingleSessionHTTPServer;

  constructor() {
    this.server = new SingleSessionHTTPServer();
  }

  // Simple interface for service integration
  async processRequest(req: Request, res: Response): Promise<void> {
    return this.server.handleRequest(req, res);
  }

  // Clean shutdown for service lifecycle
  async shutdown(): Promise<void> {
    return this.server.shutdown();
  }
}
```
## Why This Architecture Wins
### 1. **Protocol Compliance**
- StreamableHTTPServerTransport gets the session it expects
- No fighting against the SDK design
- Fixes "stream is not readable" error
### 2. **Simplicity**
- One session = one user
- No complex session management
- Clear lifecycle (create, use, expire, recreate)
### 3. **Engine-Ready**
- Clean interface for integration
- No leaked complexity
- Service wrapper handles multi-user concerns
### 4. **Resource Efficient**
- Single session in memory
- Automatic cleanup after inactivity
- No accumulating sessions
### 5. **Maintainable**
- Easy to understand code
- Clear separation of concerns
- No hidden complexity
## Migration Path
### Phase 1: Fix Console Output (1 day)
- Implement ManagedConsole wrapper
- Wrap all transport operations
### Phase 2: Implement Single Session (2 days)
- Create SingleSessionHTTPServer
- Handle session lifecycle
- Test with Claude Desktop
### Phase 3: Polish and Document (1 day)
- Add error handling
- Performance metrics
- Usage documentation
## Testing Strategy
```typescript
describe('Single Session MCP Server', () => {
  it('should reuse session for multiple requests', async () => {
    const server = new SingleSessionHTTPServer();
    const req1 = createMockRequest();
    const req2 = createMockRequest();

    await server.handleRequest(req1, mockRes);
    await server.handleRequest(req2, mockRes);

    // Should use same session
    expect(server.getSessionCount()).toBe(1);
  });

  it('should reset expired sessions', async () => {
    const server = new SingleSessionHTTPServer();
    // First request
    await server.handleRequest(req1, res1);
    // Simulate 31 minutes passing
    jest.advanceTimersByTime(31 * 60 * 1000);
    // Second request should create new session
    await server.handleRequest(req2, res2);
    expect(server.wasSessionReset()).toBe(true);
  });
});
```
## Conclusion
The **Hybrid Single-Session Architecture** is the optimal solution for n8n-MCP because it:
1. **Respects the protocol** - Works with MCP's stateful design
2. **Matches the use case** - Perfect for single-player repository
3. **Simplifies implementation** - No unnecessary complexity
4. **Integrates cleanly** - Ready to be an engine for larger service
5. **Fixes the core issue** - Eliminates "stream is not readable" error
This architecture provides the best balance of simplicity, correctness, and maintainability for our specific requirements.

View File

@@ -1,456 +0,0 @@
# MCP "Stream is not readable" Error Fix Implementation Plan
## Executive Summary
This document outlines a comprehensive plan to fix the "InternalServerError: stream is not readable" error in the n8n-MCP HTTP server implementation. The error stems from multiple architectural and implementation issues that need systematic resolution.
**Chosen Solution**: After thorough analysis, we will implement a **Hybrid Single-Session Architecture** that provides protocol compliance while optimizing for the single-player use case. This approach balances simplicity with correctness, making it ideal for use as an engine in larger services.
## Problem Analysis
### Root Causes
1. **Stream Contamination**
- Console output during server initialization interferes with StreamableHTTPServerTransport
- The transport expects clean stdin/stdout/stderr streams
- Any console.log/error before or during request handling corrupts the stream
2. **Architectural Mismatch**
- Current implementation: Stateless (new server instance per request)
- StreamableHTTPServerTransport design: Stateful (expects session persistence)
- Passing `sessionIdGenerator: undefined` doesn't make it truly stateless
3. **Protocol Implementation Gap**
- Missing proper SSE (Server-Sent Events) support
- Not handling the dual-mode nature of Streamable HTTP (JSON-RPC + SSE)
- Accept header validation but no actual SSE implementation
4. **Version Inconsistency**
- Multiple MCP SDK versions in dependency tree (1.12.1, 1.11.0)
- Potential API incompatibilities between versions
## Implementation Strategy
### Phase 1: Dependency Consolidation (Priority: Critical)
#### 1.1 Update MCP SDK
```json
{
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.12.1"
  },
  "overrides": {
    "@modelcontextprotocol/sdk": "^1.12.1"
  }
}
```
#### 1.2 Remove Conflicting Dependencies
- Audit n8n packages that bundle older MCP versions
- Consider isolating MCP server from n8n dependencies
### Phase 2: Console Output Isolation (Priority: Critical)
#### 2.1 Create Environment-Aware Logging
```typescript
// src/utils/console-manager.ts
export class ConsoleManager {
  private originalConsole = {
    log: console.log,
    error: console.error,
    warn: console.warn
  };

  public silence() {
    if (process.env.MCP_MODE === 'http') {
      console.log = () => {};
      console.error = () => {};
      console.warn = () => {};
    }
  }

  public restore() {
    console.log = this.originalConsole.log;
    console.error = this.originalConsole.error;
    console.warn = this.originalConsole.warn;
  }
}
```
#### 2.2 Refactor All Console Usage
- Replace console.* with logger.* throughout codebase
- Add initialization flag to prevent startup logs in HTTP mode
- Ensure no third-party libraries write to console
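Any logging that survives this refactor must be environment-aware. A minimal sketch of such a logger (the class name and `drain()` method are illustrative assumptions, not the project's actual logger API):

```typescript
// Hypothetical environment-aware logger: in HTTP mode it never writes to
// stdout/stderr (which would corrupt the transport stream); messages are
// buffered instead so they can be inspected or flushed elsewhere.
type Level = 'info' | 'warn' | 'error';

class SafeLogger {
  private buffered: string[] = [];

  log(level: Level, message: string): void {
    if (process.env.MCP_MODE === 'http') {
      this.buffered.push(`[${level}] ${message}`);
      return;
    }
    // Outside HTTP mode, stderr is safe (stdout stays reserved for the stdio transport).
    process.stderr.write(`[${level}] ${message}\n`);
  }

  // Return and clear everything buffered so far.
  drain(): string[] {
    const out = this.buffered;
    this.buffered = [];
    return out;
  }
}
```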
### Phase 3: Transport Architecture - Hybrid Single-Session (Priority: High)
#### 3.1 Chosen Architecture: Single-Session Implementation
Based on architectural analysis, we will implement a hybrid single-session approach that:
- Maintains protocol compliance with StreamableHTTPServerTransport
- Optimizes for single-player use case (one user at a time)
- Simplifies implementation while fixing the core issues
- Provides clean interface for future service integration
```typescript
// src/http-server-single-session.ts
export class SingleSessionHTTPServer {
private session: {
server: N8NDocumentationMCPServer;
transport: StreamableHTTPServerTransport;
lastAccess: Date;
} | null = null;
private consoleManager = new ConsoleManager();
async handleRequest(req: Request, res: Response): Promise<void> {
// Wrap all operations to prevent console interference
return this.consoleManager.wrapOperation(async () => {
// Ensure we have a valid session
if (!this.session || this.isExpired()) {
await this.resetSession();
}
// Update last access time (resetSession guarantees a non-null session here)
this.session!.lastAccess = new Date();
// Handle request with existing transport
await this.session!.transport.handleRequest(req, res);
});
}
private async resetSession(): Promise<void> {
// Clean up old session if exists
if (this.session) {
try {
await this.session.transport.close();
await this.session.server.close();
} catch (error) {
logger.warn('Error closing previous session:', error);
}
}
// Create new session
const server = new N8NDocumentationMCPServer();
const transport = new StreamableHTTPServerTransport({
sessionIdGenerator: () => 'single-session', // Always same ID
});
await server.connect(transport);
this.session = {
server,
transport,
lastAccess: new Date()
};
logger.info('Created new single session');
}
private isExpired(): boolean {
if (!this.session) return true;
const thirtyMinutes = 30 * 60 * 1000;
return Date.now() - this.session.lastAccess.getTime() > thirtyMinutes;
}
async shutdown(): Promise<void> {
if (this.session) {
await this.session.transport.close();
await this.session.server.close();
this.session = null;
}
}
}
```
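The expiry rule used by `isExpired()` can also be stated as a small pure function, which makes the boundary behavior easy to test in isolation (a sketch; the class above keeps it as a private method):

```typescript
// Standalone restatement of the 30-minute expiry rule from SingleSessionHTTPServer.
const SESSION_TTL_MS = 30 * 60 * 1000;

function isSessionExpired(lastAccess: Date, now: Date = new Date()): boolean {
  // Strictly greater: a session touched exactly TTL milliseconds ago is still valid.
  return now.getTime() - lastAccess.getTime() > SESSION_TTL_MS;
}
```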
#### 3.2 Console Wrapper Implementation
```typescript
// src/utils/console-manager.ts
export class ConsoleManager {
private originalConsole = {
log: console.log,
error: console.error,
warn: console.warn
};
public wrapOperation<T>(operation: () => T | Promise<T>): T | Promise<T> {
this.silence();
try {
const result = operation();
if (result instanceof Promise) {
return result.finally(() => this.restore());
}
this.restore();
return result;
} catch (error) {
this.restore();
throw error;
}
}
private silence() {
if (process.env.MCP_MODE === 'http') {
console.log = () => {};
console.error = () => {};
console.warn = () => {};
}
}
private restore() {
console.log = this.originalConsole.log;
console.error = this.originalConsole.error;
console.warn = this.originalConsole.warn;
}
}
```
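The property that makes this wrapper safe is that the console is restored even when the wrapped operation throws. A compact, self-contained restatement of the class with a usage check of that throw path:

```typescript
// Compact restatement of the ConsoleManager above, used to demonstrate
// that console methods survive a throwing operation.
class ConsoleManager {
  private original = { log: console.log, error: console.error, warn: console.warn };

  wrapOperation<T>(operation: () => T | Promise<T>): T | Promise<T> {
    this.silence();
    try {
      const result = operation();
      if (result instanceof Promise) {
        return result.finally(() => this.restore());
      }
      this.restore();
      return result;
    } catch (error) {
      this.restore(); // restore happens before the error propagates
      throw error;
    }
  }

  private silence() {
    if (process.env.MCP_MODE === 'http') {
      console.log = () => {};
      console.error = () => {};
      console.warn = () => {};
    }
  }

  private restore() {
    console.log = this.original.log;
    console.error = this.original.error;
    console.warn = this.original.warn;
  }
}

// Usage: even when the handler throws, console.log is back to the real
// implementation by the time the caller sees the error.
process.env.MCP_MODE = 'http';
const manager = new ConsoleManager();
const realLog = console.log;
try {
  manager.wrapOperation(() => { throw new Error('handler failed'); });
} catch {
  // expected: the error propagates, console is already restored
}
// realLog === console.log holds here
```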
### Phase 4: Engine Integration Interface (Priority: Medium)
#### 4.1 Clean API for Service Integration
```typescript
// src/mcp-engine.ts
export class N8NMCPEngine {
private server: SingleSessionHTTPServer;
constructor() {
this.server = new SingleSessionHTTPServer();
}
/**
* Process a single MCP request
* The wrapping service handles authentication, multi-tenancy, etc.
*/
async processRequest(req: Request, res: Response): Promise<void> {
return this.server.handleRequest(req, res);
}
/**
* Health check for service monitoring
*/
async healthCheck(): Promise<{ status: string; uptime: number }> {
return {
status: 'healthy',
uptime: process.uptime()
};
}
/**
* Graceful shutdown for service lifecycle
*/
async shutdown(): Promise<void> {
return this.server.shutdown();
}
}
// Usage in multi-tenant service:
// const engine = new N8NMCPEngine();
// app.post('/api/users/:userId/mcp', authenticate, (req, res) => {
// engine.processRequest(req, res);
// });
```
### Phase 5: SSE Support Implementation (Priority: Low)
Note: Basic SSE support may be added later if needed, but the single-session architecture handles most use cases through standard request-response.
#### 5.1 Dual-Mode Response Handler
```typescript
class DualModeHandler {
async handleRequest(req: Request, res: Response) {
const acceptsSSE = req.headers.accept?.includes('text/event-stream');
if (acceptsSSE && this.isStreamableMethod(req.body.method)) {
// Handle as SSE stream
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive'
});
await this.handleSSEStream(req, res);
} else {
// Handle as single JSON-RPC response
await this.handleJSONRPC(req, res);
}
}
}
```
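The SSE branch above must emit events in the `text/event-stream` wire format: one or more `field: value` lines terminated by a blank line. A minimal formatter, sketched here as an assumption about how `handleSSEStream` would serialize JSON-RPC messages (the helper name is illustrative):

```typescript
// Hypothetical helper: serialize one JSON-RPC message as an SSE event frame.
function formatSSEEvent(data: unknown, eventName?: string): string {
  const lines: string[] = [];
  if (eventName) {
    lines.push(`event: ${eventName}`);
  }
  // SSE data fields must not contain raw newlines; JSON.stringify guarantees that.
  lines.push(`data: ${JSON.stringify(data)}`);
  return lines.join('\n') + '\n\n'; // the blank line terminates the event
}

// Example frame for a JSON-RPC response:
// formatSSEEvent({ jsonrpc: '2.0', id: 1, result: {} })
// → 'data: {"jsonrpc":"2.0","id":1,"result":{}}\n\n'
```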
### Phase 6: Testing Strategy (Priority: High)
#### 6.1 Unit Tests
- Test console output isolation
- Test session management
- Test SSE vs JSON-RPC response handling
#### 6.2 Integration Tests
```typescript
describe('Single Session MCP Server', () => {
it('should handle JSON-RPC requests without console interference', async () => {
const server = new SingleSessionHTTPServer();
const mockReq = createMockRequest({ method: 'tools/list' });
const mockRes = createMockResponse();
await server.handleRequest(mockReq, mockRes);
expect(mockRes.statusCode).toBe(200);
expect(console.log).not.toHaveBeenCalled();
});
it('should reuse single session for multiple requests', async () => {
const server = new SingleSessionHTTPServer();
// First request creates session
await server.handleRequest(req1, res1);
const firstSessionId = server.getSessionId();
// Second request reuses session
await server.handleRequest(req2, res2);
const secondSessionId = server.getSessionId();
expect(firstSessionId).toBe(secondSessionId);
expect(firstSessionId).toBe('single-session');
});
it('should reset expired sessions', async () => {
const server = new SingleSessionHTTPServer();
// First request
await server.handleRequest(req1, res1);
// Simulate 31 minutes passing
jest.advanceTimersByTime(31 * 60 * 1000);
// Second request should trigger reset
const resetSpy = jest.spyOn(server as any, 'resetSession'); // private method, so cast
await server.handleRequest(req2, res2);
expect(resetSpy).toHaveBeenCalled();
});
it('should handle errors gracefully', async () => {
const server = new SingleSessionHTTPServer();
const badReq = createMockRequest({ invalid: 'data' });
await expect(server.handleRequest(badReq, mockRes))
.resolves.not.toThrow();
});
});
```
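The tests above assume `createMockRequest`/`createMockResponse` helpers that this plan does not define. A possible sketch (names and shapes are assumptions, just enough surface for Express-style handlers):

```typescript
// Hypothetical test doubles for Express-style req/res objects.
function createMockRequest(body: unknown) {
  return {
    method: 'POST',
    url: '/mcp',
    headers: {
      'content-type': 'application/json',
      accept: 'application/json, text/event-stream',
      authorization: 'Bearer test-token',
    },
    body,
  };
}

class MockResponse {
  statusCode = 200;
  headers: Record<string, string> = {};
  chunks: string[] = [];

  writeHead(code: number, headers?: Record<string, string>): this {
    this.statusCode = code;
    Object.assign(this.headers, headers ?? {});
    return this;
  }

  setHeader(name: string, value: string): this {
    this.headers[name] = value;
    return this;
  }

  end(chunk?: string): this {
    if (chunk) this.chunks.push(chunk);
    return this;
  }
}

function createMockResponse(): MockResponse {
  return new MockResponse();
}
```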
#### 6.3 Docker Testing
- Test in isolated Docker environment
- Verify no stream corruption
- Test with actual Claude Desktop client
## Implementation Order
### Phase 1: Foundation (2 days)
1. **Day 1**:
- Update dependencies, consolidate MCP SDK version
- Create ConsoleManager utility class
- Replace console.* calls with logger in HTTP paths
2. **Day 2**:
- Implement and test console output isolation
- Verify no third-party console writes
### Phase 2: Core Fix (3 days)
1. **Day 3-4**:
- Implement SingleSessionHTTPServer class
- Integrate console wrapping
- Handle session lifecycle (create, expire, reset)
2. **Day 5**:
- Update HTTP server to use new architecture
- Test with actual MCP requests
- Verify "stream is not readable" error is resolved
### Phase 3: Polish & Testing (2 days)
1. **Day 6**:
- Comprehensive testing suite
- Error handling improvements
- Performance metrics
2. **Day 7**:
- Docker integration testing
- Documentation updates
- Release preparation
### Total Timeline: 7 days (vs original 15 days)
## Risk Mitigation
### Backward Compatibility
- Keep existing stdio mode unchanged
- Add feature flag for new HTTP implementation
- Gradual rollout with fallback option
### Performance Considerations
- Single session = minimal memory overhead
- Automatic expiry after 30 minutes of inactivity
- No session accumulation or cleanup complexity
- Connection pooling for database access
### Security Implications
- Session timeout configuration
- Rate limiting per session
- Secure session ID generation
## Success Metrics
1. **Zero "stream is not readable" errors** in production
2. **Successful Claude Desktop integration** via mcp-remote
3. **Response time < 100ms** for standard queries
4. **Memory usage stable** over extended periods
5. **Clean logs** without stream corruption
## Alternative Approaches
### Alternative 1: Different Transport
- Use WebSocket instead of HTTP
- Implement custom transport that avoids StreamableHTTP issues
- Direct JSON-RPC without MCP SDK transport layer
### Alternative 2: Process Isolation
- Spawn separate process for each request
- Complete isolation of streams
- Higher overhead but guaranteed clean state
### Alternative 3: Proxy Layer
- Add nginx or similar proxy
- Handle SSE at proxy level
- Simplify Node.js implementation
## Rollback Plan
If issues persist after implementation:
1. Revert to previous version
2. Disable HTTP mode temporarily
3. Focus on stdio mode for Claude Desktop
4. Investigate alternative MCP implementations
## Long-term Considerations
1. **Monitor MCP SDK Development**
- StreamableHTTP is evolving
- May need updates as SDK matures
2. **Consider Official Examples**
- Align with official MCP server implementations
- Contribute fixes back to SDK if needed
3. **Performance Optimization**
- Cache frequently accessed data
- Optimize session management
- Consider clustering for scale
## Conclusion
The "stream is not readable" error is solvable by systematically isolating console output and adopting the Hybrid Single-Session architecture. This approach provides:
1. **Protocol Compliance**: Works with StreamableHTTPServerTransport's expectations
2. **Simplicity**: Single session eliminates complex state management
3. **Performance**: Minimal overhead, automatic cleanup
4. **Integration Ready**: Clean interface for service wrapper
5. **Reduced Timeline**: 7 days vs original 15 days
The single-session approach is ideal for a single-player repository that will serve as an engine for larger services, maintaining simplicity while ensuring correctness.

View File

@@ -5,64 +5,54 @@ Welcome to the n8n-MCP documentation. This directory contains comprehensive guid
## 📚 Documentation Index
### Getting Started
- **[Installation Guide](./INSTALLATION.md)** - All installation methods including Docker, manual, and development setup
- **[Claude Desktop Setup](./README_CLAUDE_SETUP.md)** - Configure Claude Desktop to use n8n-MCP
- **[Installation Guide](./INSTALLATION.md)** - Comprehensive installation guide covering all methods
- **[Claude Desktop Setup](./README_CLAUDE_SETUP.md)** - Step-by-step guide for Claude Desktop configuration
- **[Quick Start Tutorial](../README.md)** - Basic overview and quick start instructions
### Deployment
- **[HTTP Deployment Guide](./HTTP_DEPLOYMENT.md)** - Deploy n8n-MCP as an HTTP server for remote access
- **[Docker Deployment](./DOCKER_README.md)** - Comprehensive Docker deployment guide
- **[Docker Optimization Guide](./DOCKER_OPTIMIZATION_GUIDE.md)** - Optimized Docker build (200MB vs 2.6GB)
- **[Docker Testing Results](./DOCKER_TESTING_RESULTS.md)** - Docker implementation test results and findings
### Development
- **[Implementation Plan](../IMPLEMENTATION_PLAN.md)** - Technical implementation details
- **[HTTP Implementation Guide](./HTTP_IMPLEMENTATION_GUIDE.md)** - HTTP server implementation details
- **[Development Setup](./INSTALLATION.md#development-setup)** - Set up development environment
- **[Docker Deployment](./DOCKER_README.md)** - Complete Docker deployment and configuration guide
- **[Release Guide](./RELEASE_GUIDE.md)** - How to create releases and manage Docker tags
### Reference
- **[Troubleshooting Guide](./TROUBLESHOOTING.md)** - Solutions for common issues
- **[API Reference](./API_REFERENCE.md)** - MCP tools and API documentation (if available)
- **[Environment Variables](./INSTALLATION.md#environment-configuration)** - Configuration options
- **[Troubleshooting Guide](./TROUBLESHOOTING.md)** - Solutions for common issues and errors
- **[HTTP Server Fix Documentation](./HTTP_SERVER_FINAL_FIX.md)** - Technical details of v2.3.2 HTTP server fixes
- **[Docker Optimization Guide](./DOCKER_OPTIMIZATION_GUIDE.md)** - Reference for optimized Docker builds (~150MB)
- **[Changelog](./CHANGELOG.md)** - Version history and release notes
## 🚀 Quick Links
### For Users
1. **First Time Setup**: Start with the [Installation Guide](./INSTALLATION.md)
2. **Claude Desktop Users**: Follow [Claude Desktop Setup](./README_CLAUDE_SETUP.md)
3. **Remote Deployment**: See [HTTP Deployment Guide](./HTTP_DEPLOYMENT.md)
- [Install n8n-MCP](./INSTALLATION.md)
- [Configure Claude Desktop](./README_CLAUDE_SETUP.md)
- [Deploy with Docker](./DOCKER_README.md)
- [Troubleshoot Issues](./TROUBLESHOOTING.md)
### For Developers
1. **Local Development**: See [Development Setup](./INSTALLATION.md#development-setup)
2. **Docker Development**: Check [Docker README](../DOCKER_README.md)
3. **Contributing**: Read the implementation plans and guides
- [HTTP Server Architecture](./HTTP_SERVER_FINAL_FIX.md)
- [Docker Build Optimization](./DOCKER_OPTIMIZATION_GUIDE.md)
- [Release Process](./RELEASE_GUIDE.md)
## 🐳 Docker Quick Start
## 📋 Environment Variables
```bash
# Quick start with Docker
echo "AUTH_TOKEN=$(openssl rand -base64 32)" > .env
docker compose up -d
# Check health
curl http://localhost:3000/health
```

Key configuration options:
| Variable | Description | Default |
|----------|-------------|---------|
| `MCP_MODE` | Server mode: `stdio` or `http` | `stdio` |
| `USE_FIXED_HTTP` | Use fixed HTTP implementation (v2.3.2+) | `true` |
| `AUTH_TOKEN` | Authentication token for HTTP mode | Required |
| `PORT` | HTTP server port | `3000` |
| `LOG_LEVEL` | Logging verbosity | `info` |
## 📖 Documentation Updates
See [Installation Guide](./INSTALLATION.md#environment-configuration) for complete list.
This documentation is actively maintained. Recent updates include:
- ✅ Docker deployment support (Phase 1 complete)
- ✅ Simplified installation process
- ✅ Enhanced troubleshooting guide
- ✅ Multiple deployment options
## 🆘 Getting Help
## 🤝 Getting Help
- **Issues**: [GitHub Issues](https://github.com/czlonkowski/n8n-mcp/issues)
- **Discussions**: [GitHub Discussions](https://github.com/czlonkowski/n8n-mcp/discussions)
- **Troubleshooting**: [Troubleshooting Guide](./TROUBLESHOOTING.md)
1. Check the [Troubleshooting Guide](./TROUBLESHOOTING.md)
2. Review [HTTP Server Fix Documentation](./HTTP_SERVER_FINAL_FIX.md) for v2.3.2 issues
3. Open an issue on [GitHub](https://github.com/czlonkowski/n8n-mcp/issues)
## 📝 License
This project is licensed under the Sustainable Use License. See [LICENSE](../LICENSE) for details.
This project uses the Sustainable Use License. See [LICENSE](../LICENSE) for details.

View File

@@ -31,7 +31,10 @@ The easiest way to get started is using Docker:
**Setup steps:**
1. Create a `.env` file:
```bash
echo "AUTH_TOKEN=$(openssl rand -base64 32)" > .env
cat > .env << EOF
AUTH_TOKEN=$(openssl rand -base64 32)
USE_FIXED_HTTP=true
EOF
```
2. Start the server:
```bash

View File

@@ -1,199 +0,0 @@
# n8n-MCP Setup Guide
This guide will help you set up n8n-MCP with Claude Desktop.
## Prerequisites
- Node.js (any version - the project handles compatibility automatically)
- npm (comes with Node.js)
- Git
- Claude Desktop app
## Step 1: Install Node.js
### Using nvm (recommended for development)
```bash
# Install nvm if you haven't already
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
# Install latest Node.js
nvm install node
nvm use node
```
### Direct installation
Download and install the latest Node.js from [nodejs.org](https://nodejs.org/)
> **Note**: Version 2.3+ includes automatic database adapter fallback. If your Node.js version doesn't match the native SQLite module, it will automatically use a pure JavaScript implementation.
## Step 2: Clone the Repository
```bash
# Clone n8n-mcp
git clone https://github.com/yourusername/n8n-mcp.git
cd n8n-mcp
# Clone n8n documentation (required)
git clone https://github.com/n8n-io/n8n-docs.git ../n8n-docs
```
## Step 3: Install and Build
```bash
# Install dependencies
npm install
# Build the project
npm run build
# Initialize the database
npm run rebuild
# Verify installation
npm run test-nodes
```
Expected output:
```
🧪 Running node tests...
✅ nodes-base.httpRequest passed all checks
✅ nodes-base.slack passed all checks
✅ nodes-base.code passed all checks
📊 Test Results: 3 passed, 0 failed
```
## Step 4: Configure Claude Desktop
### macOS
1. Edit the Claude Desktop configuration:
```bash
nano ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
2. Add the n8n-documentation server:
```json
{
"mcpServers": {
"n8n-documentation": {
"command": "node",
"args": [
"/Users/yourusername/path/to/n8n-mcp/dist/mcp/index.js"
]
}
}
}
```
### Windows
1. Edit the configuration:
```bash
notepad %APPDATA%\Claude\claude_desktop_config.json
```
2. Add the n8n-documentation server with your full path:
```json
{
"mcpServers": {
"n8n-documentation": {
"command": "node",
"args": [
"C:\\Users\\yourusername\\path\\to\\n8n-mcp\\dist\\mcp\\index.js"
]
}
}
}
```
## Step 5: Verify Installation
Run the validation script to ensure everything is working:
```bash
npm run validate
```
## Step 6: Restart Claude Desktop
1. Quit Claude Desktop completely
2. Start Claude Desktop again
3. You should see "n8n-documentation" in the MCP tools menu
## Troubleshooting
### Node version mismatch
**This is now handled automatically!** If you see messages about NODE_MODULE_VERSION:
- The system will automatically fall back to sql.js (pure JavaScript)
- No manual intervention required
- Both adapters provide identical functionality
- Check logs to see which adapter is active
### Database not found
```bash
# Rebuild the database
npm run rebuild
```
### Permission denied
```bash
# Make the wrapper script executable
chmod +x mcp-server-v20.sh
```
### Claude Desktop doesn't see the MCP server
1. Check the config file location:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
2. Verify the path in the config is absolute and correct
3. Check Claude Desktop logs:
- macOS: `~/Library/Logs/Claude/mcp.log`
## Testing the Integration
Once configured, you can test the integration in Claude Desktop:
1. Open a new conversation
2. Ask: "What MCP tools are available?"
3. You should see the n8n documentation tools listed
Example queries to test:
- "List all n8n trigger nodes"
- "Show me the properties of the HTTP Request node"
- "Search for nodes that work with Slack"
- "What AI tools are available in n8n?"
## Updating
To update to the latest version:
```bash
git pull
npm install
npm run build
npm run rebuild
```
## Development Mode
For development with hot reloading:
```bash
# Run in development mode
npm run dev
```
### Database Adapter Information
When the server starts, you'll see one of these messages:
- `Successfully initialized better-sqlite3 adapter` - Using native SQLite (faster)
- `Successfully initialized sql.js adapter` - Using pure JavaScript (compatible with any Node.js version)
Both adapters provide identical functionality, so the user experience is the same regardless of which one is used.

View File

@@ -1,172 +0,0 @@
# Single-Session HTTP Server Implementation
## Overview
This document describes the implementation of the Hybrid Single-Session architecture that fixes the "stream is not readable" error in the n8n-MCP HTTP server.
## Architecture
The Single-Session architecture maintains one persistent MCP session that is reused across all requests, providing:
- Protocol compliance with StreamableHTTPServerTransport
- Simple state management (one session only)
- Automatic session expiry after 30 minutes of inactivity
- Clean console output management
## Key Components
### 1. ConsoleManager (`src/utils/console-manager.ts`)
Prevents console output from interfering with the StreamableHTTPServerTransport:
- Silences all console methods during MCP request handling
- Automatically restores console after request completion
- Only active in HTTP mode
### 2. SingleSessionHTTPServer (`src/http-server-single-session.ts`)
Core implementation of the single-session architecture:
- Maintains one persistent session with StreamableHTTPServerTransport
- Automatically creates/resets session as needed
- Wraps all operations with ConsoleManager
- Handles authentication and request routing
### 3. N8NMCPEngine (`src/mcp-engine.ts`)
Clean interface for service integration:
- Simple API for processing MCP requests
- Health check capabilities
- Graceful shutdown support
- Ready for multi-tenant wrapper services
## Usage
### Standalone Mode
```bash
# Start the single-session HTTP server
MCP_MODE=http npm start
# Or use the legacy stateless server
npm run start:http:legacy
```
### As a Library
```typescript
import { N8NMCPEngine } from 'n8n-mcp';
const engine = new N8NMCPEngine();
// In your Express app
app.post('/api/mcp', authenticate, async (req, res) => {
await engine.processRequest(req, res);
});
// Health check
app.get('/health', async (req, res) => {
const health = await engine.healthCheck();
res.json(health);
});
```
### Docker Deployment
```yaml
services:
n8n-mcp:
image: ghcr.io/czlonkowski/n8n-mcp:latest
environment:
- MCP_MODE=http
- AUTH_TOKEN=${AUTH_TOKEN}
ports:
- "3000:3000"
```
## Testing
### Manual Testing
```bash
# Run the test script
npm run test:single-session
```
### Unit Tests
```bash
# Run Jest tests
npm test -- single-session.test.ts
```
### Health Check
```bash
curl http://localhost:3000/health
```
Response includes session information:
```json
{
"status": "ok",
"mode": "single-session",
"version": "2.3.1",
"sessionActive": true,
"sessionAge": 45,
"uptime": 120,
"memory": {
"used": 45,
"total": 128,
"unit": "MB"
}
}
```
## Configuration
### Environment Variables
- `AUTH_TOKEN` - Required authentication token (min 32 chars recommended)
- `MCP_MODE` - Set to "http" for HTTP mode
- `PORT` - Server port (default: 3000)
- `HOST` - Server host (default: 0.0.0.0)
- `CORS_ORIGIN` - CORS allowed origin (default: *)
### Session Timeout
The session automatically expires after 30 minutes of inactivity. This is configurable in the SingleSessionHTTPServer constructor.
## Migration from Stateless
The single-session implementation is backward compatible:
1. Same API endpoints
2. Same authentication mechanism
3. Same request/response format
4. Only internal architecture changed
To migrate:
1. Update to latest version
2. No configuration changes needed
3. Monitor logs for any issues
4. Session management is automatic
## Performance
The single-session architecture provides:
- Lower memory usage (one session vs many)
- Faster response times (no session creation overhead)
- Automatic cleanup (session expiry)
- No session accumulation issues
## Troubleshooting
### "Stream is not readable" error
This error should no longer occur with the single-session implementation. If it does:
1. Check console output isn't being written during requests
2. Verify ConsoleManager is properly wrapping operations
3. Check for third-party libraries writing to console
### Session expiry issues
If sessions are expiring too quickly:
1. Increase the timeout in SingleSessionHTTPServer
2. Monitor session age in health endpoint
3. Check for long gaps between requests
### Authentication failures
1. Verify AUTH_TOKEN is set correctly
2. Check authorization header format: `Bearer <token>`
3. Monitor logs for auth failures
## Future Enhancements
1. **Configurable session timeout** - Allow timeout configuration via environment variable
2. **Session metrics** - Track session lifetime, request count, etc.
3. **Graceful session migration** - Handle session updates without dropping requests
4. **Multi-session support** - For future scaling needs (separate repository)

View File

@@ -1,78 +0,0 @@
# Stream Fix v2.3.2 - Critical Fix for "stream is not readable" Error
## Problem
The "stream is not readable" error was persisting even after implementing the Single-Session architecture in v2.3.1. The error occurred when StreamableHTTPServerTransport tried to read the request stream.
## Root Cause
Express.js middleware `express.json()` was consuming the request body stream before StreamableHTTPServerTransport could read it. In Node.js, streams can only be read once - after consumption, they cannot be read again.
### Code Issue
```javascript
// OLD CODE - This consumed the stream!
app.use(express.json({
limit: '1mb',
strict: true
}));
```
When StreamableHTTPServerTransport later tried to read the request stream, it was already consumed, resulting in "stream is not readable" error.
## Solution
Remove all body parsing middleware for the `/mcp` endpoint, allowing StreamableHTTPServerTransport to read the raw stream directly.
### Fix Applied
```javascript
// NEW CODE - No body parsing for /mcp endpoint
// DON'T use any body parser globally - StreamableHTTPServerTransport needs raw stream
// Only use JSON parser for specific endpoints that need it
```
## Changes Made
1. **Removed global `express.json()` middleware** from both:
- `src/http-server-single-session.ts`
- `src/http-server.ts`
2. **Removed `req.body` access** in logging since body is no longer parsed
3. **Updated version** to 2.3.2 to reflect this critical fix
## Technical Details
### Why This Happens
1. Express middleware runs in order
2. `express.json()` reads the entire request stream and parses it
3. The stream position is at the end after reading
4. StreamableHTTPServerTransport expects to read from position 0
5. Node.js streams are not seekable - once consumed, they're done
### Why StreamableHTTPServerTransport Needs Raw Streams
The transport implements its own request handling and needs to:
- Read the raw JSON-RPC request
- Handle streaming responses via Server-Sent Events (SSE)
- Manage its own parsing and validation
## Testing
After this fix:
1. The MCP server should accept requests without "stream is not readable" errors
2. Authentication still works (uses headers, not body)
3. Health endpoint continues to function (GET request, no body)
## Lessons Learned
1. **Be careful with middleware order** - Body parsing middleware consumes streams
2. **StreamableHTTPServerTransport has specific requirements** - It needs raw access to the request stream
3. **Not all MCP transports are the same** - StreamableHTTP has different needs than stdio transport
## Future Considerations
If we need to log request methods or validate requests before passing to StreamableHTTPServerTransport, we would need to:
1. Implement a custom middleware that buffers the stream
2. Create a new readable stream from the buffer
3. Attach the new stream to the request object
For now, the simplest solution is to not parse the body at all for the `/mcp` endpoint.

View File

@@ -4,6 +4,7 @@ This guide helps resolve common issues with n8n-MCP.
## Table of Contents
- [HTTP Server Issues](#http-server-issues)
- [Docker Issues](#docker-issues)
- [Installation Issues](#installation-issues)
- [Runtime Errors](#runtime-errors)
@@ -12,6 +13,73 @@ This guide helps resolve common issues with n8n-MCP.
- [Network and Authentication](#network-and-authentication)
- [Performance Issues](#performance-issues)
## HTTP Server Issues
### "Stream is not readable" Error
#### Symptoms
- Error: `InternalServerError: stream is not readable`
- HTTP 400 Bad Request responses
- Server works locally but fails in HTTP mode
#### Solution (v2.3.2+)
This issue has been fixed in v2.3.2. Ensure you're using the fixed implementation:
```bash
# Set the environment variable
export USE_FIXED_HTTP=true
# Or in your .env file
USE_FIXED_HTTP=true
```
#### Technical Details
- **Cause**: Express.json() middleware was consuming the request stream
- **Fix**: Removed body parsing middleware for MCP endpoints
- **See**: [HTTP Server Fix Documentation](./HTTP_SERVER_FINAL_FIX.md)
### "Server not initialized" Error
#### Symptoms
- Error: `Bad Request: Server not initialized`
- Error code: -32000
- Occurs when using StreamableHTTPServerTransport
#### Solution
Use the fixed HTTP implementation (v2.3.2+):
```bash
# Use the fixed server
MCP_MODE=http USE_FIXED_HTTP=true npm start
# Or with Docker
docker run -e MCP_MODE=http -e USE_FIXED_HTTP=true ...
```
### Authentication Failed
#### Symptoms
- 401 Unauthorized responses
- "Authentication failed" in logs
#### Solutions
1. **Check AUTH_TOKEN format:**
```bash
# Should be at least 32 characters
echo -n "$AUTH_TOKEN" | wc -c
```
2. **Verify token in requests:**
```bash
curl -H "Authorization: Bearer $AUTH_TOKEN" ...
```
3. **Check .env file:**
```bash
# No quotes needed in .env
AUTH_TOKEN=your-token-here
```
## Docker Issues
### Container Won't Start

View File

@@ -1,768 +0,0 @@
# n8n-MCP Enhancement Implementation Plan v2.2
## Executive Summary
This revised plan addresses the core issues discovered during testing: empty properties/operations arrays and missing AI tool detection. We focus on fixing the data extraction and storage pipeline while maintaining the simplicity of v2.1.
## Key Issues Found & Solutions
### 1. Empty Properties/Operations Arrays
**Problem**: The MCP service returns empty arrays for properties, operations, and credentials despite nodes having this data.
**Root Cause**: The parser is correctly extracting data, but either:
- The data isn't being properly serialized to the database
- The MCP server isn't deserializing it correctly
- The property structure is more complex than expected
**Solution**: Enhanced property extraction and proper JSON handling
### 2. AI Tools Not Detected
**Problem**: No nodes are flagged as AI tools despite having `usableAsTool` property.
**Root Cause**: The property might be nested or named differently in the actual node classes.
**Solution**: Deep property search and multiple detection strategies
### 3. Missing Versioned Node Support
**Problem**: Versioned nodes aren't properly handled, leading to incomplete data.
**Solution**: Explicit version handling for nodes like HTTPRequest and Code
## Updated Architecture
```
n8n-mcp/
├── src/
│ ├── loaders/
│ │ └── node-loader.ts # Enhanced with better error handling
│ ├── parsers/
│ │ ├── property-extractor.ts # NEW: Dedicated property extraction
│ │ └── node-parser.ts # Updated parser with deep inspection
│ ├── mappers/
│ │ └── docs-mapper.ts # Existing (working fine)
│ ├── database/
│ │ └── node-repository.ts # NEW: Proper data serialization
│ ├── scripts/
│ │ └── rebuild.ts # Enhanced with validation
│ └── mcp/
│ └── server.ts # Fixed data retrieval
└── data/
└── nodes.db # Same schema
```
## Week 1: Core Fixes
### Day 1-2: Property Extractor
**NEW File**: `src/parsers/property-extractor.ts`
```typescript
export class PropertyExtractor {
/**
* Extract properties with proper handling of n8n's complex structures
*/
extractProperties(nodeClass: any): any[] {
const properties = [];
// Handle versioned nodes
if (nodeClass.nodeVersions) {
const versions = Object.keys(nodeClass.nodeVersions);
const latestVersion = Math.max(...versions.map(Number));
const versionedNode = nodeClass.nodeVersions[latestVersion];
if (versionedNode.description?.properties) {
return this.normalizeProperties(versionedNode.description.properties);
}
}
// Handle regular nodes
if (nodeClass.description?.properties) {
return this.normalizeProperties(nodeClass.description.properties);
}
return properties;
}
/**
* Extract operations from both declarative and programmatic nodes
*/
extractOperations(nodeClass: any): any[] {
const operations = [];
// Declarative nodes (with routing)
if (nodeClass.description?.routing) {
const routing = nodeClass.description.routing;
// Extract from request.resource and request.operation
if (routing.request?.resource) {
const resources = routing.request.resource.options || [];
const operationOptions = routing.request.operation?.options || {};
resources.forEach(resource => {
const resourceOps = operationOptions[resource.value] || [];
resourceOps.forEach(op => {
operations.push({
resource: resource.value,
operation: op.value,
name: `${resource.name} - ${op.name}`,
action: op.action
});
});
});
}
}
// Programmatic nodes - look for operation property
const props = this.extractProperties(nodeClass);
const operationProp = props.find(p => p.name === 'operation' || p.name === 'action');
if (operationProp?.options) {
operationProp.options.forEach(op => {
operations.push({
operation: op.value,
name: op.name,
description: op.description
});
});
}
return operations;
}
/**
* Deep search for AI tool capability
*/
detectAIToolCapability(nodeClass: any): boolean {
// Direct property check
if (nodeClass.description?.usableAsTool === true) return true;
// Check in actions for declarative nodes
if (nodeClass.description?.actions?.some(a => a.usableAsTool === true)) return true;
// Check versioned nodes
if (nodeClass.nodeVersions) {
for (const version of Object.values(nodeClass.nodeVersions)) {
if ((version as any).description?.usableAsTool === true) return true;
}
}
// Check for specific AI-related properties
const aiIndicators = ['openai', 'anthropic', 'huggingface', 'cohere', 'ai'];
const nodeName = nodeClass.description?.name?.toLowerCase() || '';
return aiIndicators.some(indicator => nodeName.includes(indicator));
}
/**
* Extract credential requirements with proper structure
*/
extractCredentials(nodeClass: any): any[] {
const credentials = [];
// Handle versioned nodes
if (nodeClass.nodeVersions) {
const versions = Object.keys(nodeClass.nodeVersions);
const latestVersion = Math.max(...versions.map(Number));
const versionedNode = nodeClass.nodeVersions[latestVersion];
if (versionedNode.description?.credentials) {
return versionedNode.description.credentials;
}
}
// Regular nodes
if (nodeClass.description?.credentials) {
return nodeClass.description.credentials;
}
return credentials;
}
private normalizeProperties(properties: any[]): any[] {
// Ensure all properties have consistent structure
return properties.map(prop => ({
displayName: prop.displayName,
name: prop.name,
type: prop.type,
default: prop.default,
description: prop.description,
options: prop.options,
required: prop.required,
displayOptions: prop.displayOptions,
typeOptions: prop.typeOptions,
noDataExpression: prop.noDataExpression
}));
}
}
```
### Day 3: Updated Parser
**Updated File**: `src/parsers/node-parser.ts`
```typescript
import { PropertyExtractor } from './property-extractor';
export class NodeParser {
private propertyExtractor = new PropertyExtractor();
parse(nodeClass: any, packageName: string): ParsedNode {
// Get base description (handles versioned nodes)
const description = this.getNodeDescription(nodeClass);
return {
style: this.detectStyle(nodeClass),
nodeType: this.extractNodeType(description, packageName),
displayName: description.displayName || description.name,
description: description.description,
category: this.extractCategory(description),
properties: this.propertyExtractor.extractProperties(nodeClass),
credentials: this.propertyExtractor.extractCredentials(nodeClass),
isAITool: this.propertyExtractor.detectAIToolCapability(nodeClass),
isTrigger: this.detectTrigger(description),
isWebhook: this.detectWebhook(description),
operations: this.propertyExtractor.extractOperations(nodeClass),
version: this.extractVersion(nodeClass),
isVersioned: !!nodeClass.nodeVersions
};
}
private getNodeDescription(nodeClass: any): any {
// For versioned nodes, get the latest version's description
if (nodeClass.baseDescription) {
return nodeClass.baseDescription;
}
if (nodeClass.nodeVersions) {
const versions = Object.keys(nodeClass.nodeVersions);
const latestVersion = Math.max(...versions.map(Number));
return nodeClass.nodeVersions[latestVersion].description || {};
}
return nodeClass.description || {};
}
private detectStyle(nodeClass: any): 'declarative' | 'programmatic' {
const desc = this.getNodeDescription(nodeClass);
return desc.routing ? 'declarative' : 'programmatic';
}
private extractNodeType(description: any, packageName: string): string {
// Ensure we have the full node type including package prefix
const name = description.name;
if (name.includes('.')) {
return name;
}
// Add package prefix if missing
const packagePrefix = packageName.replace('@n8n/', '').replace('n8n-', '');
return `${packagePrefix}.${name}`;
}
private extractCategory(description: any): string {
return description.group?.[0] ||
description.categories?.[0] ||
description.category ||
'misc';
}
private detectTrigger(description: any): boolean {
return description.polling === true ||
description.trigger === true ||
description.eventTrigger === true ||
description.name?.toLowerCase().includes('trigger');
}
private detectWebhook(description: any): boolean {
return (description.webhooks?.length > 0) ||
description.webhook === true ||
description.name?.toLowerCase().includes('webhook');
}
private extractVersion(nodeClass: any): string {
if (nodeClass.baseDescription?.defaultVersion) {
return nodeClass.baseDescription.defaultVersion.toString();
}
if (nodeClass.nodeVersions) {
const versions = Object.keys(nodeClass.nodeVersions);
return Math.max(...versions.map(Number)).toString();
}
return nodeClass.description?.version || '1';
}
}
```
### Day 4: Node Repository
**NEW File**: `src/database/node-repository.ts`
```typescript
import Database from 'better-sqlite3';
export class NodeRepository {
constructor(private db: Database.Database) {}
/**
* Save node with proper JSON serialization
*/
saveNode(node: ParsedNode): void {
const stmt = this.db.prepare(`
INSERT OR REPLACE INTO nodes (
node_type, package_name, display_name, description,
category, development_style, is_ai_tool, is_trigger,
is_webhook, is_versioned, version, documentation,
properties_schema, operations, credentials_required
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`);
stmt.run(
node.nodeType,
node.packageName,
node.displayName,
node.description,
node.category,
node.style,
node.isAITool ? 1 : 0,
node.isTrigger ? 1 : 0,
node.isWebhook ? 1 : 0,
node.isVersioned ? 1 : 0,
node.version,
node.documentation || null,
JSON.stringify(node.properties, null, 2),
JSON.stringify(node.operations, null, 2),
JSON.stringify(node.credentials, null, 2)
);
}
/**
* Get node with proper JSON deserialization
*/
getNode(nodeType: string): any {
const row = this.db.prepare(`
SELECT * FROM nodes WHERE node_type = ?
`).get(nodeType);
if (!row) return null;
return {
nodeType: row.node_type,
displayName: row.display_name,
description: row.description,
category: row.category,
developmentStyle: row.development_style,
package: row.package_name,
isAITool: !!row.is_ai_tool,
isTrigger: !!row.is_trigger,
isWebhook: !!row.is_webhook,
isVersioned: !!row.is_versioned,
version: row.version,
properties: this.safeJsonParse(row.properties_schema, []),
operations: this.safeJsonParse(row.operations, []),
credentials: this.safeJsonParse(row.credentials_required, []),
hasDocumentation: !!row.documentation
};
}
/**
* Get AI tools with proper filtering
*/
getAITools(): any[] {
const rows = this.db.prepare(`
SELECT node_type, display_name, description, package_name
FROM nodes
WHERE is_ai_tool = 1
ORDER BY display_name
`).all();
return rows.map(row => ({
nodeType: row.node_type,
displayName: row.display_name,
description: row.description,
package: row.package_name
}));
}
private safeJsonParse(json: string, defaultValue: any): any {
try {
return JSON.parse(json);
} catch {
return defaultValue;
}
}
}
```
### Day 5: Enhanced Rebuild Script
**Updated File**: `src/scripts/rebuild.ts`
```typescript
#!/usr/bin/env node
import Database from 'better-sqlite3';
import { N8nNodeLoader } from '../loaders/node-loader';
import { NodeParser } from '../parsers/node-parser';
import { DocsMapper } from '../mappers/docs-mapper';
import { NodeRepository } from '../database/node-repository';
import * as fs from 'fs';
import * as path from 'path';
async function rebuild() {
console.log('🔄 Rebuilding n8n node database...\n');
const db = new Database('./data/nodes.db');
const loader = new N8nNodeLoader();
const parser = new NodeParser();
const mapper = new DocsMapper();
const repository = new NodeRepository(db);
// Initialize database
const schema = fs.readFileSync(path.join(__dirname, '../database/schema.sql'), 'utf8');
db.exec(schema);
// Clear existing data
db.exec('DELETE FROM nodes');
console.log('🗑️ Cleared existing data\n');
// Load all nodes
const nodes = await loader.loadAllNodes();
console.log(`📦 Loaded ${nodes.length} nodes from packages\n`);
// Statistics
const stats = {
successful: 0,
failed: 0,
aiTools: 0,
triggers: 0,
webhooks: 0,
withProperties: 0,
withOperations: 0,
withDocs: 0
};
// Process each node
for (const { packageName, nodeName, NodeClass } of nodes) {
try {
// Parse node
const parsed = parser.parse(NodeClass, packageName);
// Validate parsed data
if (!parsed.nodeType || !parsed.displayName) {
throw new Error('Missing required fields');
}
// Get documentation
const docs = await mapper.fetchDocumentation(parsed.nodeType);
parsed.documentation = docs;
// Save to database
repository.saveNode(parsed);
// Update statistics
stats.successful++;
if (parsed.isAITool) stats.aiTools++;
if (parsed.isTrigger) stats.triggers++;
if (parsed.isWebhook) stats.webhooks++;
if (parsed.properties.length > 0) stats.withProperties++;
if (parsed.operations.length > 0) stats.withOperations++;
if (docs) stats.withDocs++;
console.log(`✅ ${parsed.nodeType} [Props: ${parsed.properties.length}, Ops: ${parsed.operations.length}]`);
} catch (error) {
stats.failed++;
console.error(`❌ Failed to process ${nodeName}: ${error.message}`);
}
}
// Validation check
console.log('\n🔍 Running validation checks...');
const validationResults = validateDatabase(repository);
// Summary
console.log('\n📊 Summary:');
console.log(` Total nodes: ${nodes.length}`);
console.log(` Successful: ${stats.successful}`);
console.log(` Failed: ${stats.failed}`);
console.log(` AI Tools: ${stats.aiTools}`);
console.log(` Triggers: ${stats.triggers}`);
console.log(` Webhooks: ${stats.webhooks}`);
console.log(` With Properties: ${stats.withProperties}`);
console.log(` With Operations: ${stats.withOperations}`);
console.log(` With Documentation: ${stats.withDocs}`);
if (!validationResults.passed) {
console.log('\n⚠ Validation Issues:');
validationResults.issues.forEach(issue => console.log(` - ${issue}`));
}
console.log('\n✨ Rebuild complete!');
db.close();
}
function validateDatabase(repository: NodeRepository): { passed: boolean; issues: string[] } {
const issues = [];
// Check critical nodes
const criticalNodes = ['httpRequest', 'code', 'webhook', 'slack'];
for (const nodeType of criticalNodes) {
const node = repository.getNode(nodeType);
if (!node) {
issues.push(`Critical node ${nodeType} not found`);
continue;
}
if (node.properties.length === 0) {
issues.push(`Node ${nodeType} has no properties`);
}
}
// Check AI tools
const aiTools = repository.getAITools();
if (aiTools.length === 0) {
issues.push('No AI tools found - check detection logic');
}
return {
passed: issues.length === 0,
issues
};
}
// Run if called directly
if (require.main === module) {
rebuild().catch(console.error);
}
```
## Week 2: Testing and MCP Updates
### Day 6-7: Enhanced MCP Server
**Updated File**: `src/mcp/server.ts`
```typescript
import { NodeRepository } from '../database/node-repository';
// In the get_node_info handler
async function getNodeInfo(nodeType: string) {
const repository = new NodeRepository(db);
let node = repository.getNode(nodeType);
if (!node) {
// Try alternative formats
const alternatives = [
nodeType,
nodeType.replace('n8n-nodes-base.', ''),
`n8n-nodes-base.${nodeType}`,
nodeType.toLowerCase()
];
for (const alt of alternatives) {
const found = repository.getNode(alt);
if (found) {
node = found;
break;
}
}
if (!node) {
throw new Error(`Node ${nodeType} not found`);
}
}
return node;
}
// In the list_ai_tools handler
async function listAITools() {
const repository = new NodeRepository(db);
const tools = repository.getAITools();
return {
tools,
totalCount: tools.length,
requirements: {
environmentVariable: 'N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true',
nodeProperty: 'usableAsTool: true'
}
};
}
```
### Day 8-9: Test Suite
**NEW File**: `src/scripts/test-nodes.ts`
```typescript
#!/usr/bin/env node
import Database from 'better-sqlite3';
import { NodeRepository } from '../database/node-repository';
const TEST_CASES = [
{
nodeType: 'httpRequest',
checks: {
hasProperties: true,
minProperties: 5,
hasDocumentation: true,
isVersioned: true
}
},
{
nodeType: 'slack',
checks: {
hasOperations: true,
minOperations: 10,
style: 'declarative'
}
},
{
nodeType: 'code',
checks: {
hasProperties: true,
properties: ['mode', 'language', 'jsCode']
}
}
];
async function runTests() {
const db = new Database('./data/nodes.db');
const repository = new NodeRepository(db);
console.log('🧪 Running node tests...\n');
let passed = 0;
let failed = 0;
for (const testCase of TEST_CASES) {
console.log(`Testing ${testCase.nodeType}...`);
try {
const node = repository.getNode(testCase.nodeType);
if (!node) {
throw new Error('Node not found');
}
// Run checks
for (const [check, expected] of Object.entries(testCase.checks)) {
switch (check) {
case 'hasProperties':
if (expected && node.properties.length === 0) {
throw new Error('No properties found');
}
break;
case 'minProperties':
if (node.properties.length < expected) {
throw new Error(`Expected at least ${expected} properties, got ${node.properties.length}`);
}
break;
case 'hasOperations':
if (expected && node.operations.length === 0) {
throw new Error('No operations found');
}
break;
case 'minOperations':
if (node.operations.length < expected) {
throw new Error(`Expected at least ${expected} operations, got ${node.operations.length}`);
}
break;
case 'properties':
const propNames = node.properties.map(p => p.name);
for (const prop of expected as string[]) {
if (!propNames.includes(prop)) {
throw new Error(`Missing property: ${prop}`);
}
}
break;
}
}
console.log(`✅ ${testCase.nodeType} passed all checks\n`);
passed++;
} catch (error) {
console.error(`❌ ${testCase.nodeType} failed: ${error.message}\n`);
failed++;
}
}
console.log(`\n📊 Test Results: ${passed} passed, ${failed} failed`);
db.close();
}
if (require.main === module) {
runTests().catch(console.error);
}
```
## Key Improvements in v2.2
1. **Dedicated Property Extraction**
- Handles versioned nodes properly
- Extracts operations from both declarative and programmatic nodes
- Deep search for AI tool capabilities
2. **Proper Data Serialization**
- NodeRepository ensures JSON is properly stored and retrieved
- Safe JSON parsing with defaults
- Consistent data structure
3. **Enhanced Validation**
- Validation checks in rebuild script
- Test suite for critical nodes
- Statistics tracking for better visibility
4. **Better Error Handling**
- Alternative node type lookups
- Graceful fallbacks
- Detailed error messages
5. **AI Tool Detection**
- Multiple detection strategies
- Check in versioned nodes
- Name-based heuristics as fallback
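The "safe JSON parsing with defaults" in point 2 reduces to a small guard: corrupt or empty rows fall back to a typed default instead of throwing. A standalone sketch of the pattern used by `NodeRepository` above:

```typescript
// Minimal sketch of JSON parsing that falls back to a typed default
// instead of throwing on a corrupt database row.
function safeJsonParse<T>(json: string, defaultValue: T): T {
  try {
    return JSON.parse(json) as T;
  } catch {
    return defaultValue;
  }
}

console.log(safeJsonParse<number[]>('[1,2,3]', []).length); // → 3
console.log(safeJsonParse<number[]>('not json', []).length); // → 0
```

Because every consumer receives an array (possibly empty) rather than an exception, the MCP handlers never have to guard against malformed stored data.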
## Success Metrics Update
1. **Properties/Operations**: >90% of nodes should have non-empty arrays
2. **AI Tools**: Should detect at least 10-20 AI-capable nodes
3. **Critical Nodes**: 100% pass rate on test suite
4. **Documentation**: Maintain existing 89% coverage
5. **Performance**: Rebuild in <60 seconds (allowing for validation)
## Deployment Steps
```bash
# 1. Update code with v2.2 changes
npm install
# 2. Build TypeScript
npm run build
# 3. Run rebuild with validation
npm run rebuild
# 4. Run test suite
npm run test-nodes
# 5. Verify AI tools
npm run list-ai-tools
# 6. Start MCP server
npm start
```
## Summary
Version 2.2 focuses on fixing the core data extraction issues while maintaining the simplicity of the MVP approach. The key insight is that n8n's node structure is more complex than initially assumed, especially for versioned nodes and AI tool detection. By adding dedicated extraction logic and proper data handling, we can deliver accurate node information while keeping the implementation straightforward.

View File

@@ -1,74 +0,0 @@
# n8n-MCP v2.2 Implementation Summary
## Successfully Implemented All Fixes from implementation_plan2.md
### Key Issues Resolved
1. **Empty Properties/Operations Arrays**
- Created dedicated PropertyExtractor class
- Properly handles versioned nodes by instantiating them
- Extracts properties from latest version of versioned nodes
- Result: 452/458 nodes now have properties (98.7%)
2. **AI Tools Detection**
- Deep search for usableAsTool property
- Checks in actions and versioned nodes
- Name-based heuristics as fallback
- Result: 35 AI tools detected
3. **Versioned Node Support**
- Proper detection of VersionedNodeType pattern
- Extracts data from instance.nodeVersions
- HTTPRequest and Code nodes correctly identified as versioned
- Result: All versioned nodes properly handled
4. **Operations Extraction**
- Handles both declarative (routing-based) and programmatic nodes
- Extracts from routing.request for declarative nodes
- Finds operation properties in programmatic nodes
- Result: 265/458 nodes have operations (57.9%)
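The declarative (routing-based) path can be pictured with a toy routing object; the shape follows the extractor in the v2.2 plan, and the resource/operation values here are purely illustrative:

```typescript
// Toy declarative routing block, shaped like what the PropertyExtractor walks:
// resources under routing.request.resource, operations keyed by resource value.
type RoutingLike = {
  request: {
    resource: { options: { name: string; value: string }[] };
    operation: { options: Record<string, { name: string; value: string }[]> };
  };
};

const routing: RoutingLike = {
  request: {
    resource: { options: [{ name: 'Message', value: 'message' }] },
    operation: {
      options: { message: [{ name: 'Send', value: 'send' }] },
    },
  },
};

const ops: { resource: string; operation: string; name: string }[] = [];
for (const resource of routing.request.resource.options) {
  for (const op of routing.request.operation.options[resource.value] ?? []) {
    ops.push({
      resource: resource.value,
      operation: op.value,
      name: `${resource.name} - ${op.name}`,
    });
  }
}
console.log(ops.length, ops[0].name); // → 1 Message - Send
```

Programmatic nodes take the other branch: the extractor looks for an `operation` or `action` property and reads its `options` list instead.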
### Final Metrics
```
Total nodes: 458
Successful: 458 (100%)
Failed: 0
AI Tools: 35
Triggers: 93
Webhooks: 71
With Properties: 452 (98.7%)
With Operations: 265 (57.9%)
With Documentation: 406 (88.6%)
```
### Critical Node Tests
All critical nodes pass validation:
- ✅ HTTP Request: 29 properties, versioned, has documentation
- ✅ Slack: 17 operations, declarative style
- ✅ Code: 11 properties including mode, language, jsCode
### Architecture Improvements
1. **PropertyExtractor** - Dedicated class for complex property/operation extraction
2. **NodeRepository** - Proper JSON serialization/deserialization
3. **Enhanced Parser** - Better versioned node handling
4. **Validation** - Built-in validation in rebuild script
5. **Test Suite** - Automated testing for critical nodes
### MCP Server Ready
The MCP server now correctly:
- Returns non-empty properties arrays
- Returns non-empty operations arrays
- Detects AI tools
- Handles alternative node name formats
- Uses NodeRepository for consistent data access
### Next Steps
1. The implementation is complete and ready for Claude Desktop
2. Use `mcp-server-v20.sh` wrapper script for Node v20 compatibility
3. All success metrics from v2.2 plan have been achieved
4. The system is ready for production use