n8n-mcp/.env.example
czlonkowski · 2305aaab9e · feat: implement integration testing foundation (Phase 1)
Complete implementation of the Phase 1 foundation for n8n API integration tests.
Establishes core utilities, fixtures, and infrastructure for testing all 17 n8n API handlers against a real n8n instance.

Changes:
- Add integration test environment configuration to .env.example
- Create comprehensive test utilities infrastructure (example usage sketched after this list):
  * credentials.ts: Environment-aware credential management (local .env vs CI secrets)
  * n8n-client.ts: Singleton API client wrapper with health checks
  * test-context.ts: Resource tracking and automatic cleanup
  * cleanup-helpers.ts: Multi-level cleanup strategies (orphaned, age-based, tag-based)
  * fixtures.ts: 6 pre-built workflow templates (webhook, HTTP, multi-node, error handling, AI, expressions)
  * factories.ts: Dynamic node/workflow builders with 15+ factory functions
  * webhook-workflows.ts: Webhook workflow configs and setup instructions

- Add npm scripts:
  * test:integration:n8n: Run n8n API integration tests
  * test:cleanup:orphans: Clean up orphaned test resources

- Create cleanup script for CI/manual use
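
For illustration, a test written with these utilities might look roughly like the sketch below. The helper and method names (createTestContext, getN8nClient, buildWorkflow, track, cleanup, createWorkflow) and the Vitest runner are assumptions for the sketch, not the exact exports added in this commit:

  import { describe, it, expect, afterEach } from 'vitest';
  import { createTestContext } from './utils/test-context';
  import { getN8nClient } from './utils/n8n-client';
  import { buildWorkflow } from './utils/factories';

  describe('workflow creation', () => {
    const ctx = createTestContext();        // tracks every resource the test creates
    afterEach(() => ctx.cleanup());         // deletes tracked resources after each test

    it('creates a tagged test workflow', async () => {
      const client = getN8nClient();        // shared singleton client with a health check
      const workflow = buildWorkflow({ name: '[MCP-TEST] create-basic' });

      const created = await client.createWorkflow(workflow);
      ctx.track(created.id);                // registered for automatic cleanup

      expect(created.id).toBeDefined();
    });
  });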

Documentation:
- Add comprehensive integration testing plan (550 lines)
- Add Phase 1 completion summary with lessons learned

Key Features:
- Automatic credential detection (CI vs local; sketched below)
- Multi-level cleanup (test, suite, CI, orphan)
- 6 workflow fixtures covering common scenarios
- 15+ factory functions for dynamic test data
- Support for 4 HTTP methods (GET, POST, PUT, DELETE) via pre-activated webhook workflows
- TypeScript-first with full type safety
- Comprehensive error handling with helpful messages
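
A minimal sketch of the CI-vs-local credential detection idea follows; the function shape and error messages are illustrative, not the actual credentials.ts API:

  export interface TestCredentials {
    apiUrl: string;
    apiKey: string;
  }

  export function loadTestCredentials(): TestCredentials {
    // In CI, GitHub Actions maps repository secrets to environment variables;
    // locally the same N8N_API_URL / N8N_API_KEY values come from .env.
    const apiUrl = process.env.N8N_API_URL;
    const apiKey = process.env.N8N_API_KEY;

    if (!apiUrl || !apiKey) {
      const hint = process.env.CI
        ? 'set the N8N_API_URL and N8N_API_KEY repository secrets'
        : 'copy .env.example to .env and fill in N8N_API_URL and N8N_API_KEY';
      throw new Error(`n8n integration tests are not configured: ${hint}`);
    }

    return { apiUrl, apiKey };
  }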

Total: ~1,520 lines of production-ready code + 650 lines of documentation

Ready for Phase 2: Workflow creation tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-03 13:12:42 +02:00


# n8n Documentation MCP Server Configuration
# ====================
# COMMON CONFIGURATION
# ====================
# Database Configuration
# For local development: ./data/nodes.db
# For Docker: /app/data/nodes.db
# Custom paths supported in v2.7.16+ (must end with .db)
NODE_DB_PATH=./data/nodes.db
# Logging Level (debug, info, warn, error)
MCP_LOG_LEVEL=info
# Node Environment (development, production)
NODE_ENV=development
# Rebuild database on startup (true/false)
REBUILD_ON_START=false
# =========================
# LOCAL MODE CONFIGURATION
# =========================
# Used when running: npm run start:v2 or npm run dev:v2
# Local MCP Server Configuration
MCP_SERVER_PORT=3000
MCP_SERVER_HOST=localhost
# MCP_AUTH_TOKEN=optional-for-local-development
# =========================
# SIMPLE HTTP MODE
# =========================
# Used for private single-user deployments
# Server mode: stdio (local) or http (remote)
MCP_MODE=stdio
# Use fixed HTTP implementation (recommended for stability)
# Set to true to bypass StreamableHTTPServerTransport issues
USE_FIXED_HTTP=true
# HTTP Server Configuration (only used when MCP_MODE=http)
PORT=3000
HOST=0.0.0.0
# Base URL Configuration (optional)
# Set this when running behind a proxy or when the server is accessed via a different URL
# than what it binds to. If not set, URLs will be auto-detected from proxy headers (if TRUST_PROXY is set)
# or constructed from HOST and PORT.
# Examples:
# BASE_URL=https://n8n-mcp.example.com
# BASE_URL=https://your-domain.com:8443
# PUBLIC_URL=https://n8n-mcp.mydomain.com (alternative to BASE_URL)
# Authentication token for HTTP mode (REQUIRED)
# Generate with: openssl rand -base64 32
AUTH_TOKEN=your-secure-token-here
# CORS origin for HTTP mode (optional)
# Default: * (allow all origins)
# For production, set to your specific domain
# CORS_ORIGIN=https://your-client-domain.com
# Trust proxy configuration for correct IP logging (0=disabled, 1=trust first proxy)
# Set to 1 when running behind a reverse proxy (Nginx, Traefik, etc.)
# Set to the number of proxy hops if behind multiple proxies
# Default: 0 (disabled)
# TRUST_PROXY=0
# =========================
# MULTI-TENANT CONFIGURATION
# =========================
# Enable multi-tenant mode for dynamic instance support
# When enabled, n8n API tools will be available for all sessions,
# and instance configuration will be determined from HTTP headers
# Default: false (single-tenant mode using environment variables)
ENABLE_MULTI_TENANT=false
# Session isolation strategy for multi-tenant mode
# - "instance": Create separate sessions per instance ID (recommended)
# - "shared": Share sessions but switch contexts (advanced)
# Default: instance
# MULTI_TENANT_SESSION_STRATEGY=instance
# =========================
# N8N API CONFIGURATION
# =========================
# Optional: Enable n8n management tools by providing API credentials
# These tools allow creating, updating, and executing workflows
# n8n instance API URL (without /api/v1 suffix)
# Example: https://your-n8n-instance.com
# N8N_API_URL=
# n8n API Key (get from Settings > API in your n8n instance)
# N8N_API_KEY=
# n8n API request timeout in milliseconds (default: 30000)
# N8N_API_TIMEOUT=30000
# Maximum number of API request retries (default: 3)
# N8N_API_MAX_RETRIES=3
# =========================
# CACHE CONFIGURATION
# =========================
# Optional: Configure instance cache settings for flexible instance support
# Maximum number of cached instances (default: 100, min: 1, max: 10000)
# INSTANCE_CACHE_MAX=100
# Cache TTL in minutes (default: 30, min: 1, max: 1440/24 hours)
# INSTANCE_CACHE_TTL_MINUTES=30
# =========================
# OPENAI API CONFIGURATION
# =========================
# Optional: Enable AI-powered template metadata generation
# Provides structured metadata for improved template discovery
# OpenAI API Key (get from https://platform.openai.com/api-keys)
# OPENAI_API_KEY=
# OpenAI Model for metadata generation (default: gpt-4o-mini)
# OPENAI_MODEL=gpt-4o-mini
# Batch size for metadata generation (default: 100)
# Templates are processed in batches using OpenAI's Batch API for 50% cost savings
# OPENAI_BATCH_SIZE=100
# Enable metadata generation during template fetch (default: false)
# Set to true to automatically generate metadata when running fetch:templates
# METADATA_GENERATION_ENABLED=false
# ========================================
# INTEGRATION TESTING CONFIGURATION
# ========================================
# Configuration for integration tests that call a real n8n instance's API
# n8n API Configuration for Integration Tests
# For local development: Use your local n8n instance
# For CI: These will be provided by GitHub secrets
# N8N_API_URL=http://localhost:5678
# N8N_API_KEY=
# Pre-activated Webhook Workflows for Testing
# These workflows must be created manually in n8n and activated
# because the n8n API doesn't support workflow activation.
#
# Setup Instructions:
# 1. Create 4 workflows in the n8n UI (one for each HTTP method)
# 2. Each workflow should have a single Webhook node
# 3. Configure webhook paths: mcp-test-get, mcp-test-post, mcp-test-put, mcp-test-delete
# 4. ACTIVATE each workflow in the n8n UI
# 5. Copy the workflow IDs here
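#
# For reference, each workflow's Webhook node needs only two settings
# (illustrative values, shown for the GET workflow):
#   HTTP Method: GET
#   Path:        mcp-test-get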
#
# N8N_TEST_WEBHOOK_GET_ID= # Workflow ID for GET method webhook
# N8N_TEST_WEBHOOK_POST_ID= # Workflow ID for POST method webhook
# N8N_TEST_WEBHOOK_PUT_ID= # Workflow ID for PUT method webhook
# N8N_TEST_WEBHOOK_DELETE_ID= # Workflow ID for DELETE method webhook
# Test Configuration
N8N_TEST_CLEANUP_ENABLED=true # Enable automatic cleanup of test workflows
N8N_TEST_TAG=mcp-integration-test # Tag applied to all test workflows
N8N_TEST_NAME_PREFIX=[MCP-TEST] # Name prefix for test workflows