Commit Graph

12 Commits

czlonkowski
f508d9873b fix: resolve SplitInBatches output confusion for AI assistants (#97)
## Problem
AI assistants were consistently connecting SplitInBatches node outputs backwards because:
- Output index 0 = "done" (runs after loop completes)
- Output index 1 = "loop" (processes items inside loop)
This counterintuitive ordering caused incorrect workflow connections.
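For illustration, a minimal sketch of the correct wiring in n8n's workflow connection format (node names here are placeholders, not taken from a real workflow):

```typescript
// n8n stores connections per source node, indexed by output.
const connections = {
  'Loop Over Items': {
    main: [
      // Output 0 ("done"): fires once, after the final batch has been processed.
      [{ node: 'After Loop', type: 'main', index: 0 }],
      // Output 1 ("loop"): fires for every batch and should feed the loop body.
      [{ node: 'Process Batch', type: 'main', index: 0 }],
    ],
  },
};
```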

## Solution
Enhanced the n8n-mcp system to expose and clarify output information:

### Database & Schema
- Added `outputs` and `output_names` columns to nodes table
- Updated NodeRepository to store/retrieve output information

### Node Parsing
- Enhanced NodeParser to extract outputs and outputNames from nodes
- Properly handles versioned nodes like SplitInBatchesV3

### MCP Server
- Modified getNodeInfo to return detailed output descriptions
- Added connection guidance for each output
- Special handling for loop nodes (SplitInBatches, IF, Switch)

### Documentation
- Enhanced DocsMapper to inject critical output guidance
- Added warnings about counterintuitive output ordering
- Added correct connection patterns for loop nodes

### Workflow Validation
- Added validateSplitInBatchesConnection method
- Detects reversed connections and provides specific errors
- Added checkForLoopBack with depth limit to prevent stack overflow
- Smart heuristics to identify likely connection mistakes
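A minimal sketch of the reversed-connection heuristic (the validator's real signature is not shown in this commit, so the types and the helper callback below are assumptions):

```typescript
interface OutgoingConnection {
  sourceOutputIndex: number; // 0 = "done", 1 = "loop" on SplitInBatches
  targetNodeName: string;
}

function validateSplitInBatchesConnection(
  connections: OutgoingConnection[],
  looksLikeLoopBody: (nodeName: string) => boolean
): string[] {
  const errors: string[] = [];
  for (const conn of connections) {
    // Heuristic: a node that looks like per-item processing wired to output 0 ("done")
    // is almost certainly reversed, because output 1 is the "loop" branch.
    if (conn.sourceOutputIndex === 0 && looksLikeLoopBody(conn.targetNodeName)) {
      errors.push(
        `"${conn.targetNodeName}" is connected to output 0 ("done") but appears to be ` +
        'loop processing; connect it to output 1 ("loop") instead.'
      );
    }
  }
  return errors;
}
```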

## Testing
- Created comprehensive test suite (81 tests)
- Unit tests for all modified components
- Edge case handling for malformed data
- Performance testing with large workflows

## Impact
AI assistants will now:
- See explicit output indices and names (e.g., "Output 0: done")
- Receive clear connection guidance
- Get validation errors when connections are reversed
- Have enhanced documentation explaining the correct pattern

Fixes #97

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-07 15:58:40 +02:00
czlonkowski
b5210e5963 feat: add comprehensive performance benchmark tracking system
- Create benchmark test suites for critical operations:
  - Node loading performance
  - Database query performance
  - Search operations performance
  - Validation performance
  - MCP tool execution performance

- Add GitHub Actions workflow for benchmark tracking:
  - Runs on push to main and PRs
  - Uses github-action-benchmark for historical tracking
  - Comments on PRs with performance results
  - Alerts on >10% performance regressions
  - Stores results in GitHub Pages

- Create benchmark infrastructure:
  - Custom Vitest benchmark configuration (see the sketch below)
  - JSON reporter for CI results
  - Result formatter for github-action-benchmark
  - Performance threshold documentation

- Add supporting utilities:
  - SQLiteStorageService for benchmark database setup
  - MCPEngine wrapper for testing MCP tools
  - Test factories for generating benchmark data
  - Enhanced NodeRepository with benchmark methods

- Document benchmark system:
  - Comprehensive benchmark guide in docs/BENCHMARKS.md
  - Performance thresholds in .github/BENCHMARK_THRESHOLDS.md
  - README for benchmarks directory
  - Integration with existing test suite

The benchmark system will help monitor performance over time and catch regressions before they reach production.
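As a rough illustration of the custom Vitest benchmark configuration mentioned above, one suite might look like this (the import path, class name, and iteration counts are assumptions, not the actual benchmark code):

```typescript
import { bench, describe } from 'vitest';
// Hypothetical import; the real repository wiring may differ.
import { NodeRepository } from '../src/database/node-repository';

describe('node loading', () => {
  const repo = new NodeRepository('./data/nodes.db');

  bench('getNode by type', () => {
    repo.getNode('n8n-nodes-base.httpRequest');
  }, { iterations: 1000, warmupIterations: 100 });
});
```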

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-28 22:45:09 +02:00
czlonkowski
1170ad27a6 fix: resolve WASM file loading issue for npx execution (closes #31)
- Enhanced database adapter to support multiple WASM file resolution strategies
- Added require.resolve() for reliable package location in npm environments
- Made better-sqlite3 an optional dependency
- Improved error handling with clear messages
- Updated version to 2.7.13
- Updated CHANGELOG and README badges
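A sketch of what the multi-strategy WASM resolution might look like (the function name and fallback paths are assumptions, not the actual adapter code):

```typescript
import * as fs from 'fs';
import * as path from 'path';

function locateWasmFile(): string {
  const strategies: Array<() => string> = [
    // Strategy 1: resolve through the package manager — reliable under npx installs.
    () => path.join(path.dirname(require.resolve('sql.js/package.json')), 'dist', 'sql-wasm.wasm'),
    // Strategy 2: relative to the compiled output directory.
    () => path.join(__dirname, '..', 'node_modules', 'sql.js', 'dist', 'sql-wasm.wasm'),
    // Strategy 3: relative to the current working directory.
    () => path.join(process.cwd(), 'node_modules', 'sql.js', 'dist', 'sql-wasm.wasm'),
  ];
  for (const strategy of strategies) {
    try {
      const candidate = strategy();
      if (fs.existsSync(candidate)) return candidate;
    } catch {
      // require.resolve throws when the package cannot be found; try the next strategy.
    }
  }
  throw new Error('Could not locate sql-wasm.wasm; is sql.js installed?');
}
```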
2025-07-11 08:48:37 +02:00
czlonkowski
e8f6b684f0 fix: make FTS5 optional for template search (fixes Claude Desktop compatibility)
- Added runtime FTS5 detection in database adapters
- Removed FTS5 from required schema to prevent "no such module" errors
- FTS5 tables/triggers created conditionally only if supported
- Template search automatically falls back to LIKE when FTS5 unavailable
- Works in ALL SQLite environments (Claude Desktop, restricted envs, etc.)

This ensures search_templates() works correctly regardless of SQLite build,
while still providing optimal performance when FTS5 is available.
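A minimal sketch of the runtime probe and the LIKE fallback (the adapter interface, table, and column names are assumptions; the same check works for both better-sqlite3 and sql.js backed adapters):

```typescript
interface DatabaseAdapter {
  exec(sql: string): void;
  prepare(sql: string): { all(...params: unknown[]): any[] };
}

function supportsFTS5(db: DatabaseAdapter): boolean {
  try {
    // Fails with "no such module: fts5" on SQLite builds compiled without FTS5.
    db.exec('CREATE VIRTUAL TABLE IF NOT EXISTS fts5_probe USING fts5(content)');
    db.exec('DROP TABLE IF EXISTS fts5_probe');
    return true;
  } catch {
    return false;
  }
}

function searchTemplates(db: DatabaseAdapter, query: string) {
  if (supportsFTS5(db)) {
    return db
      .prepare('SELECT templates.* FROM templates JOIN templates_fts ON templates.id = templates_fts.rowid WHERE templates_fts MATCH ?')
      .all(query);
  }
  // Fallback: plain LIKE search when FTS5 is unavailable.
  return db
    .prepare('SELECT * FROM templates WHERE name LIKE ? OR description LIKE ?')
    .all(`%${query}%`, `%${query}%`);
}
```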

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-10 12:13:08 +02:00
czlonkowski
19b9b5ca2d fix: webhook and 4 other nodes incorrectly marked as non-triggers
Fixed an issue where Docker images using the sql.js adapter returned boolean fields
as strings, causing is_trigger=0 to evaluate as true instead of false.

Changes:
- Added convertIntegerColumns() to the sql.js adapter to convert SQLite integer columns back to JavaScript numbers
- Updated server.ts and node-repository.ts to use Number() conversion as a backup
- Added test script to verify fix works with sql.js adapter

This fixes webhook, cron, interval, and emailReadImap nodes showing
isTrigger: false in Docker deployments.
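A sketch of the conversion (the column list is an assumption; the function name comes from the commit message):

```typescript
const INTEGER_COLUMNS = ['is_trigger', 'is_webhook', 'is_ai_tool', 'is_versioned'];

function convertIntegerColumns(row: Record<string, unknown>): Record<string, unknown> {
  const converted: Record<string, unknown> = { ...row };
  for (const column of INTEGER_COLUMNS) {
    if (column in converted && converted[column] !== null) {
      // sql.js can surface SQLite integers as strings, and the string "0" is truthy.
      converted[column] = Number(converted[column]);
    }
  }
  return converted;
}

// Example with a row as sql.js might return it:
const raw = { node_type: 'n8n-nodes-base.noOp', is_trigger: '0' };
const fixed = convertIntegerColumns(raw);
console.log(Boolean(fixed.is_trigger)); // false — the string "0" would have been truthy
```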

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-06 17:01:05 +02:00
czlonkowski
f038f72049 fix: remove non-deterministic CHECK constraint from templates table
- SQLite doesn't allow datetime('now') in CHECK constraints
- Drop tables before recreating to ensure clean schema
- 6-month filtering is already handled in application logic
- Successfully fetched and stored 199 templates
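A sketch of the corrected approach (table and column names are assumptions): SQLite rejects a constraint along the lines of CHECK (created_at >= datetime('now', '-6 months')) because datetime('now') is non-deterministic, so the recency filter lives in application code instead.

```typescript
import Database from 'better-sqlite3';

const db = new Database('./data/nodes.db');
db.exec(`
  DROP TABLE IF EXISTS templates;
  CREATE TABLE templates (
    id            INTEGER PRIMARY KEY,
    workflow_id   INTEGER NOT NULL,
    name          TEXT NOT NULL,
    workflow_json TEXT NOT NULL,
    created_at    TEXT NOT NULL  -- no CHECK constraint; filtered in application logic
  );
`);

// Six-month cut-off applied in application code (assuming ISO-8601 timestamps):
const cutoff = new Date();
cutoff.setMonth(cutoff.getMonth() - 6);
const recent = db
  .prepare('SELECT * FROM templates WHERE created_at >= ?')
  .all(cutoff.toISOString());
```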

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-20 00:08:28 +02:00
czlonkowski
08f9d1ad30 feat: add n8n workflow templates as MCP tools
- Add 4 new MCP tools for workflow templates
- Integrate with n8n.io API to fetch community templates
- Filter templates to last 6 months only
- Store templates in SQLite with full workflow JSON
- Manual fetch system (not part of regular rebuild)
- Support search by nodes, keywords, and task categories
- Add fetch:templates and test:templates npm scripts
- Update to v2.4.1
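A sketch of how one of the four template tools might be declared for MCP (the search_templates name also appears in the FTS5 fallback commit above; the parameter names here are assumptions):

```typescript
const searchTemplatesTool = {
  name: 'search_templates',
  description: 'Search stored n8n community templates by keyword, node type, or task category',
  inputSchema: {
    type: 'object',
    properties: {
      query: { type: 'string', description: 'Keyword to match against template names and descriptions' },
      limit: { type: 'number', description: 'Maximum number of results to return', default: 10 },
    },
    required: ['query'],
  },
} as const;
```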

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-20 00:02:09 +02:00
czlonkowski
a688ad3d14 fix: resolve Docker stdio initialization timeout issue
- Add InitializeRequestSchema handler to MCP server
- Implement stdout flushing for Docker environments
- Create stdio-wrapper for clean JSON-RPC communication
- Update docker-entrypoint.sh to prevent stdout pollution
- Fix logger to check MCP_MODE before level check

These changes ensure the MCP server responds to initialization requests
within Claude Desktop's 60-second timeout when running in Docker.
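A sketch of the explicit initialize handler (server name and version are placeholders, and this assumes the SDK allows overriding its built-in initialize handling):

```typescript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { InitializeRequestSchema } from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'n8n-mcp', version: '0.0.0' },
  { capabilities: { tools: {} } }
);

// Respond to the client's initialize request explicitly so Docker stdio setups
// answer within Claude Desktop's timeout.
server.setRequestHandler(InitializeRequestSchema, async (request) => ({
  protocolVersion: request.params.protocolVersion,
  capabilities: { tools: {} },
  serverInfo: { name: 'n8n-mcp', version: '0.0.0' },
}));
```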
2025-06-17 09:12:01 +02:00
czlonkowski
3ab8fbd60b feat: implement Docker image optimization - reduces size from 2.6GB to ~200MB
- Add optimized database schema with embedded source code storage
- Create optimized rebuild script that extracts source at build time
- Implement optimized MCP server reading from pre-built database
- Add Dockerfile.optimized with multi-stage build process
- Create comprehensive documentation and testing scripts
- Demonstrate 92% size reduction by removing runtime n8n dependencies

The optimization works by:
1. Building complete database at Docker build time
2. Extracting all node source code into the database
3. Creating minimal runtime image without n8n packages
4. Serving everything from pre-built SQLite database

This makes n8n-MCP suitable for resource-constrained production deployments.
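A rough sketch of steps 1–2 above: at Docker build time the full n8n packages are still installed, so each node's compiled source can be read from disk and embedded in the database (table, column names, and paths here are assumptions):

```typescript
import * as fs from 'fs';
import Database from 'better-sqlite3';

const db = new Database('./data/nodes.db');
db.exec(`
  CREATE TABLE IF NOT EXISTS nodes (
    node_type   TEXT PRIMARY KEY,
    source_code TEXT NOT NULL,
    description TEXT
  );
`);

const insert = db.prepare(
  'INSERT OR REPLACE INTO nodes (node_type, source_code, description) VALUES (?, ?, ?)'
);

function embedNode(nodeType: string, sourcePath: string, description: string): void {
  const source = fs.readFileSync(sourcePath, 'utf-8');
  insert.run(nodeType, source, description);
}

// Illustrative path — the real rebuild script walks the installed packages instead.
embedNode(
  'n8n-nodes-base.httpRequest',
  'node_modules/n8n-nodes-base/dist/nodes/HttpRequest/HttpRequest.node.js',
  'Makes HTTP requests'
);
```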

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-14 10:36:54 +02:00
czlonkowski
b476d36275 feat: implement universal Node.js compatibility with automatic database adapter fallback 2025-06-12 23:51:47 +02:00
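A minimal sketch of the adapter fallback described in the commit above (the createDatabaseAdapter name and DatabaseAdapter interface are assumptions):

```typescript
import { readFileSync } from 'fs';

export interface DatabaseAdapter {
  exec(sql: string): void;
}

export async function createDatabaseAdapter(dbPath: string): Promise<DatabaseAdapter> {
  try {
    // Preferred: native better-sqlite3 (fast, but needs a binary built for the
    // current Node.js version and platform).
    const BetterSqlite3 = (await import('better-sqlite3')).default;
    return new BetterSqlite3(dbPath);
  } catch {
    // Fallback: pure-JavaScript sql.js (WASM), which runs on any Node.js version.
    const initSqlJs = (await import('sql.js')).default;
    const SQL = await initSqlJs();
    const db = new SQL.Database(readFileSync(dbPath));
    // A real adapter would wrap sql.js's different API behind the shared interface.
    return db;
  }
}
```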
czlonkowski
66f5d74e42 feat: implement node parser and property extractor with versioned node support 2025-06-12 18:45:20 +02:00
czlonkowski
8bf670c31e feat: Implement n8n-MCP Enhancement Plan v2.1 Final
- Implement simple node loader supporting n8n-nodes-base and langchain packages
- Create parser handling declarative, programmatic, and versioned nodes
- Build documentation mapper with 89% coverage (405/457 nodes)
- Setup SQLite database with minimal schema
- Create rebuild script for one-command database updates
- Implement validation script for critical nodes
- Update MCP server with documentation-focused tools
- Add npm scripts for streamlined workflow

Successfully loads 457/458 nodes with accurate documentation mapping.
Versioned node detection working (46 nodes detected).
3/4 critical nodes pass validation tests.
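A rough sketch of how versioned-node detection might look (the helpers are assumptions, but the nodeVersions/baseDescription properties follow n8n's VersionedNodeType convention):

```typescript
function isVersionedNode(instance: any): boolean {
  // Versioned nodes carry a nodeVersions map and a baseDescription
  // instead of a single static description.
  return instance?.nodeVersions !== undefined && instance?.baseDescription !== undefined;
}

function getLatestVersion(instance: any): number {
  const versions = Object.keys(instance?.nodeVersions ?? {}).map(Number);
  return versions.length > 0 ? Math.max(...versions) : 1;
}
```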

Known limitations:
- Slack operations extraction incomplete for some versioned nodes
- One langchain node fails due to a missing dependency
- No AI tools detected (none have the usableAsTool flag)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-12 14:18:19 +02:00