# N8N-MCP Deep Dive Analysis - October 2, 2025

## Overview
This directory contains a comprehensive deep-dive analysis of n8n-mcp usage data from September 26 to October 2, 2025.
Data Volume Analyzed:
- 212,375 telemetry events
- 5,751 workflow creations
- 2,119 unique users
- 6 days of usage data
## Report Structure

### DEEP_DIVE_ANALYSIS_2025-10-02.md (Main Report)
Sections Covered:
- Executive Summary - Key findings and recommendations
- Tool Performance Analysis - Success rates, performance metrics, critical findings
- Validation Catastrophe - The node type prefix disaster analysis
- Usage Patterns & User Segmentation - User distribution, daily trends
- Tool Sequence Analysis - How AI agents use tools together
- Workflow Creation Patterns - Complexity distribution, popular nodes
- Platform & Version Distribution - OS, architecture, version adoption
- Error Patterns & Root Causes - TypeErrors, validation errors, discovery failures
- P0-P1 Refactoring Recommendations - Detailed implementation guides
Additional sections covered:
- Remaining P1 and P2 recommendations
- Architectural refactoring suggestions
- Telemetry enhancements
- CHANGELOG integration
- Final recommendations summary
## Key Findings Summary

### Critical Issues (P0 - Fix Immediately)
1. **Node Type Prefix Validation Catastrophe**
   - 5,000+ validation errors from a single root cause: `nodes-base.X` vs `n8n-nodes-base.X` confusion
   - Solution: Auto-normalize prefixes (2-4 hours effort)
2. **TypeError in Node Information Tools**
   - 10-18% failure rate in `get_node_essentials`/`info`
   - 1,000+ failures affecting hundreds of users
   - Solution: Complete null-safety audit (1 day effort)
3. **Task Discovery Failures**
   - `get_node_for_task` failing 28% of the time - the worst-performing tool in the entire system
   - Solution: Expand task library + fuzzy matching (3 days effort)
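As an illustration of the P0-R1 fix, prefix auto-normalization can be a small pure function applied before validation. The prefix map and function name below are hypothetical, not the actual n8n-mcp implementation:

```typescript
// Hypothetical sketch of P0-R1: map the short prefixes AI agents tend to emit
// onto the fully qualified node type before validation runs.
const PREFIX_FIXES: Record<string, string> = {
  "nodes-base.": "n8n-nodes-base.", // e.g. "nodes-base.httpRequest"
};

function normalizeNodeType(nodeType: string): string {
  for (const [short, full] of Object.entries(PREFIX_FIXES)) {
    if (nodeType.startsWith(short) && !nodeType.startsWith(full)) {
      return full + nodeType.slice(short.length);
    }
  }
  return nodeType; // already fully qualified, or an unrelated community node
}
```

Normalizing silently (rather than rejecting) is what turns the 5,000+ validation errors into successful calls.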
### Performance Metrics
Excellent Reliability (96-100% success):
- `n8n_update_partial_workflow`: 98.7%
- `search_nodes`: 99.8%
- `n8n_create_workflow`: 96.1%
- All workflow management tools: 100%
User Distribution:
- Power Users (12): 2,112 events/user, 33 workflows
- Heavy Users (47): 673 events/user, 18 workflows
- Regular Users (516): 199 events/user, 7 workflows (CORE AUDIENCE)
- Active Users (919): 52 events/user, 2 workflows
- Casual Users (625): 8 events/user, 1 workflow
### Usage Insights
Most Used Tools:
- `n8n_update_partial_workflow`: 10,177 calls (iterative refinement)
- `search_nodes`: 8,839 calls (node discovery)
- `n8n_create_workflow`: 6,046 calls (workflow creation)
Most Common Tool Sequences:
- update → update → update (549x) - Iterative refinement pattern
- create → update (297x) - Create then refine
- update → get_workflow (265x) - Update then verify
Most Popular Nodes:
- `code` (53% of workflows) - AI agents love programmatic control
- `httpRequest` (47%) - Integration-heavy usage
- `webhook` (32%) - Event-driven automation
## SQL Analytical Views Created
15 comprehensive views were created in Supabase for ongoing analysis:
- `vw_tool_performance` - Performance metrics per tool
- `vw_error_analysis` - Error patterns and frequencies
- `vw_validation_analysis` - Validation failure details
- `vw_tool_sequences` - Tool-to-tool transition patterns
- `vw_workflow_creation_patterns` - Workflow characteristics
- `vw_node_usage_analysis` - Node popularity and complexity
- `vw_node_cooccurrence` - Which nodes are used together
- `vw_user_activity` - Per-user activity metrics
- `vw_session_analysis` - Platform/version distribution
- `vw_workflow_validation_failures` - Workflow validation issues
- `vw_temporal_patterns` - Time-based usage patterns
- `vw_tool_funnel` - User progression through tools
- `vw_search_analysis` - Search behavior
- `vw_tool_success_summary` - Success/failure rates
- `vw_user_journeys` - Complete user session reconstruction
## Priority Recommendations

### Immediate Actions (This Week)
- ✅ P0-R1: Auto-normalize node type prefixes → Eliminate 4,800 errors
- ✅ P0-R2: Complete null-safety audit → Fix 10-18% TypeError failures
- ✅ P0-R3: Expand `get_node_for_task` library → 72% → 95% success rate

Expected Impact: Reduce error rate from 5-10% to <2% overall
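For context on P0-R2, the TypeErrors typically come from dereferencing optional fields on node metadata. A sketch of the defensive pattern, with illustrative types and names rather than the actual n8n-mcp code:

```typescript
// Illustrative shape only - the real node description type is richer.
interface NodeDescriptionSketch {
  displayName?: string;
  properties?: Array<{ name: string; required?: boolean }>;
}

// Optional chaining + fallbacks turn "TypeError: Cannot read properties of
// undefined" into a well-defined empty result.
function requiredPropertyNames(node: NodeDescriptionSketch | null | undefined): string[] {
  return (node?.properties ?? [])
    .filter((p) => p.required === true)
    .map((p) => p.name);
}
```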
### Next Release (2-3 Weeks)
- ✅ P1-R4: Batch workflow operations → Save 30-50% tokens
- ✅ P1-R5: Proactive node suggestions → Reduce search iterations
- ✅ P1-R6: Auto-fix suggestions in errors → Self-service recovery

Expected Impact: 40% faster workflow creation, better UX
### Future Roadmap (1-3 Months)
- ✅ A1: Service layer consolidation → Cleaner architecture
- ✅ A2: Repository caching → 50% faster node operations
- ✅ R10: Workflow template library from usage → 80% coverage
- ✅ T1-T3: Enhanced telemetry → Better observability

Expected Impact: Scalable foundation for 10x growth
## Methodology

### Data Sources
1. **Supabase Telemetry Database**
   - `telemetry_events` table: 212,375 rows
   - `telemetry_workflows` table: 5,751 rows
2. **Analytical Views**
   - Created 15 SQL views for multi-dimensional analysis
   - Enabled complex queries and pattern recognition
3. **CHANGELOG Review**
   - Analyzed recent changes (v2.14.0 - v2.14.6)
   - Correlated fixes with error patterns
### Analysis Approach
1. **Quantitative Analysis**
   - Success/failure rates per tool
   - Performance metrics (avg, median, p95, p99)
   - User segmentation and cohort analysis
   - Temporal trends and growth patterns
2. **Pattern Recognition**
   - Tool sequence analysis (Markov chains)
   - Node co-occurrence patterns
   - Workflow complexity distribution
   - Error clustering and root cause analysis
3. **Qualitative Insights**
   - CHANGELOG integration
   - Error message analysis
   - User journey reconstruction
   - Best practice identification
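The tool-sequence analysis above can be sketched as first-order Markov transition counting over each session's ordered tool calls. This is a simplified stand-in for the SQL view logic, not the actual implementation:

```typescript
// Count adjacent (from -> to) tool-call pairs in one session's event stream.
// Aggregating these counts across sessions yields the transition table that
// surfaces patterns like the "update -> update -> update" refinement loop.
function countTransitions(toolCalls: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (let i = 0; i + 1 < toolCalls.length; i++) {
    const key = `${toolCalls[i]} -> ${toolCalls[i + 1]}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```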
## How to Use This Analysis

### For Development Priorities
- Review P0 Critical Recommendations (Section 8)
- Check estimated effort and impact
- Prioritize based on ROI (impact/effort ratio)
- Follow implementation guides with code examples
### For Architecture Decisions
- Review Architectural Recommendations (Section 9)
- Consider service layer consolidation
- Evaluate repository caching opportunities
- Plan for 10x scale
### For Product Strategy
- Review Usage Patterns (Section 3 & 5)
- Understand user segments (power vs casual)
- Identify high-value features (most-used tools)
- Focus on reliability over features (96% success rate target)
### For Telemetry Enhancement
- Review Telemetry Enhancements (Section 10)
- Add fine-grained timing metrics
- Track workflow creation funnels
- Monitor node-level analytics
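Fine-grained timing metrics of the kind suggested above could be collected with a small wrapper around each tool handler. The names and event shape here are hypothetical (and the real handlers would be async):

```typescript
// Hypothetical telemetry event emitted once per tool call.
interface TimingEvent {
  tool: string;
  durationMs: number;
  ok: boolean;
}

// Wrap a tool handler so every call reports its duration and outcome
// to a telemetry sink, whether it succeeds or throws.
function withTiming<R>(tool: string, fn: () => R, sink: (e: TimingEvent) => void): R {
  const start = Date.now();
  try {
    const result = fn();
    sink({ tool, durationMs: Date.now() - start, ok: true });
    return result;
  } catch (err) {
    sink({ tool, durationMs: Date.now() - start, ok: false });
    throw err;
  }
}
```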
## Contact & Feedback
For questions about this analysis or to request additional insights:
- Data Analyst: Claude Code with Supabase MCP
- Analysis Date: October 2, 2025
- Data Period: September 26 - October 2, 2025
## Change Log
- 2025-10-02: Initial comprehensive analysis completed
  - 15 SQL analytical views created
  - 13 sections of detailed findings
  - P0/P1/P2 recommendations with implementation guides
  - Code examples and effort estimates provided
## Next Steps
- ✅ Review findings with development team
- ✅ Prioritize P0 recommendations for immediate implementation
- ✅ Plan P1 features for next release cycle
- ✅ Set up monitoring for key metrics
- ✅ Schedule follow-up analysis (weekly recommended)
*This analysis represents a snapshot of n8n-mcp usage during the early adoption phase. Patterns may evolve as the user base grows and matures.*