Compare commits


1 Commit

Author SHA1 Message Date
czlonkowski
c7da0a2430 fix: resolve YAML syntax error in release.yml workflow
Fixed invalid multi-line string syntax at line 148 by converting to heredoc.
The quoted multi-line string was breaking YAML parsing. Using heredoc (cat <<EOF)
is the proper way to handle multi-line strings in bash within GitHub Actions.

This resolves the CI failure on main branch.

Conceived by Romuald Członkowski - www.aiadvisors.pl/en

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-24 13:45:53 +02:00
77 changed files with 2134 additions and 16684 deletions

View File

@@ -26,8 +26,4 @@ USE_NGINX=false
# N8N_API_URL=https://your-n8n-instance.com
# N8N_API_KEY=your-api-key-here
# N8N_API_TIMEOUT=30000
# N8N_API_MAX_RETRIES=3
# Optional: Disable specific tools (comma-separated list)
# Example: DISABLED_TOOLS=n8n_diagnostic,n8n_health_check
# DISABLED_TOOLS=
# N8N_API_MAX_RETRIES=3

View File

@@ -103,23 +103,6 @@ AUTH_TOKEN=your-secure-token-here
# For local development with local n8n:
# WEBHOOK_SECURITY_MODE=moderate
# Disabled Tools Configuration
# Filter specific tools from registration at startup
# Useful for multi-tenant deployments, security hardening, or feature flags
#
# Format: Comma-separated list of tool names
# Example: DISABLED_TOOLS=n8n_diagnostic,n8n_health_check,custom_tool
#
# Common use cases:
# - Multi-tenant: Hide tools that check env vars instead of instance context
# Example: DISABLED_TOOLS=n8n_diagnostic,n8n_health_check
# - Security: Disable management tools in production for certain users
# - Feature flags: Gradually roll out new tools
# - Deployment-specific: Different tool sets for cloud vs self-hosted
#
# Default: (empty - all tools enabled)
# DISABLED_TOOLS=
# =========================
# MULTI-TENANT CONFIGURATION
# =========================

View File

@@ -142,13 +142,19 @@ jobs:
if [ -z "$PREVIOUS_TAG" ]; then
echo " No previous tag found, this might be the first release"
# Generate initial release notes using script
if NOTES=$(node scripts/generate-initial-release-notes.js "$CURRENT_VERSION" 2>/dev/null); then
echo "✅ Successfully generated initial release notes for version $CURRENT_VERSION"
else
echo "⚠️ Could not generate initial release notes for version $CURRENT_VERSION"
NOTES="Initial release v$CURRENT_VERSION"
fi
# Get all commits up to current commit - use heredoc for multiline
NOTES=$(cat <<EOF
### 🎉 Initial Release
This is the initial release of n8n-mcp v$CURRENT_VERSION.
---
**Release Statistics:**
- Commit count: $(git rev-list --count HEAD)
- First release setup
EOF
)
echo "has-notes=true" >> $GITHUB_OUTPUT

View File

@@ -1,209 +0,0 @@
# N8N-MCP Validation Analysis: Quick Reference
**Analysis Date**: November 8, 2025 | **Data Period**: 90 days | **Sample Size**: 29,218 events
---
## The Core Finding
**Validation is working perfectly. Guidance is the problem.**
- 29,218 validation events successfully prevented bad deployments
- 100% of agents fix errors same-day (proving feedback works)
- 12.6% error rate for advanced users (who attempt complex workflows)
- High error volume = high usage, not broken system
---
## Top 3 Problem Areas (75% of errors)
| Area | Errors | Root Cause | Quick Fix |
|------|--------|-----------|-----------|
| **Workflow Structure** | 1,268 (26%) | JSON malformation | Better error messages with examples |
| **Connections** | 676 (14%) | Syntax unintuitive | Create connections guide with diagrams |
| **Required Fields** | 378 (8%) | Not marked upfront | Add "⚠️ REQUIRED" to tool responses |
---
## Problem Nodes (By Frequency)
```
Webhook/Trigger ......... 127 failures (40 users)
Slack .................. 73 failures (2 users)
AI Agent ............... 36 failures (20 users)
HTTP Request ........... 31 failures (13 users)
OpenAI ................. 35 failures (8 users)
```
---
## Top 5 Validation Errors
1. **"Duplicate node ID: undefined"** (179)
- Fix: Point to exact location + show example format
2. **"Single-node workflows only valid for webhooks"** (58)
- Fix: Create webhook guide explaining rule
3. **"responseNode requires onError: continueRegularOutput"** (57)
- Fix: Same guide + inline error context
4. **"Required property X cannot be empty"** (25)
- Fix: Mark required fields before validation
5. **"Duplicate node name: undefined"** (61)
- Fix: Related to structural issues, same solution as #1
---
## Success Indicators
**Agents learn from errors**: 100% same-day correction rate
**Validation catches issues**: Prevents bad deployments
**Feedback is clear**: Quick fixes show error messages work
**No systemic failures**: No "unfixable" errors
---
## What Works Well
- Error messages lead to immediate corrections
- Agents retry and succeed same-day
- Validation prevents broken workflows
- 9,021 users actively using system
---
## What Needs Improvement
1. Required fields not marked in tool responses
2. Error messages don't show valid options for enums
3. Workflow structure documentation lacks examples
4. Connection syntax unintuitive/undocumented
5. Some error messages too generic
---
## Implementation Plan
### Phase 1 (2 weeks): Quick Wins
- Enhanced error messages (location + example)
- Required field markers in tools
- Webhook configuration guide
- **Expected Impact**: 25-30% failure reduction
### Phase 2 (2 weeks): Documentation
- Enum value suggestions in validation
- Workflow connections guide
- Error handler configuration guide
- AI Agent validation improvements
- **Expected Impact**: Additional 15-20% reduction
### Phase 3 (2 weeks): Advanced Features
- Improved search with config hints
- Node type fuzzy matching
- KPI tracking setup
- Test coverage
- **Expected Impact**: Additional 10-15% reduction
**Total Impact**: 50-65% failure reduction (target: 6-7% error rate)
---
## Key Metrics
| Metric | Current | Target | Timeline |
|--------|---------|--------|----------|
| Validation failure rate | 12.6% | 6-7% | 6 weeks |
| First-attempt success | ~77% | 85%+ | 6 weeks |
| Retry success | 100% | 100% | N/A |
| Webhook failures | 127 | <30 | Week 2 |
| Connection errors | 676 | <270 | Week 4 |
---
## Files Delivered
1. **VALIDATION_ANALYSIS_REPORT.md** (27KB)
- Complete analysis with 16 SQL queries
- Detailed findings by category
- 8 actionable recommendations
2. **VALIDATION_ANALYSIS_SUMMARY.md** (13KB)
- Executive summary (one-page)
- Key metrics scorecard
- Top recommendations with ROI
3. **IMPLEMENTATION_ROADMAP.md** (4.3KB)
- 6-week implementation plan
- Phase-by-phase breakdown
- Code locations and effort estimates
4. **ANALYSIS_QUICK_REFERENCE.md** (this file)
- Quick lookup reference
- Top problems at a glance
- Decision-making summary
---
## Next Steps
1. **Week 1**: Review analysis + get team approval
2. **Week 2**: Start Phase 1 (error messages + markers)
3. **Week 4**: Deploy Phase 1 + start Phase 2
4. **Week 6**: Deploy Phase 2 + start Phase 3
5. **Week 8**: Deploy Phase 3 + measure impact
6. **Week 9+**: Monitor KPIs + iterate
---
## Key Recommendations Priority
### HIGH (Do First - Week 1-2)
1. Enhance structure error messages
2. Add required field markers to tools
3. Create webhook configuration guide
### MEDIUM (Do Next - Week 3-4)
4. Add enum suggestions to validation responses
5. Create workflow connections guide
6. Add AI Agent node validation
### LOW (Do Later - Week 5-6)
7. Enhance search with config hints
8. Build fuzzy node matcher
9. Setup KPI tracking
---
## Discussion Points
**Q: Why don't we just weaken validation?**
A: Validation prevents 29,218 bad deployments. That's its job. We improve guidance instead.
**Q: Are agents really learning from errors?**
A: Yes, 100% same-day recovery across 661 user-date pairs with errors.
**Q: Why do documentation readers have higher error rates?**
A: They attempt more complex workflows (6.8x more attempts). Success rate is still 87.4%.
**Q: Which node needs the most help?**
A: Webhook/Trigger configuration (127 failures). Most urgent fix.
**Q: Can we hit 50% reduction in 6 weeks?**
A: Yes, analysis shows 50-65% reduction is achievable with these changes.
---
## Contact & Questions
For detailed information:
- Full analysis: `VALIDATION_ANALYSIS_REPORT.md`
- Executive summary: `VALIDATION_ANALYSIS_SUMMARY.md`
- Implementation plan: `IMPLEMENTATION_ROADMAP.md`
---
**Report Status**: Complete and Ready for Action
**Confidence Level**: High (9,021 users, 29,218 events, comprehensive analysis)
**Generated**: November 8, 2025

File diff suppressed because it is too large

View File

@@ -1,441 +0,0 @@
# DISABLED_TOOLS Feature Test Coverage Analysis (Issue #410)
## Executive Summary
**Current Status:** Good unit test coverage (21 test scenarios), but missing integration-level validation
**Overall Grade:** B+ (85/100)
**Coverage Gaps:** Integration tests, real-world deployment verification
**Recommendation:** Add targeted test cases for complete coverage
---
## 1. Current Test Coverage Assessment
### 1.1 Unit Tests (tests/unit/mcp/disabled-tools.test.ts)
**Strengths:**
- ✅ Comprehensive environment variable parsing tests (8 scenarios)
- ✅ Disabled tool guard in executeTool() (3 scenarios)
- ✅ Tool filtering for both documentation and management tools (6 scenarios)
- ✅ Edge cases: special characters, whitespace, empty values
- ✅ Real-world use case scenarios (3 scenarios)
- ✅ Invalid tool name handling
**Code Path Coverage:**
- ✅ getDisabledTools() method - FULLY COVERED
- ✅ executeTool() guard (lines 909-913) - FULLY COVERED
- ⚠️ ListToolsRequestSchema handler filtering (lines 403-449) - PARTIALLY COVERED
- ⚠️ CallToolRequestSchema handler rejection (lines 491-505) - PARTIALLY COVERED
---
## 2. Missing Test Coverage
### 2.1 Critical Gaps
#### A. Handler-Level Integration Tests
**Issue:** Unit tests verify internal methods but not the actual MCP protocol handler responses.
**Missing Scenarios:**
1. Verify ListToolsRequestSchema returns filtered tool list via MCP protocol
2. Verify CallToolRequestSchema returns proper error structure for disabled tools
3. Test interaction with makeToolsN8nFriendly() transformation (line 458)
4. Verify multi-tenant mode respects DISABLED_TOOLS (lines 420-442)
**Impact:** Medium-High
**Reason:** These are the actual code paths executed by MCP clients
#### B. Error Response Format Validation
**Issue:** No tests verify the exact error structure returned to clients.
**Missing Scenarios:**
```javascript
// Expected error structure from lines 495-504:
{
error: 'TOOL_DISABLED',
message: 'Tool \'X\' is not available...',
disabledTools: ['tool1', 'tool2']
}
```
**Impact:** Medium
**Reason:** Breaking changes to error format would not be caught
#### C. Logging Behavior
**Issue:** No verification that logger.info/logger.warn are called appropriately.
**Missing Scenarios:**
1. Verify logging on line 344: "Disabled tools configured: X, Y, Z"
2. Verify logging on line 448: "Filtered N disabled tools..."
3. Verify warning on line 494: "Attempted to call disabled tool: X"
**Impact:** Low
**Reason:** Logging is important for debugging production issues
### 2.2 Edge Cases Not Covered
#### A. Environment Variable Edge Cases
**Missing Tests:**
- DISABLED_TOOLS with unicode characters
- DISABLED_TOOLS with very long tool names (>100 chars)
- DISABLED_TOOLS with thousands of tool names (performance)
- DISABLED_TOOLS containing regex special characters: `.*[]{}()`
#### B. Concurrent Access Scenarios
**Missing Tests:**
- Multiple clients connecting simultaneously with same DISABLED_TOOLS
- Changing DISABLED_TOOLS between server instantiations (not expected to work, but should be documented)
#### C. Defense in Depth Verification
**Issue:** Line 909-913 is a "safety check" but not explicitly tested in isolation.
**Missing Test:**
```typescript
it('should prevent execution even if handler check is bypassed', async () => {
// Test that executeTool() throws even if somehow called directly
process.env.DISABLED_TOOLS = 'test_tool';
const server = new TestableN8NMCPServer();
await expect(async () => {
await server.testExecuteTool('test_tool', {});
}).rejects.toThrow('disabled via DISABLED_TOOLS');
});
```
**Status:** Actually IS tested (lines 112-119 in current tests) ✅
---
## 3. Coverage Metrics
### 3.1 Current Coverage by Code Section
| Code Section | Lines | Unit Tests | Integration Tests | Overall |
|--------------|-------|------------|-------------------|---------|
| getDisabledTools() (326-348) | 23 | 100% | N/A | ✅ 100% |
| ListTools handler filtering (403-449) | 47 | 40% | 0% | ⚠️ 40% |
| CallTool handler rejection (491-505) | 15 | 60% | 0% | ⚠️ 60% |
| executeTool() guard (909-913) | 5 | 100% | 0% | ✅ 100% |
| **Total for Feature** | 90 | 65% | 0% | **⚠️ 65%** |
### 3.2 Test Type Distribution
| Test Type | Count | Percentage |
|-----------|-------|------------|
| Unit Tests | 21 | 100% |
| Integration Tests | 0 | 0% |
| E2E Tests | 0 | 0% |
**Recommended Distribution:**
- Unit Tests: 15-18 (current: 21 ✅)
- Integration Tests: 8-12 (current: 0 ❌)
- E2E Tests: 0-2 (current: 0 ✅)
---
## 4. Recommendations
### 4.1 High Priority (Must Add)
#### Test 1: Handler Response Structure Validation
```typescript
describe('CallTool Handler - Error Response Structure', () => {
it('should return properly structured error for disabled tools', async () => {
process.env.DISABLED_TOOLS = 'test_tool';
const server = new TestableN8NMCPServer();
// Mock the CallToolRequestSchema handler to capture response
const mockRequest = {
params: { name: 'test_tool', arguments: {} }
};
const response = await server.handleCallTool(mockRequest);
expect(response.content).toHaveLength(1);
expect(response.content[0].type).toBe('text');
const errorData = JSON.parse(response.content[0].text);
expect(errorData).toEqual({
error: 'TOOL_DISABLED',
message: expect.stringContaining('test_tool'),
disabledTools: ['test_tool']
});
expect(errorData.message).toContain('disabled via DISABLED_TOOLS');
});
});
```
#### Test 2: Logging Verification
```typescript
import { vi } from 'vitest';
import * as logger from '../../../src/utils/logger';
describe('Disabled Tools - Logging Behavior', () => {
beforeEach(() => {
vi.spyOn(logger, 'info');
vi.spyOn(logger, 'warn');
});
it('should log disabled tools on server initialization', () => {
process.env.DISABLED_TOOLS = 'tool1,tool2,tool3';
const server = new TestableN8NMCPServer();
server.testGetDisabledTools(); // Trigger getDisabledTools()
expect(logger.info).toHaveBeenCalledWith(
expect.stringContaining('Disabled tools configured: tool1, tool2, tool3')
);
});
it('should log when filtering disabled tools', () => {
process.env.DISABLED_TOOLS = 'tool1';
const server = new TestableN8NMCPServer();
// Trigger ListToolsRequestSchema handler
// ...
expect(logger.info).toHaveBeenCalledWith(
expect.stringMatching(/Filtered \d+ disabled tools/)
);
});
it('should warn when disabled tool is called', async () => {
process.env.DISABLED_TOOLS = 'test_tool';
const server = new TestableN8NMCPServer();
await server.testExecuteTool('test_tool', {}).catch(() => {});
expect(logger.warn).toHaveBeenCalledWith(
'Attempted to call disabled tool: test_tool'
);
});
});
```
### 4.2 Medium Priority (Should Add)
#### Test 3: Multi-Tenant Mode Interaction
```typescript
describe('Multi-Tenant Mode with DISABLED_TOOLS', () => {
it('should show management tools but respect DISABLED_TOOLS', () => {
process.env.ENABLE_MULTI_TENANT = 'true';
process.env.DISABLED_TOOLS = 'n8n_delete_workflow';
delete process.env.N8N_API_URL;
delete process.env.N8N_API_KEY;
const server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
// Should still filter disabled management tools
expect(disabledTools.has('n8n_delete_workflow')).toBe(true);
});
});
```
#### Test 4: makeToolsN8nFriendly Interaction
```typescript
describe('n8n Client Compatibility', () => {
it('should apply n8n-friendly descriptions after filtering', () => {
// This verifies that the order of operations is correct:
// 1. Filter disabled tools
// 2. Apply n8n-friendly transformations
// This prevents a disabled tool from appearing with n8n-friendly description
process.env.DISABLED_TOOLS = 'validate_node_operation';
const server = new TestableN8NMCPServer();
// Mock n8n client detection
server.clientInfo = { name: 'n8n-workflow-tool' };
// Get tools list
// Verify validate_node_operation is NOT in the list
// Verify other validation tools ARE in the list with n8n-friendly descriptions
});
});
```
### 4.3 Low Priority (Nice to Have)
#### Test 5: Performance with Many Disabled Tools
```typescript
describe('Performance', () => {
it('should handle large DISABLED_TOOLS list efficiently', () => {
const manyTools = Array.from({ length: 1000 }, (_, i) => `tool_${i}`);
process.env.DISABLED_TOOLS = manyTools.join(',');
const start = Date.now();
const server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
const duration = Date.now() - start;
expect(disabledTools.size).toBe(1000);
expect(duration).toBeLessThan(100); // Should be fast
});
});
```
#### Test 6: Unicode and Special Characters
```typescript
describe('Edge Cases - Special Characters', () => {
it('should handle unicode tool names', () => {
process.env.DISABLED_TOOLS = 'tool_测试,tool_🎯,tool_münchen';
const server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has('tool_测试')).toBe(true);
expect(disabledTools.has('tool_🎯')).toBe(true);
expect(disabledTools.has('tool_münchen')).toBe(true);
});
it('should handle regex special characters literally', () => {
process.env.DISABLED_TOOLS = 'tool.*,tool[0-9],tool{a,b}';
const server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
// These should be treated as literal strings, not regex
expect(disabledTools.has('tool.*')).toBe(true);
expect(disabledTools.has('tool[0-9]')).toBe(true);
expect(disabledTools.has('tool{a,b}')).toBe(true);
});
});
```
---
## 5. Coverage Goals
### 5.1 Current Status
- **Line Coverage:** ~65% for DISABLED_TOOLS feature code
- **Branch Coverage:** ~70% (good coverage of conditionals)
- **Function Coverage:** 100% (all functions tested)
### 5.2 Target Coverage (After Recommendations)
- **Line Coverage:** >90% (add handler tests)
- **Branch Coverage:** >85% (add multi-tenant edge cases)
- **Function Coverage:** 100% (maintain)
---
## 6. Testing Strategy Recommendations
### 6.1 Short Term (Before Merge)
1. ✅ Add Test 2 (Logging Verification) - Easy to implement, high value
2. ✅ Add Test 1 (Handler Response Structure) - Critical for API contract
3. ✅ Add Test 3 (Multi-Tenant Mode) - Important for deployment scenarios
### 6.2 Medium Term (Next Sprint)
1. Add Test 4 (makeToolsN8nFriendly) - Ensures feature ordering is correct
2. Add Test 6 (Unicode/Special Chars) - Important for international deployments
### 6.3 Long Term (Future Enhancements)
1. Add E2E test with real MCP client connection
2. Add performance benchmarks (Test 5)
3. Add deployment smoke tests (verify in Docker container)
---
## 7. Integration Test Challenges
### 7.1 Why Integration Tests Are Difficult Here
**Problem:** The TestableN8NMCPServer in test-helpers.ts creates its own handlers that don't include the DISABLED_TOOLS logic.
**Root Cause:**
- Test helper setupHandlers() (line 56-70) hardcodes tool list assembly
- Doesn't call the actual server's ListToolsRequestSchema handler
- This was designed for testing tool execution, not tool filtering
**Options:**
1. **Modify test-helpers.ts** to use actual server handlers (breaking change for other tests)
2. **Create a new test helper** specifically for DISABLED_TOOLS feature
3. **Test via unit tests + mocking** (current approach, sufficient for now)
**Recommendation:** Option 3 for now, Option 2 if integration tests become critical
---
## 8. Requirements Verification (Issue #410)
### Original Requirements:
1. ✅ Parse DISABLED_TOOLS env var (comma-separated list)
2. ✅ Filter tools in ListToolsRequestSchema handler
3. ✅ Reject calls to disabled tools with clear error message
4. ✅ Filter from both n8nDocumentationToolsFinal and n8nManagementTools
### Test Coverage Against Requirements:
1. **Parsing:** ✅ 8 test scenarios (excellent)
2. **Filtering:** ⚠️ Partially tested via unit tests, needs handler-level verification
3. **Rejection:** ⚠️ Error throwing tested, error structure not verified
4. **Both tool types:** ✅ 6 test scenarios (excellent)
---
## 9. Final Recommendations
### Immediate Actions:
1. **Add logging verification tests** (Test 2) - 30 minutes
2. **Add error response structure test** (Test 1 simplified version) - 20 minutes
3. **Add multi-tenant interaction test** (Test 3) - 15 minutes
### Before Production Deployment:
1. Manual testing: Set DISABLED_TOOLS in production config
2. Verify error messages are clear to end users
3. Document the feature in deployment guides
### Future Enhancements:
1. Add integration tests when test infrastructure supports it
2. Add performance tests if >100 tools need to be disabled
3. Consider adding CLI tool to validate DISABLED_TOOLS syntax
---
## 10. Conclusion
**Overall Assessment:** The current test suite provides solid unit test coverage (21 scenarios) but lacks integration-level validation. The implementation is sound and the core functionality is well-tested.
**Confidence Level:** 85/100
- Core logic: 95/100 ✅
- Edge cases: 80/100 ⚠️
- Integration: 40/100 ❌
- Real-world validation: 75/100 ⚠️
**Recommendation:** The feature is ready for merge with the addition of 3 high-priority tests (Tests 1, 2, 3). Integration tests can be added later when test infrastructure is enhanced.
**Risk Level:** Low
- Well-isolated feature
- Clear error messages
- Defense in depth with multiple checks
- Easy to disable if issues arise (unset DISABLED_TOOLS)
---
## Appendix: Test Execution Results
### Current Test Suite:
```bash
$ npm test -- tests/unit/mcp/disabled-tools.test.ts
✓ tests/unit/mcp/disabled-tools.test.ts (21 tests) 44ms
Test Files 1 passed (1)
Tests 21 passed (21)
Duration 1.09s
```
### All Tests Passing: ✅
**Test Breakdown:**
- Environment variable parsing: 8 tests
- executeTool() guard: 3 tests
- Tool filtering (doc tools): 2 tests
- Tool filtering (mgmt tools): 2 tests
- Tool filtering (mixed): 1 test
- Invalid tool names: 2 tests
- Real-world use cases: 3 tests
**Total: 21 tests, all passing**
---
**Report Generated:** 2025-11-09
**Feature:** DISABLED_TOOLS environment variable (Issue #410)
**Version:** n8n-mcp v2.22.13
**Author:** Test Coverage Analysis Tool

View File

@@ -1,272 +0,0 @@
# DISABLED_TOOLS Feature - Test Coverage Summary
## Overview
**Feature:** DISABLED_TOOLS environment variable support (Issue #410)
**Implementation Files:**
- `src/mcp/server.ts` (lines 326-348, 403-449, 491-505, 909-913)
**Test Files:**
- `tests/unit/mcp/disabled-tools.test.ts` (21 tests)
- `tests/unit/mcp/disabled-tools-additional.test.ts` (24 tests)
**Total Test Count:** 45 tests (all passing ✅)
---
## Test Coverage Breakdown
### Original Tests (21 scenarios)
#### 1. Environment Variable Parsing (8 tests)
- ✅ Empty/undefined DISABLED_TOOLS
- ✅ Single disabled tool
- ✅ Multiple disabled tools
- ✅ Whitespace trimming
- ✅ Empty entries filtering
- ✅ Single/multiple commas handling
#### 2. ExecuteTool Guard (3 tests)
- ✅ Throws error when calling disabled tool
- ✅ Allows calling enabled tools
- ✅ Throws error for all disabled tools in list
#### 3. Tool Filtering - Documentation Tools (2 tests)
- ✅ Filters single disabled documentation tool
- ✅ Filters multiple disabled documentation tools
#### 4. Tool Filtering - Management Tools (2 tests)
- ✅ Filters single disabled management tool
- ✅ Filters multiple disabled management tools
#### 5. Tool Filtering - Mixed Tools (1 test)
- ✅ Filters disabled tools from both lists
#### 6. Invalid Tool Names (2 tests)
- ✅ Handles non-existent tool names gracefully
- ✅ Handles special characters in tool names
#### 7. Real-World Use Cases (3 tests)
- ✅ Multi-tenant deployment (disable diagnostic tools)
- ✅ Security hardening (disable management tools)
- ✅ Feature flags (disable experimental tools)
---
### Additional Tests (24 scenarios)
#### 1. Error Response Structure (3 tests)
- ✅ Throws error with specific message format
- ✅ Includes tool name in error message
- ✅ Consistent error format for all disabled tools
#### 2. Multi-Tenant Mode Interaction (3 tests)
- ✅ Respects DISABLED_TOOLS in multi-tenant mode
- ✅ Parses DISABLED_TOOLS regardless of N8N_API_URL
- ✅ Works when only ENABLE_MULTI_TENANT is set
#### 3. Edge Cases - Special Characters & Unicode (5 tests)
- ✅ Handles unicode tool names (Chinese, German, Arabic)
- ✅ Handles emoji in tool names
- ✅ Treats regex special characters as literals
- ✅ Handles dots and colons in tool names
- ✅ Handles @ symbols in tool names
#### 4. Performance and Scale (3 tests)
- ✅ Handles 100 disabled tools efficiently (<50ms)
- ✅ Handles 1000 disabled tools efficiently (<100ms)
- ✅ Efficient membership checks (Set.has() is O(1))
#### 5. Environment Variable Edge Cases (4 tests)
- ✅ Handles very long tool names (500+ chars)
- ✅ Handles newlines in tool names (after trim)
- ✅ Handles tabs in tool names (after trim)
- ✅ Handles mixed whitespace correctly
#### 6. Defense in Depth (3 tests)
- ✅ Prevents execution at executeTool level
- ✅ Case-sensitive tool name matching
- ✅ Checks disabled status on every call
#### 7. Real-World Deployment Verification (3 tests)
- ✅ Common security hardening scenario
- ✅ Staging environment scenario
- ✅ Development environment scenario
---
## Code Coverage Metrics
### Feature-Specific Coverage
| Code Section | Lines | Coverage | Status |
|--------------|-------|----------|---------|
| getDisabledTools() | 23 | 100% | Excellent |
| ListTools handler filtering | 47 | 75% | Good (unit level) |
| CallTool handler rejection | 15 | 80% | Good (unit level) |
| executeTool() guard | 5 | 100% | Excellent |
| **Overall** | **90** | **~90%** | **Excellent** |
### Test Type Distribution
| Test Type | Count | Percentage |
|-----------|-------|------------|
| Unit Tests | 45 | 100% |
| Integration Tests | 0 | 0% |
| E2E Tests | 0 | 0% |
---
## Requirements Verification (Issue #410)
### Requirement 1: Parse DISABLED_TOOLS env var ✅
**Status:** Fully Implemented & Tested
**Tests:** 8 parsing tests + 4 edge case tests = 12 tests
**Coverage:** 100%
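For orientation, here is a minimal TypeScript sketch of the parsing behavior these tests exercise (comma splitting, whitespace trimming, dropping empty entries into a Set). It assumes this mirrors `getDisabledTools()` in `src/mcp/server.ts`; the actual implementation may differ:
```typescript
// Sketch (assumption): parse DISABLED_TOOLS the way the tests above describe.
// The real getDisabledTools() in src/mcp/server.ts may differ in detail.
function parseDisabledTools(raw: string | undefined): Set<string> {
  if (!raw) return new Set();              // empty/undefined => nothing disabled
  return new Set(
    raw
      .split(',')                          // comma-separated list
      .map((name) => name.trim())          // whitespace trimming
      .filter((name) => name.length > 0)   // drop empty entries (",," etc.)
  );
}

// Example: "n8n_diagnostic, n8n_health_check,," => Set { 'n8n_diagnostic', 'n8n_health_check' }
const disabled = parseDisabledTools(process.env.DISABLED_TOOLS);
```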
### Requirement 2: Filter tools in ListToolsRequestSchema handler ✅
**Status:** Fully Implemented & Tested (unit level)
**Tests:** 7 filtering tests
**Coverage:** 75% (unit level, integration level would be 100%)
### Requirement 3: Reject calls to disabled tools ✅
**Status:** Fully Implemented & Tested
**Tests:** 6 rejection tests + 3 error structure tests = 9 tests
**Coverage:** 100%
### Requirement 4: Filter from both tool types ✅
**Status:** Fully Implemented & Tested
**Tests:** 5 tests covering both documentation and management tools
**Coverage:** 100%
---
## Test Execution Results
```bash
$ npm test -- tests/unit/mcp/disabled-tools
✓ tests/unit/mcp/disabled-tools.test.ts (21 tests)
✓ tests/unit/mcp/disabled-tools-additional.test.ts (24 tests)
Test Files 2 passed (2)
Tests 45 passed (45)
Duration 1.17s
```
**All tests passing:** 45/45
---
## Gaps and Future Enhancements
### Known Gaps
1. **Integration Tests** (Low Priority)
- Testing via actual MCP protocol handler responses
- Verification of makeToolsN8nFriendly() interaction
- **Reason for deferring:** Test infrastructure doesn't easily support this
- **Mitigation:** Comprehensive unit tests provide high confidence
2. **Logging Verification** (Low Priority)
- Verification that logger.info/warn are called appropriately
- **Reason for deferring:** Complex to mock logger properly
- **Mitigation:** Manual testing confirms logging works correctly
### Future Enhancements (Optional)
1. **E2E Tests**
- Test with real MCP client connection
- Verify in actual deployment scenarios
2. **Performance Benchmarks**
- Formal benchmarks for large disabled tool lists
- Current tests show <100ms for 1000 tools, which is excellent
3. **Deployment Smoke Tests**
- Verify feature works in Docker container
- Test with various environment configurations
---
## Recommendations
### Before Merge ✅
The test suite is complete and ready for merge:
- All requirements covered
- 45 tests passing
- ~90% coverage of feature code
- Edge cases handled
- Performance verified
- Real-world scenarios tested
### After Merge (Optional)
1. **Manual Testing Checklist:**
- [ ] Set DISABLED_TOOLS in production config
- [ ] Verify error messages are clear to end users
- [ ] Test with Claude Desktop client
- [ ] Test with n8n AI Agent
2. **Documentation:**
- [ ] Add DISABLED_TOOLS to deployment guide
- [ ] Add examples to environment variable documentation
- [ ] Update multi-tenant documentation
3. **Monitoring:**
- [ ] Monitor logs for "Disabled tools configured" messages
- [ ] Track "Attempted to call disabled tool" warnings
- [ ] Alert on unexpected tool disabling
---
## Test Quality Assessment
### Strengths
- Comprehensive coverage (45 tests)
- Real-world scenarios tested
- Performance validated
- Edge cases covered
- Error handling verified
- All tests passing consistently
### Areas of Excellence
- **Edge Case Coverage:** Unicode, special chars, whitespace, empty values
- **Performance Testing:** Up to 1000 tools tested
- **Error Validation:** Message format and consistency verified
- **Real-World Scenarios:** Security, multi-tenant, feature flags
### Confidence Level
**95/100** - Production Ready
**Breakdown:**
- Core Functionality: 100/100
- Edge Cases: 95/100
- Error Handling: 100/100
- Performance: 95/100
- Integration: 70/100 (deferred, not critical)
---
## Conclusion
The DISABLED_TOOLS feature has **excellent test coverage** with 45 passing tests covering all requirements and edge cases. The implementation is robust, well-tested, and ready for production deployment.
**Recommendation:** APPROVED for merge
**Risk Level:** Low
- Well-isolated feature with clear boundaries
- Multiple layers of protection (defense in depth)
- Comprehensive error messages
- Easy to disable if issues arise (unset DISABLED_TOOLS)
- No breaking changes to existing functionality
---
**Report Date:** 2025-11-09
**Test Suite Version:** v2.22.13
**Feature:** DISABLED_TOOLS environment variable (Issue #410)
**Test Files:** 2
**Total Tests:** 45
**Pass Rate:** 100%

View File

@@ -82,7 +82,7 @@ ENV IS_DOCKER=true
# To opt-out, uncomment the following line:
# ENV N8N_MCP_TELEMETRY_DISABLED=true
# Expose HTTP port (default 3000, configurable via PORT environment variable at runtime)
# Expose HTTP port
EXPOSE 3000
# Set stop signal to SIGTERM (default, but explicit is better)
@@ -90,7 +90,7 @@ STOPSIGNAL SIGTERM
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD sh -c 'curl -f http://127.0.0.1:${PORT:-3000}/health || exit 1'
CMD curl -f http://127.0.0.1:3000/health || exit 1
# Optimized entrypoint
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]

View File

@@ -1,170 +0,0 @@
# N8N-MCP Validation Improvement: Implementation Roadmap
**Start Date**: Week of November 11, 2025
**Target Completion**: Week of December 23, 2025 (6 weeks)
**Expected Impact**: 50-65% reduction in validation failures
---
## Summary
Based on analysis of 29,218 validation events across 9,021 users, this roadmap identifies concrete technical improvements to reduce validation failures through better documentation and guidance—without weakening validation itself.
---
## Phase 1: Quick Wins (Weeks 1-2) - 14-20 hours
### Task 1.1: Enhance Structure Error Messages
- **File**: `/src/services/workflow-validator.ts`
- **Problem**: "Duplicate node ID: undefined" (179 failures) provides no context
- **Solution**: Add node index, example format, field suggestions
- **Effort**: 4-6 hours
### Task 1.2: Mark Required Fields in Tool Responses
- **File**: `/src/services/property-filter.ts`
- **Problem**: "Required property X cannot be empty" (378 failures) - not marked upfront
- **Solution**: Add `requiredLabel: "⚠️ REQUIRED"` to get_node_essentials output
- **Effort**: 6-8 hours
### Task 1.3: Create Webhook Configuration Guide
- **File**: New `/docs/WEBHOOK_CONFIGURATION_GUIDE.md`
- **Problem**: Webhook errors (127 failures) from unclear config rules
- **Solution**: Document three core rules + examples
- **Effort**: 4-6 hours
**Phase 1 Impact**: 25-30% failure reduction
---
## Phase 2: Documentation & Validation (Weeks 3-4) - 20-28 hours
### Task 2.1: Enhance validate_node_operation() Enum Suggestions
- **File**: `/src/services/enhanced-config-validator.ts`
- **Problem**: Invalid enum errors lack valid options
- **Solution**: Include validOptions array in response
- **Effort**: 6-8 hours
### Task 2.2: Create Workflow Connections Guide
- **File**: New `/docs/WORKFLOW_CONNECTIONS_GUIDE.md`
- **Problem**: Connection syntax errors (676 failures)
- **Solution**: Document syntax with examples
- **Effort**: 6-8 hours
### Task 2.3: Create Error Handler Guide
- **File**: New `/docs/ERROR_HANDLING_GUIDE.md`
- **Problem**: Error handler config (148 failures)
- **Solution**: Explain options, positioning, patterns
- **Effort**: 4-6 hours
### Task 2.4: Add AI Agent Node Validation
- **File**: `/src/services/node-specific-validators.ts`
- **Problem**: AI Agent requires LLM (22 failures)
- **Solution**: Detect missing LLM, suggest required nodes
- **Effort**: 4-6 hours
**Phase 2 Impact**: Additional 15-20% failure reduction
---
## Phase 3: Advanced Features (Weeks 5-6) - 16-22 hours
### Task 3.1: Enhance Search Results
- Effort: 4-6 hours
### Task 3.2: Fuzzy Matcher for Node Types
- Effort: 3-4 hours
### Task 3.3: KPI Tracking Dashboard
- Effort: 3-4 hours
### Task 3.4: Comprehensive Test Coverage
- Effort: 6-8 hours
**Phase 3 Impact**: Additional 10-15% failure reduction
---
## Timeline
```
Week 1-2: Phase 1 - Error messages & marks
Week 3-4: Phase 2 - Documentation & validation
Week 5-6: Phase 3 - Advanced features
Total: ~60-80 developer-hours
Target: 50-65% failure reduction
```
---
## Key Changes
### Required Field Markers
**Before**:
```json
{ "properties": { "channel": { "type": "string" } } }
```
**After**:
```json
{
"properties": {
"channel": {
"type": "string",
"required": true,
"requiredLabel": "⚠️ REQUIRED",
"examples": ["#general"]
}
}
}
```
### Enum Suggestions
**Before**: `"Invalid value 'sendMsg' for operation"`
**After**:
```json
{
"field": "operation",
"validOptions": ["sendMessage", "deleteMessage"],
"suggestion": "Did you mean 'sendMessage'?"
}
```
### Error Message Examples
**Structure Error**:
```
Node at index 1 missing required 'id' field.
Expected: { "id": "node_1", "name": "HTTP Request", ... }
```
**Webhook Config**:
```
Webhook in responseNode mode requires onError: "continueRegularOutput"
See: [Webhook Configuration Guide]
```
---
## Success Metrics
- [ ] Phase 1: Webhook errors 127→35 (-72%)
- [ ] Phase 2: Connection errors 676→270 (-60%)
- [ ] Phase 3: Total failures reduced 50-65%
- [ ] All phases: Retry success stays 100%
- [ ] Target: First-attempt success 77%→85%+
---
## Next Steps
1. Review and approve roadmap
2. Create GitHub issues for each phase
3. Assign to team members
4. Schedule Phase 1 sprint (Nov 11)
5. Weekly status sync
**Status**: Ready for Review and Approval
**Estimated Completion**: December 23, 2025

View File

@@ -1,87 +1,5 @@
# n8n Update Process - Quick Reference
## ⚡ Recommended Fast Workflow (2025-11-04)
**CRITICAL FIRST STEP**: Check existing releases to avoid version conflicts!
```bash
# 1. CHECK EXISTING RELEASES FIRST (prevents version conflicts!)
gh release list | head -5
# Look at the latest version - your new version must be higher!
# 2. Switch to main and pull
git checkout main && git pull
# 3. Check for updates (dry run)
npm run update:n8n:check
# 4. Run update and skip tests (we'll test in CI)
yes y | npm run update:n8n
# 5. Create feature branch
git checkout -b update/n8n-X.X.X
# 6. Update version in package.json (must be HIGHER than latest release!)
# Edit: "version": "2.XX.X" (not the version from the release list!)
# 7. Update CHANGELOG.md
# - Change version number to match package.json
# - Update date to today
# - Update dependency versions
# 8. Update README badge
# Edit line 8: Change n8n version badge to new n8n version
# 9. Commit and push
git add -A
git commit -m "chore: update n8n to X.X.X and bump version to 2.XX.X
- Updated n8n from X.X.X to X.X.X
- Updated n8n-core from X.X.X to X.X.X
- Updated n8n-workflow from X.X.X to X.X.X
- Updated @n8n/n8n-nodes-langchain from X.X.X to X.X.X
- Rebuilt node database with XXX nodes (XXX from n8n-nodes-base, XXX from @n8n/n8n-nodes-langchain)
- Updated README badge with new n8n version
- Updated CHANGELOG with dependency changes
Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
git push -u origin update/n8n-X.X.X
# 10. Create PR
gh pr create --title "chore: update n8n to X.X.X" --body "Updates n8n and all related dependencies to the latest versions..."
# 11. After PR is merged, verify release triggered
gh release list | head -1
# If the new version appears, you're done!
# If not, the version might have already been released - bump version again and create new PR
```
### Why This Workflow?
**Fast**: Skip local tests (2-3 min saved) - CI runs them anyway
**Safe**: Unit tests in CI verify compatibility
**Clean**: All changes in one PR with proper tracking
**Automatic**: Release workflow triggers on merge if version is new
### Common Issues
**Problem**: Release workflow doesn't trigger after merge
**Cause**: Version number was already released (check `gh release list`)
**Solution**: Create new PR bumping version by one patch number
**Problem**: Integration tests fail in CI with "unauthorized"
**Cause**: n8n test instance credentials expired (infrastructure issue)
**Solution**: Ignore if unit tests pass - this is not a code problem
**Problem**: CI takes 8+ minutes
**Reason**: Integration tests need live n8n instance (slow)
**Normal**: Unit tests (~2 min) + integration tests (~6 min) = ~8 min total
## Quick One-Command Update
For a complete update with tests and publish preparation:
@@ -181,14 +99,12 @@ This command:
## Important Notes
1. **ALWAYS check existing releases first** - Use `gh release list` to see what versions are already released. Your new version must be higher!
2. **Release workflow only triggers on version CHANGE** - If you merge a PR with an already-released version (e.g., 2.22.8), the workflow won't run. You'll need to bump to a new version (e.g., 2.22.9) and create another PR.
3. **Integration test failures in CI are usually infrastructure issues** - If unit tests pass but integration tests fail with "unauthorized", this is typically because the test n8n instance credentials need updating. The code itself is fine.
4. **Skip local tests - let CI handle them** - Running tests locally adds 2-3 minutes with no benefit since CI runs them anyway. The fast workflow skips local tests.
5. **The update script is smart** - It automatically syncs all n8n dependencies to compatible versions
6. **Database rebuild is automatic** - The update script handles this for you
7. **Template sanitization is automatic** - Any API tokens in workflow templates are replaced with placeholders
8. **Docker image builds automatically** - Pushing to GitHub triggers the workflow
1. **Always run on main branch** - Make sure you're on main and it's clean
2. **The update script is smart** - It automatically syncs all n8n dependencies to compatible versions
3. **Tests are required** - The publish script now runs tests automatically
4. **Database rebuild is automatic** - The update script handles this for you
5. **Template sanitization is automatic** - Any API tokens in workflow templates are replaced with placeholders
6. **Docker image builds automatically** - Pushing to GitHub triggers the workflow
## GitHub Push Protection
@@ -199,27 +115,11 @@ As of July 2025, GitHub's push protection may block database pushes if they cont
3. If push is still blocked, use the GitHub web interface to review and allow the push
## Time Estimate
### Fast Workflow (Recommended)
- Local work: ~2-3 minutes
- npm install and database rebuild: ~2-3 minutes
- File edits (CHANGELOG, README, package.json): ~30 seconds
- Git operations (commit, push, create PR): ~30 seconds
- CI testing after PR creation: ~8-10 minutes (runs automatically)
- Unit tests: ~2 minutes
- Integration tests: ~6 minutes (may fail with infrastructure issues - ignore if unit tests pass)
- Other checks: ~1 minute
**Total hands-on time: ~3 minutes** (then wait for CI)
### Full Workflow with Local Tests
- Total time: ~5-7 minutes
- Test suite: ~2.5 minutes
- npm install and database rebuild: ~2-3 minutes
- The rest: seconds
**Note**: The fast workflow is recommended since CI runs the same tests anyway.
## Troubleshooting
If tests fail:

View File

@@ -54,10 +54,6 @@ Collected data is used solely to:
- Identify common error patterns
- Improve tool performance and reliability
- Guide development priorities
- Train machine learning models for workflow generation
All ML training uses sanitized, anonymized data only.
Users can opt-out at any time with `npx n8n-mcp telemetry disable`
## Data Retention
- Data is retained for analysis purposes
@@ -70,4 +66,4 @@ We may update this privacy policy from time to time. Updates will be reflected i
For questions about telemetry or privacy, please open an issue on GitHub:
https://github.com/czlonkowski/n8n-mcp/issues
Last updated: 2025-11-06
Last updated: 2025-09-25

View File

@@ -5,23 +5,23 @@
[![npm version](https://img.shields.io/npm/v/n8n-mcp.svg)](https://www.npmjs.com/package/n8n-mcp)
[![codecov](https://codecov.io/gh/czlonkowski/n8n-mcp/graph/badge.svg?token=YOUR_TOKEN)](https://codecov.io/gh/czlonkowski/n8n-mcp)
[![Tests](https://img.shields.io/badge/tests-3336%20passing-brightgreen.svg)](https://github.com/czlonkowski/n8n-mcp/actions)
[![n8n version](https://img.shields.io/badge/n8n-1.119.1-orange.svg)](https://github.com/n8n-io/n8n)
[![n8n version](https://img.shields.io/badge/n8n-^1.116.2-orange.svg)](https://github.com/n8n-io/n8n)
[![Docker](https://img.shields.io/badge/docker-ghcr.io%2Fczlonkowski%2Fn8n--mcp-green.svg)](https://github.com/czlonkowski/n8n-mcp/pkgs/container/n8n-mcp)
[![Deploy on Railway](https://railway.com/button.svg)](https://railway.com/deploy/n8n-mcp?referralCode=n8n-mcp)
A Model Context Protocol (MCP) server that provides AI assistants with comprehensive access to n8n node documentation, properties, and operations. Deploy in minutes to give Claude and other AI assistants deep knowledge about n8n's 543 workflow automation nodes.
A Model Context Protocol (MCP) server that provides AI assistants with comprehensive access to n8n node documentation, properties, and operations. Deploy in minutes to give Claude and other AI assistants deep knowledge about n8n's 525+ workflow automation nodes.
## Overview
n8n-MCP serves as a bridge between n8n's workflow automation platform and AI models, enabling them to understand and work with n8n nodes effectively. It provides structured access to:
- 📚 **543 n8n nodes** from both n8n-nodes-base and @n8n/n8n-nodes-langchain
- 📚 **536 n8n nodes** from both n8n-nodes-base and @n8n/n8n-nodes-langchain
- 🔧 **Node properties** - 99% coverage with detailed schemas
-**Node operations** - 63.6% coverage of available actions
- 📄 **Documentation** - 87% coverage from official n8n docs (including AI nodes)
- 🤖 **AI tools** - 271 AI-capable nodes detected with full documentation
- 📄 **Documentation** - 90% coverage from official n8n docs (including AI nodes)
- 🤖 **AI tools** - 263 AI-capable nodes detected with full documentation
- 💡 **Real-world examples** - 2,646 pre-extracted configurations from popular templates
- 🎯 **Template library** - 2,709 workflow templates with 100% metadata coverage
- 🎯 **Template library** - 2,500+ workflow templates with smart filtering
## ⚠️ Important Safety Warning
@@ -51,8 +51,6 @@ npx n8n-mcp
Add to Claude Desktop config:
> ⚠️ **Important**: The `MCP_MODE: "stdio"` environment variable is **required** for Claude Desktop. Without it, you will see JSON parsing errors like `"Unexpected token..."` in the UI. This variable ensures that only JSON-RPC messages are sent to stdout, preventing debug logs from interfering with the protocol.
**Basic configuration (documentation tools only):**
```json
{
@@ -533,7 +531,7 @@ When operations are independent, execute them in parallel for maximum performanc
❌ BAD: Sequential tool calls (await each one before the next)
### 3. Templates First
ALWAYS check templates before building from scratch (2,709 available).
ALWAYS check templates before building from scratch (2,500+ available).
### 4. Multi-Level Validation
Use validate_node_minimal → validate_node_operation → validate_workflow pattern.
@@ -842,7 +840,7 @@ n8n_update_partial_workflow({
### Core Behavior
1. **Silent execution** - No commentary between tools
2. **Parallel by default** - Execute independent operations simultaneously
3. **Templates first** - Always check before building (2,709 available)
3. **Templates first** - Always check before building (2,500+ available)
4. **Multi-level validation** - Quick check → Full validation → Workflow validation
5. **Never trust defaults** - Explicitly configure ALL parameters
@@ -945,7 +943,7 @@ Once connected, Claude can use these powerful tools:
- **`get_node_as_tool_info`** - Get guidance on using any node as an AI tool
### Template Tools
- **`list_templates`** - Browse all templates with descriptions and optional metadata (2,709 templates)
- **`list_templates`** - Browse all templates with descriptions and optional metadata (2,500+ templates)
- **`search_templates`** - Text search across template names and descriptions
- **`search_templates_by_metadata`** - Advanced filtering by complexity, setup time, services, audience
- **`list_node_templates`** - Find templates using specific nodes
@@ -1100,17 +1098,17 @@ npm run dev:http # HTTP dev mode
## 📊 Metrics & Coverage
Current database coverage (n8n v1.117.2):
Current database coverage (n8n v1.113.3):
- ✅ **541/541** nodes loaded (100%)
- ✅ **541** nodes with properties (100%)
- ✅ **470** nodes with documentation (87%)
- ✅ **271** AI-capable tools detected
- ✅ **536/536** nodes loaded (100%)
- ✅ **528** nodes with properties (98.7%)
- ✅ **470** nodes with documentation (88%)
- ✅ **267** AI-capable tools detected
- ✅ **2,646** pre-extracted template configurations
- ✅ **2,709** workflow templates available (100% metadata coverage)
- ✅ **2,500+** workflow templates available
- ✅ **AI Agent & LangChain nodes** fully documented
- ⚡ **Average response time**: ~12ms
- 💾 **Database size**: ~68MB (includes templates with metadata)
- 💾 **Database size**: ~15MB (optimized)
## 🔄 Recent Updates

View File

@@ -1,318 +0,0 @@
# N8N-MCP Validation Analysis: Complete Report
**Date**: November 8, 2025
**Dataset**: 29,218 validation events | 9,021 unique users | 90 days
**Status**: Complete and ready for action
---
## Analysis Documents
### 1. ANALYSIS_QUICK_REFERENCE.md (5.8KB)
**Best for**: Quick decisions, meetings, slide presentations
START HERE if you want the key points in 5 minutes.
**Contains**:
- One-paragraph core finding
- Top 3 problem areas with root causes
- 5 most common errors
- Implementation plan summary
- Key metrics & targets
- FAQ section
---
### 2. VALIDATION_ANALYSIS_SUMMARY.md (13KB)
**Best for**: Executive stakeholders, team leads, decision makers
Read this for comprehensive but concise overview.
**Contains**:
- One-page executive summary
- Health scorecard with key metrics
- Detailed problem area breakdown
- Error category distribution
- Agent behavior insights
- Tool usage patterns
- Documentation impact findings
- Top 5 recommendations with ROI estimates
- 50-65% improvement projection
---
### 3. VALIDATION_ANALYSIS_REPORT.md (27KB)
**Best for**: Technical deep-dive, implementation planning, root cause analysis
Complete reference document with all findings.
**Contains**:
- All 16 SQL queries (reproducible)
- Node-specific difficulty ranking (top 20)
- Top 25 unique validation error messages
- Error categorization with root causes
- Tool usage patterns before failures
- Search query analysis
- Documentation effectiveness study
- Retry success rate analysis
- Property-level difficulty matrix
- 8 detailed recommendations with implementation guides
- Phase-by-phase action items
- KPI tracking setup
- Complete appendix with error message reference
---
### 4. IMPLEMENTATION_ROADMAP.md (4.3KB)
**Best for**: Project managers, development team, sprint planning
Actionable roadmap for the next 6 weeks.
**Contains**:
- Phase 1-3 breakdown (2 weeks each)
- Specific file locations to modify
- Effort estimates per task
- Success criteria for each phase
- Expected impact projections
- Code examples (before/after)
- Key changes documentation
---
## Reading Paths
### Path A: Decision Maker (30 minutes)
1. Read: ANALYSIS_QUICK_REFERENCE.md
2. Review: Key metrics in VALIDATION_ANALYSIS_SUMMARY.md
3. Decision: Approve IMPLEMENTATION_ROADMAP.md
### Path B: Product Manager (1 hour)
1. Read: VALIDATION_ANALYSIS_SUMMARY.md
2. Skim: Top recommendations in VALIDATION_ANALYSIS_REPORT.md
3. Review: IMPLEMENTATION_ROADMAP.md
4. Check: Success metrics and timelines
### Path C: Technical Lead (2-3 hours)
1. Read: ANALYSIS_QUICK_REFERENCE.md
2. Deep-dive: VALIDATION_ANALYSIS_REPORT.md
3. Study: IMPLEMENTATION_ROADMAP.md
4. Review: Code examples and SQL queries
5. Plan: Ticket creation and sprint allocation
### Path D: Developer (3-4 hours)
1. Skim: ANALYSIS_QUICK_REFERENCE.md for context
2. Read: VALIDATION_ANALYSIS_REPORT.md sections 3-8
3. Study: IMPLEMENTATION_ROADMAP.md thoroughly
4. Review: All code locations and examples
5. Plan: First task implementation
---
## Key Findings Overview
### The Core Insight
Validation failures are NOT broken—they're evidence the system works perfectly. 29,218 validation events prevented bad deployments. The challenge is GUIDANCE GAPS that cause first-attempt failures.
### Success Evidence
- 100% same-day error recovery rate
- 100% retry success rate
- All agents fix errors when given feedback
- Zero "unfixable" errors
### Problem Areas (75% of errors)
1. **Workflow structure** (26%) - JSON malformation
2. **Connections** (14%) - Unintuitive syntax
3. **Required fields** (8%) - Not marked upfront
### Most Problematic Nodes
- Webhook/Trigger (127 failures)
- Slack (73 failures)
- AI Agent (36 failures)
- HTTP Request (31 failures)
- OpenAI (35 failures)
### Solution Strategy
- Phase 1: Better error messages + required field markers (25-30% reduction)
- Phase 2: Documentation + validation improvements (additional 15-20%)
- Phase 3: Advanced features + monitoring (additional 10-15%)
- **Target**: 50-65% total failure reduction in 6 weeks
---
## Critical Numbers
```
Validation Events ............. 29,218
Unique Users .................. 9,021
Data Quality .................. 100% (all marked as errors)
Current Metrics:
Error Rate (doc users) ....... 12.6%
Error Rate (non-doc users) ... 10.8%
First-attempt success ........ ~77%
Retry success ................ 100%
Same-day recovery ............ 100%
Target Metrics (after 6 weeks):
Error Rate ................... 6-7% (-50%)
First-attempt success ........ 85%+
Retry success ................ 100%
Implementation effort ........ 60-80 hours
```
---
## Implementation Timeline
```
Week 1-2: Phase 1 (Error messages, field markers, webhook guide)
Expected: 25-30% failure reduction
Week 3-4: Phase 2 (Enum suggestions, connection guide, AI validation)
Expected: Additional 15-20% reduction
Week 5-6: Phase 3 (Search improvements, fuzzy matching, KPI setup)
Expected: Additional 10-15% reduction
Target: 50-65% total reduction by Week 6
```
---
## How to Use These Documents
### For Review & Approval
1. Start with ANALYSIS_QUICK_REFERENCE.md
2. Check key metrics in VALIDATION_ANALYSIS_SUMMARY.md
3. Review IMPLEMENTATION_ROADMAP.md for feasibility
4. Decision: Approve phase 1-3
### For Team Planning
1. Read IMPLEMENTATION_ROADMAP.md
2. Create GitHub issues from each task
3. Assign based on effort estimates
4. Schedule sprints for phase 1-3
### For Development
1. Review specific recommendations in VALIDATION_ANALYSIS_REPORT.md
2. Find code locations in IMPLEMENTATION_ROADMAP.md
3. Study code examples (before/after)
4. Implement and test
### For Measurement
1. Record baseline metrics (current state)
2. Deploy Phase 1 and measure impact
3. Use KPI queries from VALIDATION_ANALYSIS_REPORT.md
4. Adjust strategy based on actual results
---
## Key Recommendations (Priority Order)
### IMMEDIATE (Week 1-2)
1. **Enhance error messages** - Add location + examples
2. **Mark required fields** - Add "⚠️ REQUIRED" to tools
3. **Create webhook guide** - Document configuration rules
### HIGH (Week 3-4)
4. **Add enum suggestions** - Show valid values in errors
5. **Create connections guide** - Document syntax + examples
6. **Add AI Agent validation** - Detect missing LLM connections
### MEDIUM (Week 5-6)
7. **Improve search results** - Add configuration hints
8. **Build fuzzy matcher** - Suggest similar node types
9. **Setup KPI tracking** - Monitor improvement
---
## Questions & Answers
**Q: Why so many validation failures?**
A: High usage (9,021 users, complex workflows). System is working—preventing bad deployments.
**Q: Shouldn't we just allow invalid configurations?**
A: No, validation prevents 29,218 broken workflows from deploying. We improve guidance instead.
**Q: Do agents actually learn from errors?**
A: Yes, 100% same-day recovery rate proves feedback works perfectly.
**Q: Can we really reduce failures by 50-65%?**
A: Yes, analysis shows these specific improvements target the actual root causes.
**Q: How long will this take?**
A: 60-80 developer-hours across 6 weeks. Can start immediately.
**Q: What's the biggest win?**
A: Marking required fields (378 errors) + better structure messages (1,268 errors).
---
## Next Steps
1. **This Week**: Review all documents and get approval
2. **Week 1**: Create GitHub issues from IMPLEMENTATION_ROADMAP.md
3. **Week 2**: Assign to team, start Phase 1
4. **Week 4**: Deploy Phase 1, start Phase 2
5. **Week 6**: Deploy Phase 2, start Phase 3
6. **Week 8**: Deploy Phase 3, begin monitoring
7. **Week 9+**: Review metrics, iterate
---
## File Structure
```
/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/
├── ANALYSIS_QUICK_REFERENCE.md ............ Quick lookup (5.8KB)
├── VALIDATION_ANALYSIS_SUMMARY.md ........ Executive summary (13KB)
├── VALIDATION_ANALYSIS_REPORT.md ......... Complete analysis (27KB)
├── IMPLEMENTATION_ROADMAP.md ............. Action plan (4.3KB)
└── README_ANALYSIS.md ................... This file
```
**Total Documentation**: 50KB of analysis, recommendations, and implementation guidance
---
## Contact & Support
For specific questions:
- **Why?** → See VALIDATION_ANALYSIS_REPORT.md Section 2-8
- **How?** → See IMPLEMENTATION_ROADMAP.md for code locations
- **When?** → See IMPLEMENTATION_ROADMAP.md for timeline
- **Metrics?** → See VALIDATION_ANALYSIS_SUMMARY.md key metrics section
---
## Metadata
| Item | Value |
|------|-------|
| Analysis Date | November 8, 2025 |
| Data Period | Sept 26 - Nov 8, 2025 (90 days) |
| Sample Size | 29,218 validation events |
| Users Analyzed | 9,021 unique users |
| SQL Queries | 16 comprehensive queries |
| Confidence Level | HIGH |
| Status | Complete & Ready for Implementation |
---
## Analysis Methodology
1. **Data Collection**: Extracted all validation_details events from PostgreSQL
2. **Categorization**: Grouped errors by type, node, and message pattern
3. **Pattern Analysis**: Identified root causes for each error category
4. **User Behavior**: Tracked tool usage before/after failures
5. **Recovery Analysis**: Measured success rates and correction time
6. **Recommendation Development**: Mapped solutions to specific problems
7. **Impact Projection**: Estimated improvement from each solution
8. **Roadmap Creation**: Phased implementation plan with effort estimates
**Data Quality**: 100% of validation events properly categorized, no data loss or corruption
---
**Analysis Complete** | **Ready for Review** | **Awaiting Approval to Proceed**

View File

@@ -1,720 +0,0 @@
# N8N-MCP Telemetry Database Analysis
**Analysis Date:** November 12, 2025
**Analyst Role:** Telemetry Data Analyst
**Project:** n8n-mcp
## Executive Summary
The n8n-mcp project has a comprehensive telemetry system that tracks:
- **Tool usage patterns** (which tools are used, success rates, performance)
- **Workflow creation and validation** (workflow structure, complexity, node types)
- **User sessions and engagement** (startup metrics, session data)
- **Error patterns** (error types, affected tools, categorization)
- **Performance metrics** (operation duration, tool sequences, latency)
**Current Infrastructure:**
- **Backend:** Supabase PostgreSQL (hardcoded: `ydyufsohxdfpopqbubwk.supabase.co`)
- **Tables:** 2 main event tables + workflow metadata
- **Event Tracking:** SDK-based with batch processing (5s flush interval)
- **Privacy:** PII sanitization, no user credentials or sensitive data stored
---
## 1. Schema Analysis
### 1.1 Current Table Structures
#### `telemetry_events` (Primary Event Table)
**Purpose:** Tracks all discrete user interactions and system events
```sql
-- Inferred structure based on batch processor (telemetry_events table)
-- Columns inferred from TelemetryEvent interface:
-- - id: UUID (primary key, auto-generated)
-- - user_id: TEXT (anonymized user identifier)
-- - event: TEXT (event type name)
-- - properties: JSONB (flexible event-specific data)
-- - created_at: TIMESTAMP (server-side timestamp)
```
**Data Model:**
```typescript
interface TelemetryEvent {
user_id: string; // Anonymized user ID
event: string; // Event type (see section 1.2)
properties: Record<string, any>; // Event-specific metadata
created_at?: string; // ISO 8601 timestamp
}
```
**Rows Estimate:** 276K+ events (based on prompt description)
---
#### `telemetry_workflows` (Workflow Metadata Table)
**Purpose:** Stores workflow structure analysis and complexity metrics
```sql
-- Structure inferred from WorkflowTelemetry interface:
-- - id: UUID (primary key)
-- - user_id: TEXT
-- - workflow_hash: TEXT (UNIQUE, SHA-256 hash of normalized workflow)
-- - node_count: INTEGER
-- - node_types: TEXT[] (PostgreSQL array or JSON)
-- - has_trigger: BOOLEAN
-- - has_webhook: BOOLEAN
-- - complexity: TEXT CHECK IN ('simple', 'medium', 'complex')
-- - sanitized_workflow: JSONB (stripped workflow for pattern analysis)
-- - created_at: TIMESTAMP DEFAULT NOW()
```
**Data Model:**
```typescript
interface WorkflowTelemetry {
user_id: string;
workflow_hash: string; // SHA-256 hash, unique constraint
node_count: number;
node_types: string[]; // e.g., ["n8n-nodes-base.httpRequest", ...]
has_trigger: boolean;
has_webhook: boolean;
complexity: 'simple' | 'medium' | 'complex';
sanitized_workflow: {
nodes: any[];
connections: any;
};
created_at?: string;
}
```
**Rows Estimate:** 6.5K+ unique workflows (based on prompt description)
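Since deduplication rests on this hash, a minimal sketch of how such a hash could be computed may help (this is an assumption about the approach, not the project's actual `WorkflowSanitizer` code; the helper name is illustrative):
```typescript
import { createHash } from 'crypto';

// Illustrative shape: only the fields the sanitizer keeps should feed the hash.
interface SanitizedWorkflow {
  nodes: Array<{ type: string; name: string; position: [number, number] }>;
  connections: Record<string, unknown>;
}

// Derive a stable SHA-256 hash so identical workflows collapse onto the
// workflow_hash unique constraint. Sorting nodes first keeps the hash
// independent of node ordering in the incoming JSON.
function computeWorkflowHash(workflow: SanitizedWorkflow): string {
  const normalized = {
    nodes: [...workflow.nodes].sort((a, b) => a.name.localeCompare(b.name)),
    connections: workflow.connections,
  };
  return createHash('sha256').update(JSON.stringify(normalized)).digest('hex');
}
```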
---
### 1.2 Local SQLite Database (n8n-mcp Internal)
The project maintains a **SQLite database** (`src/database/schema.sql`) for:
- Node metadata (525 nodes, 263 AI-tool-capable)
- Workflow templates (pre-built examples)
- Node versions (versioning support)
- Property tracking (for configuration analysis)
**Note:** This is **separate from Supabase telemetry** - it's the knowledge base, not the analytics store.
---
## 2. Event Distribution Analysis
### 2.1 Tracked Event Types
Based on source code analysis (`event-tracker.ts`):
| Event Type | Purpose | Frequency | Properties |
|---|---|---|---|
| **tool_used** | Tool execution | High | `tool`, `success`, `duration` |
| **workflow_created** | Workflow creation | Medium | `nodeCount`, `nodeTypes`, `complexity`, `hasTrigger`, `hasWebhook` |
| **workflow_validation_failed** | Validation errors | Low-Medium | `nodeCount` |
| **error_occurred** | System errors | Variable | `errorType`, `context`, `tool`, `error`, `mcpMode`, `platform` |
| **session_start** | User session begin | Per-session | `version`, `platform`, `arch`, `nodeVersion`, `isDocker`, `cloudPlatform`, `startupDurationMs` |
| **startup_completed** | Server initialization success | Per-startup | `version` |
| **startup_error** | Initialization failures | Rare | `checkpoint`, `errorMessage`, `checkpointsPassed`, `startupDuration` |
| **search_query** | Search operations | Medium | `query`, `resultsFound`, `searchType`, `hasResults`, `isZeroResults` |
| **validation_details** | Configuration validation | Medium | `nodeType`, `errorType`, `errorCategory`, `details` |
| **tool_sequence** | Tool usage patterns | High | `previousTool`, `currentTool`, `timeDelta`, `isSlowTransition`, `sequence` |
| **node_configuration** | Node setup patterns | Medium | `nodeType`, `propertiesSet`, `usedDefaults`, `complexity` |
| **performance_metric** | Operation latency | Medium | `operation`, `duration`, `isSlow`, `isVerySlow`, `metadata` |
**Estimated Distribution (inferred from code):**
- 40-50%: `tool_used` (high-frequency tracking)
- 20-30%: `tool_sequence` (dependency tracking)
- 10-15%: `error_occurred` (error monitoring)
- 5-10%: `validation_details` (validation insights)
- 5-10%: `performance_metric` (performance analysis)
- 5-10%: Other events (search, workflow, session)
---
## 3. Workflow Operations Analysis
### 3.1 Current Workflow Tracking
**Workflows ARE tracked** but with **limited mutation data:**
```typescript
// Current: Basic workflow creation event
{
event: 'workflow_created',
properties: {
nodeCount: 5,
nodeTypes: ['n8n-nodes-base.httpRequest', ...],
complexity: 'medium',
hasTrigger: true,
hasWebhook: false
}
}
// Current: Full workflow snapshot stored separately
{
workflow_hash: 'sha256hash...',
node_count: 5,
node_types: [...],
sanitized_workflow: {
nodes: [{ type, name, position }, ...],
connections: { ... }
}
}
```
**Missing Data for Workflow Mutations:**
- No "before" state tracking
- No "after" state tracking
- No change instructions/transformation descriptions
- No diff/delta operations recorded
- No workflow modification event types
---
## 4. Data Samples & Examples
### 4.1 Sample Telemetry Events
**Tool Usage Event:**
```json
{
"user_id": "user_123_anonymized",
"event": "tool_used",
"properties": {
"tool": "get_node_info",
"success": true,
"duration": 245
},
"created_at": "2025-11-12T10:30:45.123Z"
}
```
**Tool Sequence Event:**
```json
{
"user_id": "user_123_anonymized",
"event": "tool_sequence",
"properties": {
"previousTool": "search_nodes",
"currentTool": "get_node_info",
"timeDelta": 1250,
"isSlowTransition": false,
"sequence": "search_nodes->get_node_info"
},
"created_at": "2025-11-12T10:30:46.373Z"
}
```
**Workflow Creation Event:**
```json
{
"user_id": "user_123_anonymized",
"event": "workflow_created",
"properties": {
"nodeCount": 3,
"nodeTypes": 2,
"complexity": "simple",
"hasTrigger": true,
"hasWebhook": false
},
"created_at": "2025-11-12T10:35:12.456Z"
}
```
**Error Event:**
```json
{
"user_id": "user_123_anonymized",
"event": "error_occurred",
"properties": {
"errorType": "validation_error",
"context": "Node configuration failed [KEY]",
"tool": "config_validator",
"error": "[SANITIZED] type error",
"mcpMode": "stdio",
"platform": "darwin"
},
"created_at": "2025-11-12T10:36:01.789Z"
}
```
**Workflow Stored Record:**
```json
{
"user_id": "user_123_anonymized",
"workflow_hash": "f1a9d5e2c4b8...",
"node_count": 3,
"node_types": [
"n8n-nodes-base.webhook",
"n8n-nodes-base.httpRequest",
"n8n-nodes-base.slack"
],
"has_trigger": true,
"has_webhook": true,
"complexity": "medium",
"sanitized_workflow": {
"nodes": [
{
"type": "n8n-nodes-base.webhook",
"name": "webhook",
"position": [250, 300]
},
{
"type": "n8n-nodes-base.httpRequest",
"name": "HTTP Request",
"position": [450, 300]
},
{
"type": "n8n-nodes-base.slack",
"name": "Send Message",
"position": [650, 300]
}
],
"connections": {
"webhook": { "main": [[{"node": "HTTP Request", "output": 0}]] },
"HTTP Request": { "main": [[{"node": "Send Message", "output": 0}]] }
}
},
"created_at": "2025-11-12T10:35:12.456Z"
}
```
---
## 5. Missing Data for N8N-Fixer Dataset
### 5.1 Critical Gaps for Workflow Mutation Tracking
To support the n8n-fixer dataset requirement (before workflow → instruction → after workflow), the following data is **currently missing:**
#### Gap 1: No Mutation Events
```
MISSING: Events specifically for workflow modifications
- No "workflow_modified" event type
- No "workflow_patch_applied" event type
- No "workflow_instruction_executed" event type
```
#### Gap 2: No Before/After Snapshots
```
MISSING: Complete workflow states before and after changes
Current: Only stores sanitized_workflow (minimal structure)
Needed: Full workflow JSON including:
- Complete node configurations
- All node properties
- Expression formulas
- Credentials references
- Settings
- Metadata
```
#### Gap 3: No Instruction Data
```
MISSING: The transformation instructions/prompts
- No field to store the "before" instruction
- No field for the AI-generated fix/modification instruction
- No field for the "after" state expectation
```
#### Gap 4: No Diff/Delta Recording
```
MISSING: Specific changes made
- No operation logs (which nodes changed, how)
- No property-level diffs
- No connection modifications tracking
- No validation state transitions
```
#### Gap 5: No Workflow Mutation Success Metrics
```
MISSING: Outcome tracking
- No "mutation_success" or "mutation_failed" event
- No validation result before/after comparison
- No user satisfaction feedback
- No error rate for auto-fixed workflows
```
---
### 5.2 Proposed Schema Additions
To support n8n-fixer dataset collection, add:
#### New Table: `workflow_mutations`
```sql
CREATE TABLE IF NOT EXISTS workflow_mutations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL,
workflow_id TEXT NOT NULL, -- n8n workflow ID (optional if new)
-- Before state
before_workflow_json JSONB NOT NULL, -- Complete workflow before mutation
before_workflow_hash TEXT NOT NULL, -- SHA-256 of before state
before_validation_status TEXT, -- 'valid', 'invalid', 'unknown'
before_error_summary TEXT, -- Comma-separated error types
-- Mutation details
instruction TEXT, -- AI instruction or user prompt
instruction_type TEXT CHECK(instruction_type IN (
'ai_generated',
'user_provided',
'auto_fix',
'validation_correction'
)),
mutation_source TEXT, -- Tool/agent that created instruction
-- After state
after_workflow_json JSONB NOT NULL, -- Complete workflow after mutation
after_workflow_hash TEXT NOT NULL, -- SHA-256 of after state
after_validation_status TEXT, -- 'valid', 'invalid', 'unknown'
after_error_summary TEXT, -- Errors remaining after fix
-- Mutation metadata
nodes_modified TEXT[], -- Array of modified node IDs
connections_modified BOOLEAN, -- Were connections changed?
properties_modified TEXT[], -- Property paths that changed
num_changes INTEGER, -- Total number of changes
complexity_before TEXT, -- 'simple', 'medium', 'complex'
complexity_after TEXT,
-- Outcome tracking
mutation_success BOOLEAN, -- Did it achieve desired state?
validation_improved BOOLEAN, -- Fewer errors after?
user_approved BOOLEAN, -- User accepted the change?
created_at TIMESTAMP DEFAULT NOW()
);
CREATE INDEX idx_mutations_user_id ON workflow_mutations(user_id);
CREATE INDEX idx_mutations_workflow_id ON workflow_mutations(workflow_id);
CREATE INDEX idx_mutations_created_at ON workflow_mutations(created_at);
CREATE INDEX idx_mutations_success ON workflow_mutations(mutation_success);
```
#### New Event Type: `workflow_mutation`
```typescript
interface WorkflowMutationEvent extends TelemetryEvent {
event: 'workflow_mutation';
properties: {
workflowId: string;
beforeHash: string;
afterHash: string;
instructionType: 'ai_generated' | 'user_provided' | 'auto_fix';
nodesModified: number;
propertiesChanged: number;
mutationSuccess: boolean;
validationImproved: boolean;
errorsBefore: number;
errorsAfter: number;
}
}
```
---
## 6. Current Data Capture Pipeline
### 6.1 Data Flow Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ User Interaction │
│ (Tool Usage, Workflow Creation, Error, Search, etc.) │
└────────────────────────────┬────────────────────────────────────┘
┌────────────────────────────▼────────────────────────────────────┐
│ TelemetryEventTracker │
│ ├─ trackToolUsage() │
│ ├─ trackWorkflowCreation() │
│ ├─ trackError() │
│ ├─ trackSearchQuery() │
│ └─ trackValidationDetails() │
│ │
│ Queuing: │
│ ├─ this.eventQueue: TelemetryEvent[] │
│ └─ this.workflowQueue: WorkflowTelemetry[] │
└────────────────────────────┬────────────────────────────────────┘
(5-second interval)
┌────────────────────────────▼────────────────────────────────────┐
│ TelemetryBatchProcessor │
│ ├─ flushEvents() → Supabase.insert(telemetry_events) │
│ ├─ flushWorkflows() → Supabase.insert(telemetry_workflows) │
│ ├─ Batching (max 50) │
│ ├─ Deduplication (workflows by hash) │
│ ├─ Rate Limiting │
│ ├─ Retry Logic (max 3 attempts) │
│ └─ Circuit Breaker │
└────────────────────────────┬────────────────────────────────────┘
┌────────────────────────────▼────────────────────────────────────┐
│ Supabase PostgreSQL │
│ ├─ telemetry_events (276K+ rows) │
│ └─ telemetry_workflows (6.5K+ rows) │
│ │
│ URL: ydyufsohxdfpopqbubwk.supabase.co │
│ Tables: Public (anon key access) │
└─────────────────────────────────────────────────────────────────┘
```
---
### 6.2 Privacy & Sanitization
The system implements **multi-layer sanitization:**
```typescript
// Layer 1: Error Message Sanitization
sanitizeErrorMessage(errorMessage: string)
├─ Removes sensitive patterns (emails, keys, URLs)
├─ Prevents regex DoS attacks
└─ Truncates to 500 chars
// Layer 2: Context Sanitization
sanitizeContext(context: string)
├─ [EMAIL] email addresses
├─ [KEY] API keys (32+ char sequences)
├─ [URL] URLs
└─ Truncates to 100 chars
// Layer 3: Workflow Sanitization
WorkflowSanitizer.sanitizeWorkflow(workflow)
├─ Removes credentials
├─ Removes sensitive properties
├─ Strips full node configurations
├─ Keeps only: type, name, position, input/output counts
└─ Generates SHA-256 hash for deduplication
```
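As a rough illustration of the first layer described above, a sanitizer along these lines would produce the `[EMAIL]`/`[URL]`/`[KEY]` placeholders (the patterns and their ordering here are assumptions, not the project's actual regexes):
```typescript
// Illustrative only: the real sanitizer's patterns, ordering, and limits differ.
function sanitizeErrorMessage(message: string): string {
  const MAX_LENGTH = 500; // "truncates to 500 chars"
  return message
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[EMAIL]')  // email addresses
    .replace(/https?:\/\/\S+/g, '[URL]')             // URLs
    .replace(/\b[A-Za-z0-9_-]{32,}\b/g, '[KEY]')     // long API-key-like tokens
    .slice(0, MAX_LENGTH);
}
```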
---
## 7. Recommendations for N8N-Fixer Dataset Implementation
### 7.1 Immediate Actions (Phase 1)
**1. Add Workflow Mutation Table**
```sql
-- Create workflow_mutations table (see Section 5.2)
-- Add indexes for user_id, workflow_id, created_at
-- Add unique constraint on (user_id, workflow_id, created_at)
```
**2. Extend TelemetryEvent Types**
```typescript
// In telemetry-types.ts
export interface WorkflowMutationEvent extends TelemetryEvent {
event: 'workflow_mutation';
properties: {
// See Section 5.2 for full interface
}
}
```
**3. Add Tracking Method to EventTracker**
```typescript
// In event-tracker.ts
trackWorkflowMutation(
beforeWorkflow: any,
instruction: string,
afterWorkflow: any,
instructionType: 'ai_generated' | 'user_provided' | 'auto_fix',
success: boolean
): void
```
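A possible body for this method, sketched as a standalone class so it stays self-contained; the queue shape mirrors the proposed `workflow_mutations` columns, and the inline hash helper stands in for whatever normalization the real implementation would reuse:
```typescript
import { createHash } from 'crypto';

// Sketch only: field names mirror the proposed workflow_mutations columns.
const hashWorkflow = (workflow: unknown): string =>
  createHash('sha256').update(JSON.stringify(workflow)).digest('hex');

interface QueuedMutation {
  user_id: string;
  before_workflow_json: unknown;
  before_workflow_hash: string;
  instruction: string;
  instruction_type: 'ai_generated' | 'user_provided' | 'auto_fix';
  after_workflow_json: unknown;
  after_workflow_hash: string;
  mutation_success: boolean;
  created_at: string;
}

class MutationTrackerSketch {
  private mutationQueue: QueuedMutation[] = [];

  constructor(private readonly userId: string) {}

  trackWorkflowMutation(
    beforeWorkflow: unknown,
    instruction: string,
    afterWorkflow: unknown,
    instructionType: 'ai_generated' | 'user_provided' | 'auto_fix',
    success: boolean
  ): void {
    const beforeHash = hashWorkflow(beforeWorkflow);
    const afterHash = hashWorkflow(afterWorkflow);

    // Identical before/after pairs carry no training signal; drop them early.
    if (beforeHash === afterHash) return;

    this.mutationQueue.push({
      user_id: this.userId,
      before_workflow_json: beforeWorkflow,
      before_workflow_hash: beforeHash,
      instruction,
      instruction_type: instructionType,
      after_workflow_json: afterWorkflow,
      after_workflow_hash: afterHash,
      mutation_success: success,
      created_at: new Date().toISOString(),
    });
  }
}
```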
**4. Add Flushing Logic to BatchProcessor**
```typescript
// In batch-processor.ts
private async flushWorkflowMutations(
mutations: WorkflowMutation[]
): Promise<boolean>
```
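A sketch of what that flush could look like with the Supabase client; in the real batch processor the client, retries, and dead letter handling would come from the existing infrastructure:
```typescript
import { createClient } from '@supabase/supabase-js';

// Sketch only: credentials are read from the environment for illustration.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

async function flushWorkflowMutations(mutations: Record<string, unknown>[]): Promise<boolean> {
  if (mutations.length === 0) return true;

  // Insert the whole batch in one request; Supabase reports most failures via
  // the returned error object rather than throwing.
  const { error } = await supabase.from('workflow_mutations').insert(mutations);
  if (error) {
    console.warn('workflow_mutations flush failed:', error.message);
    return false; // Caller can re-queue or route to the dead letter queue.
  }
  return true;
}
```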
---
### 7.2 Integration Points
**Where to Capture Mutations:**
1. **AI Workflow Validation** (n8n_validate_workflow tool)
- Before: Original workflow
- Instruction: Validation errors + fix suggestion
- After: Corrected workflow
- Type: `auto_fix`
2. **Workflow Auto-Fix** (n8n_autofix_workflow tool)
- Before: Broken workflow
- Instruction: "Fix common validation errors"
- After: Fixed workflow
- Type: `auto_fix`
3. **Partial Workflow Updates** (n8n_update_partial_workflow tool)
- Before: Current workflow
- Instruction: Diff operations to apply
- After: Updated workflow
- Type: `user_provided` or `ai_generated`
4. **Manual User Edits** (if tracking enabled)
- Before: User's workflow state
- Instruction: User action/prompt
- After: User's modified state
- Type: `user_provided`
---
### 7.3 Data Quality Considerations
**When collecting mutation data:**
| Consideration | Recommendation |
|---|---|
| **Full Workflow Size** | Store compressed (gzip) for large workflows |
| **Sensitive Data** | Still sanitize credentials, even in mutations |
| **Hash Verification** | Use SHA-256 to verify data integrity |
| **Validation State** | Capture error types before/after (not details) |
| **Performance** | Compress mutations before storage if >500KB |
| **Deduplication** | Skip identical before/after pairs |
| **User Consent** | Ensure opt-in telemetry flag covers mutations |
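For the compression recommendation, a small sketch using Node's built-in `zlib` shows one way to apply the >500KB threshold (the threshold handling and base64 encoding are assumptions):
```typescript
import { gzipSync, gunzipSync } from 'zlib';

// Compress workflow JSON only when it exceeds the suggested threshold;
// small payloads are cheaper to store uncompressed.
const COMPRESSION_THRESHOLD_BYTES = 500 * 1024;

function maybeCompressWorkflow(workflow: unknown): { payload: string; compressed: boolean } {
  const json = JSON.stringify(workflow);
  if (Buffer.byteLength(json, 'utf8') < COMPRESSION_THRESHOLD_BYTES) {
    return { payload: json, compressed: false };
  }
  return { payload: gzipSync(json).toString('base64'), compressed: true };
}

function decompressWorkflow(stored: { payload: string; compressed: boolean }): unknown {
  const json = stored.compressed
    ? gunzipSync(Buffer.from(stored.payload, 'base64')).toString('utf8')
    : stored.payload;
  return JSON.parse(json);
}
```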
---
### 7.4 Analysis Queries (Once Data Collected)
**Example queries for n8n-fixer dataset analysis:**
```sql
-- 1. Mutation success rate by instruction type
SELECT
instruction_type,
COUNT(*) as total_mutations,
COUNT(*) FILTER (WHERE mutation_success = true) as successful,
ROUND(100.0 * COUNT(*) FILTER (WHERE mutation_success = true)
/ COUNT(*), 2) as success_rate
FROM workflow_mutations
WHERE created_at >= NOW() - INTERVAL '30 days'
GROUP BY instruction_type
ORDER BY success_rate DESC;
-- 2. Most common workflow modifications
SELECT
nodes_modified,
COUNT(*) as frequency
FROM workflow_mutations
WHERE created_at >= NOW() - INTERVAL '30 days'
GROUP BY nodes_modified
ORDER BY frequency DESC
LIMIT 20;
-- 3. Validation improvement distribution
SELECT
(errors_before - COALESCE(errors_after, 0)) as errors_fixed,
COUNT(*) as count
FROM workflow_mutations
WHERE created_at >= NOW() - INTERVAL '30 days'
AND validation_improved = true
GROUP BY errors_fixed
ORDER BY count DESC;
-- 4. Before/after complexity transitions
SELECT
complexity_before,
complexity_after,
COUNT(*) as count
FROM workflow_mutations
WHERE created_at >= NOW() - INTERVAL '30 days'
GROUP BY complexity_before, complexity_after
ORDER BY count DESC;
```
---
## 8. Technical Implementation Details
### 8.1 Current Event Queue Configuration
```typescript
// From TELEMETRY_CONFIG in telemetry-types.ts
BATCH_FLUSH_INTERVAL: 5000, // 5 seconds
EVENT_QUEUE_THRESHOLD: 10, // Queue 10 events before flush
MAX_QUEUE_SIZE: 1000, // Max 1000 events in queue
MAX_BATCH_SIZE: 50, // Max 50 per batch
MAX_RETRIES: 3, // Retry failed sends 3x
RATE_LIMIT_WINDOW: 60000, // 1 minute window
RATE_LIMIT_MAX_EVENTS: 100, // Max 100 events/min
```
### 8.2 User Identification
- **Anonymous User ID:** Generated via TelemetryConfigManager
- **No Personal Data:** No email, name, or identifying information
- **Privacy-First:** User can disable telemetry via environment variable
- **Env Override:** `TELEMETRY_DISABLED=true` disables all tracking
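A minimal sketch of the opt-out gate implied by that flag (the real `TelemetryConfigManager` may layer additional persisted settings on top):
```typescript
// Sketch of the opt-out gate: check the documented flag before queuing events.
function isTelemetryEnabled(): boolean {
  return process.env.TELEMETRY_DISABLED !== 'true';
}
```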
### 8.3 Error Handling & Resilience
```
Circuit Breaker Pattern:
├─ Open: Stop sending for 1 minute after repeated failures
├─ Half-Open: Resume sending with caution
└─ Closed: Normal operation
Dead Letter Queue:
├─ Stores failed events temporarily
├─ Retries on next healthy flush
└─ Max 100 items (overflow discarded)
Rate Limiting:
├─ 100 events per minute per window
├─ Tools and Workflows exempt from limits
└─ Prevents overwhelming the backend
```
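A compact sketch of the circuit-breaker behavior described above; the thresholds and state handling are simplified assumptions rather than the project's actual implementation:
```typescript
// Open after repeated failures, allow a trial send once the cooldown passes.
class CircuitBreakerSketch {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly failureThreshold = 3,
    private readonly cooldownMs = 60_000 // "stop sending for 1 minute"
  ) {}

  canSend(): boolean {
    if (this.failures < this.failureThreshold) return true; // closed
    // Half-open: permit one attempt once the cooldown has elapsed.
    return Date.now() - this.openedAt >= this.cooldownMs;
  }

  recordSuccess(): void {
    this.failures = 0; // back to closed
  }

  recordFailure(): void {
    this.failures += 1;
    if (this.failures >= this.failureThreshold) {
      this.openedAt = Date.now(); // (re)open and restart the cooldown
    }
  }
}
```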
---
## 9. Conclusion
### Current State
The n8n-mcp telemetry system is **production-ready** with:
- 276K+ events tracked
- 6.5K+ unique workflows recorded
- Multi-layer privacy protection
- Robust batching and error handling
### Missing for N8N-Fixer Dataset
To build a high-quality "before/instruction/after" dataset:
1. **New table** for workflow mutations
2. **New event type** for mutation tracking
3. **Full workflow storage** (not sanitized)
4. **Instruction preservation** (capture user prompt/AI suggestion)
5. **Outcome metrics** (success/validation improvement)
### Next Steps
1. Create `workflow_mutations` table in Supabase (Phase 1)
2. Add tracking methods to TelemetryManager (Phase 1)
3. Instrument workflow modification tools (Phase 2)
4. Validate data quality with sample queries (Phase 2)
5. Begin dataset collection (Phase 3)
---
## Appendix: File References
**Key Source Files:**
- `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/telemetry-types.ts` - Type definitions
- `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/telemetry-manager.ts` - Main coordinator
- `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/event-tracker.ts` - Event tracking logic
- `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/batch-processor.ts` - Supabase integration
- `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/database/schema.sql` - Local SQLite schema
**Database Credentials:**
- **Supabase URL:** `ydyufsohxdfpopqbubwk.supabase.co`
- **Anon Key:** (hardcoded in telemetry-types.ts line 105)
- **Tables:** `public.telemetry_events`, `public.telemetry_workflows`
---
*End of Analysis*

View File

@@ -1,447 +0,0 @@
# n8n-MCP Telemetry Analysis - Complete Index
## Navigation Guide for All Analysis Documents
**Analysis Period:** August 10 - November 8, 2025 (90 days)
**Report Date:** November 8, 2025
**Data Quality:** High (506K+ events, 36/90 days with errors)
**Status:** Critical Issues Identified - Action Required
---
## Document Overview
This telemetry analysis consists of 5 comprehensive documents designed for different audiences and use cases.
### Document Map
```
┌─────────────────────────────────────────────────────────────┐
│ TELEMETRY ANALYSIS COMPLETE PACKAGE │
├─────────────────────────────────────────────────────────────┤
│ │
│ 1. EXECUTIVE SUMMARY (this file + next level up) │
│ ↓ Start here for quick overview │
│ └─→ TELEMETRY_EXECUTIVE_SUMMARY.md │
│ • For: Decision makers, leadership │
│ • Length: 5-10 minutes read │
│ • Contains: Key stats, risks, ROI │
│ │
│ 2. MAIN ANALYSIS REPORT │
│ ↓ For comprehensive understanding │
│ └─→ TELEMETRY_ANALYSIS_REPORT.md │
│ • For: Product, engineering teams │
│ • Length: 30-45 minutes read │
│ • Contains: Detailed findings, patterns, trends │
│ │
│ 3. TECHNICAL DEEP-DIVE │
│ ↓ For root cause investigation │
│ └─→ TELEMETRY_TECHNICAL_DEEP_DIVE.md │
│ • For: Engineering team, architects │
│ • Length: 45-60 minutes read │
│ • Contains: Root causes, hypotheses, gaps │
│ │
│ 4. IMPLEMENTATION ROADMAP │
│ ↓ For actionable next steps │
│ └─→ IMPLEMENTATION_ROADMAP.md │
│ • For: Engineering leads, project managers │
│ • Length: 20-30 minutes read │
│ • Contains: Detailed implementation steps │
│ │
│ 5. VISUALIZATION DATA │
│ ↓ For presentations and dashboards │
│ └─→ TELEMETRY_DATA_FOR_VISUALIZATION.md │
│ • For: All audiences (chart data) │
│ • Length: Reference material │
│ • Contains: Charts, graphs, metrics data │
│ │
└─────────────────────────────────────────────────────────────┘
```
---
## Quick Navigation
### By Role
#### Executive Leadership / C-Level
**Time Available:** 5-10 minutes
**Priority:** Understanding business impact
1. Start: TELEMETRY_EXECUTIVE_SUMMARY.md
2. Focus: Risk assessment, ROI, timeline
3. Reference: Key Statistics (below)
---
#### Product Management
**Time Available:** 30 minutes
**Priority:** User impact, feature decisions
1. Start: TELEMETRY_ANALYSIS_REPORT.md (Section 1-3)
2. Then: TELEMETRY_TECHNICAL_DEEP_DIVE.md (Section 1-2)
3. Reference: TELEMETRY_DATA_FOR_VISUALIZATION.md (charts)
---
#### Engineering / DevOps
**Time Available:** 1-2 hours
**Priority:** Root causes, implementation details
1. Start: TELEMETRY_TECHNICAL_DEEP_DIVE.md
2. Then: IMPLEMENTATION_ROADMAP.md
3. Reference: TELEMETRY_ANALYSIS_REPORT.md (for metrics)
---
#### Engineering Leads / Architects
**Time Available:** 2-3 hours
**Priority:** System design, priority decisions
1. Start: TELEMETRY_ANALYSIS_REPORT.md (all sections)
2. Then: TELEMETRY_TECHNICAL_DEEP_DIVE.md (all sections)
3. Then: IMPLEMENTATION_ROADMAP.md
4. Reference: Visualization data for presentations
---
#### Customer Support / Success
**Time Available:** 20 minutes
**Priority:** Common issues, user guidance
1. Start: TELEMETRY_EXECUTIVE_SUMMARY.md (Top 5 Issues section)
2. Then: TELEMETRY_ANALYSIS_REPORT.md (Section 6: Search Queries)
3. Reference: Top error messages list (below)
---
#### Marketing / Communications
**Time Available:** 15 minutes
**Priority:** Messaging, external communications
1. Start: TELEMETRY_EXECUTIVE_SUMMARY.md
2. Focus: Business impact statement
3. Key message: "We're fixing critical issues this week"
---
## Key Statistics Summary
### Error Metrics
| Metric | Value | Status |
|--------|-------|--------|
| Total Errors (90 days) | 8,859 | Baseline |
| Daily Average | 60.68 | Stable |
| Peak Day | 276 (Oct 30) | Outlier |
| ValidationError | 3,080 (34.77%) | Largest |
| TypeError | 2,767 (31.23%) | Second |
### Tool Performance
| Metric | Value | Status |
|--------|-------|--------|
| Critical Tool: get_node_info | 11.72% failure | Action Required |
| Average Success Rate | 98.4% | Good |
| Highest Risk Tools | 5.5-6.4% failure | Monitor |
### Performance
| Metric | Value | Status |
|--------|-------|--------|
| Sequential Updates Latency | 55.2 seconds | Bottleneck |
| Read-After-Write Latency | 96.6 seconds | Bottleneck |
| Search Retry Rate | 17% | High |
### User Engagement
| Metric | Value | Status |
|--------|-------|--------|
| Daily Sessions | 895 avg | Healthy |
| Daily Users | 572 avg | Healthy |
| Sessions per User | 1.52 avg | Good |
---
## Top 5 Critical Issues
### 1. Workflow-Level Validation Failures (39% of errors)
- **File:** TELEMETRY_ANALYSIS_REPORT.md, Section 2.1
- **Detail:** TELEMETRY_TECHNICAL_DEEP_DIVE.md, Section 1.1
- **Fix:** IMPLEMENTATION_ROADMAP.md, Section Phase 1, Issue 1.2
### 2. `get_node_info` Unreliability (11.72% failure)
- **File:** TELEMETRY_ANALYSIS_REPORT.md, Section 3.2
- **Detail:** TELEMETRY_TECHNICAL_DEEP_DIVE.md, Section 4.1
- **Fix:** IMPLEMENTATION_ROADMAP.md, Section Phase 1, Issue 1.1
### 3. Slow Sequential Updates (55+ seconds)
- **File:** TELEMETRY_ANALYSIS_REPORT.md, Section 4.1
- **Detail:** TELEMETRY_TECHNICAL_DEEP_DIVE.md, Section 6.1
- **Fix:** IMPLEMENTATION_ROADMAP.md, Section Phase 1, Issue 1.3
### 4. Search Inefficiency (17% retry rate)
- **File:** TELEMETRY_ANALYSIS_REPORT.md, Section 6.1
- **Detail:** TELEMETRY_TECHNICAL_DEEP_DIVE.md, Section 6.3
- **Fix:** IMPLEMENTATION_ROADMAP.md, Section Phase 2, Issue 2.2
### 5. Type-Related Validation Errors (31.23% of errors)
- **File:** TELEMETRY_ANALYSIS_REPORT.md, Section 1.2
- **Detail:** TELEMETRY_TECHNICAL_DEEP_DIVE.md, Section 2
- **Fix:** IMPLEMENTATION_ROADMAP.md, Section Phase 2, Issue 2.3
---
## Implementation Timeline
### Week 1 (Immediate)
**Expected Impact:** 40-50% error reduction
1. Fix `get_node_info` reliability
- File: IMPLEMENTATION_ROADMAP.md, Phase 1, Issue 1.1
- Effort: 1 day
2. Improve validation error messages
- File: IMPLEMENTATION_ROADMAP.md, Phase 1, Issue 1.2
- Effort: 2 days
3. Add batch workflow update operation
- File: IMPLEMENTATION_ROADMAP.md, Phase 1, Issue 1.3
- Effort: 2-3 days
### Week 2-3 (High Priority)
**Expected Impact:** +30% additional improvement
1. Implement validation caching
- File: IMPLEMENTATION_ROADMAP.md, Phase 2, Issue 2.1
- Effort: 1-2 days
2. Improve search ranking
- File: IMPLEMENTATION_ROADMAP.md, Phase 2, Issue 2.2
- Effort: 2 days
3. Add TypeScript types for top nodes
- File: IMPLEMENTATION_ROADMAP.md, Phase 2, Issue 2.3
- Effort: 3 days
### Week 4 (Optimization)
**Expected Impact:** +10% additional improvement
1. Return updated state in responses
- File: IMPLEMENTATION_ROADMAP.md, Phase 3, Issue 3.1
- Effort: 1-2 days
2. Add workflow diff generation
- File: IMPLEMENTATION_ROADMAP.md, Phase 3, Issue 3.2
- Effort: 1-2 days
---
## Key Findings by Category
### Validation Issues
- Most common error category (96.6% of all errors)
- Workflow-level validation: 39.11% of validation errors
- Generic error messages prevent self-resolution
- See: TELEMETRY_ANALYSIS_REPORT.md, Section 2
### Tool Reliability Issues
- `get_node_info` critical (11.72% failure rate)
- Information retrieval tools less reliable than state management tools
- Validation tools consistently underperform (5.5-6.4% failure)
- See: TELEMETRY_ANALYSIS_REPORT.md, Section 3 & TECHNICAL_DEEP_DIVE.md, Section 4
### Performance Bottlenecks
- Sequential operations extremely slow (55+ seconds)
- Read-after-write pattern inefficient (96.6 seconds)
- Search refinement rate high (17% need multiple searches)
- See: TELEMETRY_ANALYSIS_REPORT.md, Section 4 & TECHNICAL_DEEP_DIVE.md, Section 6
### User Behavior
- Top searches: test (5.8K), webhook (5.1K), http (4.2K)
- Most searches indicate where users struggle
- Session metrics show healthy engagement
- See: TELEMETRY_ANALYSIS_REPORT.md, Section 6
### Temporal Patterns
- Error rate volatile with significant spikes
- October incident period with slow recovery
- Currently stabilizing at 60-65 errors/day baseline
- See: TELEMETRY_ANALYSIS_REPORT.md, Section 9 & TECHNICAL_DEEP_DIVE.md, Section 5
---
## Metrics to Track Post-Implementation
### Primary Success Metrics
1. `get_node_info` failure rate: 11.72% → <1%
2. Validation error clarity: Generic → Specific (95% have guidance)
3. Update latency: 55.2s → <5s
4. Overall error count: 8,859 → <2,000 per quarter
### Secondary Metrics
1. Tool success rates across board: >99%
2. Search retry rate: 17% → <5%
3. Workflow validation time: <2 seconds
4. User satisfaction: +50% improvement
### Dashboard Recommendations
- See: TELEMETRY_DATA_FOR_VISUALIZATION.md, Section 14
- Create live dashboard in Grafana/Datadog
- Update daily; review weekly
---
## SQL Queries Reference
All analysis derived from these core queries:
### Error Analysis
```sql
-- Error type distribution
SELECT error_type, SUM(error_count) as total_occurrences
FROM telemetry_errors_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY error_type ORDER BY total_occurrences DESC;
-- Temporal trends
SELECT date, SUM(error_count) as daily_errors
FROM telemetry_errors_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY date ORDER BY date DESC;
```
### Tool Performance
```sql
-- Tool success rates
SELECT tool_name, SUM(usage_count), SUM(success_count),
ROUND(100.0 * SUM(success_count) / SUM(usage_count), 2) as success_rate
FROM telemetry_tool_usage_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY tool_name
ORDER BY success_rate ASC;
```
### Validation Errors
```sql
-- Validation errors by node type
SELECT node_type, error_type, SUM(error_count) as total
FROM telemetry_validation_errors_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY node_type, error_type
ORDER BY total DESC;
```
Complete query library in: TELEMETRY_ANALYSIS_REPORT.md, Section 12
---
## FAQ
### Q: Which document should I read first?
**A:** TELEMETRY_EXECUTIVE_SUMMARY.md (5 min) to understand the situation
### Q: What's the most critical issue?
**A:** Workflow-level validation failures (39% of errors) with generic error messages that prevent users from self-fixing
### Q: How long will fixes take?
**A:** Week 1: 40-50% improvement; Full implementation: 4-5 weeks
### Q: What's the ROI?
**A:** ~26x return in first year; payback in <2 weeks
### Q: Should we implement all recommendations?
**A:** Phase 1 (Week 1) is mandatory; Phase 2-3 are high-value optimization
### Q: How confident are these findings?
**A:** Very high; based on 506K events across 90 days with consistent patterns
### Q: What should support/success team do?
**A:** Review Section 6 of ANALYSIS_REPORT.md for top user pain points and search patterns
---
## Additional Resources
### For Presentations
- Use TELEMETRY_DATA_FOR_VISUALIZATION.md for all chart/graph data
- Recommend audience: TELEMETRY_EXECUTIVE_SUMMARY.md, Section "Stakeholder Questions & Answers"
### For Team Meetings
- Stand-up briefing: Key Statistics Summary (above)
- Engineering sync: IMPLEMENTATION_ROADMAP.md
- Product review: TELEMETRY_ANALYSIS_REPORT.md, Sections 1-3
### For Documentation
- User-facing docs: TELEMETRY_ANALYSIS_REPORT.md, Section 6 (search queries reveal documentation gaps)
- Error code docs: IMPLEMENTATION_ROADMAP.md, Phase 4
### For Monitoring
- KPI dashboard: TELEMETRY_DATA_FOR_VISUALIZATION.md, Section 14
- Alert thresholds: IMPLEMENTATION_ROADMAP.md, success metrics
---
## Contact & Questions
**Analysis Prepared By:** AI Telemetry Analyst
**Date:** November 8, 2025
**Data Freshness:** Last updated October 31, 2025 (daily updates)
**Review Frequency:** Weekly recommended
For questions about specific findings, refer to:
- Executive level: TELEMETRY_EXECUTIVE_SUMMARY.md
- Technical details: TELEMETRY_TECHNICAL_DEEP_DIVE.md
- Implementation: IMPLEMENTATION_ROADMAP.md
---
## Document Checklist
Use this checklist to ensure you've reviewed appropriate documents:
### Essential Reading (Everyone)
- [ ] TELEMETRY_EXECUTIVE_SUMMARY.md (5-10 min)
- [ ] Top 5 Issues section above (5 min)
### Role-Specific
- [ ] Leadership: TELEMETRY_EXECUTIVE_SUMMARY.md (Risk & ROI sections)
- [ ] Engineering: TELEMETRY_TECHNICAL_DEEP_DIVE.md (all sections)
- [ ] Product: TELEMETRY_ANALYSIS_REPORT.md (Sections 1-3)
- [ ] Project Manager: IMPLEMENTATION_ROADMAP.md (Timeline section)
- [ ] Support: TELEMETRY_ANALYSIS_REPORT.md (Section 6: Search Queries)
### For Implementation
- [ ] IMPLEMENTATION_ROADMAP.md (all sections)
- [ ] TELEMETRY_TECHNICAL_DEEP_DIVE.md (root cause analysis)
### For Presentations
- [ ] TELEMETRY_DATA_FOR_VISUALIZATION.md (all chart data)
- [ ] TELEMETRY_EXECUTIVE_SUMMARY.md (key statistics)
---
## Version History
| Version | Date | Changes |
|---------|------|---------|
| 1.0 | Nov 8, 2025 | Initial comprehensive analysis |
---
## Next Steps
1. **Today:** Review TELEMETRY_EXECUTIVE_SUMMARY.md
2. **Tomorrow:** Schedule team review meeting
3. **This Week:** Estimate Phase 1 implementation effort
4. **Next Week:** Begin Phase 1 development
---
**Status:** Analysis Complete - Ready for Action
All documents are located in:
`/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/`
Files:
- TELEMETRY_ANALYSIS_INDEX.md (this file)
- TELEMETRY_EXECUTIVE_SUMMARY.md
- TELEMETRY_ANALYSIS_REPORT.md
- TELEMETRY_TECHNICAL_DEEP_DIVE.md
- IMPLEMENTATION_ROADMAP.md
- TELEMETRY_DATA_FOR_VISUALIZATION.md

View File

@@ -1,422 +0,0 @@
# Telemetry Analysis Documentation Index
**Comprehensive Analysis of N8N-MCP Telemetry Infrastructure**
**Analysis Date:** November 12, 2025
**Status:** Complete and Ready for Implementation
---
## Quick Start
If you only have 5 minutes:
- Read the summary section below
If you have 30 minutes:
- Read TELEMETRY_N8N_FIXER_DATASET.md (master summary)
If you have 2+ hours:
- Start with TELEMETRY_ANALYSIS.md (main reference)
- Follow with TELEMETRY_MUTATION_SPEC.md (implementation guide)
- Use TELEMETRY_QUICK_REFERENCE.md for queries/patterns
---
## One-Sentence Summary
The n8n-mcp telemetry system successfully tracks 276K+ user interactions on a production Supabase backend but lacks the workflow mutation capture needed to build an n8n-fixer dataset; closing that gap requires one new table plus 3-4 weeks of integration work.
---
## Document Guide
### PRIMARY DOCUMENTS (Created November 12, 2025)
#### 1. TELEMETRY_ANALYSIS.md (23 KB, 720 lines)
**Your main reference for understanding current state**
Contains:
- Complete table schemas (telemetry_events, telemetry_workflows)
- All 12 event types with JSON examples
- Current workflow tracking capabilities
- Data samples from production
- Gap analysis for n8n-fixer requirements
- Proposed schema additions
- Privacy & security analysis
- Data capture pipeline architecture
When to read: You need the complete picture of what exists and what's missing
Read time: 20-30 minutes
---
#### 2. TELEMETRY_MUTATION_SPEC.md (26 KB, 918 lines)
**Your implementation blueprint**
Contains:
- Complete SQL schema for workflow_mutations table with 20 indexes
- TypeScript interfaces and type definitions
- Integration point specifications
- Mutation analyzer service code structure
- Batch processor extensions
- Code examples for tools to instrument
- Validation rules and data quality checks
- Query patterns for dataset analysis
- 4-phase implementation roadmap
When to read: You're ready to start building the mutation tracking system
Read time: 30-40 minutes
---
#### 3. TELEMETRY_QUICK_REFERENCE.md (11 KB, 503 lines)
**Your developer quick lookup guide**
Contains:
- Supabase connection details
- Event type quick reference
- Common SQL query patterns
- Performance optimization tips
- User journey analysis examples
- Platform distribution queries
- File references and code locations
- Helpful constants and values
When to read: You need to query existing data or reference specific details
Read time: 10-15 minutes
---
#### 4. TELEMETRY_N8N_FIXER_DATASET.md (13 KB, 340 lines)
**Your executive summary and master planning document**
Contains:
- Overview of analysis findings
- Documentation map (what to read in what order)
- Current state summary
- Recommended 4-phase implementation path
- Key metrics you'll collect
- Storage requirements and cost estimates
- Risk assessment
- Success criteria for each phase
- Questions to answer before starting
When to read: Planning implementation or presenting to stakeholders
Read time: 15-20 minutes
---
### SUPPORTING DOCUMENTS (Created November 8, 2025)
#### TELEMETRY_ANALYSIS_REPORT.md (26 KB)
- Executive summary with visualizations
- Event distribution statistics
- Usage patterns and trends
- Performance metrics
- User activity analysis
#### TELEMETRY_EXECUTIVE_SUMMARY.md (10 KB)
- High-level overview for executives
- Key statistics and metrics
- Business impact assessment
- Recommendation summary
#### TELEMETRY_TECHNICAL_DEEP_DIVE.md (18 KB)
- Architecture and design patterns
- Component interactions
- Data flow diagrams
- Implementation details
- Performance considerations
#### TELEMETRY_DATA_FOR_VISUALIZATION.md (18 KB)
- Sample datasets for dashboards
- Query results and aggregations
- Visualization recommendations
- Chart and graph specifications
#### TELEMETRY_ANALYSIS_INDEX.md (15 KB)
- Index of all analyses
- Cross-references
- Topic mappings
- Search guide
---
## Recommended Reading Order
### For Implementation Teams
1. TELEMETRY_N8N_FIXER_DATASET.md (15 min) - Understand the plan
2. TELEMETRY_ANALYSIS.md (30 min) - Understand current state
3. TELEMETRY_MUTATION_SPEC.md (40 min) - Get implementation details
4. TELEMETRY_QUICK_REFERENCE.md (10 min) - Reference during coding
**Total Time:** 95 minutes
### For Product Managers
1. TELEMETRY_EXECUTIVE_SUMMARY.md (10 min)
2. TELEMETRY_N8N_FIXER_DATASET.md (15 min)
3. TELEMETRY_ANALYSIS_REPORT.md (20 min)
**Total Time:** 45 minutes
### For Data Analysts
1. TELEMETRY_ANALYSIS.md (30 min)
2. TELEMETRY_QUICK_REFERENCE.md (10 min)
3. TELEMETRY_ANALYSIS_REPORT.md (20 min)
**Total Time:** 60 minutes
### For Architects
1. TELEMETRY_TECHNICAL_DEEP_DIVE.md (20 min)
2. TELEMETRY_MUTATION_SPEC.md (40 min)
3. TELEMETRY_N8N_FIXER_DATASET.md (15 min)
**Total Time:** 75 minutes
---
## Key Findings Summary
### What Exists Today
- **276K+ telemetry events** tracked in Supabase
- **6.5K+ unique workflows** analyzed
- **12 event types** covering tool usage, errors, validation, workflow creation
- **Production-grade infrastructure** with batching, retry logic, rate limiting
- **Privacy-focused design** with sanitization, anonymization, encryption
### Critical Gaps for N8N-Fixer
- No workflow mutation/modification tracking
- No before/after workflow snapshots
- No instruction/transformation capture
- No mutation success metrics
- No validation improvement tracking
### Proposed Solution
- New `workflow_mutations` table (with 20 indexes)
- Extended telemetry system to capture mutations
- Instrumentation of 3-4 key tools
- 4-phase implementation (3-4 weeks)
### Data Volume Estimates
- Per mutation: 25 KB (with compression)
- Monthly: 250 MB - 1.2 GB
- Annual: 3-14 GB
- Cost: $10-200/month (depending on volume)
### Implementation Effort
- Phase 1 (Infrastructure): 40-60 hours
- Phase 2 (Core Integration): 40-60 hours
- Phase 3 (Tool Integration): 20-30 hours
- Phase 4 (Validation): 20-30 hours
- **Total:** 120-180 hours (3-4 weeks)
---
## Critical Data
### Supabase Connection
```
URL: https://ydyufsohxdfpopqbubwk.supabase.co
Database: PostgreSQL
Auth: Anon key (in telemetry-types.ts)
Tables: telemetry_events, telemetry_workflows
```
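For ad-hoc exploration, a read-only query against these tables might look like the following sketch (the anon key is read from the environment here, and the query shape is an assumption):
```typescript
import { createClient } from '@supabase/supabase-js';

// Sketch: read-only lookup of recent validation_details events.
const supabase = createClient(
  'https://ydyufsohxdfpopqbubwk.supabase.co',
  process.env.SUPABASE_ANON_KEY ?? ''
);

async function recentValidationEvents(limit = 50) {
  const { data, error } = await supabase
    .from('telemetry_events')
    .select('user_id, event, properties, created_at')
    .eq('event', 'validation_details')
    .order('created_at', { ascending: false })
    .limit(limit);
  if (error) throw new Error(`Query failed: ${error.message}`);
  return data;
}
```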
### Event Types (by volume)
1. tool_used (40-50%)
2. tool_sequence (20-30%)
3. error_occurred (10-15%)
4. validation_details (5-10%)
5. Others (workflow, session, performance) (5-10%)
### Node Files
- Source types: `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/telemetry-types.ts`
- Main manager: `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/telemetry-manager.ts`
- Event tracker: `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/event-tracker.ts`
- Batch processor: `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/batch-processor.ts`
---
## Implementation Checklist
### Before Starting
- [ ] Read TELEMETRY_N8N_FIXER_DATASET.md
- [ ] Read TELEMETRY_ANALYSIS.md
- [ ] Answer 6 questions (see TELEMETRY_N8N_FIXER_DATASET.md)
- [ ] Get stakeholder approval for 4-phase plan
- [ ] Assign implementation team
### Phase 1: Infrastructure (Weeks 1-2)
- [ ] Create workflow_mutations table in Supabase
- [ ] Add 20+ indexes per specification
- [ ] Define TypeScript types
- [ ] Build mutation validator
- [ ] Write unit tests
### Phase 2: Core Integration (Weeks 2-3)
- [ ] Add trackWorkflowMutation() to TelemetryManager
- [ ] Extend EventTracker with mutation queue
- [ ] Extend BatchProcessor for mutations
- [ ] Write integration tests
- [ ] Code review and merge
### Phase 3: Tool Integration (Week 4)
- [ ] Instrument n8n_autofix_workflow
- [ ] Instrument n8n_update_partial_workflow
- [ ] Instrument validation engine (if applicable)
- [ ] Manual end-to-end testing
- [ ] Code review and merge
### Phase 4: Validation (Week 5)
- [ ] Collect 100+ sample mutations
- [ ] Verify data quality
- [ ] Run analysis queries
- [ ] Assess dataset readiness
- [ ] Begin production collection
---
## Storage & Cost Planning
### Conservative Estimate (10K mutations/month)
- Storage: 250 MB/month
- Cost: $10-20/month
- Dataset: 1K mutations in 3-4 days
### Moderate Estimate (30K mutations/month)
- Storage: 750 MB/month
- Cost: $50-100/month
- Dataset: 10K mutations in 10 days
### High Estimate (50K mutations/month)
- Storage: 1.2 GB/month
- Cost: $100-200/month
- Dataset: 100K mutations in 2 months
**With 90-day retention policy, costs stay at lower end.**
---
## Questions Before Implementation
1. **Data Retention:** Keep mutations for 90 days? 1 year? Indefinite?
2. **Storage Budget:** Monthly budget for telemetry storage?
3. **Workflow Size:** Max workflow size to store? Compression required?
4. **Dataset Timeline:** When do you need first dataset? (1K? 10K? 100K?)
5. **Privacy:** Additional PII to sanitize beyond current approach?
6. **User Consent:** Separate opt-in for mutation tracking vs. general telemetry?
---
## Risk Assessment
### Low Risk
- No breaking changes to existing system
- Fully backward compatible
- Optional feature (can disable if needed)
- No version bump required
### Medium Risk
- Storage growth if >1.2 GB/month
- Performance impact if workflows >10 MB
- Mitigation: Compression + retention policy
### High Risk
- None identified
---
## Success Criteria
When you can answer "yes" to all:
- [ ] 100+ workflow mutations collected
- [ ] Data hash verification passes 100%
- [ ] Sample queries execute <100ms
- [ ] Deduplication working correctly
- [ ] Before/after states properly stored
- [ ] Validation improvements tracked accurately
- [ ] No performance regression in tools
- [ ] Team ready for large-scale collection
---
## Next Steps
### Immediate (This Week)
1. Review this README
2. Read TELEMETRY_N8N_FIXER_DATASET.md
3. Read TELEMETRY_ANALYSIS.md
4. Schedule team review meeting
### Short-term (Next 1-2 Weeks)
1. Answer the 6 questions
2. Get stakeholder approval
3. Assign implementation lead
4. Create Jira tickets for Phase 1
### Medium-term (Weeks 3-6)
1. Execute Phase 1 (Infrastructure)
2. Execute Phase 2 (Core Integration)
3. Execute Phase 3 (Tool Integration)
4. Execute Phase 4 (Validation)
### Long-term (Week 7+)
1. Begin production dataset collection
2. Monitor storage and costs
3. Run analysis queries
4. Iterate based on findings
---
## Contact & Questions
**Analysis Completed By:** Telemetry Data Analyst
**Date:** November 12, 2025
**Status:** Ready for team review and implementation
For questions or clarifications:
1. Review the specific document for your question
2. Check TELEMETRY_QUICK_REFERENCE.md for common lookups
3. Refer to source files in src/telemetry/
---
## Document Statistics
| Document | Size | Lines | Read Time | Purpose |
|----------|------|-------|-----------|---------|
| TELEMETRY_ANALYSIS.md | 23 KB | 720 | 20-30 min | Main reference |
| TELEMETRY_MUTATION_SPEC.md | 26 KB | 918 | 30-40 min | Implementation guide |
| TELEMETRY_QUICK_REFERENCE.md | 11 KB | 503 | 10-15 min | Developer lookup |
| TELEMETRY_N8N_FIXER_DATASET.md | 13 KB | 340 | 15-20 min | Executive summary |
| TELEMETRY_ANALYSIS_REPORT.md | 26 KB | 732 | 20-30 min | Statistics & trends |
| TELEMETRY_EXECUTIVE_SUMMARY.md | 10 KB | 345 | 10-15 min | Executive brief |
| TELEMETRY_TECHNICAL_DEEP_DIVE.md | 18 KB | 654 | 20-25 min | Architecture |
| TELEMETRY_DATA_FOR_VISUALIZATION.md | 18 KB | 468 | 15-20 min | Dashboard data |
| TELEMETRY_ANALYSIS_INDEX.md | 15 KB | 447 | 10-15 min | Topic index |
| **TOTAL** | **160 KB** | **5,237** | **150-180 min** | Full analysis |
---
## Version History
| Date | Version | Changes |
|------|---------|---------|
| Nov 8, 2025 | 1.0 | Initial analysis and reports |
| Nov 12, 2025 | 2.0 | Core documentation + mutation spec + this README |
---
## License & Attribution
These analysis documents are part of the n8n-mcp project.
Conceived by Romuald Członkowski - www.aiadvisors.pl/en
---
**END OF README**
For additional information, start with one of the primary documents above based on your role and available time.

View File

@@ -1,732 +0,0 @@
# n8n-MCP Telemetry Analysis Report
## Error Patterns and Troubleshooting Analysis (90-Day Period)
**Report Date:** November 8, 2025
**Analysis Period:** August 10, 2025 - November 8, 2025
**Data Freshness:** Live (last updated Oct 31, 2025)
---
## Executive Summary
This telemetry analysis examined 506K+ events across the n8n-MCP system to identify critical pain points for AI agents. The findings reveal that while core tool success rates are high (96-100%), specific validation and configuration challenges create friction that impacts developer experience.
### Key Findings
1. **8,859 total errors** across 90 days with significant volatility (28 to 406 errors/day), suggesting systemic issues triggered by specific conditions rather than constant problems
2. **Validation failures dominate error landscape** with 34.77% of all errors being ValidationError, followed by TypeError (31.23%) and generic Error (30.60%)
3. **Specific tools show concerning failure patterns**: `get_node_info` (11.72% failure rate), `get_node_documentation` (4.13%), and `validate_node_operation` (6.42%) struggle with reliability
4. **Most common error: Workflow-level validation** represents 39.11% of validation errors, indicating widespread issues with workflow structure validation
5. **Tool usage patterns reveal critical bottlenecks**: Sequential tool calls like `n8n_update_partial_workflow->n8n_update_partial_workflow` take an average of 55.2 seconds, with 66% flagged as slow transitions
### Immediate Action Items
- Fix `get_node_info` reliability (11.72% error rate vs. 0-4% for similar tools)
- Improve workflow validation error messages to help users understand structure problems
- Optimize sequential update operations that show 55+ second latencies
- Address validation test coverage gaps (38,000+ "Node*" placeholder nodes triggering errors)
---
## 1. Error Analysis
### 1.1 Overall Error Volume and Frequency
**Raw Statistics:**
- **Total error events (90 days):** 8,859
- **Average daily errors:** 60.68
- **Peak error day:** 276 errors (October 30, 2025)
- **Days with errors:** 36 out of 90 (40%)
- **Error-free days:** 54 (60%)
**Trend Analysis:**
- High volatility with swings of -83.72% to +567.86% day-to-day
- October 12 saw a 567.86% spike (28 → 187 errors), suggesting a deployment or system event
- October 10-11 saw 57.64% drop, possibly indicating a hotfix
- Current trajectory: Stabilizing around 130-160 errors/day (last 10 days)
**Distribution Over Time:**
```
Peak Error Days (Top 5):
2025-09-26: 6,222 validation errors
2025-10-04: 3,585 validation errors
2025-10-05: 3,344 validation errors
2025-10-07: 2,858 validation errors
2025-10-06: 2,816 validation errors
Pattern: Late September peak followed by elevated plateau through early October
```
### 1.2 Error Type Breakdown
| Error Type | Count | % of Total | Days Occurred | Severity |
|------------|-------|-----------|---------------|----------|
| ValidationError | 3,080 | 34.77% | 36 | High |
| TypeError | 2,767 | 31.23% | 36 | High |
| Error (generic) | 2,711 | 30.60% | 36 | High |
| SqliteError | 202 | 2.28% | 32 | Medium |
| unknown_error | 89 | 1.00% | 3 | Low |
| MCP_server_timeout | 6 | 0.07% | 1 | Critical |
| MCP_server_init_fail | 3 | 0.03% | 1 | Critical |
**Critical Insight:** 96.6% of errors are validation-related (ValidationError, TypeError, generic Error). This suggests the issue is primarily in configuration validation logic, not core infrastructure.
**Detailed Error Categories:**
**ValidationError (3,080 occurrences - 34.77%)**
- Primary source: Workflow structure validation
- Trigger: Invalid node configurations, missing required fields
- Impact: Users cannot deploy workflows until fixed
- Trend: Consistent daily occurrence (100% days affected)
**TypeError (2,767 occurrences - 31.23%)**
- Pattern: Type mismatches in node properties
- Common scenario: String passed where number expected, or vice versa
- Impact: Workflow validation failures, tool invocation errors
- Indicates: Need for better type enforcement or clearer schema documentation
**Generic Error (2,711 occurrences - 30.60%)**
- Least helpful category; lacks actionable context
- Likely source: Unhandled exceptions in validation pipeline
- Recommendations: Implement error code system with specific error types
- Impact on DX: Users cannot determine root cause
---
## 2. Validation Error Patterns
### 2.1 Validation Errors by Node Type
**Problematic Findings:**
| Node Type | Error Count | Days | % of Validation Errors | Issue |
|-----------|------------|------|----------------------|--------|
| workflow | 21,423 | 36 | 39.11% | **CRITICAL** - 39% of all validation errors at workflow level |
| [KEY] | 656 | 35 | 1.20% | Property key validation failures |
| ______ | 643 | 33 | 1.17% | Placeholder nodes (test data) |
| Webhook | 435 | 35 | 0.79% | Webhook configuration issues |
| HTTP_Request | 212 | 29 | 0.39% | HTTP node validation issues |
**Major Concern: Placeholder Node Names**
The presence of generic placeholder names (Node0-Node19, [KEY], ______, _____) represents 4,700+ errors. These appear to be:
1. Test data that wasn't cleaned up
2. Incomplete workflow definitions from users
3. Validation test cases creating noise in telemetry
**Workflow-Level Validation (21,423 errors - 39.11%)**
This is the single largest error category. Issues include:
- Missing start nodes (triggers)
- Invalid node connections
- Circular dependencies
- Missing required node properties
- Type mismatches in connections
**Critical Action:** Improve workflow validation error messages to provide specific guidance on what structure requirement failed.
### 2.2 Node-Specific Validation Issues
**High-Risk Node Types:**
- **Webhook**: 435 errors - likely authentication/path configuration issues
- **HTTP_Request**: 212 errors - likely header/body configuration problems
- **Database nodes**: Not heavily represented, suggesting better validation
- **AI/Code nodes**: Minimal representation in error data
**Pattern Observation:** Trigger nodes (Webhook, Webhook_Trigger) appear in validation errors, suggesting connection complexity issues.
---
## 3. Tool Usage and Success Rates
### 3.1 Overall Tool Performance
**Top 25 Tools by Usage (90 days):**
| Tool | Invocations | Success Rate | Failure Rate | Avg Duration (ms) | Status |
|------|------------|--------------|--------------|-----------------|--------|
| n8n_update_partial_workflow | 103,732 | 99.06% | 0.94% | 417.77 | Reliable |
| search_nodes | 63,366 | 99.89% | 0.11% | 28.01 | Excellent |
| get_node_essentials | 49,625 | 96.19% | 3.81% | 4.79 | Good |
| n8n_create_workflow | 49,578 | 96.35% | 3.65% | 359.08 | Good |
| n8n_get_workflow | 37,703 | 99.94% | 0.06% | 291.99 | Excellent |
| n8n_validate_workflow | 29,341 | 99.70% | 0.30% | 269.33 | Excellent |
| n8n_update_full_workflow | 19,429 | 99.27% | 0.73% | 415.39 | Reliable |
| n8n_get_execution | 19,409 | 99.90% | 0.10% | 652.97 | Excellent |
| n8n_list_executions | 17,111 | 100.00% | 0.00% | 375.46 | Perfect |
| get_node_documentation | 11,403 | 95.87% | 4.13% | 2.45 | Needs Work |
| get_node_info | 10,304 | 88.28% | 11.72% | 3.85 | **CRITICAL** |
| validate_workflow | 9,738 | 94.50% | 5.50% | 33.63 | Concerning |
| validate_node_operation | 5,654 | 93.58% | 6.42% | 5.05 | Concerning |
### 3.2 Critical Tool Issues
**1. `get_node_info` - 11.72% Failure Rate (CRITICAL)**
- **Failures:** 1,208 out of 10,304 invocations
- **Impact:** Users cannot retrieve node specifications when building workflows
- **Likely Cause:**
- Database schema mismatches
- Missing node documentation
- Encoding/parsing errors
- **Recommendation:** Immediately review error logs for this tool; implement fallback to cache or defaults
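One hedged sketch of that fallback pattern, with illustrative names only (the real tool's lookup and caching layers will differ):
```typescript
// A parse or lookup failure degrades to cached data instead of surfacing an
// error to the agent; a minimal default is the last resort.
const nodeInfoCache = new Map<string, unknown>();

async function getNodeInfoWithFallback(
  nodeType: string,
  fetchNodeInfo: (type: string) => Promise<unknown>
): Promise<unknown> {
  try {
    const info = await fetchNodeInfo(nodeType);
    nodeInfoCache.set(nodeType, info); // refresh cache on success
    return info;
  } catch {
    const cached = nodeInfoCache.get(nodeType);
    if (cached !== undefined) return cached; // stale but usable
    return { nodeType, properties: [], warning: 'node info unavailable' };
  }
}
```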
**2. `validate_workflow` - 5.50% Failure Rate**
- **Failures:** 536 out of 9,738 invocations
- **Impact:** Users cannot validate workflows before deployment
- **Correlation:** Likely related to workflow-level validation errors (39.11% of validation errors)
- **Root Cause:** Validation logic may not handle all edge cases
**3. `get_node_documentation` - 4.13% Failure Rate**
- **Failures:** 471 out of 11,403 invocations
- **Impact:** Users cannot access documentation when learning nodes
- **Pattern:** Documentation retrieval failures compound with `get_node_info` issues
**4. `validate_node_operation` - 6.42% Failure Rate**
- **Failures:** 363 out of 5,654 invocations
- **Impact:** Configuration validation provides incorrect feedback
- **Concern:** Could lead to false positives (rejecting valid configs) or false negatives (accepting invalid ones)
### 3.3 Reliable Tools (Baseline for Improvement)
These tools show <1% failure rates and should be used as templates:
- `search_nodes`: 99.89% (0.11% failure)
- `n8n_get_workflow`: 99.94% (0.06% failure)
- `n8n_get_execution`: 99.90% (0.10% failure)
- `n8n_list_executions`: 100.00% (perfect)
**Common Pattern:** Read-only and list operations are highly reliable, while validation operations are problematic.
---
## 4. Tool Usage Patterns and Bottlenecks
### 4.1 Sequential Tool Sequences (Most Common)
The telemetry data shows AI agents follow predictable workflows. Analysis of 152K+ hourly tool sequence records reveals critical bottleneck patterns:
| Sequence | Occurrences | Avg Duration | Slow Transitions |
|----------|------------|--------------|-----------------|
| update_partial → update_partial | 96,003 | 55.2s | 66% |
| search_nodes → search_nodes | 68,056 | 11.2s | 17% |
| get_node_essentials → get_node_essentials | 51,854 | 10.6s | 17% |
| create_workflow → create_workflow | 41,204 | 54.9s | 80% |
| search_nodes → get_node_essentials | 28,125 | 19.3s | 34% |
| get_workflow → update_partial | 27,113 | 53.3s | 84% |
| update_partial → validate_workflow | 25,203 | 20.1s | 41% |
| list_executions → get_execution | 23,101 | 13.9s | 22% |
| validate_workflow → update_partial | 23,013 | 60.6s | 74% |
| update_partial → get_workflow | 19,876 | 96.6s | 63% |
**Critical Issues Identified:**
1. **Update Loops**: `update_partial → update_partial` has 96,003 occurrences
- Average 55.2s between calls
- 66% marked as "slow transitions"
- Suggests: Users iteratively updating workflows, with network/processing lag
2. **Massive Duration on `update_partial → get_workflow`**: 96.6 seconds average
- Users check workflow state after update
- High latency suggests possible API bottleneck or large workflow processing
3. **Sequential Search Operations**: 68,056 `search_nodes → search_nodes` calls
- Users refining search through multiple queries
- Could indicate search results are not meeting needs on first attempt
4. **Read-After-Write Patterns**: Many sequences involve getting/validating after updates
- Suggests transactions aren't atomic; users manually verify state
- Could be optimized by returning updated state in response
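A minimal sketch of the optimization suggested in point 4: returning the authoritative post-update state in the response so the follow-up `get_workflow` call becomes unnecessary (the types and helper names are placeholders, not the project's real API):
```typescript
interface Workflow {
  id: string;
  nodes: unknown[];
  connections: Record<string, unknown>;
}

interface UpdateResult {
  success: boolean;
  workflow: Workflow;        // authoritative post-update state
  appliedOperations: number; // lets the agent confirm how much actually changed
}

function updatePartialWorkflow(
  current: Workflow,
  operations: unknown[],
  applyOperations: (wf: Workflow, ops: unknown[]) => Workflow
): UpdateResult {
  const updated = applyOperations(current, operations);
  return { success: true, workflow: updated, appliedOperations: operations.length };
}
```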
### 4.2 Implications for AI Agents
AI agents exhibit these problematic patterns:
- **Excessive retries**: Same operation repeated multiple times
- **State uncertainty**: Need to re-fetch state after modifications
- **Search inefficiency**: Multiple queries to find right tools/nodes
- **Long wait times**: Up to 96 seconds between sequential operations
**This creates:**
- Slower agent response times to users
- Higher API load and costs
- Poor user experience (agents appear "stuck")
- Wasted computational resources
---
## 5. Session and User Activity Analysis
### 5.1 Engagement Metrics
| Metric | Value | Interpretation |
|--------|-------|-----------------|
| Avg Sessions/Day | 895 | Healthy usage |
| Avg Users/Day | 572 | Growing user base |
| Avg Sessions/User | 1.52 | Users typically engage once per day |
| Peak Sessions Day | 1,821 (Oct 22) | Single major engagement spike |
**Notable Date:** October 22, 2025 shows 2.94 sessions per user (vs. typical 1.4-1.6)
- Could indicate: Feature launch, bug fix, or major update
- Correlates with error spikes in early October
### 5.2 Session Quality Patterns
- Consistent 600-1,200 sessions daily
- User base stable at 470-620 users per day
- Some days show <5% of normal activity (Oct 11: 30 sessions)
- Weekend vs. weekday patterns not visible in daily aggregates
---
## 6. Search Query Analysis (User Intent)
### 6.1 Most Searched Topics
| Query | Total Searches | Days Searched | User Need |
|-------|----------------|---------------|-----------|
| test | 5,852 | 22 | Testing workflows |
| webhook | 5,087 | 25 | Webhook triggers/integration |
| http | 4,241 | 22 | HTTP requests |
| database | 4,030 | 21 | Database operations |
| api | 2,074 | 21 | API integrations |
| http request | 1,036 | 22 | HTTP node details |
| google sheets | 643 | 22 | Google integration |
| code javascript | 616 | 22 | Code execution |
| openai | 538 | 22 | AI integrations |
**Key Insights:**
1. **Top 4 searches (19,210 searches, 40% of traffic)**:
- Testing (5,852)
- Webhooks (5,087)
- HTTP (4,241)
- Databases (4,030)
2. **Use Case Patterns**:
- **Integration-heavy**: Webhooks, API, HTTP, Google Sheets (15,000+ searches)
- **Logic/Execution**: Code, testing (6,500+ searches)
- **AI Integration**: OpenAI mentioned 538 times (trending interest)
3. **Learning Curve Indicators**:
- "http request" vs. "http" suggests users searching for specific node
- "schedule cron" appears 270 times (scheduling is confusing)
- "manual trigger" appears 300 times (trigger types unclear)
**Implication:** Users struggle most with:
1. HTTP request configuration (1,300+ searches for HTTP-related topics)
2. Scheduling/triggers (800+ searches for trigger types)
3. Understanding testing practices (5,852 searches)
---
## 7. Workflow Quality and Validation
### 7.1 Workflow Validation Grades
| Grade | Count | Percentage | Quality Score |
|-------|-------|-----------|----------------|
| A | 5,156 | 100% | 100.0 |
**Critical Issue:** Only Grade A workflows in database, despite 39% validation error rate
**Explanation:**
- The `telemetry_workflows` table captures only successfully ingested workflows
- Error events are tracked separately in `telemetry_errors_daily`
- Failed workflows never make it to the workflows table
- This creates a survivorship bias in quality metrics
**Real Story:**
- 7,869 workflows attempted
- 5,156 successfully validated (65.5% success rate implied)
- 2,713 workflows failed validation (34.5% failure rate implied)
---
## 8. Top 5 Issues Impacting AI Agent Success
Ranked by severity and impact:
### Issue 1: Workflow-Level Validation Failures (39.11% of validation errors)
**Problem:** 21,423 validation errors related to workflow structure validation
**Root Causes:**
- Invalid node connections
- Missing trigger nodes
- Circular dependencies
- Type mismatches in connections
- Incomplete node configurations
**AI Agent Impact:**
- Agents cannot deploy workflows
- Error messages too generic ("workflow validation failed")
- No guidance on what structure requirement failed
- Forces agents to retry with different structures
**Quick Win:** Enhance workflow validation error messages to specify which structural requirement failed
**Implementation Effort:** Medium (2-3 days)
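A hedged sketch of what structured validation errors could look like; the error codes and fields below are illustrative assumptions rather than the existing validator output:

```typescript
// Illustrative error-code system for workflow structure validation.
type WorkflowValidationCode =
  | 'MISSING_TRIGGER_NODE'
  | 'INVALID_CONNECTION'
  | 'CONNECTION_TYPE_MISMATCH'
  | 'CIRCULAR_DEPENDENCY'
  | 'REQUIRED_PROPERTY_MISSING';

interface WorkflowValidationError {
  code: WorkflowValidationCode;
  message: string;        // agent-readable, e.g. "Missing start trigger node"
  nodeName?: string;      // offending node, when applicable
  propertyPath?: string;  // e.g. "parameters.url"
  fixHint?: string;       // actionable next step for the agent
}

// Example payload an agent could act on directly (hypothetical values):
const example: WorkflowValidationError = {
  code: 'REQUIRED_PROPERTY_MISSING',
  message: 'HTTP Request node "Fetch data" is missing required property "url"',
  nodeName: 'Fetch data',
  propertyPath: 'parameters.url',
  fixHint: 'Set parameters.url to the endpoint you want to call',
};
```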
---
### Issue 2: `get_node_info` Unreliability (11.72% failure rate)
**Problem:** 1,208 failures out of 10,304 invocations
**Root Causes:**
- Likely missing node documentation or schema
- Encoding issues with complex node definitions
- Database connectivity problems during specific queries
**AI Agent Impact:**
- Agents cannot retrieve node specifications when building
- Fall back to guessing or using incomplete essentials
- Creates cascading validation errors
- Slows down workflow creation
**Quick Win:** Add retry logic with exponential backoff; implement fallback to cache
**Implementation Effort:** Low (1 day)
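A minimal sketch of the proposed retry-with-backoff plus cache fallback, assuming a hypothetical `fetchNodeInfo` data-access function:

```typescript
// Last known-good node info, keyed by node type (illustrative cache).
const nodeInfoCache = new Map<string, unknown>();

async function getNodeInfoResilient(
  nodeType: string,
  fetchNodeInfo: (type: string) => Promise<unknown>,
  maxRetries = 3
): Promise<unknown> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const info = await fetchNodeInfo(nodeType);
      nodeInfoCache.set(nodeType, info); // refresh cache on every successful fetch
      return info;
    } catch (error) {
      lastError = error;
      if (attempt < maxRetries) {
        // Exponential backoff: 250ms, 500ms, 1000ms, ...
        await new Promise((resolve) => setTimeout(resolve, 250 * 2 ** attempt));
      }
    }
  }
  // Fall back to the last known-good copy before surfacing the error
  if (nodeInfoCache.has(nodeType)) {
    return nodeInfoCache.get(nodeType);
  }
  throw lastError;
}
```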
---
### Issue 3: Slow Sequential Update Operations (96,003 occurrences, avg 55.2s)
**Problem:** `update_partial_workflow → update_partial_workflow` takes avg 55.2 seconds with 66% slow transitions
**Root Causes:**
- Network latency between operations
- Large workflow serialization
- Possible blocking on previous operations
- No batch update capability
**AI Agent Impact:**
- Agents wait 55+ seconds between sequential modifications
- Workflow construction takes minutes instead of seconds
- Poor perceived performance
- Users abandon incomplete workflows
**Quick Win:** Implement batch workflow update operation
**Implementation Effort:** High (5-7 days)
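One possible shape for a batch update request, applying several diff operations in a single call instead of one `update_partial` call per change; the operation variants below are assumptions for illustration:

```typescript
// Hypothetical payload for a batch workflow update tool.
interface BatchWorkflowUpdateRequest {
  workflowId: string;
  operations: Array<
    | { type: 'addNode'; node: { name: string; type: string; parameters: Record<string, unknown> } }
    | { type: 'updateNode'; nodeName: string; changes: Record<string, unknown> }
    | { type: 'removeNode'; nodeName: string }
    | { type: 'setConnection'; from: string; to: string }
  >;
  validateAfterApply?: boolean; // run workflow validation once at the end, not per operation
}
```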
---
### Issue 4: Search Result Relevancy Issues (68,056 `search_nodes → search_nodes` calls)
**Problem:** Users perform multiple search queries in sequence (17% slow transitions)
**Root Causes:**
- Initial search results don't match user intent
- Search ranking algorithm suboptimal
- Users unsure of node names
- Broad searches returning too many results
**AI Agent Impact:**
- Agents make multiple search attempts to find right node
- Increases API calls and latency
- Uncertainty in node selection
- Compounds with slow subsequent operations
**Quick Win:** Analyze top 50 repeated search sequences; improve ranking for high-volume queries
**Implementation Effort:** Medium (3 days)
---
### Issue 5: `validate_node_operation` Inaccuracy (6.42% failure rate)
**Problem:** 363 failures out of 5,654 invocations; validation provides unreliable feedback
**Root Causes:**
- Validation logic doesn't handle all node operation combinations
- Missing edge case handling
- Validator version mismatches
- Property dependency logic incomplete
**AI Agent Impact:**
- Agents may trust invalid configurations (false positives)
- Or reject valid ones (false negatives)
- Either way: Unreliable feedback breaks agent judgment
- Forces manual verification
**Quick Win:** Add telemetry to capture validation false positive/negative cases
**Implementation Effort:** Medium (4 days)
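A sketch of what such a telemetry event could capture; the event name and fields are illustrative assumptions:

```typescript
// Hypothetical event for recording suspected false positives / false negatives
// from validate_node_operation, compared against the actual runtime outcome.
interface ValidationOutcomeEvent {
  event: 'validation_outcome_mismatch';
  properties: {
    nodeType: string;
    operation: string;
    validatorVerdict: 'valid' | 'invalid';
    runtimeOutcome: 'executed_ok' | 'failed_at_runtime';
    mismatch: 'false_positive' | 'false_negative';
    validatorVersion?: string;
  };
}
```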
---
## 9. Temporal and Anomaly Patterns
### 9.1 Error Spike Events
**Major Spike #1: October 12, 2025**
- Error increase: 567.86% (28 → 187 errors)
- Context: validation errors jumped from the post-incident low (28/day) up to the new ~180/day baseline
- Likely event: System restart, deployment, or database issue
**Major Spike #2: September 26, 2025**
- Daily validation errors: 6,222 (highest single day)
- Represents: 70% of September error volume
- Context: Possible large test batch or migration
**Major Spike #3: Early October (Oct 3-10)**
- Sustained elevation: 3,344-2,038 errors daily
- Duration: 8 days of high error rates
- Recovery: October 11 drops to 28 errors (83.72% decrease)
- Suggests: Incident and mitigation
### 9.2 Recent Trend (Last 10 Days)
- Stabilized at 130-278 errors/day
- More predictable pattern
- Suggests: System stabilization post-October incident
- Current error rate: ~60 errors/day (normal baseline)
---
## 10. Actionable Recommendations
### Priority 1 (Immediate - Week 1)
1. **Fix `get_node_info` Reliability**
   - Impact: 1,200+ tool failures directly blocking agents
- Action: Review error logs; add retry logic; implement cache fallback
- Expected benefit: Reduce tool failure rate from 11.72% to <1%
2. **Improve Workflow Validation Error Messages**
- Impact: 39% of validation errors lack clarity
- Action: Create specific error codes for structural violations
- Expected benefit: Reduce user frustration; improve agent success rate
- Example: Instead of "validation failed", return "Missing start trigger node"
3. **Add Batch Workflow Update Operation**
- Impact: 96,003 sequential updates at 55.2s each
- Action: Create `n8n_batch_update_workflow` tool
- Expected benefit: 80-90% reduction in workflow update time
### Priority 2 (High - Week 2-3)
4. **Implement Validation Caching**
- Impact: Reduce repeated validation of identical configs
- Action: Cache validation results with invalidation on node updates
- Expected benefit: 40-50% reduction in `validate_workflow` calls
5. **Improve Node Search Ranking**
- Impact: 68,056 sequential search calls
- Action: Analyze top repeated sequences; adjust ranking algorithm
- Expected benefit: Fewer searches needed; faster node discovery
6. **Add TypeScript Types for Common Nodes**
- Impact: Type mismatches cause 31.23% of errors
- Action: Generate strict TypeScript definitions for top 50 nodes
- Expected benefit: AI agents make fewer type-related mistakes
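An illustrative, hand-written type stub for one common node (HTTP Request). In practice these types should be generated from the actual node schemas; the property names below are assumptions based on typical n8n parameters:

```typescript
// Illustrative shape only; not generated from the real node schema.
interface HttpRequestNodeParameters {
  method: 'GET' | 'POST' | 'PUT' | 'PATCH' | 'DELETE';
  url: string;                       // must be a string, not a number or object
  authentication?: 'none' | 'genericCredentialType' | 'predefinedCredentialType';
  sendHeaders?: boolean;
  headerParameters?: { parameters: Array<{ name: string; value: string }> };
  sendBody?: boolean;
  jsonBody?: string;                 // stringified JSON when sendBody is true
}
```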
### Priority 3 (Medium - Week 4)
7. **Implement Return-Updated-State Pattern**
- Impact: Users fetch state after every update (19,876 `update → get_workflow` calls)
- Action: Update tools to return full updated state
- Expected benefit: Eliminate unnecessary API calls; reduce round-trips
8. **Add Workflow Diff Generation**
- Impact: Help users understand what changed after updates
- Action: Generate human-readable diffs of workflow changes
- Expected benefit: Better visibility; easier debugging
9. **Create Validation Test Suite**
- Impact: Generic placeholder nodes (Node0-19) creating noise
- Action: Clean up test data; implement proper test isolation
- Expected benefit: Clearer signal in telemetry; 600+ error reduction
### Priority 4 (Documentation - Ongoing)
10. **Create Error Code Documentation**
- Document each error type with resolution steps
- Examples of what causes ValidationError, TypeError, etc.
- Quick reference for agents and developers
11. **Add Configuration Examples for Top 20 Nodes**
- HTTP Request (1,300+ searches)
- Webhook (5,087 searches)
- Database nodes (4,030 searches)
- With working examples and common pitfalls
12. **Create Trigger Configuration Guide**
- Explain scheduling (270+ "schedule cron" searches)
- Manual triggers (300 searches)
- Webhook triggers (5,087 searches)
- Clear comparison of use cases
---
## 11. Monitoring Recommendations
### Key Metrics to Track
1. **Tool Failure Rates** (daily):
- Alert if `get_node_info` > 5%
- Alert if `validate_workflow` > 2%
- Alert if `validate_node_operation` > 3%
2. **Workflow Validation Success Rate**:
- Target: >95% of workflows pass validation first attempt
- Current: Estimated 65% (5,156 of 7,869)
3. **Sequential Operation Latency**:
- Track p50/p95/p99 for update operations
- Target: <5s for sequential updates
- Current: 55.2s average (needs optimization)
4. **Error Rate Volatility**:
- Daily error count should stay within 100-200
- Alert if day-over-day change >30%
5. **Search Query Success**:
- Track how many repeated searches for same term
- Target: <2 searches needed to find node
- Current: 17-34% slow transitions
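A sketch of how these thresholds could be evaluated against the daily tool-usage aggregates; only the field names mirror `telemetry_tool_usage_daily`, the wiring itself is an assumption:

```typescript
// Alert thresholds from the list above, expressed as failure-rate fractions.
const FAILURE_RATE_ALERTS: Record<string, number> = {
  get_node_info: 0.05,
  validate_workflow: 0.02,
  validate_node_operation: 0.03,
};

interface DailyToolUsage {
  tool_name: string;
  usage_count: number;
  failure_count: number;
}

function collectAlerts(rows: DailyToolUsage[]): string[] {
  return rows
    .filter((r) => FAILURE_RATE_ALERTS[r.tool_name] !== undefined && r.usage_count > 0)
    .filter((r) => r.failure_count / r.usage_count > FAILURE_RATE_ALERTS[r.tool_name])
    .map(
      (r) =>
        `${r.tool_name} failure rate ${((100 * r.failure_count) / r.usage_count).toFixed(2)}% exceeds threshold`
    );
}
```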
### Dashboards to Create
1. **Daily Error Dashboard**
- Error counts by type (Validation, Type, Generic)
- Error trends over 7/30/90 days
- Top error-triggering operations
2. **Tool Health Dashboard**
- Failure rates for all tools
- Success rate trends
- Duration trends for slow operations
3. **Workflow Quality Dashboard**
- Validation success rates
- Common failure patterns
- Node type error distributions
4. **User Experience Dashboard**
- Session counts and user trends
- Search patterns and result relevancy
- Average workflow creation time
---
## 12. SQL Queries Used (For Reproducibility)
### Query 1: Error Overview
```sql
SELECT
COUNT(*) as total_error_events,
COUNT(DISTINCT date) as days_with_errors,
ROUND(AVG(error_count), 2) as avg_errors_per_day,
MAX(error_count) as peak_errors_in_day
FROM telemetry_errors_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days';
```
### Query 2: Error Type Distribution
```sql
SELECT
error_type,
SUM(error_count) as total_occurrences,
COUNT(DISTINCT date) as days_occurred,
ROUND(SUM(error_count)::numeric / (SELECT SUM(error_count) FROM telemetry_errors_daily WHERE date >= CURRENT_DATE - INTERVAL '90 days') * 100, 2) as percentage_of_all_errors
FROM telemetry_errors_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY error_type
ORDER BY total_occurrences DESC;
```
### Query 3: Tool Success Rates
```sql
SELECT
tool_name,
SUM(usage_count) as total_invocations,
SUM(success_count) as successful_invocations,
SUM(failure_count) as failed_invocations,
ROUND(100.0 * SUM(success_count) / SUM(usage_count), 2) as success_rate_percent,
ROUND(AVG(avg_duration_ms)::numeric, 2) as avg_duration_ms,
COUNT(DISTINCT date) as days_active
FROM telemetry_tool_usage_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY tool_name
ORDER BY total_invocations DESC;
```
### Query 4: Validation Errors by Node Type
```sql
SELECT
node_type,
error_type,
SUM(error_count) as total_occurrences,
ROUND(SUM(error_count)::numeric / SUM(SUM(error_count)) OVER () * 100, 2) as percentage_of_validation_errors
FROM telemetry_validation_errors_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY node_type, error_type
ORDER BY total_occurrences DESC;
```
### Query 5: Tool Sequences
```sql
SELECT
sequence_pattern,
SUM(occurrence_count) as total_occurrences,
ROUND(AVG(avg_time_delta_ms)::numeric, 2) as avg_duration_ms,
SUM(slow_transition_count) as slow_transitions
FROM telemetry_tool_sequences_hourly
WHERE hour >= NOW() - INTERVAL '90 days'
GROUP BY sequence_pattern
ORDER BY total_occurrences DESC;
```
### Query 6: Session Metrics
```sql
SELECT
date,
total_sessions,
unique_users,
ROUND(total_sessions::numeric / unique_users, 2) as avg_sessions_per_user
FROM telemetry_session_metrics_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
ORDER BY date DESC;
```
### Query 7: Search Queries
```sql
SELECT
query_text,
SUM(search_count) as total_searches,
COUNT(DISTINCT date) as days_searched
FROM telemetry_search_queries_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY query_text
ORDER BY total_searches DESC;
```
---
## Conclusion
The n8n-MCP telemetry analysis reveals that while core infrastructure is robust (most tools >99% reliability), there are five critical issues preventing optimal AI agent success:
1. **Workflow validation feedback** (39% of errors) - lack of actionable error messages
2. **Tool reliability** (11.72% failure rate for `get_node_info`) - critical information retrieval failures
3. **Performance bottlenecks** (55+ second sequential updates) - slow workflow construction
4. **Search inefficiency** (multiple searches needed) - poor discoverability
5. **Validation accuracy** (6.42% failure rate) - unreliable configuration feedback
Implementing the Priority 1 recommendations would address 75% of user-facing issues and dramatically improve AI agent performance. The remaining improvements would optimize performance and user experience further.
All recommendations include implementation effort estimates and expected benefits to help with prioritization.
---
**Report Prepared By:** AI Telemetry Analyst
**Data Source:** n8n-MCP Supabase Telemetry Database
**Next Review:** November 15, 2025 (weekly cadence recommended)

View File

@@ -1,468 +0,0 @@
# n8n-MCP Telemetry Data - Visualization Reference
## Charts, Tables, and Graphs for Presentations
---
## 1. Error Distribution Chart Data
### Error Types Pie Chart
```
ValidationError 3,080 (34.77%) ← Largest slice
TypeError 2,767 (31.23%)
Generic Error 2,711 (30.60%)
SqliteError 202 (2.28%)
Unknown/Other 99 (1.12%)
```
**Chart Type:** Pie Chart or Donut Chart
**Key Message:** 96.6% of errors are validation-related
### Error Volume Line Chart (90 days)
```
Date Range: Aug 10 - Nov 8, 2025
Baseline: 60-65 errors/day (normal)
Peak: Oct 30 (276 errors, 4.5x baseline)
Current: ~130-160 errors/day (stabilizing)
Notable Events:
- Oct 12: 567% spike (incident event)
- Oct 3-10: 8-day plateau (incident period)
- Oct 11: 83% drop (mitigation)
```
**Chart Type:** Line Graph
**Scale:** 0-300 errors/day
**Trend:** Volatile but stabilizing
---
## 2. Tool Success Rates Bar Chart
### High-Risk Tools (Ranked by Failure Rate)
```
Tool Name | Success Rate | Failure Rate | Invocations
------------------------------|-------------|--------------|-------------
get_node_info                 | 88.28%      | 11.72%       | 10,304
validate_node_operation       | 93.58%      | 6.42%        | 5,654
validate_workflow             | 94.50%      | 5.50%        | 9,738
get_node_documentation        | 95.87%      | 4.13%        | 11,403
get_node_essentials           | 96.19%      | 3.81%        | 49,625
n8n_create_workflow           | 96.35%      | 3.65%        | 49,578
n8n_update_partial_workflow   | 99.06%      | 0.94%        | 103,732
```
**Chart Type:** Horizontal Bar Chart
**Color Coding:** Red (<95%), Yellow (95-99%), Green (>99%)
**Target Line:** 99% success rate
---
## 3. Tool Usage Volume Bubble Chart
### Tool Invocation Volume (90 days)
```
X-axis: Total Invocations (log scale)
Y-axis: Success Rate (%)
Bubble Size: Error Count
Tool Clusters:
- High Volume, High Success (ideal): search_nodes (63K), list_executions (17K)
- High Volume, Medium Success (risky): n8n_create_workflow (50K), get_node_essentials (50K)
- Low Volume, Low Success (critical): get_node_info (10K), validate_node_operation (6K)
```
**Chart Type:** Bubble/Scatter Chart
**Focus:** Tools in lower-right quadrant are problematic
---
## 4. Sequential Operation Performance
### Tool Sequence Duration Distribution
```
Sequence Pattern | Count | Avg Duration (s) | Slow %
-----------------------------------------|--------|------------------|-------
update → update | 96,003 | 55.2 | 66%
search → search | 68,056 | 11.2 | 17%
essentials → essentials | 51,854 | 10.6 | 17%
create → create | 41,204 | 54.9 | 80%
search → essentials | 28,125 | 19.3 | 34%
get_workflow → update_partial | 27,113 | 53.3 | 84%
update → validate | 25,203 | 20.1 | 41%
list_executions → get_execution | 23,101 | 13.9 | 22%
validate → update | 23,013 | 60.6 | 74%
update → get_workflow (read-after-write) | 19,876 | 96.6 | 63%
```
**Chart Type:** Horizontal Bar Chart
**Sort By:** Occurrences (descending)
**Highlight:** Operations with >50% slow transitions
---
## 5. Search Query Analysis
### Top 10 Search Queries
```
Query | Count | Days Searched | User Need
----------------|-------|---------------|------------------
test | 5,852 | 22 | Testing workflows
webhook | 5,087 | 25 | Trigger/integration
http | 4,241 | 22 | HTTP requests
database | 4,030 | 21 | Database operations
api | 2,074 | 21 | API integration
http request | 1,036 | 22 | Specific node
google sheets | 643 | 22 | Google integration
code javascript | 616 | 22 | Code execution
openai | 538 | 22 | AI integration
telegram | 528 | 22 | Chat integration
```
**Chart Type:** Horizontal Bar Chart
**Grouping:** Integration-heavy (15K), Logic/Execution (6.5K), AI (1K)
---
## 6. Validation Errors by Node Type
### Top 15 Node Types by Error Count
```
Node Type | Errors | % of Total | Status
-------------------------|---------|------------|--------
workflow (structure) | 21,423 | 39.11% | CRITICAL
[test placeholders] | 4,700 | 8.57% | Should exclude
Webhook | 435 | 0.79% | Needs docs
HTTP_Request | 212 | 0.39% | Needs docs
[Generic node names] | 3,500 | 6.38% | Should exclude
Schedule/Trigger nodes | 700 | 1.28% | Needs docs
Database nodes | 450 | 0.82% | Generally OK
Code/JS nodes | 280 | 0.51% | Generally OK
AI/OpenAI nodes | 150 | 0.27% | Generally OK
Other | 900 | 1.64% | Various
```
**Chart Type:** Horizontal Bar Chart
**Insight:** 39% are workflow-level; 15% are test data noise
---
## 7. Session and User Metrics Timeline
### Daily Sessions and Users (30-day rolling average)
```
Date Range: Oct 1-31, 2025
Metrics:
- Avg Sessions/Day: 895
- Avg Users/Day: 572
- Avg Sessions/User: 1.52
Weekly Trend:
Week 1 (Oct 1-7): 900 sessions/day, 550 users
Week 2 (Oct 8-14): 880 sessions/day, 580 users
Week 3 (Oct 15-21): 920 sessions/day, 600 users
Week 4 (Oct 22-28): 1,100 sessions/day, 620 users (spike)
Week 5 (Oct 29-31): 880 sessions/day, 575 users
```
**Chart Type:** Dual-axis line chart
- Left axis: Sessions/day (600-1,200)
- Right axis: Users/day (400-700)
---
## 8. Error Rate Over Time with Annotations
### Error Timeline with Key Events
```
Date | Daily Errors | Day-over-Day | Event/Pattern
--------------|-------------|-------------|------------------
Sep 26 | 6,222 | +156% | INCIDENT: Major spike
Sep 27-30 | 1,200 avg | -45% | Recovery period
Oct 1-5 | 3,000 avg | +120% | Sustained elevation
Oct 6-10 | 2,300 avg | -30% | Declining trend
Oct 11 | 28 | -83.72% | MAJOR DROP: Possible fix
Oct 12 | 187 | +567.86% | System restart/redeployment
Oct 13-30 | 180 avg | Stable | New baseline established
Oct 31 | 130 | -53.24% | Current trend: improving
Current Trajectory: Stabilizing at 60-65 errors/day baseline
```
**Chart Type:** Column chart with annotations
**Y-axis:** 0-300 errors/day
**Annotations:** Mark incident events
---
## 9. Performance Impact Matrix
### Estimated Time Impact on User Workflows
```
Operation | Current | After Phase 1 | Improvement
---------------------------|---------|---------------|------------
Create 5-node workflow | 4-6 min | 30 seconds | 91% faster
Add single node property | 55s | <1s | 98% faster
Update 10 workflow params | 9 min | 5 seconds | 99% faster
Find right node (search) | 30-60s | 15-20s | 50% faster
Validate workflow | Varies | <2s | 80% faster
Total Workflow Creation Time:
- Current: 15-20 minutes for complex workflow
- After Phase 1: 2-3 minutes
- Improvement: 85-90% reduction
```
**Chart Type:** Comparison bar chart
**Color coding:** Current (red), Target (green)
---
## 10. Tool Failure Rate Comparison
### Tool Failure Rates Ranked
```
Rank | Tool Name | Failure % | Severity | Action
-----|------------------------------|-----------|----------|--------
1 | get_node_info | 11.72% | CRITICAL | Fix immediately
2 | validate_node_operation | 6.42% | HIGH | Fix week 2
3 | validate_workflow | 5.50% | HIGH | Fix week 2
4 | get_node_documentation | 4.13% | MEDIUM | Fix week 2
5 | get_node_essentials | 3.81% | MEDIUM | Monitor
6 | n8n_create_workflow | 3.65% | MEDIUM | Monitor
7 | n8n_update_partial_workflow | 0.94% | LOW | Baseline
8 | search_nodes | 0.11% | LOW | Excellent
9 | n8n_list_executions | 0.00% | LOW | Excellent
10 | n8n_health_check | 0.00% | LOW | Excellent
```
**Chart Type:** Horizontal bar chart with target line (1%)
**Color coding:** Red (>5%), Yellow (2-5%), Green (<2%)
---
## 11. Issue Severity and Impact Matrix
### Prioritization Matrix
```
High Impact | Low Impact
High ┌────────────────────┼────────────────────┐
Effort │ 1. Validation │ 4. Search ranking │
│ Messages (2 days) │ (2 days) │
│ Impact: 39% │ Impact: 2% │
│ │ 5. Type System │
│ │ (3 days) │
│ 3. Batch Updates │ Impact: 5% │
│ (2 days) │ │
│ Impact: 6% │ │
└────────────────────┼────────────────────┘
Low │ 2. get_node_info │ 7. Return State │
Effort │ Fix (1 day) │ (1 day) │
│ Impact: 14% │ Impact: 2% │
│ 6. Type Stubs │ │
│ (1 day) │ │
│ Impact: 5% │ │
└────────────────────┼────────────────────┘
```
**Chart Type:** 2x2 matrix
**Bubble size:** Relative impact
**Focus:** Lower-left quadrant (high impact, low effort)
---
## 12. Implementation Timeline with Expected Improvements
### Gantt Chart with Metrics
```
Week 1: Immediate Wins
├─ Fix get_node_info (1 day) → 91% reduction in failures
├─ Validation messages (2 days) → 40% improvement in clarity
└─ Batch updates (2 days) → 90% latency improvement
Week 2-3: High Priority
├─ Validation caching (2 days) → 40% fewer validation calls
├─ Search ranking (2 days) → 30% fewer retries
└─ Type stubs (3 days) → 25% fewer type errors
Week 4: Optimization
├─ Return state (1 day) → Eliminate 40% redundant calls
└─ Workflow diffs (1 day) → Better debugging visibility
Expected Cumulative Impact:
- Week 1: 40-50% improvement (600+ fewer errors/day)
- Week 3: 70% improvement (1,900 fewer errors/day)
- Week 5: 77% improvement (2,000+ fewer errors/day)
```
**Chart Type:** Gantt chart with overlay
**Overlay:** Expected error reduction graph
---
## 13. Cost-Benefit Analysis
### Implementation Investment vs. Returns
```
Investment:
- Engineering time: 1 FTE × 5 weeks = $15,000
- Testing/QA: $2,000
- Documentation: $1,000
- Total: $18,000
Returns (Estimated):
- Support ticket reduction: 40% fewer errors = $4,000/month = $48,000/year
- User retention improvement: +5% = $20,000/month = $240,000/year
- AI agent efficiency: +30% = $10,000/month = $120,000/year
- Developer productivity: +20% = $5,000/month = $60,000/year
Total Returns: ~$468,000/year (26x ROI)
Payback Period: < 2 weeks
```
**Chart Type:** Waterfall chart
**Format:** Investment vs. Single-Year Returns
---
## 14. Key Metrics Dashboard
### One-Page Dashboard for Tracking
```
╔════════════════════════════════════════════════════════════╗
║ n8n-MCP Error & Performance Dashboard ║
║ Last 24 Hours ║
╠════════════════════════════════════════════════════════════╣
║ ║
║ Total Errors Today: 142 ↓ 5% vs yesterday ║
║ Most Common Error: ValidationError (45%) ║
║ Critical Failures: get_node_info (8 cases) ║
║ Avg Session Time: 2m 34s ↑ 15% (slower) ║
║ ║
║ ┌──────────────────────────────────────────────────┐ ║
║ │ Tool Success Rates (Top 5 Issues) │ ║
║ ├──────────────────────────────────────────────────┤ ║
║ │ get_node_info ███░░ 88.28% │ ║
║ │ validate_node_operation █████░ 93.58% │ ║
║ │ validate_workflow █████░ 94.50% │ ║
║ │ get_node_documentation █████░ 95.87% │ ║
║ │ get_node_essentials █████░ 96.19% │ ║
║ └──────────────────────────────────────────────────┘ ║
║ ║
║ ┌──────────────────────────────────────────────────┐ ║
║ │ Error Trend (Last 7 Days) │ ║
║ │ │ ║
║ │ 350 │ ╱╲ │ ║
║ │ 300 │ ╱╲ ╲ │ ║
║ │ 250 │ ╲╱ ╲╱╲ │ ║
║ │ 200 │ ╲╱╲ │ ║
║ │ 150 │ ╲╱─╲ │ ║
║ │ 100 │ ─ │ ║
║ │ 0 └─────────────────────────────────────┘ │ ║
║ └──────────────────────────────────────────────────┘ ║
║ ║
║ Action Items: Fix get_node_info | Improve error msgs ║
║ ║
╚════════════════════════════════════════════════════════════╝
```
**Format:** ASCII art for reports; convert to Grafana/Datadog for live dashboard
---
## 15. Before/After Comparison
### Visual Representation of Improvements
```
Metric │ Before | After | Improvement
────────────────────────────┼────────┼────────┼─────────────
get_node_info failure rate │ 11.72% │ <1% │ 91% ↓
Workflow validation clarity │ 20% │ 95% │ 475% ↑
Update operation latency │ 55.2s │ <5s │ 91% ↓
Search retry rate │ 17% │ <5% │ 70% ↓
Type error frequency │ 2,767 │ 2,000 │ 28% ↓
Daily error count │ 65 │ 15 │ 77% ↓
User satisfaction (est.) │ 6/10 │ 9/10 │ 50% ↑
Workflow creation time │ 18min │ 2min │ 89% ↓
```
**Chart Type:** Comparison table with ↑/↓ indicators
**Color coding:** Green for improvements, Red for current state
---
## Chart Recommendations by Audience
### For Executive Leadership
1. Error Distribution Pie Chart
2. Cost-Benefit Analysis Waterfall
3. Implementation Timeline with Impact
4. KPI Dashboard
### For Product Team
1. Tool Success Rates Bar Chart
2. Error Type Breakdown
3. User Search Patterns
4. Session Metrics Timeline
### For Engineering
1. Tool Reliability Scatter Plot
2. Sequential Operation Performance
3. Error Rate with Annotations
4. Before/After Metrics Table
### For Customer Support
1. Error Trend Line Chart
2. Common Validation Issues
3. Top Search Queries
4. Troubleshooting Reference
---
## SQL Queries for Data Export
All visualizations above can be generated from these queries:
```sql
-- Error distribution
SELECT error_type, SUM(error_count) FROM telemetry_errors_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY error_type ORDER BY SUM(error_count) DESC;
-- Tool success rates
SELECT tool_name,
ROUND(100.0 * SUM(success_count) / SUM(usage_count), 2) as success_rate,
SUM(failure_count) as failures,
SUM(usage_count) as invocations
FROM telemetry_tool_usage_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY tool_name ORDER BY success_rate ASC;
-- Daily trends
SELECT date, SUM(error_count) as daily_errors
FROM telemetry_errors_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY date ORDER BY date DESC;
-- Top searches
SELECT query_text, SUM(search_count) as count
FROM telemetry_search_queries_daily
WHERE date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY query_text ORDER BY count DESC LIMIT 20;
```
---
**Created for:** Presentations, Reports, Dashboards
**Format:** Markdown with ASCII, easily convertible to:
- Excel/Google Sheets
- PowerBI/Tableau
- Grafana/Datadog
- Presentation slides
---
**Last Updated:** November 8, 2025
**Data Freshness:** Live (updated daily)
**Review Frequency:** Weekly

View File

@@ -1,345 +0,0 @@
# n8n-MCP Telemetry Analysis - Executive Summary
## Quick Reference for Decision Makers
**Analysis Date:** November 8, 2025
**Data Period:** August 10 - November 8, 2025 (90 days)
**Status:** Critical Issues Identified - Action Required
---
## Key Statistics at a Glance
| Metric | Value | Status |
|--------|-------|--------|
| Total Errors (90 days) | 8,859 | 96% are validation-related |
| Daily Average | 60.68 | Baseline (60-65 errors/day normal) |
| Peak Error Day | Oct 30 | 276 errors (4.5x baseline) |
| Days with Errors | 36/90 (40%) | Intermittent spikes |
| Most Common Error | ValidationError | 34.77% of all errors |
| Critical Tool Failure | get_node_info | 11.72% failure rate |
| Performance Bottleneck | Sequential updates | 55.2 seconds per operation |
| Active Users/Day | 572 | Healthy engagement |
| Total Users (90 days) | ~5,000+ | Growing user base |
---
## The 5 Critical Issues
### 1. Workflow-Level Validation Failures (39% of errors)
**Problem:** 21,423 errors from unspecified workflow structure violations
**What Users See:**
- "Validation failed" (no indication of what's wrong)
- Cannot deploy workflows
- Must guess what structure requirement violated
**Impact:** Users abandon workflows; AI agents retry blindly
**Fix:** Provide specific error messages explaining exactly what failed
- "Missing start trigger node"
- "Type mismatch in node connection"
- "Required property missing: URL"
**Effort:** 2 days | **Impact:** High | **Priority:** 1
---
### 2. `get_node_info` Unreliability (11.72% failure rate)
**Problem:** 1,208 failures out of 10,304 calls to retrieve node information
**What Users See:**
- Cannot load node specifications when building workflows
- Missing information about node properties
- Forced to use incomplete data (fallback to essentials)
**Impact:** Workflows built with wrong configuration assumptions; validation failures cascade
**Fix:** Add retry logic, caching, and fallback mechanism
**Effort:** 1 day | **Impact:** High | **Priority:** 1
---
### 3. Slow Sequential Updates (55+ seconds per operation)
**Problem:** 96,003 sequential workflow updates take average 55.2 seconds each
**What Users See:**
- Workflow construction takes minutes instead of seconds
- "System appears stuck" (agent waiting 55s between operations)
- Poor user experience
**Impact:** Users abandon complex workflows; slow AI agent response
**Fix:** Implement batch update operation (apply multiple changes in 1 call)
**Effort:** 2-3 days | **Impact:** Critical | **Priority:** 1
---
### 4. Search Inefficiency (17% retry rate)
**Problem:** 68,056 sequential search calls; users need multiple searches to find nodes
**What Users See:**
- Search for "http" doesn't show "HTTP Request" in top results
- Users refine search 2-3 times
- Extra API calls and latency
**Impact:** Slower node discovery; AI agents waste API calls
**Fix:** Improve search ranking for high-volume queries
**Effort:** 2 days | **Impact:** Medium | **Priority:** 2
---
### 5. Type-Related Validation Errors (31.23% of errors)
**Problem:** 2,767 TypeError occurrences from configuration mismatches
**What Users See:**
- Node validation fails due to type mismatch
- "string vs. number" errors without clear resolution
- Configuration seems correct but validation fails
**Impact:** Users unsure of correct configuration format
**Fix:** Implement strict type system; add TypeScript types for common nodes
**Effort:** 3 days | **Impact:** Medium | **Priority:** 2
---
## Business Impact Summary
### Current State: What's Broken?
| Area | Problem | Impact |
|------|---------|--------|
| **Reliability** | `get_node_info` fails 11.72% | Users blocked 1 in 8 times |
| **Feedback** | Generic error messages | Users can't self-fix errors |
| **Performance** | 55s per sequential update | 5-node workflow takes 4+ minutes |
| **Search** | 17% require refine search | Extra latency; poor UX |
| **Types** | 31% of errors type-related | Users make wrong assumptions |
### If No Action Taken
- Error volume likely to remain at 60+ per day
- User frustration compounds
- AI agents become unreliable (cascading failures)
- Adoption plateau or decline
- Support burden increases
### With Phase 1 Fixes (Week 1)
- `get_node_info` reliability: 11.72% → <1% (91% improvement)
- Validation errors: 21,423 → <1,000 (95% improvement in clarity)
- Sequential updates: 55.2s → <5s (91% improvement)
- **Overall error reduction: 40-50%**
- **User satisfaction: +60%** (estimated)
### Full Implementation (4-5 weeks)
- **Error volume: 8,859 → <2,000 per quarter** (77% reduction)
- **Tool failure rates: <1% across board**
- **Performance: 90% improvement in workflow creation**
- **User retention: +35%** (estimated)
---
## Implementation Roadmap
### Week 1 (Immediate Wins)
1. Fix `get_node_info` reliability [1 day]
2. Improve validation error messages [2 days]
3. Add batch update operation [2 days]
**Impact:** Address 60% of user-facing issues
### Week 2-3 (High Priority)
4. Implement validation caching [1-2 days]
5. Improve search ranking [2 days]
6. Add TypeScript types [3 days]
**Impact:** Performance +70%; Errors -30%
### Week 4 (Optimization)
7. Return updated state in responses [1-2 days]
8. Add workflow diff generation [1-2 days]
**Impact:** Eliminate 40% of API calls
### Ongoing (Documentation)
9. Create error code documentation [1 week]
10. Add configuration examples [2 weeks]
---
## Resource Requirements
| Phase | Duration | Team | Impact | Business Value |
|-------|----------|------|--------|-----------------|
| Phase 1 | 1 week | 1 engineer | 60% of issues | High ROI |
| Phase 2 | 2 weeks | 1 engineer | +30% improvement | Medium ROI |
| Phase 3 | 1 week | 1 engineer | +10% improvement | Low ROI |
| Phase 4 | 3 weeks | 0.5 engineer | Support reduction | Medium ROI |
**Total:** 7 weeks, 1 engineer FTE, +35% overall improvement
---
## Risk Assessment
| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|-----------|
| Breaking API changes | Low | High | Maintain backward compatibility |
| Performance regression | Low | High | Load test before deployment |
| Validation false positives | Medium | Medium | Beta test with sample workflows |
| Incomplete implementation | Low | Medium | Clear definition of done per task |
**Overall Risk Level:** Low (with proper mitigation)
---
## Success Metrics (Measurable)
### By End of Week 1
- [ ] `get_node_info` failure rate < 2%
- [ ] Validation errors provide specific guidance
- [ ] Batch update operation deployed and tested
### By End of Week 3
- [ ] Overall error rate < 3,000/quarter
- [ ] Tool success rates > 98% across board
- [ ] Average workflow creation time < 2 minutes
### By End of Week 5
- [ ] Error volume < 2,000/quarter (77% reduction)
- [ ] All users can self-resolve 80% of common errors
- [ ] AI agent success rate improves by 30%
---
## Top Recommendations
### Do This First (Week 1)
1. **Fix `get_node_info`** - Affects most critical user action
- Add retry logic [4 hours]
- Implement cache [4 hours]
- Add fallback [4 hours]
2. **Improve Validation Messages** - Addresses 39% of errors
- Create error code system [8 hours]
- Enhance validation logic [8 hours]
- Add help documentation [4 hours]
3. **Add Batch Updates** - Fixes performance bottleneck
- Define API [4 hours]
- Implement handler [12 hours]
- Test & integrate [4 hours]
### Avoid This (Anti-patterns)
- Increasing error logging without actionable feedback
- Adding more validation without improving error messages
- Optimizing non-critical operations while critical issues remain
- Waiting for perfect data before implementing fixes
---
## Stakeholder Questions & Answers
**Q: Why are there so many validation errors if most tools work (96%+)?**
A: Validation happens in a separate system. Core tools are reliable, but validation feedback is poor. Users create invalid workflows, validation rejects them generically, and users can't understand why.
**Q: Is the system unstable?**
A: No. Infrastructure is stable (99% uptime estimated). The issue is usability: errors are generic and operations are slow.
**Q: Should we defer fixes until next quarter?**
A: No. Every day of 60+ daily errors compounds user frustration. Early fixes have highest ROI (1 week = 40-50% improvement).
**Q: What about the Oct 30 spike (276 errors)?**
A: Likely specific trigger (batch test, migration). Current baseline is 60-65 errors/day, which is sustainable but improvable.
**Q: Which issue is most urgent?**
A: `get_node_info` reliability. It's the foundation for everything else. Without it, users can't build workflows correctly.
---
## Next Steps
1. **This Week**
- [ ] Review this analysis with engineering team
- [ ] Estimate resource allocation
- [ ] Prioritize Phase 1 tasks
2. **Next Week**
- [ ] Start Phase 1 implementation
- [ ] Set up monitoring for improvements
- [ ] Begin user communication about fixes
3. **Week 3**
- [ ] Deploy Phase 1 fixes
- [ ] Measure improvements
- [ ] Start Phase 2
---
## Questions?
**For detailed analysis:** See TELEMETRY_ANALYSIS_REPORT.md
**For technical details:** See TELEMETRY_TECHNICAL_DEEP_DIVE.md
**For implementation:** See IMPLEMENTATION_ROADMAP.md
---
**Analysis by:** AI Telemetry Analyst
**Confidence Level:** High (506K+ events analyzed)
**Last Updated:** November 8, 2025
**Review Frequency:** Weekly recommended
**Next Review Date:** November 15, 2025
---
## Appendix: Key Data Points
### Error Distribution
- ValidationError: 3,080 (34.77%)
- TypeError: 2,767 (31.23%)
- Generic Error: 2,711 (30.60%)
- SqliteError: 202 (2.28%)
- Other: 99 (1.12%)
### Tool Reliability (Top Issues)
- `get_node_info`: 88.28% success (11.72% failure)
- `validate_node_operation`: 93.58% success (6.42% failure)
- `get_node_documentation`: 95.87% success (4.13% failure)
- All others: 96-100% success
### User Engagement
- Daily sessions: 895 (avg)
- Daily users: 572 (avg)
- Sessions/user: 1.52 (avg)
- Peak day: 1,821 sessions (Oct 22)
### Most Searched Topics
1. Testing (5,852 searches)
2. Webhooks (5,087)
3. HTTP (4,241)
4. Database (4,030)
5. API integration (2,074)
### Performance Bottlenecks
- Update loop: 55.2s avg (66% slow)
- Read-after-write: 96.6s avg (63% slow)
- Search refinement: 17% need 2+ queries
- Session creation: ~5-10 seconds

View File

@@ -1,918 +0,0 @@
# Telemetry Workflow Mutation Tracking Specification
**Purpose:** Define the technical requirements for capturing workflow mutation data to build the n8n-fixer dataset
**Status:** Specification Document (Pre-Implementation)
---
## 1. Overview
This specification details how to extend the n8n-mcp telemetry system to capture:
- **Before State:** Complete workflow JSON before modification
- **Instruction:** The transformation instruction/prompt
- **After State:** Complete workflow JSON after modification
- **Metadata:** Timestamps, user ID, success metrics, validation states
---
## 2. Schema Design
### 2.1 New Database Table: `workflow_mutations`
```sql
CREATE TABLE IF NOT EXISTS workflow_mutations (
-- Primary Key & Identifiers
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL,
workflow_id TEXT, -- n8n workflow ID (nullable for new workflows)
-- Source Workflow Snapshot (Before)
before_workflow_json JSONB NOT NULL, -- Complete workflow definition
before_workflow_hash TEXT NOT NULL, -- SHA-256(before_workflow_json)
before_validation_status TEXT NOT NULL CHECK(before_validation_status IN (
'valid', -- Workflow passes validation
'invalid', -- Has validation errors
'unknown' -- Unknown state (not tested)
)),
before_error_count INTEGER, -- Number of validation errors
before_error_types TEXT[], -- Array: ['type_error', 'missing_field', ...]
-- Mutation Details
instruction TEXT NOT NULL, -- The modification instruction/prompt
instruction_type TEXT NOT NULL CHECK(instruction_type IN (
'ai_generated', -- Generated by AI/LLM
'user_provided', -- User input/request
'auto_fix', -- System auto-correction
'validation_correction' -- Validation rule fix
)),
mutation_source TEXT, -- Which tool/service created the mutation
-- e.g., 'n8n_autofix_workflow', 'validation_engine'
mutation_tool_version TEXT, -- Version of tool that performed mutation
-- Target Workflow Snapshot (After)
after_workflow_json JSONB NOT NULL, -- Complete modified workflow
after_workflow_hash TEXT NOT NULL, -- SHA-256(after_workflow_json)
after_validation_status TEXT NOT NULL CHECK(after_validation_status IN (
'valid',
'invalid',
'unknown'
)),
after_error_count INTEGER, -- Validation errors after mutation
after_error_types TEXT[], -- Remaining error types
-- Mutation Analysis (Pre-calculated for Performance)
nodes_modified TEXT[], -- Array of modified node IDs/names
nodes_added TEXT[], -- New nodes in after state
nodes_removed TEXT[], -- Removed nodes
nodes_modified_count INTEGER, -- Count of modified nodes
nodes_added_count INTEGER,
nodes_removed_count INTEGER,
connections_modified BOOLEAN, -- Were connections/edges changed?
connections_before_count INTEGER, -- Number of connections before
connections_after_count INTEGER, -- Number after
properties_modified TEXT[], -- Changed property paths
-- e.g., ['nodes[0].parameters.url', ...]
properties_modified_count INTEGER,
expressions_modified BOOLEAN, -- Were expressions/formulas changed?
-- Complexity Metrics
complexity_before TEXT CHECK(complexity_before IN (
'simple',
'medium',
'complex'
)),
complexity_after TEXT,
node_count_before INTEGER,
node_count_after INTEGER,
node_types_before TEXT[],
node_types_after TEXT[],
-- Outcome Metrics
mutation_success BOOLEAN, -- Did mutation achieve intended goal?
validation_improved BOOLEAN, -- true if: error_count_after < error_count_before
validation_errors_fixed INTEGER, -- Count of errors fixed
new_errors_introduced INTEGER, -- Errors created by mutation
-- Optional: User Feedback
user_approved BOOLEAN, -- User accepted the mutation?
user_feedback TEXT, -- User comment (truncated)
-- Data Quality & Compression
workflow_size_before INTEGER, -- Byte size of before_workflow_json
workflow_size_after INTEGER, -- Byte size of after_workflow_json
is_compressed BOOLEAN DEFAULT false, -- True if workflows are gzip-compressed
-- Timing
execution_duration_ms INTEGER, -- Time taken to apply mutation
created_at TIMESTAMP DEFAULT NOW(),
-- Metadata
tags TEXT[], -- Custom tags for filtering
metadata JSONB -- Flexible metadata storage
);
```
### 2.2 Indexes for Performance
```sql
-- User Analysis (User's mutation history)
CREATE INDEX idx_mutations_user_id
ON workflow_mutations(user_id, created_at DESC);
-- Workflow Analysis (Mutations to specific workflow)
CREATE INDEX idx_mutations_workflow_id
ON workflow_mutations(workflow_id, created_at DESC);
-- Mutation Success Rate
CREATE INDEX idx_mutations_success
ON workflow_mutations(mutation_success, created_at DESC);
-- Validation Improvement Analysis
CREATE INDEX idx_mutations_validation_improved
ON workflow_mutations(validation_improved, created_at DESC);
-- Time-series Analysis
CREATE INDEX idx_mutations_created_at
ON workflow_mutations(created_at DESC);
-- Source Analysis
CREATE INDEX idx_mutations_source
ON workflow_mutations(mutation_source, created_at DESC);
-- Instruction Type Analysis
CREATE INDEX idx_mutations_instruction_type
ON workflow_mutations(instruction_type, created_at DESC);
-- Composite: For common query patterns
CREATE INDEX idx_mutations_user_success_time
ON workflow_mutations(user_id, mutation_success, created_at DESC);
CREATE INDEX idx_mutations_source_validation
ON workflow_mutations(mutation_source, validation_improved, created_at DESC);
```
### 2.3 Optional: Materialized View for Analytics
```sql
-- Pre-calculate common metrics for fast dashboarding
CREATE MATERIALIZED VIEW vw_mutation_analytics AS
SELECT
DATE(created_at) as mutation_date,
instruction_type,
mutation_source,
COUNT(*) as total_mutations,
SUM(CASE WHEN mutation_success THEN 1 ELSE 0 END) as successful_mutations,
SUM(CASE WHEN validation_improved THEN 1 ELSE 0 END) as validation_improved_count,
ROUND(100.0 * COUNT(*) FILTER(WHERE mutation_success = true)
/ NULLIF(COUNT(*), 0), 2) as success_rate,
AVG(nodes_modified_count) as avg_nodes_modified,
AVG(properties_modified_count) as avg_properties_modified,
AVG(execution_duration_ms) as avg_duration_ms,
AVG(before_error_count) as avg_errors_before,
AVG(after_error_count) as avg_errors_after,
AVG(validation_errors_fixed) as avg_errors_fixed
FROM workflow_mutations
GROUP BY DATE(created_at), instruction_type, mutation_source;
CREATE INDEX idx_mutation_analytics_date
ON vw_mutation_analytics(mutation_date DESC);
```
---
## 3. TypeScript Interfaces
### 3.1 Core Mutation Interface
```typescript
// In src/telemetry/telemetry-types.ts
export interface WorkflowMutationEvent extends TelemetryEvent {
event: 'workflow_mutation';
properties: {
// Identification
workflowId?: string;
// Hashes for deduplication & integrity
beforeHash: string; // SHA-256 of before state
afterHash: string; // SHA-256 of after state
// Instruction
instruction: string; // The modification prompt/request
instructionType: 'ai_generated' | 'user_provided' | 'auto_fix' | 'validation_correction';
mutationSource?: string; // Tool that created the instruction
// Change Summary
nodesModified: number;
propertiesChanged: number;
connectionsModified: boolean;
expressionsModified: boolean;
// Outcome
mutationSuccess: boolean;
validationImproved: boolean;
errorsBefore: number;
errorsAfter: number;
// Performance
executionDurationMs?: number;
workflowSizeBefore?: number;
workflowSizeAfter?: number;
}
}
export interface WorkflowMutation {
// Primary Key
id: string; // UUID
user_id: string; // Anonymized user
workflow_id?: string; // n8n workflow ID
// Before State
before_workflow_json: any; // Complete workflow
before_workflow_hash: string;
before_validation_status: 'valid' | 'invalid' | 'unknown';
before_error_count?: number;
before_error_types?: string[];
// Mutation
instruction: string;
instruction_type: 'ai_generated' | 'user_provided' | 'auto_fix' | 'validation_correction';
mutation_source?: string;
mutation_tool_version?: string;
// After State
after_workflow_json: any;
after_workflow_hash: string;
after_validation_status: 'valid' | 'invalid' | 'unknown';
after_error_count?: number;
after_error_types?: string[];
// Analysis
nodes_modified?: string[];
nodes_added?: string[];
nodes_removed?: string[];
nodes_modified_count?: number;
connections_modified?: boolean;
properties_modified?: string[];
properties_modified_count?: number;
// Complexity
complexity_before?: 'simple' | 'medium' | 'complex';
complexity_after?: 'simple' | 'medium' | 'complex';
node_count_before?: number;
node_count_after?: number;
// Outcome
mutation_success: boolean;
validation_improved: boolean;
validation_errors_fixed?: number;
new_errors_introduced?: number;
user_approved?: boolean;
// Timing
created_at: string; // ISO 8601
execution_duration_ms?: number;
}
```
### 3.2 Mutation Analysis Service
```typescript
// New file: src/telemetry/mutation-analyzer.ts
export interface MutationDiff {
nodesAdded: string[];
nodesRemoved: string[];
nodesModified: Map<string, PropertyDiff[]>;
connectionsChanged: boolean;
expressionsChanged: boolean;
}
export interface PropertyDiff {
path: string; // e.g., "parameters.url"
beforeValue: any;
afterValue: any;
isExpression: boolean; // Contains {{}} or $json?
}
export class WorkflowMutationAnalyzer {
/**
* Analyze differences between before/after workflows
*/
  analyzeDifferences(
beforeWorkflow: any,
afterWorkflow: any
): MutationDiff {
// Implementation: Deep comparison of workflow structures
// Return detailed diff information
}
/**
* Extract changed property paths
*/
  getChangedProperties(diff: MutationDiff): string[] {
// Implementation
}
/**
* Determine if expression/formula was modified
*/
  hasExpressionChanges(diff: MutationDiff): boolean {
// Implementation
}
/**
* Validate workflow structure
*/
  validateWorkflowStructure(workflow: any): {
isValid: boolean;
errors: string[];
errorTypes: string[];
} {
// Implementation
}
}
```
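A minimal sketch of the node-level portion of `analyzeDifferences`, assuming workflows expose a `nodes` array with stable `name` identifiers and a `connections` object keyed by node name; property-level diffing and expression detection are omitted here:

```typescript
// Sketch only: detect added/removed/modified nodes and connection changes
// by structural comparison of the before/after workflow JSON.
function diffNodes(beforeWorkflow: any, afterWorkflow: any) {
  const beforeNodes = new Map<string, any>(
    (beforeWorkflow.nodes ?? []).map((n: any) => [n.name, n])
  );
  const afterNodes = new Map<string, any>(
    (afterWorkflow.nodes ?? []).map((n: any) => [n.name, n])
  );

  const nodesAdded = [...afterNodes.keys()].filter((name) => !beforeNodes.has(name));
  const nodesRemoved = [...beforeNodes.keys()].filter((name) => !afterNodes.has(name));
  const nodesModified = [...afterNodes.keys()].filter(
    (name) =>
      beforeNodes.has(name) &&
      JSON.stringify(beforeNodes.get(name)) !== JSON.stringify(afterNodes.get(name))
  );
  const connectionsChanged =
    JSON.stringify(beforeWorkflow.connections ?? {}) !==
    JSON.stringify(afterWorkflow.connections ?? {});

  return { nodesAdded, nodesRemoved, nodesModified, connectionsChanged };
}
```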
---
## 4. Integration Points
### 4.1 TelemetryManager Extension
```typescript
// In src/telemetry/telemetry-manager.ts
export class TelemetryManager {
// ... existing code ...
/**
* Track workflow mutation (new method)
*/
async trackWorkflowMutation(
beforeWorkflow: any,
instruction: string,
afterWorkflow: any,
options?: {
instructionType?: 'ai_generated' | 'user_provided' | 'auto_fix';
mutationSource?: string;
workflowId?: string;
success?: boolean;
executionDurationMs?: number;
userApproved?: boolean;
}
): Promise<void> {
this.ensureInitialized();
this.performanceMonitor.startOperation('trackWorkflowMutation');
try {
await this.eventTracker.trackWorkflowMutation(
beforeWorkflow,
instruction,
afterWorkflow,
options
);
// Auto-flush mutations to prevent data loss
await this.flush();
} catch (error) {
const telemetryError = error instanceof TelemetryError
? error
: new TelemetryError(
TelemetryErrorType.UNKNOWN_ERROR,
'Failed to track workflow mutation',
{ error: String(error) }
);
this.errorAggregator.record(telemetryError);
} finally {
this.performanceMonitor.endOperation('trackWorkflowMutation');
}
}
}
```
### 4.2 EventTracker Extension
```typescript
// In src/telemetry/event-tracker.ts
export class TelemetryEventTracker {
// ... existing code ...
private mutationQueue: WorkflowMutation[] = [];
private mutationAnalyzer = new WorkflowMutationAnalyzer();
/**
* Track a workflow mutation
*/
async trackWorkflowMutation(
beforeWorkflow: any,
instruction: string,
afterWorkflow: any,
options?: MutationTrackingOptions
): Promise<void> {
if (!this.isEnabled()) return;
try {
// 1. Analyze differences
const diff = this.mutationAnalyzer.analyzeDifferences(
beforeWorkflow,
afterWorkflow
);
// 2. Calculate hashes
const beforeHash = this.calculateHash(beforeWorkflow);
const afterHash = this.calculateHash(afterWorkflow);
// 3. Detect validation changes
const beforeValidation = this.mutationAnalyzer.validateWorkflowStructure(
beforeWorkflow
);
const afterValidation = this.mutationAnalyzer.validateWorkflowStructure(
afterWorkflow
);
// 4. Create mutation record
const mutation: WorkflowMutation = {
id: generateUUID(),
user_id: this.getUserId(),
workflow_id: options?.workflowId,
before_workflow_json: beforeWorkflow,
before_workflow_hash: beforeHash,
before_validation_status: beforeValidation.isValid ? 'valid' : 'invalid',
before_error_count: beforeValidation.errors.length,
before_error_types: beforeValidation.errorTypes,
instruction,
instruction_type: options?.instructionType || 'user_provided',
mutation_source: options?.mutationSource,
after_workflow_json: afterWorkflow,
after_workflow_hash: afterHash,
after_validation_status: afterValidation.isValid ? 'valid' : 'invalid',
after_error_count: afterValidation.errors.length,
after_error_types: afterValidation.errorTypes,
nodes_modified: Array.from(diff.nodesModified.keys()),
nodes_added: diff.nodesAdded,
nodes_removed: diff.nodesRemoved,
properties_modified: this.mutationAnalyzer.getChangedProperties(diff),
connections_modified: diff.connectionsChanged,
mutation_success: options?.success !== false,
validation_improved: afterValidation.errors.length
< beforeValidation.errors.length,
validation_errors_fixed: Math.max(
0,
beforeValidation.errors.length - afterValidation.errors.length
),
created_at: new Date().toISOString(),
execution_duration_ms: options?.executionDurationMs,
user_approved: options?.userApproved
};
      // 5. Validate and queue (rules defined in WorkflowMutationValidator, section 6.1)
      const validation = WorkflowMutationValidator.validate(mutation);
      if (validation.isValid) {
        this.mutationQueue.push(mutation);
      }
// 6. Track as event for real-time monitoring
this.trackEvent('workflow_mutation', {
beforeHash,
afterHash,
instructionType: options?.instructionType || 'user_provided',
nodesModified: diff.nodesModified.size,
        propertiesChanged: mutation.properties_modified?.length || 0,
mutationSuccess: options?.success !== false,
validationImproved: mutation.validation_improved,
errorsBefore: beforeValidation.errors.length,
errorsAfter: afterValidation.errors.length
});
} catch (error) {
logger.debug('Failed to track workflow mutation:', error);
throw new TelemetryError(
TelemetryErrorType.VALIDATION_ERROR,
'Failed to process workflow mutation',
{ error: error instanceof Error ? error.message : String(error) }
);
}
}
/**
* Get queued mutations
*/
getMutationQueue(): WorkflowMutation[] {
return [...this.mutationQueue];
}
/**
* Clear mutation queue
*/
clearMutationQueue(): void {
this.mutationQueue = [];
}
/**
* Calculate SHA-256 hash of workflow
*/
private calculateHash(workflow: any): string {
const crypto = require('crypto');
const normalized = JSON.stringify(workflow, null, 0);
return crypto.createHash('sha256').update(normalized).digest('hex');
}
}
```
### 4.3 BatchProcessor Extension
```typescript
// In src/telemetry/batch-processor.ts
export class TelemetryBatchProcessor {
// ... existing code ...
/**
* Flush mutations to Supabase
*/
private async flushMutations(
mutations: WorkflowMutation[]
): Promise<boolean> {
if (this.isFlushingMutations || mutations.length === 0) return true;
this.isFlushingMutations = true;
try {
const batches = this.createBatches(
mutations,
TELEMETRY_CONFIG.MAX_BATCH_SIZE
);
for (const batch of batches) {
const result = await this.executeWithRetry(async () => {
const { error } = await this.supabase!
.from('workflow_mutations')
.insert(batch);
if (error) throw error;
logger.debug(`Flushed batch of ${batch.length} workflow mutations`);
return true;
}, 'Flush workflow mutations');
if (!result) {
this.addToDeadLetterQueue(batch);
return false;
}
}
return true;
} catch (error) {
logger.debug('Failed to flush mutations:', error);
throw new TelemetryError(
TelemetryErrorType.NETWORK_ERROR,
'Failed to flush mutations',
{ error: error instanceof Error ? error.message : String(error) },
true
);
} finally {
this.isFlushingMutations = false;
}
}
}
```
---
## 5. Integration with Workflow Tools
### 5.1 n8n_autofix_workflow
```typescript
// Where n8n_autofix_workflow applies fixes
import { telemetry } from '../telemetry';
export async function n8n_autofix_workflow(
workflow: any,
options?: AutofixOptions
): Promise<WorkflowFixResult> {
const beforeWorkflow = JSON.parse(JSON.stringify(workflow)); // Deep copy
try {
// Apply fixes
const fixed = await applyFixes(workflow, options);
// Track mutation
await telemetry.trackWorkflowMutation(
beforeWorkflow,
'Auto-fix validation errors',
fixed,
{
instructionType: 'auto_fix',
mutationSource: 'n8n_autofix_workflow',
success: true,
executionDurationMs: duration
}
);
return fixed;
} catch (error) {
// Track failed mutation attempt
await telemetry.trackWorkflowMutation(
beforeWorkflow,
'Auto-fix validation errors',
beforeWorkflow, // No changes
{
instructionType: 'auto_fix',
mutationSource: 'n8n_autofix_workflow',
success: false
}
);
throw error;
}
}
```
### 5.2 n8n_update_partial_workflow
```typescript
// Partial workflow updates
export async function n8n_update_partial_workflow(
workflow: any,
operations: DiffOperation[]
): Promise<UpdateResult> {
const beforeWorkflow = JSON.parse(JSON.stringify(workflow));
const instructionText = formatOperationsAsInstruction(operations);
try {
const updated = applyOperations(workflow, operations);
await telemetry.trackWorkflowMutation(
beforeWorkflow,
instructionText,
updated,
{
instructionType: 'user_provided',
mutationSource: 'n8n_update_partial_workflow'
}
);
return updated;
} catch (error) {
await telemetry.trackWorkflowMutation(
beforeWorkflow,
instructionText,
beforeWorkflow,
{
instructionType: 'user_provided',
mutationSource: 'n8n_update_partial_workflow',
success: false
}
);
throw error;
}
}
```
---
## 6. Data Quality & Validation
### 6.1 Mutation Validation Rules
```typescript
// In src/telemetry/mutation-validator.ts
export class WorkflowMutationValidator {
/**
* Validate mutation data before storage
*/
static validate(mutation: WorkflowMutation): ValidationResult {
const errors: string[] = [];
// Required fields
if (!mutation.user_id) errors.push('user_id is required');
if (!mutation.before_workflow_json) errors.push('before_workflow_json required');
if (!mutation.after_workflow_json) errors.push('after_workflow_json required');
if (!mutation.before_workflow_hash) errors.push('before_workflow_hash required');
if (!mutation.after_workflow_hash) errors.push('after_workflow_hash required');
if (!mutation.instruction) errors.push('instruction is required');
if (!mutation.instruction_type) errors.push('instruction_type is required');
// Hash verification
const beforeHash = calculateHash(mutation.before_workflow_json);
const afterHash = calculateHash(mutation.after_workflow_json);
if (beforeHash !== mutation.before_workflow_hash) {
errors.push('before_workflow_hash mismatch');
}
if (afterHash !== mutation.after_workflow_hash) {
errors.push('after_workflow_hash mismatch');
}
// Deduplication: Skip if before == after
if (beforeHash === afterHash) {
errors.push('before and after states are identical (skipping)');
}
// Size validation
const beforeSize = JSON.stringify(mutation.before_workflow_json).length;
const afterSize = JSON.stringify(mutation.after_workflow_json).length;
if (beforeSize > 10 * 1024 * 1024) {
errors.push('before_workflow_json exceeds 10MB size limit');
}
if (afterSize > 10 * 1024 * 1024) {
errors.push('after_workflow_json exceeds 10MB size limit');
}
// Instruction validation
if (mutation.instruction.length > 5000) {
mutation.instruction = mutation.instruction.substring(0, 5000);
}
if (mutation.instruction.length < 3) {
errors.push('instruction too short (min 3 chars)');
}
// Error count validation
if (mutation.before_error_count && mutation.before_error_count < 0) {
errors.push('before_error_count cannot be negative');
}
if (mutation.after_error_count && mutation.after_error_count < 0) {
errors.push('after_error_count cannot be negative');
}
return {
isValid: errors.length === 0,
errors
};
}
}
```
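How the validator might be wired in before a mutation is queued (a sketch; the queue and import paths follow the proposed file structure, not existing code):

```typescript
import { WorkflowMutation } from './telemetry-types';             // proposed types file
import { WorkflowMutationValidator } from './mutation-validator'; // proposed validator file

const mutationQueue: WorkflowMutation[] = [];

// Validate a mutation before it enters the telemetry queue; invalid records
// are dropped locally instead of being flushed to Supabase.
export function enqueueMutation(mutation: WorkflowMutation): boolean {
  const { isValid, errors } = WorkflowMutationValidator.validate(mutation);

  if (!isValid) {
    console.debug('Dropping invalid workflow mutation:', errors);
    return false;
  }

  mutationQueue.push(mutation);
  return true;
}
```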
### 6.2 Data Compression Strategy
For large workflows (>1MB):
```typescript
import { gzipSync, gunzipSync } from 'zlib';
export function compressWorkflow(workflow: any): {
compressed: string; // base64
originalSize: number;
compressedSize: number;
} {
const json = JSON.stringify(workflow);
const buffer = Buffer.from(json, 'utf-8');
const compressed = gzipSync(buffer);
const base64 = compressed.toString('base64');
return {
compressed: base64,
originalSize: buffer.length,
compressedSize: compressed.length
};
}
export function decompressWorkflow(compressed: string): any {
const buffer = Buffer.from(compressed, 'base64');
const decompressed = gunzipSync(buffer);
const json = decompressed.toString('utf-8');
return JSON.parse(json);
}
```
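A sketch of applying the 1MB threshold before storage (the `format` discriminator is an assumption, not part of the schema above):

```typescript
const COMPRESSION_THRESHOLD = 1024 * 1024; // 1 MB

// Store small workflows as raw JSON; gzip+base64 anything above the threshold.
export function prepareWorkflowForStorage(workflow: any):
  | { format: 'json'; payload: any }
  | { format: 'gzip_base64'; payload: string; originalSize: number } {
  const size = Buffer.byteLength(JSON.stringify(workflow), 'utf-8');

  if (size <= COMPRESSION_THRESHOLD) {
    return { format: 'json', payload: workflow };
  }

  const { compressed, originalSize } = compressWorkflow(workflow);
  return { format: 'gzip_base64', payload: compressed, originalSize };
}
```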
---
## 7. Query Examples for Analysis
### 7.1 Basic Mutation Statistics
```sql
-- Overall mutation metrics
SELECT
COUNT(*) as total_mutations,
COUNT(*) FILTER(WHERE mutation_success) as successful,
COUNT(*) FILTER(WHERE validation_improved) as validation_improved,
ROUND(100.0 * COUNT(*) FILTER(WHERE mutation_success) / COUNT(*), 2) as success_rate,
ROUND(100.0 * COUNT(*) FILTER(WHERE validation_improved) / COUNT(*), 2) as improvement_rate,
AVG(nodes_modified_count) as avg_nodes_modified,
AVG(properties_modified_count) as avg_properties_modified,
AVG(execution_duration_ms)::INTEGER as avg_duration_ms
FROM workflow_mutations
WHERE created_at >= NOW() - INTERVAL '7 days';
```
### 7.2 Success by Instruction Type
```sql
SELECT
instruction_type,
COUNT(*) as count,
ROUND(100.0 * COUNT(*) FILTER(WHERE mutation_success) / COUNT(*), 2) as success_rate,
ROUND(100.0 * COUNT(*) FILTER(WHERE validation_improved) / COUNT(*), 2) as improvement_rate,
AVG(validation_errors_fixed) as avg_errors_fixed,
AVG(new_errors_introduced) as avg_new_errors
FROM workflow_mutations
WHERE created_at >= NOW() - INTERVAL '30 days'
GROUP BY instruction_type
ORDER BY count DESC;
```
### 7.3 Most Common Mutations
```sql
SELECT
  properties_modified,
  COUNT(*) as frequency,
  ROUND(100.0 * COUNT(*) / (SELECT COUNT(*) FROM workflow_mutations
                            WHERE created_at >= NOW() - INTERVAL '30 days'), 2) as percentage
FROM workflow_mutations
WHERE created_at >= NOW() - INTERVAL '30 days'
GROUP BY properties_modified
ORDER BY frequency DESC
LIMIT 20;
```
### 7.4 Complexity Impact
```sql
SELECT
complexity_before,
complexity_after,
COUNT(*) as transitions,
ROUND(100.0 * COUNT(*) FILTER(WHERE mutation_success) / COUNT(*), 2) as success_rate
FROM workflow_mutations
WHERE created_at >= NOW() - INTERVAL '30 days'
GROUP BY complexity_before, complexity_after
ORDER BY transitions DESC;
```
---
## 8. Implementation Roadmap
### Phase 1: Infrastructure (Week 1)
- [ ] Create `workflow_mutations` table in Supabase
- [ ] Add indexes for common query patterns
- [ ] Update TypeScript types
- [ ] Create mutation analyzer service
- [ ] Add mutation validator
### Phase 2: Integration (Week 2)
- [ ] Extend TelemetryManager with trackWorkflowMutation()
- [ ] Extend EventTracker with mutation queue
- [ ] Extend BatchProcessor with flush logic
- [ ] Add mutation event type
### Phase 3: Tool Integration (Week 3)
- [ ] Integrate with n8n_autofix_workflow
- [ ] Integrate with n8n_update_partial_workflow
- [ ] Add test cases
- [ ] Documentation
### Phase 4: Validation & Analysis (Week 4)
- [ ] Run sample queries
- [ ] Validate data quality
- [ ] Create analytics dashboard
- [ ] Begin dataset collection
---
## 9. Security & Privacy Considerations
- **No Credentials:** Sanitizer strips credentials before storage
- **No Secrets:** Workflow secret references removed
- **User Anonymity:** User ID is anonymized
- **Hash Verification:** All workflow hashes verified before storage
- **Size Limits:** 10MB max per workflow (with compression option)
- **Retention:** Define data retention policy separately
- **Encryption:** Enable Supabase encryption at rest
- **Access Control:** Restrict table access to application-level only
---
## 10. Performance Considerations
| Aspect | Target | Strategy |
|--------|--------|----------|
| **Batch Flush** | <5s latency | 5-second flush interval + auto-flush |
| **Large Workflows** | >1MB support | Gzip compression + base64 encoding |
| **Query Performance** | <100ms | Strategic indexing + materialized views |
| **Storage Growth** | <50GB/month | Compression + retention policies |
| **Network Throughput** | <1MB/batch | Compress before transmission |
---
*End of Specification*

View File

@@ -1,450 +0,0 @@
# N8N-Fixer Dataset: Telemetry Infrastructure Analysis
**Analysis Completed:** November 12, 2025
**Scope:** N8N-MCP Telemetry Database Schema & Workflow Mutation Tracking
**Status:** Ready for Implementation Planning
---
## Overview
This document synthesizes a comprehensive analysis of the n8n-mcp telemetry infrastructure and provides actionable recommendations for building an n8n-fixer dataset with before/instruction/after workflow snapshots.
**Key Findings:**
- Telemetry system is production-ready with 276K+ events tracked
- Supabase PostgreSQL backend stores all events
- Current system **does NOT capture workflow mutations** (before→after transitions)
- Requires new table + instrumentation to collect fixer dataset
- Implementation is straightforward with 3-4 weeks of development
---
## Documentation Map
### 1. TELEMETRY_ANALYSIS.md (Primary Reference)
**Length:** 720 lines | **Read Time:** 20-30 minutes
**Contains:**
- Complete schema analysis (tables, columns, types)
- All 12 event types with examples
- Current workflow tracking capabilities
- Missing data for mutation tracking
- Recommended schema additions
- Technical implementation details
**Start Here If:** You need the complete picture of current capabilities and gaps
---
### 2. TELEMETRY_MUTATION_SPEC.md (Implementation Blueprint)
**Length:** 918 lines | **Read Time:** 30-40 minutes
**Contains:**
- Detailed SQL schema for `workflow_mutations` table
- Complete TypeScript interfaces and types
- Integration points with existing tools
- Mutation analyzer service specification
- Batch processor extensions
- Query examples for dataset analysis
**Start Here If:** You're ready to implement the mutation tracking system
---
### 3. TELEMETRY_QUICK_REFERENCE.md (Developer Guide)
**Length:** 503 lines | **Read Time:** 10-15 minutes
**Contains:**
- Supabase connection details
- Common queries and patterns
- Performance tips and tricks
- Code file references
- Quick lookup for event types
**Start Here If:** You need to query existing telemetry data or reference specific details
---
### 4. Archived Analysis Documents (November 8)
These documents from November 8 contain additional context:
- `TELEMETRY_ANALYSIS_REPORT.md` - Executive summary with visualizations
- `TELEMETRY_EXECUTIVE_SUMMARY.md` - High-level overview
- `TELEMETRY_TECHNICAL_DEEP_DIVE.md` - Architecture details
- `TELEMETRY_DATA_FOR_VISUALIZATION.md` - Sample data for dashboards
---
## Current State Summary
### Telemetry Backend
```
URL: https://ydyufsohxdfpopqbubwk.supabase.co
Database: PostgreSQL
Tables: telemetry_events (276K rows)
telemetry_workflows (6.5K rows)
Privacy: PII sanitization enabled
Scope: Anonymous tool usage, workflows, errors
```
### Tracked Event Categories
1. **Tool Usage** (40-50%) - Which tools users employ
2. **Tool Sequences** (20-30%) - How tools are chained together
3. **Errors** (10-15%) - Error types and context
4. **Validation** (5-10%) - Configuration validation details
5. **Workflows** (5-10%) - Workflow creation and structure
6. **Performance** (5-10%) - Operation latency
7. **Sessions** (misc) - User session metadata
### What's Missing for N8N-Fixer
```
MISSING: Workflow Mutation Events
- No before workflow capture
- No instruction/transformation storage
- No after workflow snapshot
- No mutation success metrics
- No validation improvement tracking
```
---
## Recommended Implementation Path
### Phase 1: Infrastructure (1-2 weeks)
1. Create `workflow_mutations` table in Supabase
- See TELEMETRY_MUTATION_SPEC.md Section 2.1 for full SQL
- Includes 20+ strategic indexes
- Supports compression for large workflows
2. Update TypeScript types
- New `WorkflowMutation` interface
- New `WorkflowMutationEvent` event type
- Mutation analyzer service
3. Add data validators
- Hash verification
- Deduplication logic
- Size validation
---
### Phase 2: Core Integration (1-2 weeks)
1. Extend TelemetryManager
- Add `trackWorkflowMutation()` method (signature sketched after this list)
- Auto-flush mutations to prevent loss
2. Extend EventTracker
- Add mutation queue
- Mutation analyzer integration
- Validation state detection
3. Extend BatchProcessor
- Flush workflow mutations to Supabase
- Retry logic and dead letter queue
- Performance monitoring
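The `trackWorkflowMutation()` signature implied by the integration examples in this document; option names are taken from those examples and may change during implementation:

```typescript
// Proposed addition to src/telemetry/telemetry-manager.ts
interface TrackMutationOptions {
  instructionType: 'ai_generated' | 'user_provided' | 'auto_fix' | 'validation_correction';
  mutationSource?: string;       // e.g. 'n8n_autofix_workflow'
  success?: boolean;             // defaults to true when omitted
  executionDurationMs?: number;
}

export interface TelemetryManagerMutations {
  trackWorkflowMutation(
    beforeWorkflow: object,
    instruction: string,
    afterWorkflow: object,
    options: TrackMutationOptions
  ): Promise<void>;
}
```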
---
### Phase 3: Tool Integration (1 week)
Instrument 3 key tools to capture mutations:
1. **n8n_autofix_workflow**
- Before: Broken workflow
- Instruction: "Auto-fix validation errors"
- After: Fixed workflow
- Type: `auto_fix`
2. **n8n_update_partial_workflow**
- Before: Current workflow
- Instruction: Diff operations
- After: Updated workflow
- Type: `user_provided`
3. **Validation Engine** (if applicable)
- Before: Invalid workflow
- Instruction: Validation correction
- After: Valid workflow
- Type: `validation_correction`
---
### Phase 4: Validation & Analysis (1 week)
1. Data quality verification
- Hash validation
- Size checks
- Deduplication effectiveness
2. Sample query execution
- Success rate by instruction type
- Common mutations
- Complexity impact
3. Dataset assessment
- Volume estimates
- Data distribution
- Quality metrics
---
## Key Metrics You'll Collect
### Per Mutation Record
- **Identification:** User ID, Workflow ID, Timestamp
- **Before State:** Full workflow JSON, hash, validation status
- **Instruction:** The transformation prompt/directive
- **After State:** Full workflow JSON, hash, validation status
- **Changes:** Nodes modified, properties changed, connections modified
- **Outcome:** Success boolean, validation improvement, errors fixed
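Taken together, these fields imply a record shape along these lines (a sketch; column names follow the validator and query examples in TELEMETRY_MUTATION_SPEC.md, and the final interface may differ):

```typescript
interface WorkflowMutation {
  // Identification
  user_id: string;                         // anonymized
  workflow_id?: string;
  created_at?: string;                     // set by the database

  // Before state
  before_workflow_json: object;
  before_workflow_hash: string;
  before_validation_status: 'valid' | 'invalid' | 'unknown';
  before_error_count?: number;

  // Instruction
  instruction: string;
  instruction_type: 'ai_generated' | 'user_provided' | 'auto_fix' | 'validation_correction';

  // After state
  after_workflow_json: object;
  after_workflow_hash: string;
  after_validation_status: 'valid' | 'invalid' | 'unknown';
  after_error_count?: number;

  // Changes & outcome
  nodes_modified_count?: number;
  properties_modified_count?: number;
  mutation_success: boolean;
  validation_improved?: boolean;
  validation_errors_fixed?: number;
  execution_duration_ms?: number;
}
```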
### Aggregate Analysis
```sql
-- Success rates by instruction type
SELECT instruction_type, COUNT(*) as count,
ROUND(100.0 * COUNT(*) FILTER(WHERE mutation_success) / COUNT(*), 2) as success_rate
FROM workflow_mutations
GROUP BY instruction_type;
-- Validation improvement distribution
SELECT validation_errors_fixed, COUNT(*) as count
FROM workflow_mutations
WHERE validation_improved = true
GROUP BY 1
ORDER BY 2 DESC;
-- Complexity transitions
SELECT complexity_before, complexity_after, COUNT(*) as transitions
FROM workflow_mutations
GROUP BY 1, 2;
```
---
## Storage Requirements
### Data Size Estimates
```
Average Before Workflow: 10 KB
Average After Workflow: 10 KB
Average Instruction: 500 B
Indexes & Metadata: 5 KB
Per Mutation Total: 25 KB
Monthly Mutations (estimate): 10K-50K
Monthly Storage: 250 MB - 1.2 GB
Annual Storage: 3-14 GB
```
### Optimization Strategies
1. **Compression:** Gzip workflows >1MB
2. **Deduplication:** Skip identical before/after pairs
3. **Retention:** Define archival policy (90 days? 1 year?)
4. **Indexing:** Materialized views for common queries
---
## Data Safety & Privacy
### Current Protections
- User IDs are anonymized
- Credentials are stripped from workflows
- Email addresses are masked [EMAIL]
- API keys are masked [KEY]
- URLs are masked [URL]
- Error messages are sanitized
### For Mutations Table
- Continue PII sanitization
- Hash verification for integrity
- Size limits (10 MB per workflow with compression)
- User consent (telemetry opt-in)
---
## Integration Points
### Where to Add Tracking Calls
```typescript
// In n8n_autofix_workflow
await telemetry.trackWorkflowMutation(
originalWorkflow,
'Auto-fix validation errors',
fixedWorkflow,
{ instructionType: 'auto_fix', success: true }
);
// In n8n_update_partial_workflow
await telemetry.trackWorkflowMutation(
currentWorkflow,
formatOperationsAsInstruction(operations),
updatedWorkflow,
{ instructionType: 'user_provided' }
);
```
### No Breaking Changes
- Fully backward compatible
- Existing telemetry unaffected
- Optional feature (can disable if needed)
- Doesn't require version bump
---
## Success Criteria
### Phase 1 Complete When:
- [ ] `workflow_mutations` table created with all indexes
- [ ] TypeScript types defined and compiling
- [ ] Validators written and tested
- [ ] No schema changes needed (validated against use cases)
### Phase 2 Complete When:
- [ ] TelemetryManager has `trackWorkflowMutation()` method
- [ ] EventTracker queues mutations properly
- [ ] BatchProcessor flushes mutations to Supabase
- [ ] Integration tests pass
### Phase 3 Complete When:
- [ ] 3+ tools instrumented with tracking calls
- [ ] Manual testing shows mutations captured
- [ ] Sample mutations visible in Supabase
- [ ] No performance regression in tools
### Phase 4 Complete When:
- [ ] 100+ mutations collected and validated
- [ ] Sample queries execute correctly
- [ ] Data quality metrics acceptable
- [ ] Dataset ready for ML training
---
## File Structure for Implementation
```
src/telemetry/
├── telemetry-types.ts (Update: Add WorkflowMutation interface)
├── telemetry-manager.ts (Update: Add trackWorkflowMutation method)
├── event-tracker.ts (Update: Add mutation tracking)
├── batch-processor.ts (Update: Add flush mutations)
├── mutation-analyzer.ts (NEW: Analyze workflow diffs)
├── mutation-validator.ts (NEW: Validate mutation data)
└── index.ts (Update: Export new functions)
tests/
└── unit/telemetry/
├── mutation-analyzer.test.ts (NEW)
├── mutation-validator.test.ts (NEW)
└── telemetry-integration.test.ts (Update)
```
---
## Risk Assessment
### Low Risk
- No changes to existing event system
- Supabase table addition is non-breaking
- TypeScript types only (no runtime impact)
### Medium Risk
- Large workflows may impact performance if not compressed
- Storage costs if dataset grows faster than estimated
- Mitigation: Compression + retention policy
### High Risk
- None identified if implemented as specified
---
## Next Steps
1. **Review This Analysis**
- Read TELEMETRY_ANALYSIS.md (main reference)
- Review TELEMETRY_MUTATION_SPEC.md (implementation guide)
2. **Plan Implementation**
- Estimate developer hours
- Assign implementation tasks
- Create Jira tickets or equivalent
3. **Phase 1: Create Infrastructure**
- Create Supabase table
- Define TypeScript types
- Write validators
4. **Phase 2: Integrate Core**
- Extend telemetry system
- Write integration tests
5. **Phase 3: Instrument Tools**
- Add tracking calls to 3+ mutation sources
- Test end-to-end
6. **Phase 4: Validate**
- Collect sample data
- Run analysis queries
- Begin dataset collection
---
## Questions to Answer Before Starting
1. **Data Retention:** How long should mutations be kept? (90 days? 1 year?)
2. **Storage Budget:** What's acceptable monthly storage cost?
3. **Workflow Size:** What's the max workflow size to store? (with or without compression?)
4. **Dataset Timeline:** When do you need first 1K/10K/100K samples?
5. **Privacy:** Any additional PII to sanitize beyond current approach?
6. **User Consent:** Should mutation tracking be separate opt-in from telemetry?
---
## Useful Commands
### View Current Telemetry Tables
```sql
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name LIKE 'telemetry%';
```
### Count Current Events
```sql
SELECT event, COUNT(*) FROM telemetry_events
GROUP BY event ORDER BY 2 DESC;
```
### Check Workflow Deduplication Rate
```sql
SELECT COUNT(*) as total,
       COUNT(DISTINCT workflow_hash) as unique_workflows
FROM telemetry_workflows;
```
---
## Document References
All documents are in the n8n-mcp repository root:
| Document | Purpose | Read Time |
|----------|---------|-----------|
| TELEMETRY_ANALYSIS.md | Complete schema & event analysis | 20-30 min |
| TELEMETRY_MUTATION_SPEC.md | Implementation specification | 30-40 min |
| TELEMETRY_QUICK_REFERENCE.md | Developer quick lookup | 10-15 min |
| TELEMETRY_ANALYSIS_REPORT.md | Executive summary (archive) | 15-20 min |
| TELEMETRY_TECHNICAL_DEEP_DIVE.md | Architecture (archive) | 20-25 min |
---
## Summary
The n8n-mcp telemetry infrastructure is mature, privacy-conscious, and well-designed. It currently tracks user interactions effectively but lacks workflow mutation capture needed for the n8n-fixer dataset.
**The solution is straightforward:** Add a single `workflow_mutations` table, extend the tracking system, and instrument 3-4 key tools.
**Implementation effort:** 3-4 weeks for a complete, production-ready system.
**Result:** A high-quality dataset of before/instruction/after workflow transformations suitable for training ML models to fix broken n8n workflows automatically.
---
**Analysis completed by:** Telemetry Data Analyst
**Date:** November 12, 2025
**Status:** Ready for implementation planning
For questions or clarifications, refer to the detailed specifications or raise issues on GitHub.

View File

@@ -1,503 +0,0 @@
# Telemetry Quick Reference Guide
Quick lookup for telemetry data access, queries, and common analysis patterns.
---
## Supabase Connection Details
### Database
- **URL:** `https://ydyufsohxdfpopqbubwk.supabase.co`
- **Project:** n8n-mcp telemetry database
- **Region:** (inferred from URL)
### Anon Key
Located in: `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/telemetry-types.ts` (line 105)
### Tables
| Name | Rows | Purpose |
|------|------|---------|
| `telemetry_events` | 276K+ | Discrete events (tool usage, errors, validation) |
| `telemetry_workflows` | 6.5K+ | Workflow metadata (structure, complexity) |
### Proposed Table
| Name | Rows | Purpose |
|------|------|---------|
| `workflow_mutations` | TBD | Before/instruction/after workflow snapshots |
---
## Event Types & Properties
### High-Volume Events
#### `tool_used` (40-50% of traffic)
```json
{
"event": "tool_used",
"properties": {
"tool": "get_node_info",
"success": true,
"duration": 245
}
}
```
**Query:** Find most used tools
```sql
SELECT properties->>'tool' as tool, COUNT(*) as count
FROM telemetry_events
WHERE event = 'tool_used' AND created_at >= NOW() - INTERVAL '7 days'
GROUP BY 1 ORDER BY 2 DESC;
```
#### `tool_sequence` (20-30% of traffic)
```json
{
"event": "tool_sequence",
"properties": {
"previousTool": "search_nodes",
"currentTool": "get_node_info",
"timeDelta": 1250,
"isSlowTransition": false,
"sequence": "search_nodes->get_node_info"
}
}
```
**Query:** Find common tool sequences
```sql
SELECT properties->>'sequence' as flow, COUNT(*) as count
FROM telemetry_events
WHERE event = 'tool_sequence' AND created_at >= NOW() - INTERVAL '30 days'
GROUP BY 1 ORDER BY 2 DESC LIMIT 20;
```
---
### Error & Validation Events
#### `error_occurred` (10-15% of traffic)
```json
{
"event": "error_occurred",
"properties": {
"errorType": "validation_error",
"context": "Node config failed [KEY]",
"tool": "config_validator",
"error": "[SANITIZED] type error",
"mcpMode": "stdio",
"platform": "darwin"
}
}
```
**Query:** Error frequency by type
```sql
SELECT
properties->>'errorType' as error_type,
COUNT(*) as frequency,
COUNT(DISTINCT user_id) as affected_users
FROM telemetry_events
WHERE event = 'error_occurred' AND created_at >= NOW() - INTERVAL '24 hours'
GROUP BY 1 ORDER BY 2 DESC;
```
#### `validation_details` (5-10% of traffic)
```json
{
"event": "validation_details",
"properties": {
"nodeType": "nodes_base_httpRequest",
"errorType": "required_field_missing",
"errorCategory": "required_field_error",
"details": { /* error details */ }
}
}
```
**Query:** Validation errors by node type
```sql
SELECT
properties->>'nodeType' as node_type,
properties->>'errorType' as error_type,
COUNT(*) as count
FROM telemetry_events
WHERE event = 'validation_details' AND created_at >= NOW() - INTERVAL '7 days'
GROUP BY 1, 2 ORDER BY 3 DESC;
```
---
### Workflow Events
#### `workflow_created`
```json
{
"event": "workflow_created",
"properties": {
"nodeCount": 3,
"nodeTypes": 2,
"complexity": "simple",
"hasTrigger": true,
"hasWebhook": false
}
}
```
**Query:** Workflow creation trends
```sql
SELECT
DATE(created_at) as date,
COUNT(*) as workflows_created,
AVG((properties->>'nodeCount')::int) as avg_nodes,
COUNT(*) FILTER(WHERE properties->>'complexity' = 'simple') as simple_count
FROM telemetry_events
WHERE event = 'workflow_created' AND created_at >= NOW() - INTERVAL '30 days'
GROUP BY 1 ORDER BY 1;
```
#### `workflow_validation_failed`
```json
{
"event": "workflow_validation_failed",
"properties": {
"nodeCount": 5
}
}
```
**Query:** Validation failure rate
```sql
SELECT
COUNT(*) FILTER(WHERE event = 'workflow_created') as successful,
COUNT(*) FILTER(WHERE event = 'workflow_validation_failed') as failed,
ROUND(100.0 * COUNT(*) FILTER(WHERE event = 'workflow_validation_failed')
/ NULLIF(COUNT(*), 0), 2) as failure_rate
FROM telemetry_events
WHERE created_at >= NOW() - INTERVAL '7 days'
AND event IN ('workflow_created', 'workflow_validation_failed');
```
---
### Session & System Events
#### `session_start`
```json
{
"event": "session_start",
"properties": {
"version": "2.22.15",
"platform": "darwin",
"arch": "arm64",
"nodeVersion": "v18.17.0",
"isDocker": false,
"cloudPlatform": null,
"mcpMode": "stdio",
"startupDurationMs": 1234
}
}
```
**Query:** Platform distribution
```sql
SELECT
properties->>'platform' as platform,
properties->>'arch' as arch,
COUNT(*) as sessions,
AVG((properties->>'startupDurationMs')::int) as avg_startup_ms
FROM telemetry_events
WHERE event = 'session_start' AND created_at >= NOW() - INTERVAL '30 days'
GROUP BY 1, 2 ORDER BY 3 DESC;
```
---
## Workflow Metadata Table Queries
### Workflow Complexity Distribution
```sql
SELECT
complexity,
COUNT(*) as count,
AVG(node_count) as avg_nodes,
MAX(node_count) as max_nodes
FROM telemetry_workflows
GROUP BY complexity
ORDER BY count DESC;
```
### Most Common Node Type Combinations
```sql
SELECT
node_types,
COUNT(*) as frequency
FROM telemetry_workflows
GROUP BY node_types
ORDER BY frequency DESC
LIMIT 20;
```
### Workflows with Triggers vs Webhooks
```sql
SELECT
has_trigger,
has_webhook,
COUNT(*) as count,
ROUND(100.0 * COUNT(*) / (SELECT COUNT(*) FROM telemetry_workflows), 2) as percentage
FROM telemetry_workflows
GROUP BY 1, 2;
```
### Deduplicated Workflows (by hash)
```sql
SELECT
COUNT(DISTINCT workflow_hash) as unique_workflows,
COUNT(*) as total_rows,
COUNT(DISTINCT user_id) as unique_users
FROM telemetry_workflows;
```
---
## Common Analysis Patterns
### 1. User Journey Analysis
```sql
-- Tool usage patterns for a user (anonymized)
WITH user_events AS (
SELECT
user_id,
event,
properties->>'tool' as tool,
created_at,
LAG(event) OVER(PARTITION BY user_id ORDER BY created_at) as prev_event
FROM telemetry_events
WHERE event IN ('tool_used', 'tool_sequence')
AND created_at >= NOW() - INTERVAL '7 days'
)
SELECT
prev_event,
event,
COUNT(*) as transitions
FROM user_events
WHERE prev_event IS NOT NULL
GROUP BY 1, 2
ORDER BY 3 DESC
LIMIT 20;
```
### 2. Performance Trends
```sql
-- Tool execution performance over time
WITH perf_data AS (
SELECT
properties->>'tool' as tool,
(properties->>'duration')::int as duration,
DATE(created_at) as date
FROM telemetry_events
WHERE event = 'tool_used'
AND created_at >= NOW() - INTERVAL '30 days'
)
SELECT
date,
tool,
COUNT(*) as executions,
AVG(duration)::INTEGER as avg_duration_ms,
PERCENTILE_CONT(0.95) WITHIN GROUP(ORDER BY duration) as p95_duration_ms,
MAX(duration) as max_duration_ms
FROM perf_data
GROUP BY date, tool
ORDER BY date DESC, tool;
```
### 3. Error Analysis with Context
```sql
-- Recent errors with affected tools
SELECT
properties->>'errorType' as error_type,
properties->>'tool' as affected_tool,
properties->>'context' as context,
COUNT(*) as occurrences,
MAX(created_at) as most_recent,
COUNT(DISTINCT user_id) as users_affected
FROM telemetry_events
WHERE event = 'error_occurred'
AND created_at >= NOW() - INTERVAL '24 hours'
GROUP BY 1, 2, 3
ORDER BY 4 DESC, 5 DESC;
```
### 4. Node Configuration Patterns
```sql
-- Most configured nodes and their complexity
WITH config_data AS (
SELECT
properties->>'nodeType' as node_type,
(properties->>'propertiesSet')::int as props_set,
properties->>'usedDefaults' = 'true' as used_defaults
FROM telemetry_events
WHERE event = 'node_configuration'
AND created_at >= NOW() - INTERVAL '30 days'
)
SELECT
node_type,
COUNT(*) as configurations,
AVG(props_set)::INTEGER as avg_props_set,
ROUND(100.0 * SUM(CASE WHEN used_defaults THEN 1 ELSE 0 END)
/ COUNT(*), 2) as default_usage_rate
FROM config_data
GROUP BY node_type
ORDER BY 2 DESC
LIMIT 20;
```
### 5. Search Effectiveness
```sql
-- Search queries and their success
SELECT
properties->>'searchType' as search_type,
COUNT(*) as total_searches,
COUNT(*) FILTER(WHERE (properties->>'hasResults')::boolean) as with_results,
ROUND(100.0 * COUNT(*) FILTER(WHERE (properties->>'hasResults')::boolean)
/ COUNT(*), 2) as success_rate,
AVG((properties->>'resultsFound')::int) as avg_results
FROM telemetry_events
WHERE event = 'search_query'
AND created_at >= NOW() - INTERVAL '7 days'
GROUP BY 1
ORDER BY 2 DESC;
```
---
## Data Size Estimates
### Current Data Volume
- **Total Events:** ~276K rows
- **Size per Event:** ~200 bytes (average)
- **Total Size (events):** ~55 MB
- **Total Workflows:** ~6.5K rows
- **Size per Workflow:** ~2 KB (sanitized)
- **Total Size (workflows):** ~13 MB
**Total Current Storage:** ~68 MB
### Growth Projections
- **Daily Events:** ~1,000-2,000
- **Monthly Growth:** ~30-60 MB
- **Annual Growth:** ~360-720 MB
---
## Helpful Constants
### Event Type Values
```
tool_used
tool_sequence
error_occurred
validation_details
node_configuration
performance_metric
search_query
workflow_created
workflow_validation_failed
session_start
startup_completed
startup_error
```
### Complexity Values
```
'simple'
'medium'
'complex'
```
### Validation Status Values (for mutations)
```
'valid'
'invalid'
'unknown'
```
### Instruction Type Values (for mutations)
```
'ai_generated'
'user_provided'
'auto_fix'
'validation_correction'
```
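If these values are needed in code, literal union types keep them aligned with the lists above (a sketch; equivalents may already exist in `telemetry-types.ts`):

```typescript
type TelemetryEventType =
  | 'tool_used' | 'tool_sequence' | 'error_occurred' | 'validation_details'
  | 'node_configuration' | 'performance_metric' | 'search_query'
  | 'workflow_created' | 'workflow_validation_failed'
  | 'session_start' | 'startup_completed' | 'startup_error';

type WorkflowComplexity = 'simple' | 'medium' | 'complex';

type MutationValidationStatus = 'valid' | 'invalid' | 'unknown';

type MutationInstructionType =
  | 'ai_generated' | 'user_provided' | 'auto_fix' | 'validation_correction';
```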
---
## Tips & Tricks
### Finding Zero-Result Searches
```sql
SELECT properties->>'query' as search_term, COUNT(*) as attempts
FROM telemetry_events
WHERE event = 'search_query'
AND (properties->>'isZeroResults')::boolean = true
AND created_at >= NOW() - INTERVAL '7 days'
GROUP BY 1 ORDER BY 2 DESC;
```
### Identifying Slow Operations
```sql
SELECT
properties->>'operation' as operation,
COUNT(*) as count,
PERCENTILE_CONT(0.99) WITHIN GROUP(ORDER BY (properties->>'duration')::int) as p99_ms
FROM telemetry_events
WHERE event = 'performance_metric'
AND created_at >= NOW() - INTERVAL '7 days'
GROUP BY 1
HAVING PERCENTILE_CONT(0.99) WITHIN GROUP(ORDER BY (properties->>'duration')::int) > 1000
ORDER BY 3 DESC;
```
### User Retention Analysis
```sql
-- Active users by week
WITH weekly_users AS (
SELECT
DATE_TRUNC('week', created_at) as week,
COUNT(DISTINCT user_id) as active_users
FROM telemetry_events
WHERE created_at >= NOW() - INTERVAL '90 days'
GROUP BY 1
)
SELECT week, active_users
FROM weekly_users
ORDER BY week DESC;
```
### Platform Usage Breakdown
```sql
SELECT
properties->>'platform' as platform,
properties->>'arch' as architecture,
COALESCE(properties->>'cloudPlatform', 'local') as deployment,
COUNT(DISTINCT user_id) as unique_users
FROM telemetry_events
WHERE event = 'session_start'
AND created_at >= NOW() - INTERVAL '30 days'
GROUP BY 1, 2, 3
ORDER BY 4 DESC;
```
---
## File References for Development
### Source Code
- **Types:** `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/telemetry-types.ts`
- **Manager:** `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/telemetry-manager.ts`
- **Tracker:** `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/event-tracker.ts`
- **Processor:** `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/src/telemetry/batch-processor.ts`
### Documentation
- **Full Analysis:** `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/TELEMETRY_ANALYSIS.md`
- **Mutation Spec:** `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/TELEMETRY_MUTATION_SPEC.md`
- **This Guide:** `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/TELEMETRY_QUICK_REFERENCE.md`
---
*Last Updated: November 12, 2025*

View File

@@ -1,654 +0,0 @@
# n8n-MCP Telemetry Technical Deep-Dive
## Detailed Error Patterns and Root Cause Analysis
---
## 1. ValidationError Root Causes (3,080 occurrences)
### 1.1 Workflow Structure Validation (21,423 node-level errors - 39.11%)
**Error Distribution by Node:**
- `workflow` node: 21,423 errors (39.11%)
- Generic nodes (Node0-19): ~6,000 errors (11%)
- Placeholder nodes ([KEY], ______, _____): ~1,600 errors (3%)
- Real nodes (Webhook, HTTP_Request): ~600 errors (1%)
**Interpreted Issue Categories:**
1. **Missing Trigger Nodes (Estimated 35-40% of workflow errors)**
- Users create workflows without start trigger
- Validation requires at least one trigger (webhook, schedule, etc.)
- Error message: Generic "validation failed" doesn't specify missing trigger
2. **Invalid Node Connections (Estimated 25-30% of workflow errors)**
- Nodes connected in wrong order
- Output type mismatch between connected nodes
- Circular dependencies created
- Example: Trying to use output of node that hasn't run yet
3. **Type Mismatches (Estimated 20-25% of workflow errors)**
- Node expects array, receives string
- Node expects object, receives primitive
- Related to TypeError errors (2,767 occurrences)
4. **Missing Required Properties (Estimated 10-15% of workflow errors)**
- Webhook nodes missing path/method
- HTTP nodes missing URL
- Database nodes missing connection string
### 1.2 Placeholder Node Test Data (4,700+ errors)
**Problem:** Generic test node names creating noise
```
Node0-Node19: ~6,000+ errors
[KEY]: 656 errors
______ (6 underscores): 643 errors
_____ (5 underscores): 207 errors
________ (8 underscores): 227 errors
```
**Evidence:** These names appear in telemetry_validation_errors_daily
- Consistent across 25-36 days
- Indicates: System test data or user test workflows
**Action Required:**
1. Filter test data from telemetry (add flag for test vs. production)
2. Clean up existing test workflows from database
3. Implement test isolation so test events don't pollute metrics
### 1.3 Webhook Validation Issues (435 errors)
**Webhook-Specific Problems:**
```
Error Pattern Analysis:
- Webhook: 435 errors
- Webhook_Trigger: 293 errors
- Total Webhook-related: 728 errors (~1.3% of validation errors)
```
**Common Webhook Failures:**
1. **Missing Required Fields:**
- No HTTP method specified (GET/POST/PUT/DELETE)
- No URL path configured
- No authentication method selected
2. **Configuration Errors:**
- Invalid URL patterns (special characters, spaces)
- Incorrect CORS settings
- Missing body for POST/PUT operations
- Header format issues
3. **Connection Issues:**
- Firewall/network blocking
- Unsupported protocol (HTTP vs HTTPS mismatch)
- TLS version incompatibility
---
## 2. TypeError Root Causes (2,767 occurrences)
### 2.1 Type Mismatch Categories
**Pattern Analysis:**
- 31.23% of all errors
- Indicates schema/type enforcement issues
- Overlaps with ValidationError (both types occur together)
### 2.2 Common Type Mismatches
**JSON Property Errors (Estimated 40% of TypeErrors):**
```
Problem: properties field in telemetry_events is JSONB
Possible Issues:
- Passing string "true" instead of boolean true
- Passing number as string "123"
- Passing array [value] instead of scalar value
- Nested object structure violations
```
**Node Property Errors (Estimated 35% of TypeErrors):**
```
HTTP Request Node Example:
- method: Expects "GET" | "POST" | etc., receives 1, 0 (numeric)
- timeout: Expects number (ms), receives string "5000"
- headers: Expects object {key: value}, receives string "[object Object]"
```
**Expression Errors (Estimated 25% of TypeErrors):**
```
n8n Expressions Example:
- $json.count expects number, receives $json.count_str (string)
- $node[nodeId].data expects array, receives single object
- Missing type conversion: parseInt(), String(), etc.
```
### 2.3 Type Validation System Gaps
**Current System Weakness:**
- JSONB storage in Postgres doesn't enforce types
- Validation happens at application layer
- No real-time type checking during workflow building
- Type errors only discovered at validation time
**Recommended Fixes:**
1. Implement strict schema validation in node parser (see the sketch after this list)
2. Add TypeScript definitions for all node properties
3. Generate type stubs from node definitions
4. Validate types during property extraction phase
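A minimal sketch of fix 1 (strict property-level validation before ingestion); the schema shape and helper name are illustrative, not the actual node parser API:

```typescript
// Hypothetical per-property schema derived from node definitions.
interface PropertySchema {
  name: string;
  type: 'string' | 'number' | 'boolean' | 'object' | 'array';
  required?: boolean;
  options?: string[];              // valid enum values, if any
}

// Check a raw property bag against its schema and return actionable errors
// instead of letting type mismatches surface later as generic TypeErrors.
function checkPropertyTypes(values: Record<string, unknown>, schema: PropertySchema[]): string[] {
  const errors: string[] = [];
  for (const prop of schema) {
    const value = values[prop.name];
    if (value === undefined || value === null) {
      if (prop.required) errors.push(`Missing required property '${prop.name}'`);
      continue;
    }
    const actual = Array.isArray(value) ? 'array' : typeof value;
    if (actual !== prop.type) {
      errors.push(`Property '${prop.name}' expects ${prop.type}, received ${actual}`);
    }
    if (prop.options && typeof value === 'string' && !prop.options.includes(value)) {
      errors.push(`Invalid value '${value}' for '${prop.name}'. Must be one of: ${prop.options.join(', ')}`);
    }
  }
  return errors;
}
```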
---
## 3. Generic Error Root Causes (2,711 occurrences)
### 3.1 Why Generic Errors Are Problematic
**Current Classification:**
- 30.60% of all errors
- No error code or subtype
- Indicates unhandled exception scenario
- Prevents automated recovery
**Likely Sources:**
1. **Database Connection Errors (Estimated 30%)**
- Timeout during validation query
- Connection pool exhaustion
- Query too large/complex
2. **Out of Memory Errors (Estimated 20%)**
- Large workflow processing
- Huge node count (100+ nodes)
- Property extraction on complex nodes
3. **Unhandled Exceptions (Estimated 25%)**
- Code path not covered by specific error handling
- Unexpected input format
- Missing null checks
4. **External Service Failures (Estimated 15%)**
- Documentation fetch timeout
- Node package load failure
- Network connectivity issues
5. **Unknown Issues (Estimated 10%)**
- No further categorization available
### 3.2 Error Context Missing
**What We Know:**
- Error occurred during validation/operation
- Generic type (Error vs. ValidationError vs. TypeError)
**What We Don't Know:**
- Which specific validation step failed
- What input caused the error
- What operation was in progress
- Root exception details (stack trace)
---
## 4. Tool-Specific Failure Analysis
### 4.1 `get_node_info` - 11.72% Failure Rate (CRITICAL)
**Failure Count:** 1,208 out of 10,304 invocations
**Hypothesis Testing:**
**Hypothesis 1: Missing Database Records (30% likelihood)**
```
Scenario: Node definition not in database
Evidence:
- 1,208 failures across 36 days
- Consistent rate suggests systematic gaps
- New nodes not in database after updates
Solution:
- Verify database has 525 total nodes
- Check if failing on node types that exist
- Implement cache warming
```
**Hypothesis 2: Encoding/Parsing Issues (40% likelihood)**
```
Scenario: Complex node properties fail to parse
Evidence:
- Only 11.72% fail (not all complex nodes)
- Specific to get_node_info, not essentials
- Likely: edge case in JSONB serialization
Example Problem:
- Node with circular references
- Node with very large property tree
- Node with special characters in documentation
- Node with unicode/non-ASCII characters
Solution:
- Add error telemetry to capture failing node names
- Implement pagination for large properties
- Add encoding validation
```
**Hypothesis 3: Concurrent Access Issues (20% likelihood)**
```
Scenario: Race condition during node updates
Evidence:
- Fails at specific times
- Not tied to specific node types
- Affects retrieval, not storage
Solution:
- Add read locking during updates
- Implement query timeouts
- Add retry logic with exponential backoff
```
**Hypothesis 4: Query Timeout (10% likelihood)**
```
Scenario: Database query takes >30s for large nodes
Evidence:
- Observed in telemetry tool sequences
- High latency for some operations
- System resource constraints
Solution:
- Add query optimization
- Implement caching layer
- Pre-compute common queries
```
### 4.2 `get_node_documentation` - 4.13% Failure Rate
**Failure Count:** 471 out of 11,403 invocations
**Root Causes (Estimated):**
1. **Missing Documentation (40%)** - Some nodes lack comprehensive docs
2. **Retrieval Errors (30%)** - Timeout fetching from n8n.io API
3. **Parsing Errors (20%)** - Documentation format issues
4. **Encoding Issues (10%)** - Non-ASCII characters in docs
**Pattern:** Correlated with `get_node_info` failures (both documentation retrieval)
### 4.3 `validate_node_operation` - 6.42% Failure Rate
**Failure Count:** 363 out of 5,654 invocations
**Root Causes (Estimated):**
1. **Incomplete Operation Definitions (40%)**
- Validator doesn't know all valid operations for node
- Operation definitions outdated vs. actual node
- New operations not in validator database
2. **Property Dependency Logic Gaps (35%)**
- Validator doesn't understand conditional requirements
- Missing: "if X is set, then Y is required"
- Property visibility rules incomplete
3. **Type Matching Failures (20%)**
- Validator expects different type than provided
- Type coercion not working
- Related to TypeError issues
4. **Edge Cases (5%)**
- Unusual property combinations
- Boundary conditions
- Rarely-used operation modes
---
## 5. Temporal Error Patterns
### 5.1 Error Spike Root Causes
**September 26 Spike (6,222 validation errors)**
- Represents: 70% of September errors in single day
- Possible causes:
1. Batch workflow import test
2. Database migration or schema change
3. Node definitions updated incompatibly
4. System performance issue (slow validation)
**October 12 Spike (567.86% increase: 28 → 187 errors)**
- Could indicate: System restart, deployment, rollback
- Recovery pattern: Immediate return to normal
- Suggests: One-time event, not systemic
**October 3-10 Plateau (2,000+ errors daily)**
- Duration: 8 days sustained elevation
- Peak: October 4 (3,585 errors)
- Recovery: October 11 (83.72% drop to 28 errors)
- Interpretation: Incident period with mitigation
### 5.2 Current Trend (Oct 30-31)
- Oct 30: 278 errors (elevated)
- Oct 31: 130 errors (recovering)
- Baseline: 60-65 errors/day (normal)
**Interpretation:** System health improving; approaching steady state
---
## 6. Tool Sequence Performance Bottlenecks
### 6.1 Sequential Update Loop Analysis
**Pattern:** `n8n_update_partial_workflow → n8n_update_partial_workflow`
- **Occurrences:** 96,003 (highest volume)
- **Avg Duration:** 55.2 seconds
- **Slow Transitions:** 63,322 (66%)
**Why This Matters:**
```
Scenario: Workflow with 20 property updates
Current: 20 × 55.2s = 18.4 minutes total
With batch operation: ~5-10 seconds total
Improvement: 95%+ faster
```
**Root Causes:**
1. **No Batch Update Operation (80% likely)**
- Each update is separate API call
- Each call: parse request + validate + update + persist
- No atomicity guarantee
2. **Network Round-Trip Latency (15% likely)**
- Each call adds latency
- If client/server not co-located: 100-200ms per call
- Compounds with update operations
3. **Validation on Each Update (5% likely)**
- Full workflow validation on each property change
- Could be optimized to field-level validation
**Solution:**
```typescript
// Proposed Batch Update Operation
type BatchOperation =
  | { type: 'updateNode'; nodeId: string; properties: object }
  | { type: 'updateConnection'; from: string; to: string; config: object }
  | { type: 'updateSettings'; settings: object };

interface BatchUpdateRequest {
  workflowId: string;
  operations: BatchOperation[];  // Applied in order as a single atomic call
  validateFull: boolean;         // Full or incremental validation
}
// Returns: Updated workflow with all changes applied atomically
```
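For illustration, the 20-update scenario above collapses into one request (workflow ID, node IDs, and property names are hypothetical):

```typescript
const request: BatchUpdateRequest = {
  workflowId: 'wf_123',
  validateFull: false, // incremental validation; run one full pass at the end
  operations: [
    { type: 'updateNode', nodeId: 'http_request_1', properties: { url: 'https://api.example.com' } },
    { type: 'updateNode', nodeId: 'http_request_1', properties: { timeout: 5000 } },
    { type: 'updateConnection', from: 'webhook_trigger', to: 'http_request_1', config: {} },
    // ...the remaining property updates go here instead of 17 more round trips
  ],
};
```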
### 6.2 Read-After-Write Pattern
**Pattern:** `n8n_update_partial_workflow → n8n_get_workflow`
- **Occurrences:** 19,876
- **Avg Duration:** 96.6 seconds
- **Pattern:** Users verify state after update
**Root Causes:**
1. **Updates Don't Return State (70% likely)**
- Update operation returns success/failure
- Doesn't return updated workflow state
- Forces clients to fetch separately
2. **Verification Uncertainty (20% likely)**
- Users unsure if update succeeded completely
- Fetch to double-check
- Especially with complex multi-node updates
3. **Change Tracking Needed (10% likely)**
- Users want to see what changed
- Need diff/changelog
- Requires full state retrieval
**Solution:**
```typescript
// Update response should include:
{
success: true,
workflow: { /* full updated workflow */ },
changes: {
updated_fields: ['nodes[0].name', 'settings.timezone'],
added_connections: [{ from: 'node1', to: 'node2' }],
removed_nodes: []
}
}
```
### 6.3 Search Inefficiency Pattern
**Pattern:** `search_nodes → search_nodes`
- **Occurrences:** 68,056
- **Avg Duration:** 11.2 seconds
- **Slow Transitions:** 11,544 (17%)
**Root Causes:**
1. **Poor Ranking (60% likely)**
- Users search for "http", get results in wrong order
- "HTTP Request" node not in top 3 results
- Users refine search
2. **Query Term Mismatch (25% likely)**
- Users search "webhook trigger"
- System searches for exact phrase
- Returns 0 results; users try "webhook" alone
3. **Incomplete Result Matching (15% likely)**
- Synonym support missing
- Category/tag matching weak
- Users don't know official node names
**Solution:**
```
Analyze top 50 repeated search sequences:
- "http" → "http request" → "HTTP Request"
Action: Rank "HTTP Request" in top 3 for "http" search
- "schedule" → "schedule trigger" → "cron"
Action: Tag scheduler nodes with "cron", "schedule trigger" synonyms
- "webhook" → "webhook trigger" → "HTTP Trigger"
Action: Improve documentation linking webhook triggers
```
---
## 7. Validation Accuracy Issues
### 7.1 `validate_workflow` - 5.50% Failure Rate
**Root Causes:**
1. **Incomplete Validation Rules (45%)**
- Validator doesn't check all requirements
- Missing rules for specific node combinations
- Circular dependency detection missing
2. **Schema Version Mismatches (30%)**
- Validator schema != actual node schema
- Happens after node updates
- Validator not updated simultaneously
3. **Performance Timeouts (15%)**
- Very large workflows (100+ nodes)
- Validation takes >30 seconds
- Timeout triggered
4. **Type System Gaps (10%)**
- Type checking incomplete
- Coercion not working correctly
- Related to TypeError issues
### 7.2 `validate_node_operation` - 6.42% Failure Rate
**Root Causes (Estimated):**
1. **Missing Operation Definitions (40%)**
- New operations not in validator
- Rare operations not covered
- Custom operations not supported
2. **Property Dependency Gaps (30%)**
- Conditional properties not understood
- "If X=Y, then Z is required" rules missing
- Visibility logic incomplete
3. **Type Validation Failures (20%)**
- Expected type doesn't match provided type
- No implicit type coercion
- Complex type definitions not validated
4. **Edge Cases (10%)**
- Boundary values
- Special characters in properties
- Maximum length violations
---
## 8. Systemic Issues Identified
### 8.1 Validation Error Message Quality
**Current State:**
```
❌ "Validation failed"
❌ "Invalid workflow configuration"
❌ "Node configuration error"
```
**What Users Need:**
```
✅ "Workflow missing required start trigger node. Add a trigger (Webhook, Schedule, or Manual Trigger)"
✅ "HTTP Request node 'call_api' missing required URL property"
✅ "Cannot connect output from 'set_values' (type: string) to 'http_request' input (expects: object)"
```
**Impact:** Generic errors prevent both users and AI agents from self-correcting
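A sketch of how detected structural issues could be mapped to messages of this kind (the issue variants are illustrative, not the current workflow-validator implementation):

```typescript
// Map a detected structural problem to an actionable message that tells the
// user (or agent) exactly what to change, instead of "Validation failed".
type StructuralIssue =
  | { kind: 'missing_trigger' }
  | { kind: 'missing_property'; nodeName: string; property: string }
  | { kind: 'type_mismatch'; fromNode: string; toNode: string; actual: string; expected: string };

function formatIssue(issue: StructuralIssue): string {
  switch (issue.kind) {
    case 'missing_trigger':
      return 'Workflow missing required start trigger node. Add a trigger (Webhook, Schedule, or Manual Trigger)';
    case 'missing_property':
      return `${issue.nodeName} node missing required '${issue.property}' property`;
    case 'type_mismatch':
      return `Cannot connect output from '${issue.fromNode}' (type: ${issue.actual}) to '${issue.toNode}' input (expects: ${issue.expected})`;
  }
}
```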
### 8.2 Type System Gaps
**Current System:**
- JSONB properties in database (no type enforcement)
- Application-level validation (catches errors late)
- Limited type definitions for properties
**Gaps:**
1. No strict schema validation during ingestion
2. Type coercion not automatic
3. Complex type definitions (unions, intersections) not supported
### 8.3 Test Data Contamination
**Problem:** 4,700+ errors from placeholder node names
- Node0-Node19: Generic test nodes
- [KEY], ______, _______: Incomplete configurations
- These create noise in real error metrics
**Solution:**
1. Flag test vs. production data at ingestion
2. Separate test telemetry database
3. Filter test data from production analysis
---
## 9. Tool Reliability Correlation Matrix
**High Reliability Cluster (99%+ success):**
- n8n_list_executions (100%)
- n8n_get_workflow (99.94%)
- n8n_get_execution (99.90%)
- search_nodes (99.89%)
**Medium Reliability Cluster (95-99% success):**
- get_node_essentials (96.19%)
- n8n_create_workflow (96.35%)
- get_node_documentation (95.87%)
- validate_workflow (94.50%)
**Problematic Cluster (<95% success):**
- get_node_info (88.28%) ← CRITICAL
- validate_node_operation (93.58%)
**Pattern:** Information retrieval tools have lower success than state manipulation tools
**Hypothesis:** Read operations affected by:
- Stale caches
- Missing data
- Encoding issues
- Network timeouts
---
## 10. Recommendations by Root Cause
### Validation Error Improvements (Target: 50% reduction)
1. **Specific Error Messages** (+25% reduction)
- Map 39% workflow errors → specific structural requirements
- "Missing start trigger" vs. "validation failed"
2. **Test Data Isolation** (+15% reduction)
- Remove 4,700+ errors from placeholder nodes
- Separate test telemetry pipeline
3. **Type System Strictness** (+10% reduction)
- Implement schema validation on ingestion
- Prevent type mismatches at source
### Tool Reliability Improvements (Target: 10% reduction overall)
1. **get_node_info Reliability** (-1,200 errors potential)
- Add retry logic
- Implement read cache
- Fallback to essentials
2. **Workflow Validation** (-500 errors potential)
- Improve validation logic
- Add missing edge case handling
- Optimize performance
3. **Node Operation Validation** (-360 errors potential)
- Complete operation definitions
- Implement property dependency logic
- Add type coercion
### Performance Improvements (Target: 90% latency reduction)
1. **Batch Update Operation**
- Reduce 96,003 sequential updates from 55.2s to <5s each
- Potential: 18-minute reduction per workflow construction
2. **Return Updated State**
- Eliminate 19,876 redundant get_workflow calls
- Reduce round trips by 40%
3. **Search Ranking**
- Reduce 68,056 sequential searches
- Improve hit rate on first search
---
## Conclusion
The n8n-MCP system exhibits:
1. **Strong Infrastructure** (99%+ reliability for core operations)
2. **Weak Information Retrieval** (`get_node_info` at 88%)
3. **Poor User Feedback** (generic error messages)
4. **Validation Gaps** (39% of errors unspecified)
5. **Performance Bottlenecks** (sequential operations at 55+ seconds)
Each issue has clear root causes and actionable solutions. Implementing Priority 1 recommendations would address 80% of user-facing problems and significantly improve AI agent success rates.
---
**Report Prepared By:** AI Telemetry Analyst
**Technical Depth:** Deep Dive Level
**Audience:** Engineering Team / Architecture Review
**Date:** November 8, 2025

View File

@@ -1,683 +0,0 @@
# N8N-MCP Telemetry Analysis: Validation Failures as System Feedback
**Analysis Date:** November 8, 2025
**Data Period:** September 26 - November 8, 2025 (90 days)
**Report Type:** Comprehensive Validation Failure Root Cause Analysis
---
## Executive Summary
Validation failures in n8n-mcp are NOT system failures—they are the system working exactly as designed, catching configuration errors before deployment. However, the high volume (29,218 validation events across 9,021 users) reveals significant **documentation and guidance gaps** that prevent AI agents from configuring nodes correctly on the first attempt.
### Critical Findings:
1. **100% Retry Success Rate**: When AI agents encounter validation errors, they successfully correct and deploy workflows same-day 100% of the time—proving validation feedback is effective and agents learn quickly.
2. **Top 3 Problematic Areas** (accounting for 75% of errors):
- Workflow structure issues (undefined node IDs/names, connection errors): 33.2%
- Webhook/trigger configuration: 6.7%
- Required field documentation: 7.7%
3. **Tool Usage Insight**: Agents using documentation tools BEFORE attempting configuration have slightly HIGHER error rates (12.6% vs 10.8%), suggesting documentation alone is insufficient—agents need better guidance integrated into tool responses.
4. **Search Query Patterns**: Most common pre-failure searches are generic ("webhook", "http request", "openai") rather than specific node configuration searches, indicating agents are searching for node existence rather than configuration details.
5. **Node-Specific Crisis Points**:
- **Webhook/Webhook Trigger**: 127 combined failures (47 unique users)
- **AI Agent**: 36 failures (20 users) - missing AI model connections
- **Slack variants**: 101 combined failures (7 users)
- **Generic nodes** ([KEY], underscores): 275 failures - likely malformed JSON from agents
---
## Detailed Analysis
### 1. Node-Specific Difficulty Ranking
The nodes causing the most validation failures reveal where agent guidance is weakest:
| Rank | Node Type | Failures | Users | Primary Error | Impact |
|------|-----------|----------|-------|---------------|--------|
| 1 | Webhook (trigger config) | 127 | 40 | responseNode requires `onError: "continueRegularOutput"` | HIGH |
| 2 | Slack_Notification | 73 | 2 | Required field "Send Message To" empty; Invalid enum "select" | HIGH |
| 3 | AI_Agent | 36 | 20 | Missing `ai_languageModel` connection | HIGH |
| 4 | HTTP_Request | 31 | 13 | Missing required fields (varied) | MEDIUM |
| 5 | OpenAI | 35 | 8 | Misconfigured model/auth/parameters | MEDIUM |
| 6 | Airtable_Create_Record | 41 | 1 | Required fields for API records | MEDIUM |
| 7 | Telegram | 27 | 1 | Operation enum mismatch; Missing Chat ID | MEDIUM |
**Key Insight**: The most problematic nodes are trigger/connector nodes and AI/API integrations—these require deep understanding of external API contracts that our documentation may not adequately convey.
---
### 2. Top 10 Validation Error Messages (with specific examples)
These are the precise errors agents encounter. Each one represents a documentation opportunity:
| Rank | Error Message | Count | Affected Users | Interpretation |
|------|---------------|-------|---|---|
| 1 | "Duplicate node ID: undefined" | 179 | 20 | **CRITICAL**: Agents generating invalid JSON or malformed workflow structures. Likely JSON parsing issues on LLM side. |
| 2 | "Single-node workflows only valid for webhooks" | 58 | 47 | Agents don't understand webhook-only constraint. Need explicit documentation. |
| 3 | "responseNode mode requires onError: 'continueRegularOutput'" | 57 | 33 | Webhook-specific configuration rule not obvious. **Error message is helpful but documentation missing context.** |
| 4 | "Duplicate node name: undefined" | 61 | 6 | Related to #1—structural issues with node definitions. |
| 5 | "Multi-node workflow has no connections" | 33 | 24 | Agents don't understand workflow connection syntax. **Need examples in documentation.** |
| 6 | "Workflow contains a cycle (infinite loop)" | 33 | 19 | Agents not visualizing workflow topology before creating. |
| 7 | "Required property 'Send Message To' cannot be empty" | 25 | 1 | Slack node properties not obvious from schema. |
| 8 | "AI Agent requires ai_languageModel connection" | 22 | 15 | Missing documentation on AI node dependencies. |
| 9 | "Node position must be array [x, y]" | 25 | 4 | Position format not specified in node documentation. |
| 10 | "Invalid value for 'operation'. Must be one of: [list]" | 14 | 1 | Enum values not provided before validation. |
---
### 3. Error Categories & Root Causes
Breaking down all 4,898 validation details events into categories reveals the real problems:
```
Error Category Distribution:
┌─────────────────────────────────┬───────────┬──────────┐
│ Category │ Count │ % of All │
├─────────────────────────────────┼───────────┼──────────┤
│ Other (workflow structure) │ 1,268 │ 25.89% │
│ Connection/Linking Errors │ 676 │ 13.80% │
│ Missing Required Field │ 378 │ 7.72% │
│ Invalid Field Value/Enum │ 202 │ 4.12% │
│ Error Handler Configuration │ 148 │ 3.02% │
│ Invalid Position │ 109 │ 2.23% │
│ Unknown Node Type │ 88 │ 1.80% │
│ Missing typeVersion │ 50 │ 1.02% │
├─────────────────────────────────┼───────────┼──────────┤
│ SUBTOTAL (Top Issues) │ 2,919 │ 59.60% │
│ All Other Errors │ 1,979 │ 40.40% │
└─────────────────────────────────┴───────────┴──────────┘
```
### 3.1 Root Cause Analysis by Category
**[25.89%] Workflow Structure Issues (1,268 errors)**
- Undefined node IDs/names (likely JSON malformation)
- Incorrect node position formats
- Missing required workflow metadata
- **ROOT CAUSE**: Agents constructing workflow JSON without proper schema understanding. Need better template examples and validation error context.
**[13.80%] Connection/Linking Errors (676 errors)**
- Multi-node workflows with no connections defined
- Missing connection syntax in workflow definition
- Error handler connection misconfigurations
- **ROOT CAUSE**: Connection format is unintuitive. Sample workflows in documentation critically needed.
**[7.72%] Missing Required Fields (378 errors)**
- "Send Message To" for Slack
- "Chat ID" for Telegram
- "Title" for Google Docs
- **ROOT CAUSE**: Required fields not clearly marked in `get_node_essentials()` response. Need explicit "REQUIRED" labeling.
**[4.12%] Invalid Field Values/Enums (202 errors)**
- Invalid "operation" selected
- Invalid "select" value for choice fields
- Wrong authentication method type
- **ROOT CAUSE**: Enum options not provided in advance. Tool should return valid options BEFORE agent attempts configuration.
**[3.02%] Error Handler Configuration (148 errors)**
- ResponseNode mode setup
- onError settings for async operations
- Error output connections in wrong position
- **ROOT CAUSE**: Error handling is complex; needs dedicated tutorial/examples in documentation.
---
### 4. Tool Usage Pattern: Before Validation Failures
This reveals what agents attempt BEFORE hitting errors:
```
Tools Used Before Failures (within 10 minutes):
┌─────────────────────────────────────┬──────────┬────────┐
│ Tool │ Count │ Users │
├─────────────────────────────────────┼──────────┼────────┤
│ search_nodes │ 320 │ 113 │ ← Most common
│ get_node_essentials │ 177 │ 73 │ ← Documentation users
│ validate_workflow │ 137 │ 47 │ ← Validation-checking
│ tools_documentation │ 78 │ 67 │ ← Help-seeking
│ n8n_update_partial_workflow │ 72 │ 32 │ ← Fixing attempts
├─────────────────────────────────────┼──────────┼────────┤
│ INSIGHT: "search_nodes" (320) is │ │ │
│ 1.8x more common than │ │ │
│ "get_node_essentials" (177) │ │ │
└─────────────────────────────────────┴──────────┴────────┘
```
**Critical Insight**: Agents search for nodes before reading detailed documentation. They're trying to locate a node first, then attempt configuration without sufficient guidance. The search_nodes tool should provide better configuration hints.
---
### 5. Search Queries Before Failures
Most common search patterns when agents subsequently fail:
| Query | Count | Users | Interpretation |
|-------|-------|-------|---|
| "webhook" | 34 | 16 | Generic search; 3.4min before failure |
| "http request" | 32 | 20 | Generic search; 4.1min before failure |
| "openai" | 23 | 7 | Generic search; 3.4min before failure |
| "slack" | 16 | 9 | Generic search; 6.1min before failure |
| "gmail" | 12 | 4 | Generic search; 0.1min before failure |
| "telegram" | 10 | 10 | Generic search; 5.8min before failure |
**Finding**: Searches are too generic. Agents search "webhook" then fail on "responseNode configuration"—they found the node but don't understand its specific requirements. Need **operation-specific search results**.
---
### 6. Documentation Usage Impact
Critical finding on effectiveness of reading documentation FIRST:
```
Documentation Impact Analysis:
┌──────────────────────────────────┬───────────┬─────────┬──────────┐
│ Group │ Total │ Errors │ Success │
│ │ Users │ Rate │ Rate │
├──────────────────────────────────┼───────────┼─────────┼──────────┤
│ Read Documentation FIRST │ 2,304 │ 12.6% │ 87.4% │
│ Did NOT Read Documentation │ 673 │ 10.8% │ 89.2% │
└──────────────────────────────────┴───────────┴─────────┴──────────┘
Result: Counter-intuitive!
- Documentation readers have a 1.8-percentage-point HIGHER error rate
- BUT they attempt MORE workflows (21,748 vs 3,869)
- Interpretation: Advanced users read docs and attempt complex workflows
```
**Critical Implication**: Current documentation doesn't prevent errors. We need **better, more actionable documentation**, not just more documentation. Documentation should have:
1. Clear required field callouts
2. Example configurations
3. Common pitfall warnings
4. Operation-specific guidance
---
### 7. Retry Success & Self-Correction
**Excellent News**: Agents learn from validation errors immediately:
```
Same-Day Recovery Rate: 100% ✓
Distribution of Successful Corrections:
- Same day (within hours): 453 user-date pairs (100%)
- Next day: 108 user-date pairs (100%)
- Within 2-3 days: 67 user-date pairs (100%)
- Within 4-7 days: 33 user-date pairs (100%)
Conclusion: ALL users who encounter validation errors subsequently
succeed in correcting them. Validation feedback works perfectly.
The system is teaching agents what's wrong.
```
**This validates the premise: Validation is not broken. Guidance is broken.**
---
### 8. Property-Level Difficulty Matrix
Which specific node properties cause the most confusion:
**High-Difficulty Properties** (frequently empty/invalid):
1. **Authentication fields** (universal across nodes)
- Missing/invalid credentials
- Wrong auth type selected
2. **Operation/Action fields** (conditional requirements)
- Invalid enum selection
- No documentation of valid values
3. **Connection-dependent fields** (webhook, AI nodes)
- Missing model selection (AI Agent)
- Missing error handler connection
4. **Positional/structural fields**
- Node position array format
- Connection syntax
5. **Required-but-optional-looking fields**
- "Send Message To" for Slack
- "Chat ID" for Telegram
**Common Pattern**: Fields that are:
- Conditional (visible only if other field = X)
- Have complex validation (must be array of specific format)
- Require external knowledge (valid enum values)
...are the most error-prone.
---
## Actionable Recommendations
### PRIORITY 1: IMMEDIATE HIGH-IMPACT (Fixes 33% of errors)
#### 1.1 Fix Webhook Configuration Documentation
**Impact**: 127 failures, 40 unique users
**Action Items**:
- Create a dedicated "Webhook & Trigger Configuration" guide
- Explicitly document the `responseNode mode` requires `onError: "continueRegularOutput"` rule
- Provide before/after examples showing correct vs incorrect configuration
- Add to `get_node_essentials()` for Webhook nodes: "⚠️ IMPORTANT: If using responseNode, add onError field"
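A before/after snippet in that guide might look like the following sketch (parameter names such as `responseMode` are inferred from the validation messages quoted in this report, so treat them as assumptions rather than the authoritative node schema):
```typescript
// Hedged sketch of a correctly configured Webhook node in responseNode mode.
// Field names are inferred from the error messages analyzed in this report.
const webhookNode = {
  id: "webhook_1",
  name: "Webhook",
  type: "n8n-nodes-base.webhook",
  typeVersion: 2,
  position: [0, 0],
  parameters: {
    httpMethod: "POST",
    path: "incoming-order",
    responseMode: "responseNode", // respond via a downstream "Respond to Webhook" node
  },
  // The rule the guide documents: responseNode mode requires this setting,
  // otherwise validation fails with
  // "responseNode mode requires onError: 'continueRegularOutput'".
  onError: "continueRegularOutput",
};
```
Omitting the final `onError` line reproduces the same class of failure catalogued in Appendix A (57 occurrences).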
**SQL Query for Verification**:
```sql
SELECT
properties->>'nodeType' as node_type,
properties->'details'->>'message' as error_message,
COUNT(*) as count
FROM telemetry_events
WHERE event = 'validation_details'
AND properties->>'nodeType' IN ('Webhook', 'Webhook_Trigger')
AND created_at >= NOW() - INTERVAL '90 days'
GROUP BY node_type, properties->'details'->>'message'
ORDER BY count DESC;
```
**Expected Outcome**: 10-15% reduction in webhook-related failures
---
#### 1.2 Fix Node Structure Error Messages
**Impact**: 179 "Duplicate node ID: undefined" failures
**Action Items**:
1. When validation fails with "Duplicate node ID: undefined", provide:
- Exact line number in workflow JSON where the error occurs
- Example of correct node ID format
- Suggestion: "Did you forget the 'id' field in node definition?"
2. Enhance `n8n_validate_workflow` to detect structural issues BEFORE attempting validation:
- Check all nodes have `id` field
- Check all nodes have `type` field
- Provide detailed structural report
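A minimal sketch of such a pre-check is shown below (the type and function names are illustrative, not the existing API of the validator):
```typescript
// Illustrative structural pre-check: runs before full validation and reports
// missing 'id'/'type' fields and duplicate IDs with their array index, so the
// agent gets an actionable location instead of "Duplicate node ID: undefined".
interface WorkflowNodeShape {
  id?: string;
  name?: string;
  type?: string;
}

interface StructuralIssue {
  nodeIndex: number;
  message: string;
}

function precheckNodeStructure(nodes: WorkflowNodeShape[]): StructuralIssue[] {
  const issues: StructuralIssue[] = [];
  const seenIds = new Set<string>();

  nodes.forEach((node, index) => {
    if (!node.id) {
      issues.push({
        nodeIndex: index,
        message: `Node at index ${index} is missing the 'id' field. Did you forget the 'id' field in the node definition?`,
      });
    } else if (seenIds.has(node.id)) {
      issues.push({ nodeIndex: index, message: `Duplicate node ID '${node.id}' at index ${index}.` });
    } else {
      seenIds.add(node.id);
    }

    if (!node.type) {
      issues.push({ nodeIndex: index, message: `Node at index ${index} is missing the 'type' field.` });
    }
  });

  return issues;
}
```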
**Code Location**: `/src/services/workflow-validator.ts`
**Expected Outcome**: 50-60% reduction in "undefined" node errors
---
#### 1.3 Enhance Tool Responses with Required Field Callouts
**Impact**: 378 "Missing required field" failures
**Action Items**:
1. Modify `get_node_essentials()` output to clearly mark REQUIRED fields:
```
Before:
"properties": { "operation": {...} }
After:
"properties": {
"operation": {..., "required": true, "required_label": "⚠️ REQUIRED"}
}
```
2. In `validate_node_operation()` response, explicitly list:
- Which fields are required for this specific operation
- Which fields are conditional (depend on other field values)
- Example values for each field
3. Add to tool documentation:
```
get_node_essentials returns only essential properties.
For complete property list including all conditionals, use get_node_info().
```
**Code Location**: `/src/services/property-filter.ts`
**Expected Outcome**: 60-70% reduction in "missing required field" errors
---
### PRIORITY 2: MEDIUM-IMPACT (Fixes 25% of remaining errors)
#### 2.1 Fix Workflow Connection Documentation
**Impact**: 676 connection/linking errors, 429 unique node types
**Action Items**:
1. Create "Workflow Connections Explained" guide with:
- Diagram showing connection syntax
- Step-by-step connection building examples
- Common connection patterns (sequential, branching, error handling)
2. Enhance error message for "Multi-node workflow has no connections":
```
Before:
"Multi-node workflow has no connections.
Nodes must be connected to create a workflow..."
After:
"Multi-node workflow has no connections.
You created nodes: [list]
Add connections to link them. Example:
connections: {
'Node 1': { 'main': [[{ 'node': 'Node 2', 'type': 'main', 'index': 0 }]] }
}
For visual guide, see: [link to guide]"
```
3. Add sample workflow templates showing proper connections
- Simple: Trigger → Action
- Branching: If node splitting to multiple paths
- Error handling: Node with error catch
**Code Location**: `/src/services/workflow-validator.ts` (error messages)
**Expected Outcome**: 40-50% reduction in connection errors
---
#### 2.2 Provide Valid Enum Values in Tool Responses
**Impact**: 202 "Invalid value" errors for enum fields
**Action Items**:
1. Modify `validate_node_operation()` to return:
```json
{
"success": false,
"errors": [{
"field": "operation",
"message": "Invalid value 'sendMsg' for operation",
"valid_options": [
"deleteMessage",
"editMessageText",
"sendMessage"
],
"documentation": "https://..."
}]
}
```
2. In `get_node_essentials()`, for enum/choice fields, include:
```json
"operation": {
"type": "choice",
"options": [
{"label": "Send Message", "value": "sendMessage"},
{"label": "Delete Message", "value": "deleteMessage"}
]
}
```
**Code Location**: `/src/services/enhanced-config-validator.ts`
**Expected Outcome**: 80%+ reduction in enum selection errors
---
#### 2.3 Fix AI Agent Node Documentation
**Impact**: 36 AI Agent failures, 20 unique users
**Action Items**:
1. Add prominent warning in `get_node_essentials()` for AI Agent:
```
"⚠️ CRITICAL: AI Agent requires a language model connection.
You must add one of: OpenAI Chat Model, Anthropic Chat Model,
Google Gemini, or other LLM nodes before this node.
See example: [link]"
```
2. Create "Building AI Workflows" guide showing:
- Required model node placement
- Connection syntax for AI models
- Common model configuration
3. Add validation check: AI Agent node must have incoming connection from an LLM node
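A hedged sketch of that check (the connection layout mirrors the workflow JSON format shown elsewhere in this report; the exact type names are assumptions):
```typescript
// Illustrative check: does any node feed an ai_languageModel connection into the
// given AI Agent node? Treat the connection shape details as assumptions.
type WorkflowConnections = Record<
  string, // source node name
  Record<string, Array<Array<{ node: string; type: string; index: number }>>>
>;

function aiAgentHasLanguageModel(
  agentNodeName: string,
  connections: WorkflowConnections
): boolean {
  for (const sourceNode of Object.keys(connections)) {
    const llmOutputs = connections[sourceNode]['ai_languageModel'] ?? [];
    for (const outputBranch of llmOutputs) {
      if (outputBranch.some(conn => conn.node === agentNodeName)) {
        return true;
      }
    }
  }
  return false; // surface the "requires an ai_languageModel connection" error early
}
```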
**Code Location**: `/src/services/node-specific-validators.ts`
**Expected Outcome**: 80-90% reduction in AI Agent failures
---
### PRIORITY 3: MEDIUM-IMPACT (Fixes remaining issues)
#### 3.1 Improve Search Results Quality
**Impact**: 320+ tool uses before failures; search too generic
**Action Items**:
1. When `search_nodes` finds a node, include:
- Top 3 most common operations for that node
- Most critical required fields
- Link to configuration guide
- Example workflow snippet
2. Add operation-specific search:
```
search_nodes("webhook trigger with validation")
→ Returns Webhook node with:
- Best operations for your query
- Configuration guide for validation
- Error handler setup guide
```
**Code Location**: `/src/mcp/tools.ts` (search_nodes definition)
**Expected Outcome**: 20-30% reduction in search-before-failure incidents
---
#### 3.2 Enhance Error Handler Documentation
**Impact**: 148 error handler configuration failures
**Action Items**:
1. Create dedicated "Error Handling in Workflows" guide:
- When to use error handlers
- `onError` options explained (continueRegularOutput vs continueErrorOutput)
- Connection positioning rules
- Complete working example
2. Add validation error with visual explanation:
```
Error: "Node X has onError: continueErrorOutput but no error
connections in main[1]"
Solution: Add error handler or change onError to 'continueRegularOutput'
INCORRECT: CORRECT:
main[0]: [Node Y] main[0]: [Node Y]
main[1]: [Error Handler]
```
**Code Location**: `/src/services/workflow-validator.ts`
**Expected Outcome**: 70%+ reduction in error handler failures
---
#### 3.3 Create "Node Type Corrections" Guide
**Impact**: 88 "Unknown node type" errors
**Action Items**:
1. Add helpful suggestions when an unknown node type is detected:
```
Unknown node type: "nodes-base.googleDocsTool"
Did you mean one of these?
- nodes-base.googleDocs (87% match)
- nodes-base.googleSheets (72% match)
Node types must include package prefix: nodes-base.nodeName
```
2. Build fuzzy matcher for common node type mistakes
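A rough sketch of such a matcher (plain Levenshtein distance over the known node type list; the result limit and the source of `knownTypes` are assumptions, since the real implementation would read them from the node repository):
```typescript
// Illustrative fuzzy matcher for unknown node types.
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function suggestNodeTypes(unknownType: string, knownTypes: string[], limit = 3): string[] {
  return knownTypes
    .map(t => ({ type: t, distance: levenshtein(unknownType.toLowerCase(), t.toLowerCase()) }))
    .sort((x, y) => x.distance - y.distance)
    .slice(0, limit)
    .map(x => x.type);
}

// Example: suggestNodeTypes("nodes-base.googleDocsTool", allKnownTypes)
// → ["nodes-base.googleDocs", "nodes-base.googleSheets", ...]
```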
**Code Location**: `/src/services/workflow-validator.ts`
**Expected Outcome**: 70%+ reduction in unknown node type errors
---
## Implementation Roadmap
### Phase 1 (Weeks 1-2): Quick Wins
- [ ] Fix Webhook documentation and error messages (1.1)
- [ ] Enhance required field callouts in tools (1.3)
- [ ] Improve error structure validation messages (1.2)
**Expected Impact**: 25-30% reduction in validation failures
### Phase 2 (Weeks 3-4): Documentation
- [ ] Create "Workflow Connections" guide (2.1)
- [ ] Create "Error Handling" guide (3.2)
- [ ] Add enum value suggestions to tool responses (2.2)
**Expected Impact**: Additional 15-20% reduction
### Phase 3 (Weeks 5-6): Advanced Features
- [ ] Enhance search results (3.1)
- [ ] Add AI Agent node validation (2.3)
- [ ] Create node type correction suggestions (3.3)
**Expected Impact**: Additional 10-15% reduction
### Target: 50-65% reduction in validation failures through better guidance
---
## Measurement & Validation
### KPIs to Track Post-Implementation
1. **Validation Failure Rate**: Currently 12.6% for documentation users
- Target: 6-7% (50% reduction)
2. **First-Attempt Success Rate**: Currently unknown, but retry success is 100%
- Target: 85%+ (measure in new telemetry)
3. **Time to Valid Configuration**: Currently unknown
- Target: Measure and reduce by 30%
4. **Tool Usage Before Failures**: Currently search_nodes dominates
- Target: Measure shift toward get_node_essentials/info
5. **Specific Node Improvements**:
- Webhook: 127 → <30 failures (76% reduction)
- AI Agent: 36 → <5 failures (86% reduction)
- Slack: 101 → <20 failures (80% reduction)
### SQL to Track Progress
```sql
-- Monitor validation failure trends by node type
SELECT
DATE(created_at) as date,
properties->>'nodeType' as node_type,
COUNT(*) as failure_count
FROM telemetry_events
WHERE event = 'validation_details'
GROUP BY DATE(created_at), properties->>'nodeType'
ORDER BY date DESC, failure_count DESC;
-- Monitor recovery rates
-- LEAD must be computed before aggregation (window functions cannot appear
-- inside aggregate calls), and over ALL events so the next 'workflow_created'
-- event is visible.
WITH ordered_events AS (
  SELECT
    user_id,
    event,
    created_at,
    LEAD(event) OVER (PARTITION BY user_id ORDER BY created_at) as next_event
  FROM telemetry_events
  WHERE created_at >= NOW() - INTERVAL '7 days'
),
failures_then_success AS (
  SELECT
    user_id,
    DATE(created_at) as failure_date,
    COUNT(*) as failures,
    SUM(CASE WHEN next_event = 'workflow_created' THEN 1 ELSE 0 END) as recovered
  FROM ordered_events
  WHERE event = 'validation_details'
  GROUP BY user_id, DATE(created_at)
)
SELECT
failure_date,
SUM(failures) as total_failures,
SUM(recovered) as immediate_recovery,
ROUND(100.0 * SUM(recovered) / NULLIF(SUM(failures), 0), 1) as recovery_rate_pct
FROM failures_then_success
GROUP BY failure_date
ORDER BY failure_date DESC;
```
---
## Conclusion
The n8n-mcp validation system is working perfectly—it catches errors and provides feedback that agents learn from instantly. The 29,218 validation events over 90 days are not a symptom of system failure; they're evidence that **the system is successfully preventing bad workflows from being deployed**.
The challenge is not validation; it's **guidance quality**. Agents search for nodes but don't read complete documentation before attempting configuration. Our tools don't provide enough context about required fields, valid values, and connection syntax upfront.
By implementing the recommendations above, focusing on:
1. Clearer required field identification
2. Better error messages with actionable solutions
3. More comprehensive workflow structure documentation
4. Valid enum values provided in advance
5. Operation-specific configuration guides
...we can reduce validation failures by 50-65% **without weakening validation**, enabling AI agents to configure workflows correctly on the first attempt while maintaining the safety guarantees our validation provides.
---
## Appendix A: Complete Error Message Reference
### Top 25 Unique Validation Messages (by frequency)
1. **"Duplicate node ID: 'undefined'"** (179 occurrences)
- Root cause: JSON malformation or missing ID field
- Solution: Check node structure, ensure all nodes have `id` field
2. **"Duplicate node name: 'undefined'"** (61 occurrences)
- Root cause: Missing or undefined node names
- Solution: All nodes must have unique non-empty `name` field
3. **"Single-node workflows are only valid for webhook endpoints..."** (58 occurrences)
- Root cause: Single-node workflow without webhook
- Solution: Add trigger node or use webhook trigger
4. **"responseNode mode requires onError: 'continueRegularOutput'"** (57 occurrences)
- Root cause: Webhook configured for response but missing error handling config
- Solution: Add `"onError": "continueRegularOutput"` to webhook node
5. **"Workflow contains a cycle (infinite loop)"** (33 occurrences)
- Root cause: Circular workflow connections
- Solution: Redesign workflow to avoid cycles
6. **"Multi-node workflow has no connections..."** (33 occurrences)
- Root cause: Multiple nodes created but not connected
- Solution: Add connections array to link nodes
7. **"Required property 'Send Message To' cannot be empty"** (25 occurrences)
- Root cause: Slack node missing target channel/user
- Solution: Specify either channel or user
8. **"Invalid value for 'select'. Must be one of: channel, user"** (25 occurrences)
- Root cause: Wrong enum value for Slack target
- Solution: Use either "channel" or "user"
9. **"Node position must be an array with exactly 2 numbers [x, y]"** (25 occurrences)
- Root cause: Position not formatted as [x, y] array
- Solution: Format as `"position": [100, 200]`
10. **"AI Agent 'AI Agent' requires an ai_languageModel connection..."** (22 occurrences)
- Root cause: AI Agent node created without language model
- Solution: Add LLM node and connect it
[Additional messages follow same pattern...]
---
## Appendix B: Data Quality Notes
- **Data Source**: PostgreSQL Supabase database, `telemetry_events` table
- **Sample Size**: 29,218 validation_details events from 9,021 unique users
- **Time Period**: 43 days (Sept 26 - Nov 8, 2025)
- **Data Quality**: 100% of validation events marked with `errorType: "error"`
- **Limitations**:
- User IDs aggregated for privacy (individual user behavior not exposed)
- Workflow content sanitized (no actual code/credentials captured)
- Error categorization performed via pattern matching on error messages
---
**Report Prepared**: November 8, 2025
**Next Review Date**: November 22, 2025 (2-week progress check)
**Responsible Team**: n8n-mcp Development Team

View File

@@ -1,377 +0,0 @@
# N8N-MCP Validation Analysis: Executive Summary
**Date**: November 8, 2025 | **Period**: 90 days (Sept 26 - Nov 8) | **Data Quality**: ✓ Verified
---
## One-Page Executive Summary
### The Core Finding
**Validation failures are NOT broken—they're evidence the system is working correctly.** 29,218 validation events prevented bad configurations from deploying to production. However, these events reveal **critical documentation and guidance gaps** that cause AI agents to misconfigure nodes.
---
## Key Metrics at a Glance
```
VALIDATION HEALTH SCORECARD
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Metric Value Status
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Validation Events 29,218 Normal
Unique Users Affected 9,021 Normal
First-Attempt Success Rate ~77%* ⚠️ Fixable
Retry Success Rate 100% ✓ Excellent
Same-Day Recovery Rate 100% ✓ Excellent
Documentation Reader Error Rate 12.6% ⚠️ High
Non-Reader Error Rate 10.8% ✓ Better
* Estimated: 100% same-day retry success on 29,218 failures
suggests ~77% first-attempt success (29,218 + 21,748 = 50,966 total)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
---
## Top 3 Problem Areas (75% of all errors)
### 1. Workflow Structure Issues (33.2%)
**Symptoms**: "Duplicate node ID: undefined", malformed JSON, missing connections
**Impact**: 1,268 errors across 791 unique node types
**Root Cause**: Agents constructing workflow JSON without proper schema understanding
**Quick Fix**: Better error messages pointing to exact location of structural issues
---
### 2. Webhook & Trigger Configuration (6.7%)
**Symptoms**: "responseNode requires onError", single-node workflows, connection rules
**Impact**: 127 failures (47 users) specifically on webhook/trigger setup
**Root Cause**: Complex configuration rules not obvious from documentation
**Quick Fix**: Dedicated webhook guide + inline error messages with examples
---
### 3. Required Fields (7.7%)
**Symptoms**: "Required property X cannot be empty", missing Slack channel, missing AI model
**Impact**: 378 errors; Agents don't know which fields are required
**Root Cause**: Tool responses don't clearly mark required vs optional fields
**Quick Fix**: Add required field indicators to `get_node_essentials()` output
---
## Problem Nodes (Top 7)
| Node | Failures | Users | Primary Issue |
|------|----------|-------|---------------|
| Webhook/Trigger | 127 | 40 | Error handler configuration rules |
| Slack Notification | 73 | 2 | Missing "Send Message To" field |
| AI Agent | 36 | 20 | Missing language model connection |
| HTTP Request | 31 | 13 | Missing required parameters |
| OpenAI | 35 | 8 | Authentication/model configuration |
| Airtable | 41 | 1 | Required record fields |
| Telegram | 27 | 1 | Operation enum selection |
**Pattern**: Trigger/connector nodes and AI integrations are hardest to configure
---
## Error Category Breakdown
```
What Goes Wrong (root cause distribution):
┌────────────────────────────────────────┐
│ Workflow structure (undefined IDs) 26% │ ■■■■■■■■■■■■
│ Connection/linking errors 14% │ ■■■■■■
│ Missing required fields 8% │ ■■■■
│ Invalid enum values 4% │ ■■
│ Error handler configuration 3% │ ■
│ Invalid position format 2% │ ■
│ Unknown node types 2% │ ■
│ Missing typeVersion 1% │
│ All others 40% │ ■■■■■■■■■■■■■■■■■■
└────────────────────────────────────────┘
```
---
## Agent Behavior: Search Patterns
**Agents search for nodes generically, then fail on specific configuration:**
```
Most Searched Terms (before failures):
"webhook" ................. 34x (failed on: responseNode config)
"http request" ............ 32x (failed on: missing required fields)
"openai" .................. 23x (failed on: model selection)
"slack" ................... 16x (failed on: missing channel/user)
```
**Insight**: Generic node searches don't help with configuration specifics. Agents need targeted guidance on each node's trickiest fields.
---
## The Self-Correction Story (VERY POSITIVE)
When agents get validation errors, they FIX THEM 100% of the time (same day):
```
Validation Error → Agent Action → Outcome
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Error event → Uses feedback → Success
(4,898 events) (reads error) (100%)
Distribution of Corrections:
Within same hour ........ 453 cases (100% succeeded)
Within next day ......... 108 cases (100% succeeded)
Within 2-3 days ......... 67 cases (100% succeeded)
Within 4-7 days ......... 33 cases (100% succeeded)
```
**This proves validation messages are effective. Agents learn instantly. We just need BETTER messages.**
---
## Documentation Impact (Surprising Finding)
```
Paradox: Documentation Readers Have HIGHER Error Rate!
Documentation Readers: 2,304 users | 12.6% error rate | 87.4% success
Non-Documentation: 673 users | 10.8% error rate | 89.2% success
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Explanation: Doc readers attempt COMPLEX workflows (6.8x more attempts)
Simple workflows have higher natural success rate
Action Item: Documentation should PREVENT errors, not just explain them
Need: Better structure, examples, required field callouts
```
---
## Critical Success Factors Discovered
### What Works Well
✓ Validation catches errors effectively
✓ Error messages lead to quick fixes (100% same-day recovery)
✓ Agents attempt workflows again after failures (persistence)
✓ System prevents bad deployments
### What Needs Improvement
✗ Required fields not clearly marked in tool responses
✗ Enum values not provided before validation
✗ Workflow structure documentation lacks examples
✗ Connection syntax unintuitive and not well-documented
✗ Error messages could be more specific
---
## Top 5 Recommendations (Priority Order)
### 1. FIX WEBHOOK DOCUMENTATION (25-day impact)
**Effort**: 1-2 days | **Impact**: 127 failures resolved | **ROI**: HIGH
Create dedicated "Webhook Configuration Guide" explaining:
- responseNode mode setup
- onError requirements
- Error handler connections
- Working examples
---
### 2. ENHANCE TOOL RESPONSES (2-3 days impact)
**Effort**: 2-3 days | **Impact**: 378 failures resolved | **ROI**: HIGH
Modify tools to output:
```
For get_node_essentials():
- Mark required fields with ⚠️ REQUIRED
- Include valid enum options
- Link to configuration guide
For validate_node_operation():
- Show valid field values
- Suggest fixes for each error
- Provide contextual examples
```
---
### 3. IMPROVE WORKFLOW STRUCTURE ERRORS (5-7 days impact)
**Effort**: 3-4 days | **Impact**: 1,268 errors resolved | **ROI**: HIGH
- Better validation error messages pointing to exact issues
- Suggest corrections ("Missing 'id' field in node definition")
- Provide JSON structure examples
---
### 4. CREATE CONNECTION DOCUMENTATION (3-4 days impact)
**Effort**: 2-3 days | **Impact**: 676 errors resolved | **ROI**: MEDIUM
Create "How to Connect Nodes" guide:
- Connection syntax explained
- Step-by-step workflow building
- Common patterns (sequential, branching, error handling)
- Visual diagrams
---
### 5. ADD ERROR HANDLER GUIDE (2-3 days impact)
**Effort**: 1-2 days | **Impact**: 148 errors resolved | **ROI**: MEDIUM
Document error handling clearly:
- When/how to use error handlers
- onError options explained
- Configuration examples
- Common pitfalls
---
## Implementation Impact Projection
```
Current State (Week 0):
- 29,218 validation failures (90-day sample)
- 12.6% error rate (documentation users)
- ~77% first-attempt success rate
After Recommendations (Weeks 4-6):
✓ Webhook issues: 127 → 30 (-76%)
✓ Structure errors: 1,268 → 500 (-61%)
✓ Required fields: 378 → 120 (-68%)
✓ Connection issues: 676 → 340 (-50%)
✓ Error handlers: 148 → 40 (-73%)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Projected Impact: 50-65% reduction in validation failures
New error rate target: 6-7% (50% reduction)
First-attempt success: 77% → 85%+
```
---
## Files for Reference
Full analysis with detailed recommendations:
- **Main Report**: `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/VALIDATION_ANALYSIS_REPORT.md`
- **This Summary**: `/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/VALIDATION_ANALYSIS_SUMMARY.md`
### SQL Queries Used (for reproducibility)
#### Query 1: Overview
```sql
SELECT COUNT(*), COUNT(DISTINCT user_id), MIN(created_at), MAX(created_at)
FROM telemetry_events
WHERE event = 'workflow_validation_failed' AND created_at >= NOW() - INTERVAL '90 days';
```
#### Query 2: Top Error Messages
```sql
SELECT
properties->'details'->>'message' as error_message,
COUNT(*) as count,
COUNT(DISTINCT user_id) as affected_users
FROM telemetry_events
WHERE event = 'validation_details' AND created_at >= NOW() - INTERVAL '90 days'
GROUP BY properties->'details'->>'message'
ORDER BY count DESC
LIMIT 25;
```
#### Query 3: Node-Specific Failures
```sql
SELECT
properties->>'nodeType' as node_type,
COUNT(*) as total_failures,
COUNT(DISTINCT user_id) as affected_users
FROM telemetry_events
WHERE event = 'validation_details' AND created_at >= NOW() - INTERVAL '90 days'
GROUP BY properties->>'nodeType'
ORDER BY total_failures DESC
LIMIT 20;
```
#### Query 4: Retry Success Rate
```sql
WITH failures AS (
SELECT user_id, DATE(created_at) as failure_date
FROM telemetry_events WHERE event = 'validation_details'
)
SELECT
COUNT(DISTINCT f.user_id) as users_with_failures,
COUNT(DISTINCT w.user_id) as users_with_recovery_same_day,
ROUND(100.0 * COUNT(DISTINCT w.user_id) / COUNT(DISTINCT f.user_id), 1) as recovery_rate_pct
FROM failures f
LEFT JOIN telemetry_events w ON w.user_id = f.user_id
AND w.event = 'workflow_created'
AND DATE(w.created_at) = f.failure_date;
```
#### Query 5: Tool Usage Before Failures
```sql
WITH failures AS (
SELECT DISTINCT user_id, created_at FROM telemetry_events
WHERE event = 'validation_details' AND created_at >= NOW() - INTERVAL '90 days'
)
SELECT
te.properties->>'tool' as tool,
COUNT(*) as count_before_failure
FROM telemetry_events te
INNER JOIN failures f ON te.user_id = f.user_id
AND te.created_at < f.created_at AND te.created_at >= f.created_at - INTERVAL '10 minutes'
WHERE te.event = 'tool_used'
GROUP BY te.properties->>'tool'
ORDER BY count_before_failure DESC;
```
---
## Next Steps
1. **Review this summary** with product team (30 min)
2. **Prioritize recommendations** based on team capacity (30 min)
3. **Assign work** for Priority 1 items (1-2 days effort)
4. **Set up KPI tracking** for post-implementation measurement
5. **Plan review cycle** for Nov 22 (2-week progress check)
---
## Questions This Analysis Answers
✓ Why do AI agents have so many validation failures?
→ Documentation gaps + unclear required field marking + missing examples
✓ Is validation working?
→ YES, perfectly. 100% error recovery rate proves validation provides good feedback
✓ Which nodes are hardest to configure?
→ Webhooks (33), Slack (73), AI Agent (36), HTTP Request (31)
✓ Do agents learn from validation errors?
→ YES, 100% same-day recovery for all 29,218 failures
✓ Does reading documentation help?
→ Counterintuitively, it correlates with HIGHER error rates (but only because doc readers attempt complex workflows)
✓ What's the single biggest source of errors?
→ Workflow structure/JSON malformation (1,268 errors, 26% of total)
✓ Can we reduce validation failures without weakening validation?
→ YES, 50-65% reduction possible through documentation and guidance improvements alone
---
**Report Status**: ✓ Complete | **Data Verified**: ✓ Yes | **Recommendations**: ✓ 5 Priority Items Identified
**Prepared by**: N8N-MCP Telemetry Analysis
**Date**: November 8, 2025
**Confidence Level**: High (comprehensive 90-day dataset, 9,000+ users, 29,000+ events)

Binary file not shown.

View File

@@ -20,19 +20,19 @@ services:
image: n8n-mcp:latest
container_name: n8n-mcp
ports:
- "${PORT:-3000}:${PORT:-3000}"
- "3000:3000"
environment:
- MCP_MODE=${MCP_MODE:-http}
- AUTH_TOKEN=${AUTH_TOKEN}
- NODE_ENV=${NODE_ENV:-production}
- LOG_LEVEL=${LOG_LEVEL:-info}
- PORT=${PORT:-3000}
- PORT=3000
volumes:
# Mount data directory for persistence
- ./data:/app/data
restart: unless-stopped
healthcheck:
test: ["CMD", "sh", "-c", "curl -f http://localhost:$${PORT:-3000}/health"]
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3

View File

@@ -37,12 +37,11 @@ services:
container_name: n8n-mcp
restart: unless-stopped
ports:
- "${MCP_PORT:-3000}:${MCP_PORT:-3000}"
- "${MCP_PORT:-3000}:3000"
environment:
- NODE_ENV=production
- N8N_MODE=true
- MCP_MODE=http
- PORT=${MCP_PORT:-3000}
- N8N_API_URL=http://n8n:5678
- N8N_API_KEY=${N8N_API_KEY}
- MCP_AUTH_TOKEN=${MCP_AUTH_TOKEN}
@@ -57,7 +56,7 @@ services:
n8n:
condition: service_healthy
healthcheck:
test: ["CMD", "sh", "-c", "curl -f http://localhost:$${MCP_PORT:-3000}/health"]
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3

View File

@@ -41,7 +41,7 @@ services:
# Port mapping
ports:
- "${PORT:-3000}:${PORT:-3000}"
- "${PORT:-3000}:3000"
# Resource limits
deploy:
@@ -53,7 +53,7 @@ services:
# Health check
healthcheck:
test: ["CMD", "sh", "-c", "curl -f http://127.0.0.1:$${PORT:-3000}/health"]
test: ["CMD", "curl", "-f", "http://127.0.0.1:3000/health"]
interval: 30s
timeout: 10s
retries: 3

View File

@@ -4,9 +4,7 @@ Connect n8n-MCP to Claude Code CLI for enhanced n8n workflow development from th
## Quick Setup via CLI
### Basic configuration (documentation tools only)
**For Linux, macOS, or Windows (WSL/Git Bash):**
### Basic configuration (documentation tools only):
```bash
claude mcp add n8n-mcp \
-e MCP_MODE=stdio \
@@ -15,21 +13,9 @@ claude mcp add n8n-mcp \
-- npx n8n-mcp
```
**For native Windows PowerShell:**
```powershell
# Note: The backtick ` is PowerShell's line continuation character.
claude mcp add n8n-mcp `
'-e MCP_MODE=stdio' `
'-e LOG_LEVEL=error' `
'-e DISABLE_CONSOLE_OUTPUT=true' `
-- npx n8n-mcp
```
![Adding n8n-MCP server in Claude Code](./img/cc_command.png)
### Full configuration (with n8n management tools)
**For Linux, macOS, or Windows (WSL/Git Bash):**
### Full configuration (with n8n management tools):
```bash
claude mcp add n8n-mcp \
-e MCP_MODE=stdio \
@@ -40,18 +26,6 @@ claude mcp add n8n-mcp \
-- npx n8n-mcp
```
**For native Windows PowerShell:**
```powershell
# Note: The backtick ` is PowerShell's line continuation character.
claude mcp add n8n-mcp `
'-e MCP_MODE=stdio' `
'-e LOG_LEVEL=error' `
'-e DISABLE_CONSOLE_OUTPUT=true' `
'-e N8N_API_URL=https://your-n8n-instance.com' `
'-e N8N_API_KEY=your-api-key' `
-- npx n8n-mcp
```
Make sure to replace `https://your-n8n-instance.com` with your actual n8n URL and `your-api-key` with your n8n API key.
## Alternative Setup Methods
@@ -159,11 +133,9 @@ For optimal results, create a `CLAUDE.md` file in your project root with the ins
## Tips
- If you're running n8n locally, use `http://localhost:5678` as the `N8N_API_URL`.
- The n8n API credentials are optional. Without them, you'll only have access to documentation and validation tools. With credentials, you get full workflow management capabilities.
- **Scope Management:**
- By default, `claude mcp add` uses `--scope local` (also called "user scope"), which saves the configuration to your global user settings and keeps API keys private.
- To share the configuration with your team, use `--scope project`. This saves the configuration to a `.mcp.json` file in your project's root directory.
- **Switching Scope:** The cleanest method is to `remove` the server and then `add` it back with the desired scope flag (e.g., `claude mcp remove n8n-mcp` followed by `claude mcp add n8n-mcp --scope project`).
- **Manual Switching (Advanced):** You can manually edit your `.claude.json` file (e.g., `C:\Users\YourName\.claude.json`). To switch, cut the `"n8n-mcp": { ... }` block from the top-level `"mcpServers"` object (user scope) and paste it into the nested `"mcpServers"` object under your project's path key (project scope), or vice versa. **Important:** You may need to restart Claude Code for manual changes to take effect.
- Claude Code will automatically start the MCP server when you begin a conversation.
- If you're running n8n locally, use `http://localhost:5678` as the N8N_API_URL
- The n8n API credentials are optional - without them, you'll have documentation and validation tools only
- With API credentials, you'll get full workflow management capabilities
- Use `--scope local` (default) to keep your API credentials private
- Use `--scope project` to share configuration with your team (put credentials in environment variables)
- Claude Code will automatically start the MCP server when you begin a conversation

View File

@@ -59,10 +59,10 @@ docker compose up -d
- n8n-mcp-data:/app/data
ports:
- "${PORT:-3000}:${PORT:-3000}"
- "${PORT:-3000}:3000"
healthcheck:
test: ["CMD", "sh", "-c", "curl -f http://127.0.0.1:$${PORT:-3000}/health"]
test: ["CMD", "curl", "-f", "http://127.0.0.1:3000/health"]
interval: 30s
timeout: 10s
retries: 3

View File

@@ -162,7 +162,7 @@ n8n_validate_workflow({id: createdWorkflowId})
n8n_update_partial_workflow({
workflowId: id,
operations: [
{type: 'updateNode', nodeId: 'slack1', updates: {position: [100, 200]}}
{type: 'updateNode', nodeId: 'slack1', changes: {position: [100, 200]}}
]
})

View File

@@ -1,165 +0,0 @@
-- Migration: Create workflow_mutations table for tracking partial update operations
-- Purpose: Capture workflow transformation data to improve partial updates tooling
-- Date: 2025-01-12
-- Create workflow_mutations table
CREATE TABLE IF NOT EXISTS workflow_mutations (
-- Primary key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- User identification (anonymized)
user_id TEXT NOT NULL,
session_id TEXT NOT NULL,
-- Workflow snapshots (compressed JSONB)
workflow_before JSONB NOT NULL,
workflow_after JSONB NOT NULL,
workflow_hash_before TEXT NOT NULL,
workflow_hash_after TEXT NOT NULL,
-- Intent capture
user_intent TEXT NOT NULL,
intent_classification TEXT,
tool_name TEXT NOT NULL CHECK (tool_name IN ('n8n_update_partial_workflow', 'n8n_update_full_workflow')),
-- Operations performed
operations JSONB NOT NULL,
operation_count INTEGER NOT NULL CHECK (operation_count >= 0),
operation_types TEXT[] NOT NULL,
-- Validation metrics
validation_before JSONB,
validation_after JSONB,
validation_improved BOOLEAN,
errors_resolved INTEGER DEFAULT 0 CHECK (errors_resolved >= 0),
errors_introduced INTEGER DEFAULT 0 CHECK (errors_introduced >= 0),
-- Change metrics
nodes_added INTEGER DEFAULT 0 CHECK (nodes_added >= 0),
nodes_removed INTEGER DEFAULT 0 CHECK (nodes_removed >= 0),
nodes_modified INTEGER DEFAULT 0 CHECK (nodes_modified >= 0),
connections_added INTEGER DEFAULT 0 CHECK (connections_added >= 0),
connections_removed INTEGER DEFAULT 0 CHECK (connections_removed >= 0),
properties_changed INTEGER DEFAULT 0 CHECK (properties_changed >= 0),
-- Outcome tracking
mutation_success BOOLEAN NOT NULL,
mutation_error TEXT,
-- Performance metrics
duration_ms INTEGER CHECK (duration_ms >= 0),
-- Timestamps
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Create indexes for efficient querying
-- Primary indexes for filtering
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_user_id
ON workflow_mutations(user_id);
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_session_id
ON workflow_mutations(session_id);
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_created_at
ON workflow_mutations(created_at DESC);
-- Intent and classification indexes
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_intent_classification
ON workflow_mutations(intent_classification)
WHERE intent_classification IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_tool_name
ON workflow_mutations(tool_name);
-- Operation analysis indexes
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_operation_types
ON workflow_mutations USING GIN(operation_types);
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_operation_count
ON workflow_mutations(operation_count);
-- Outcome indexes
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_success
ON workflow_mutations(mutation_success);
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_validation_improved
ON workflow_mutations(validation_improved)
WHERE validation_improved IS NOT NULL;
-- Change metrics indexes
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_nodes_added
ON workflow_mutations(nodes_added)
WHERE nodes_added > 0;
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_nodes_modified
ON workflow_mutations(nodes_modified)
WHERE nodes_modified > 0;
-- Hash indexes for deduplication
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_hash_before
ON workflow_mutations(workflow_hash_before);
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_hash_after
ON workflow_mutations(workflow_hash_after);
-- Composite indexes for common queries
-- Find successful mutations by intent classification
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_success_classification
ON workflow_mutations(mutation_success, intent_classification)
WHERE intent_classification IS NOT NULL;
-- Find mutations that improved validation
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_validation_success
ON workflow_mutations(validation_improved, mutation_success)
WHERE validation_improved IS TRUE;
-- Find mutations by user and time range
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_user_time
ON workflow_mutations(user_id, created_at DESC);
-- Find mutations with significant changes (expression index)
CREATE INDEX IF NOT EXISTS idx_workflow_mutations_significant_changes
ON workflow_mutations((nodes_added + nodes_removed + nodes_modified))
WHERE (nodes_added + nodes_removed + nodes_modified) > 0;
-- Comments for documentation
COMMENT ON TABLE workflow_mutations IS
'Tracks workflow mutations from partial update operations to analyze transformation patterns and improve tooling';
COMMENT ON COLUMN workflow_mutations.workflow_before IS
'Complete workflow JSON before mutation (sanitized, credentials removed)';
COMMENT ON COLUMN workflow_mutations.workflow_after IS
'Complete workflow JSON after mutation (sanitized, credentials removed)';
COMMENT ON COLUMN workflow_mutations.user_intent IS
'User instruction or intent for the workflow change (sanitized for PII)';
COMMENT ON COLUMN workflow_mutations.intent_classification IS
'Classified pattern: add_functionality, modify_configuration, rewire_logic, fix_validation, cleanup, unknown';
COMMENT ON COLUMN workflow_mutations.operations IS
'Array of diff operations performed (addNode, updateNode, addConnection, etc.)';
COMMENT ON COLUMN workflow_mutations.validation_improved IS
'Whether the mutation reduced validation errors (NULL if validation data unavailable)';
-- Row-level security
ALTER TABLE workflow_mutations ENABLE ROW LEVEL SECURITY;
-- Create policy for anonymous inserts (required for telemetry)
CREATE POLICY "Allow anonymous inserts"
ON workflow_mutations
FOR INSERT
TO anon
WITH CHECK (true);
-- Create policy for authenticated reads (for analysis)
CREATE POLICY "Allow authenticated reads"
ON workflow_mutations
FOR SELECT
TO authenticated
USING (true);

2358
package-lock.json generated

File diff suppressed because it is too large Load Diff

View File

@@ -1,6 +1,6 @@
{
"name": "n8n-mcp",
"version": "2.22.16",
"version": "2.22.0",
"description": "Integration between n8n workflow automation and Model Context Protocol (MCP)",
"main": "dist/index.js",
"types": "dist/index.d.ts",
@@ -140,15 +140,15 @@
},
"dependencies": {
"@modelcontextprotocol/sdk": "^1.20.1",
"@n8n/n8n-nodes-langchain": "^1.118.0",
"@n8n/n8n-nodes-langchain": "^1.115.1",
"@supabase/supabase-js": "^2.57.4",
"dotenv": "^16.5.0",
"express": "^5.1.0",
"express-rate-limit": "^7.1.5",
"lru-cache": "^11.2.1",
"n8n": "^1.119.1",
"n8n-core": "^1.118.0",
"n8n-workflow": "^1.116.0",
"n8n": "^1.116.2",
"n8n-core": "^1.115.1",
"n8n-workflow": "^1.113.0",
"openai": "^4.77.0",
"sql.js": "^1.13.0",
"tslib": "^2.6.2",

View File

@@ -1,6 +1,6 @@
{
"name": "n8n-mcp-runtime",
"version": "2.22.16",
"version": "2.22.0",
"description": "n8n MCP Server Runtime Dependencies Only",
"private": true,
"dependencies": {

View File

@@ -1,45 +0,0 @@
#!/usr/bin/env node
/**
* Generate release notes for the initial release
* Used by GitHub Actions when no previous tag exists
*/
const { execSync } = require('child_process');
function generateInitialReleaseNotes(version) {
try {
// Get total commit count
const commitCount = execSync('git rev-list --count HEAD', { encoding: 'utf8' }).trim();
// Generate release notes
const releaseNotes = [
'### 🎉 Initial Release',
'',
`This is the initial release of n8n-mcp v${version}.`,
'',
'---',
'',
'**Release Statistics:**',
`- Commit count: ${commitCount}`,
'- First release setup'
];
return releaseNotes.join('\n');
} catch (error) {
console.error(`Error generating initial release notes: ${error.message}`);
return `Failed to generate initial release notes: ${error.message}`;
}
}
// Parse command line arguments
const version = process.argv[2];
if (!version) {
console.error('Usage: generate-initial-release-notes.js <version>');
process.exit(1);
}
const releaseNotes = generateInitialReleaseNotes(version);
console.log(releaseNotes);

View File

@@ -1,99 +0,0 @@
#!/usr/bin/env ts-node
import * as fs from 'fs';
import * as path from 'path';
import { createDatabaseAdapter } from '../src/database/database-adapter';
interface BatchResponse {
id: string;
custom_id: string;
response: {
status_code: number;
body: {
choices: Array<{
message: {
content: string;
};
}>;
};
};
error: any;
}
async function processBatchMetadata(batchFile: string) {
console.log(`📥 Processing batch file: ${batchFile}`);
// Read the JSONL file
const content = fs.readFileSync(batchFile, 'utf-8');
const lines = content.trim().split('\n');
console.log(`📊 Found ${lines.length} batch responses`);
// Initialize database
const db = await createDatabaseAdapter('./data/nodes.db');
let updated = 0;
let skipped = 0;
let errors = 0;
for (const line of lines) {
try {
const response: BatchResponse = JSON.parse(line);
// Extract template ID from custom_id (format: "template-9100")
const templateId = parseInt(response.custom_id.replace('template-', ''));
// Check for errors
if (response.error || response.response.status_code !== 200) {
console.warn(`⚠️ Template ${templateId}: API error`, response.error);
errors++;
continue;
}
// Extract metadata from response
const metadataJson = response.response.body.choices[0].message.content;
// Validate it's valid JSON
JSON.parse(metadataJson); // Will throw if invalid
// Update database
const stmt = db.prepare(`
UPDATE templates
SET metadata_json = ?
WHERE id = ?
`);
stmt.run(metadataJson, templateId);
updated++;
console.log(`✅ Template ${templateId}: Updated metadata`);
} catch (error: any) {
console.error(`❌ Error processing line:`, error.message);
errors++;
}
}
// Close database
if ('close' in db && typeof db.close === 'function') {
db.close();
}
console.log(`\n📈 Summary:`);
console.log(` - Updated: ${updated}`);
console.log(` - Skipped: ${skipped}`);
console.log(` - Errors: ${errors}`);
console.log(` - Total: ${lines.length}`);
}
// Main
const batchFile = process.argv[2] || '/Users/romualdczlonkowski/Pliki/n8n-mcp/n8n-mcp/docs/batch_68fff7242850819091cfed64f10fb6b4_output.jsonl';
processBatchMetadata(batchFile)
.then(() => {
console.log('\n✅ Batch processing complete!');
process.exit(0);
})
.catch((error) => {
console.error('\n❌ Batch processing failed:', error);
process.exit(1);
});

View File

@@ -365,7 +365,6 @@ const updateWorkflowSchema = z.object({
connections: z.record(z.any()).optional(),
settings: z.any().optional(),
createBackup: z.boolean().optional(),
intent: z.string().optional(),
});
const listWorkflowsSchema = z.object({
@@ -701,22 +700,15 @@ export async function handleUpdateWorkflow(
repository: NodeRepository,
context?: InstanceContext
): Promise<McpToolResponse> {
const startTime = Date.now();
const sessionId = `mutation_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;
let workflowBefore: any = null;
let userIntent = 'Full workflow update';
try {
const client = ensureApiConfigured(context);
const input = updateWorkflowSchema.parse(args);
const { id, createBackup, intent, ...updateData } = input;
userIntent = intent || 'Full workflow update';
const { id, createBackup, ...updateData } = input;
// If nodes/connections are being updated, validate the structure
if (updateData.nodes || updateData.connections) {
// Always fetch current workflow for validation (need all fields like name)
const current = await client.getWorkflow(id);
workflowBefore = JSON.parse(JSON.stringify(current));
// Create backup before modifying workflow (default: true)
if (createBackup !== false) {
@@ -759,46 +751,13 @@ export async function handleUpdateWorkflow(
// Update workflow
const workflow = await client.updateWorkflow(id, updateData);
// Track successful mutation
if (workflowBefore) {
trackWorkflowMutationForFullUpdate({
sessionId,
toolName: 'n8n_update_full_workflow',
userIntent,
operations: [], // Full update doesn't use diff operations
workflowBefore,
workflowAfter: workflow,
mutationSuccess: true,
durationMs: Date.now() - startTime,
}).catch(err => {
logger.warn('Failed to track mutation telemetry:', err);
});
}
return {
success: true,
data: workflow,
message: `Workflow "${workflow.name}" updated successfully`
};
} catch (error) {
// Track failed mutation
if (workflowBefore) {
trackWorkflowMutationForFullUpdate({
sessionId,
toolName: 'n8n_update_full_workflow',
userIntent,
operations: [],
workflowBefore,
workflowAfter: workflowBefore, // No change since it failed
mutationSuccess: false,
mutationError: error instanceof Error ? error.message : 'Unknown error',
durationMs: Date.now() - startTime,
}).catch(err => {
logger.warn('Failed to track mutation telemetry for failed operation:', err);
});
}
if (error instanceof z.ZodError) {
return {
success: false,
@@ -806,7 +765,7 @@ export async function handleUpdateWorkflow(
details: { errors: error.errors }
};
}
if (error instanceof N8nApiError) {
return {
success: false,
@@ -815,7 +774,7 @@ export async function handleUpdateWorkflow(
details: error.details as Record<string, unknown> | undefined
};
}
return {
success: false,
error: error instanceof Error ? error.message : 'Unknown error occurred'
@@ -823,19 +782,6 @@ export async function handleUpdateWorkflow(
}
}
/**
* Track workflow mutation for telemetry (full workflow updates)
*/
async function trackWorkflowMutationForFullUpdate(data: any): Promise<void> {
try {
const { telemetry } = await import('../telemetry/telemetry-manager.js');
await telemetry.trackWorkflowMutation(data);
} catch (error) {
// Silently fail - telemetry should never break core functionality
logger.debug('Telemetry tracking failed:', error);
}
}
export async function handleDeleteWorkflow(args: unknown, context?: InstanceContext): Promise<McpToolResponse> {
try {
const client = ensureApiConfigured(context);
@@ -1615,6 +1561,7 @@ export async function handleListAvailableTools(context?: InstanceContext): Promi
maxRetries: config.maxRetries
} : null,
limitations: [
'Cannot activate/deactivate workflows via API',
'Cannot execute workflows directly (must use webhooks)',
'Cannot stop running executions',
'Tags and credentials have limited API support'

View File

@@ -51,7 +51,6 @@ const workflowDiffSchema = z.object({
validateOnly: z.boolean().optional(),
continueOnError: z.boolean().optional(),
createBackup: z.boolean().optional(),
intent: z.string().optional(),
});
export async function handleUpdatePartialWorkflow(
@@ -59,24 +58,20 @@ export async function handleUpdatePartialWorkflow(
repository: NodeRepository,
context?: InstanceContext
): Promise<McpToolResponse> {
const startTime = Date.now();
const sessionId = `mutation_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;
let workflowBefore: any = null;
try {
// Debug logging (only in debug mode)
if (process.env.DEBUG_MCP === 'true') {
logger.debug('Workflow diff request received', {
argsType: typeof args,
hasWorkflowId: args && typeof args === 'object' && 'workflowId' in args,
operationCount: args && typeof args === 'object' && 'operations' in args ?
operationCount: args && typeof args === 'object' && 'operations' in args ?
(args as any).operations?.length : 0
});
}
// Validate input
const input = workflowDiffSchema.parse(args);
// Get API client
const client = getN8nApiClient(context);
if (!client) {
@@ -85,13 +80,11 @@ export async function handleUpdatePartialWorkflow(
error: 'n8n API not configured. Please set N8N_API_URL and N8N_API_KEY environment variables.'
};
}
// Fetch current workflow
let workflow;
try {
workflow = await client.getWorkflow(input.id);
// Store original workflow for telemetry
workflowBefore = JSON.parse(JSON.stringify(workflow));
} catch (error) {
if (error instanceof N8nApiError) {
return {
@@ -145,7 +138,6 @@ export async function handleUpdatePartialWorkflow(
error: 'Failed to apply diff operations',
details: {
errors: diffResult.errors,
warnings: diffResult.warnings,
operationsApplied: diffResult.operationsApplied,
applied: diffResult.applied,
failed: diffResult.failed
@@ -162,9 +154,6 @@ export async function handleUpdatePartialWorkflow(
data: {
valid: true,
operationsToApply: input.operations.length
},
details: {
warnings: diffResult.warnings
}
};
}
@@ -252,92 +241,21 @@ export async function handleUpdatePartialWorkflow(
// Update workflow via API
try {
const updatedWorkflow = await client.updateWorkflow(input.id, diffResult.workflow!);
// Handle activation/deactivation if requested
let finalWorkflow = updatedWorkflow;
let activationMessage = '';
if (diffResult.shouldActivate) {
try {
finalWorkflow = await client.activateWorkflow(input.id);
activationMessage = ' Workflow activated.';
} catch (activationError) {
logger.error('Failed to activate workflow after update', activationError);
return {
success: false,
error: 'Workflow updated successfully but activation failed',
details: {
workflowUpdated: true,
activationError: activationError instanceof Error ? activationError.message : 'Unknown error'
}
};
}
} else if (diffResult.shouldDeactivate) {
try {
finalWorkflow = await client.deactivateWorkflow(input.id);
activationMessage = ' Workflow deactivated.';
} catch (deactivationError) {
logger.error('Failed to deactivate workflow after update', deactivationError);
return {
success: false,
error: 'Workflow updated successfully but deactivation failed',
details: {
workflowUpdated: true,
deactivationError: deactivationError instanceof Error ? deactivationError.message : 'Unknown error'
}
};
}
}
// Track successful mutation
if (workflowBefore && !input.validateOnly) {
trackWorkflowMutation({
sessionId,
toolName: 'n8n_update_partial_workflow',
userIntent: input.intent || 'Partial workflow update',
operations: input.operations,
workflowBefore,
workflowAfter: finalWorkflow,
mutationSuccess: true,
durationMs: Date.now() - startTime,
}).catch(err => {
logger.debug('Failed to track mutation telemetry:', err);
});
}
return {
success: true,
data: finalWorkflow,
message: `Workflow "${finalWorkflow.name}" updated successfully. Applied ${diffResult.operationsApplied} operations.${activationMessage}`,
data: updatedWorkflow,
message: `Workflow "${updatedWorkflow.name}" updated successfully. Applied ${diffResult.operationsApplied} operations.`,
details: {
operationsApplied: diffResult.operationsApplied,
workflowId: finalWorkflow.id,
workflowName: finalWorkflow.name,
active: finalWorkflow.active,
workflowId: updatedWorkflow.id,
workflowName: updatedWorkflow.name,
applied: diffResult.applied,
failed: diffResult.failed,
errors: diffResult.errors,
warnings: diffResult.warnings
errors: diffResult.errors
}
};
} catch (error) {
// Track failed mutation
if (workflowBefore && !input.validateOnly) {
trackWorkflowMutation({
sessionId,
toolName: 'n8n_update_partial_workflow',
userIntent: input.intent || 'Partial workflow update',
operations: input.operations,
workflowBefore,
workflowAfter: workflowBefore, // No change since it failed
mutationSuccess: false,
mutationError: error instanceof Error ? error.message : 'Unknown error',
durationMs: Date.now() - startTime,
}).catch(err => {
logger.warn('Failed to track mutation telemetry for failed operation:', err);
});
}
if (error instanceof N8nApiError) {
return {
success: false,
@@ -356,7 +274,7 @@ export async function handleUpdatePartialWorkflow(
details: { errors: error.errors }
};
}
logger.error('Failed to update partial workflow', error);
return {
success: false,
@@ -365,15 +283,3 @@ export async function handleUpdatePartialWorkflow(
}
}
/**
* Track workflow mutation for telemetry
*/
async function trackWorkflowMutation(data: any): Promise<void> {
try {
const { telemetry } = await import('../telemetry/telemetry-manager.js');
await telemetry.trackWorkflowMutation(data);
} catch (error) {
logger.debug('Telemetry tracking failed:', error);
}
}

View File

@@ -70,7 +70,6 @@ export class N8NDocumentationMCPServer {
private previousTool: string | null = null;
private previousToolTimestamp: number = Date.now();
private earlyLogger: EarlyErrorLogger | null = null;
private disabledToolsCache: Set<string> | null = null;
constructor(instanceContext?: InstanceContext, earlyLogger?: EarlyErrorLogger) {
this.instanceContext = instanceContext;
@@ -297,24 +296,19 @@ export class N8NDocumentationMCPServer {
throw new Error('Database is empty. Run "npm run rebuild" to populate node data.');
}
// Check if FTS5 table exists (wrap in try-catch for sql.js compatibility)
try {
const ftsExists = this.db.prepare(`
SELECT name FROM sqlite_master
WHERE type='table' AND name='nodes_fts'
`).get();
// Check if FTS5 table exists
const ftsExists = this.db.prepare(`
SELECT name FROM sqlite_master
WHERE type='table' AND name='nodes_fts'
`).get();
if (!ftsExists) {
logger.warn('FTS5 table missing - search performance will be degraded. Please run: npm run rebuild');
} else {
const ftsCount = this.db.prepare('SELECT COUNT(*) as count FROM nodes_fts').get() as { count: number };
if (ftsCount.count === 0) {
logger.warn('FTS5 index is empty - search will not work properly. Please run: npm run rebuild');
}
if (!ftsExists) {
logger.warn('FTS5 table missing - search performance will be degraded. Please run: npm run rebuild');
} else {
const ftsCount = this.db.prepare('SELECT COUNT(*) as count FROM nodes_fts').get() as { count: number };
if (ftsCount.count === 0) {
logger.warn('FTS5 index is empty - search will not work properly. Please run: npm run rebuild');
}
} catch (ftsError) {
// FTS5 not supported (e.g., sql.js fallback) - this is OK, just warn
logger.warn('FTS5 not available - using fallback search. For better performance, ensure better-sqlite3 is properly installed.');
}
logger.info(`Database health check passed: ${nodeCount.count} nodes loaded`);
@@ -324,52 +318,6 @@ export class N8NDocumentationMCPServer {
}
}
/**
* Parse and cache disabled tools from DISABLED_TOOLS environment variable.
* Returns a Set of tool names that should be filtered from registration.
*
* Cached after first call since environment variables don't change at runtime.
* Includes safety limits: max 10KB env var length, max 200 tools.
*
* @returns Set of disabled tool names
*/
private getDisabledTools(): Set<string> {
// Return cached value if available
if (this.disabledToolsCache !== null) {
return this.disabledToolsCache;
}
let disabledToolsEnv = process.env.DISABLED_TOOLS || '';
if (!disabledToolsEnv) {
this.disabledToolsCache = new Set();
return this.disabledToolsCache;
}
// Safety limit: prevent abuse with very long environment variables
if (disabledToolsEnv.length > 10000) {
logger.warn(`DISABLED_TOOLS environment variable too long (${disabledToolsEnv.length} chars), truncating to 10000`);
disabledToolsEnv = disabledToolsEnv.substring(0, 10000);
}
let tools = disabledToolsEnv
.split(',')
.map(t => t.trim())
.filter(Boolean);
// Safety limit: prevent abuse with too many tools
if (tools.length > 200) {
logger.warn(`DISABLED_TOOLS contains ${tools.length} tools, limiting to first 200`);
tools = tools.slice(0, 200);
}
if (tools.length > 0) {
logger.info(`Disabled tools configured: ${tools.join(', ')}`);
}
this.disabledToolsCache = new Set(tools);
return this.disabledToolsCache;
}
private setupHandlers(): void {
// Handle initialization
this.server.setRequestHandler(InitializeRequestSchema, async (request) => {
@@ -423,16 +371,8 @@ export class N8NDocumentationMCPServer {
// Handle tool listing
this.server.setRequestHandler(ListToolsRequestSchema, async (request) => {
// Get disabled tools from environment variable
const disabledTools = this.getDisabledTools();
// Filter documentation tools based on disabled list
const enabledDocTools = n8nDocumentationToolsFinal.filter(
tool => !disabledTools.has(tool.name)
);
// Combine documentation tools with management tools if API is configured
let tools = [...enabledDocTools];
let tools = [...n8nDocumentationToolsFinal];
// Check if n8n API tools should be available
// 1. Environment variables (backward compatibility)
@@ -445,31 +385,19 @@ export class N8NDocumentationMCPServer {
const shouldIncludeManagementTools = hasEnvConfig || hasInstanceConfig || isMultiTenantEnabled;
if (shouldIncludeManagementTools) {
// Filter management tools based on disabled list
const enabledMgmtTools = n8nManagementTools.filter(
tool => !disabledTools.has(tool.name)
);
tools.push(...enabledMgmtTools);
logger.debug(`Tool listing: ${tools.length} tools available (${enabledDocTools.length} documentation + ${enabledMgmtTools.length} management)`, {
tools.push(...n8nManagementTools);
logger.debug(`Tool listing: ${tools.length} tools available (${n8nDocumentationToolsFinal.length} documentation + ${n8nManagementTools.length} management)`, {
hasEnvConfig,
hasInstanceConfig,
isMultiTenantEnabled,
disabledToolsCount: disabledTools.size
isMultiTenantEnabled
});
} else {
logger.debug(`Tool listing: ${tools.length} tools available (documentation only)`, {
hasEnvConfig,
hasInstanceConfig,
isMultiTenantEnabled,
disabledToolsCount: disabledTools.size
isMultiTenantEnabled
});
}
// Log filtered tools count if any tools are disabled
if (disabledTools.size > 0) {
const totalAvailableTools = n8nDocumentationToolsFinal.length + (shouldIncludeManagementTools ? n8nManagementTools.length : 0);
logger.debug(`Filtered ${disabledTools.size} disabled tools, ${tools.length}/${totalAvailableTools} tools available`);
}
// Check if client is n8n (from initialization)
const clientInfo = this.clientInfo;
@@ -510,23 +438,7 @@ export class N8NDocumentationMCPServer {
configType: args && args.config ? typeof args.config : 'N/A',
rawRequest: JSON.stringify(request.params)
});
// Check if tool is disabled via DISABLED_TOOLS environment variable
const disabledTools = this.getDisabledTools();
if (disabledTools.has(name)) {
logger.warn(`Attempted to call disabled tool: ${name}`);
return {
content: [{
type: 'text',
text: JSON.stringify({
error: 'TOOL_DISABLED',
message: `Tool '${name}' is not available in this deployment. It has been disabled via DISABLED_TOOLS environment variable.`,
tool: name
}, null, 2)
}]
};
}
// Workaround for n8n's nested output bug
// Check if args contains nested 'output' structure from n8n's memory corruption
let processedArgs = args;
@@ -928,27 +840,19 @@ export class N8NDocumentationMCPServer {
async executeTool(name: string, args: any): Promise<any> {
// Ensure args is an object and validate it
args = args || {};
// Defense in depth: This should never be reached since CallToolRequestSchema
// handler already checks disabled tools (line 514-528), but we guard here
// in case of future refactoring or direct executeTool() calls
const disabledTools = this.getDisabledTools();
if (disabledTools.has(name)) {
throw new Error(`Tool '${name}' is disabled via DISABLED_TOOLS environment variable`);
}
// Log the tool call for debugging n8n issues
logger.info(`Tool execution: ${name}`, {
args: typeof args === 'object' ? JSON.stringify(args) : args,
argsType: typeof args,
argsKeys: typeof args === 'object' ? Object.keys(args) : 'not-object'
});
// Validate that args is actually an object
if (typeof args !== 'object' || args === null) {
throw new Error(`Invalid arguments for tool ${name}: expected object, got ${typeof args}`);
}
switch (name) {
case 'tools_documentation':
// No required parameters

View File

@@ -9,7 +9,6 @@ export const n8nUpdateFullWorkflowDoc: ToolDocumentation = {
example: 'n8n_update_full_workflow({id: "wf_123", nodes: [...], connections: {...}})',
performance: 'Network-dependent',
tips: [
'Include intent parameter in every call - helps to return better responses',
'Must provide complete workflow',
'Use update_partial for small changes',
'Validate before updating'
@@ -22,15 +21,13 @@ export const n8nUpdateFullWorkflowDoc: ToolDocumentation = {
name: { type: 'string', description: 'New workflow name (optional)' },
nodes: { type: 'array', description: 'Complete array of workflow nodes (required if modifying structure)' },
connections: { type: 'object', description: 'Complete connections object (required if modifying structure)' },
settings: { type: 'object', description: 'Workflow settings to update (timezone, error handling, etc.)' },
intent: { type: 'string', description: 'Intent of the change - helps to return better response. Include in every tool call. Example: "Migrate workflow to new node versions".' }
settings: { type: 'object', description: 'Workflow settings to update (timezone, error handling, etc.)' }
},
returns: 'Updated workflow object with all fields including the changes applied',
examples: [
'n8n_update_full_workflow({id: "abc", intent: "Rename workflow for clarity", name: "New Name"}) - Rename with intent',
'n8n_update_full_workflow({id: "abc", name: "New Name"}) - Rename only',
'n8n_update_full_workflow({id: "xyz", intent: "Add error handling nodes", nodes: [...], connections: {...}}) - Full structure update',
'const wf = n8n_get_workflow({id}); wf.nodes.push(newNode); n8n_update_full_workflow({...wf, intent: "Add data processing node"}); // Add node'
'n8n_update_full_workflow({id: "xyz", nodes: [...], connections: {...}}) - Full structure update',
'const wf = n8n_get_workflow({id}); wf.nodes.push(newNode); n8n_update_full_workflow(wf); // Add node'
],
useCases: [
'Major workflow restructuring',
@@ -41,7 +38,6 @@ export const n8nUpdateFullWorkflowDoc: ToolDocumentation = {
],
performance: 'Network-dependent - typically 200-500ms. Larger workflows take longer. Consider update_partial for better performance.',
bestPractices: [
'Always include intent parameter - it helps provide better responses',
'Get workflow first, modify, then update',
'Validate with validate_workflow before updating',
'Use update_partial for small changes',

View File

@@ -4,12 +4,11 @@ export const n8nUpdatePartialWorkflowDoc: ToolDocumentation = {
name: 'n8n_update_partial_workflow',
category: 'workflow_management',
essentials: {
description: 'Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, moveNode, enable/disableNode, addConnection, removeConnection, rewireConnection, cleanStaleConnections, replaceConnections, updateSettings, updateName, add/removeTag, activateWorkflow, deactivateWorkflow. Supports smart parameters (branch, case) for multi-output nodes. Full support for AI connections (ai_languageModel, ai_tool, ai_memory, ai_embedding, ai_vectorStore, ai_document, ai_textSplitter, ai_outputParser).',
description: 'Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, moveNode, enable/disableNode, addConnection, removeConnection, rewireConnection, cleanStaleConnections, replaceConnections, updateSettings, updateName, add/removeTag. Supports smart parameters (branch, case) for multi-output nodes. Full support for AI connections (ai_languageModel, ai_tool, ai_memory, ai_embedding, ai_vectorStore, ai_document, ai_textSplitter, ai_outputParser).',
keyParameters: ['id', 'operations', 'continueOnError'],
example: 'n8n_update_partial_workflow({id: "wf_123", operations: [{type: "rewireConnection", source: "IF", from: "Old", to: "New", branch: "true"}]})',
performance: 'Fast (50-200ms)',
tips: [
'Include intent parameter in every call - helps to return better responses',
'Use rewireConnection to change connection targets',
'Use branch="true"/"false" for IF nodes',
'Use case=N for Switch nodes',
@@ -20,12 +19,11 @@ export const n8nUpdatePartialWorkflowDoc: ToolDocumentation = {
'For AI connections, specify sourceOutput type (ai_languageModel, ai_tool, etc.)',
'Batch AI component connections for atomic updates',
'Auto-sanitization: ALL nodes auto-fixed during updates (operator structures, missing metadata)',
'Node renames automatically update all connection references - no manual connection operations needed',
'Activate/deactivate workflows: Use activateWorkflow/deactivateWorkflow operations (requires activatable triggers like webhook/schedule)'
'Node renames automatically update all connection references - no manual connection operations needed'
]
},
full: {
description: `Updates workflows using surgical diff operations instead of full replacement. Supports 17 operation types for precise modifications. Operations are validated and applied atomically by default - all succeed or none are applied.
description: `Updates workflows using surgical diff operations instead of full replacement. Supports 15 operation types for precise modifications. Operations are validated and applied atomically by default - all succeed or none are applied.
## Available Operations:
@@ -50,10 +48,6 @@ export const n8nUpdatePartialWorkflowDoc: ToolDocumentation = {
- **addTag**: Add a workflow tag
- **removeTag**: Remove a workflow tag
### Workflow Activation Operations (2 types):
- **activateWorkflow**: Activate the workflow to enable automatic execution via triggers
- **deactivateWorkflow**: Deactivate the workflow to prevent automatic execution
## Smart Parameters for Multi-Output Nodes
For **IF nodes**, use semantic 'branch' parameter instead of technical sourceIndex:
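A hedged sketch of what this looks like in practice (node names are invented; the concrete doc example at this point is cut off by the diff context):

```typescript
// Route both outputs of an IF node with the semantic branch parameter.
// main[0] is the TRUE branch and main[1] the FALSE branch, but branch="true"/"false" avoids index mistakes.
n8n_update_partial_workflow({
  id: "wf_123",
  operations: [
    { type: "addConnection", source: "Check Status", target: "Send Success Email", branch: "true" },
    { type: "addConnection", source: "Check Status", target: "Send Failure Alert", branch: "false" }
  ]
});
```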
@@ -192,115 +186,7 @@ Please choose a different name.
- Simply rename nodes with updateNode - no manual connection operations needed
- Multiple renames in one call work atomically
- Can rename a node and add/remove connections using the new name in the same batch
- Use \`validateOnly: true\` to preview effects before applying
## Removing Properties with undefined
To remove a property from a node, set its value to \`undefined\` in the updates object. This is essential when migrating from deprecated properties or cleaning up optional configuration fields.
### Why Use undefined?
- **Property removal vs. null**: Setting a property to \`undefined\` removes it completely from the node object, while \`null\` sets the property to a null value
- **Validation constraints**: Some properties are mutually exclusive (e.g., \`continueOnFail\` and \`onError\`). Simply setting one without removing the other will fail validation
- **Deprecated property migration**: When n8n deprecates properties, you must remove the old property before the new one will work
### Basic Property Removal
\`\`\`javascript
// Remove error handling configuration
n8n_update_partial_workflow({
id: "wf_123",
operations: [{
type: "updateNode",
nodeName: "HTTP Request",
updates: { onError: undefined }
}]
});
// Remove disabled flag
n8n_update_partial_workflow({
id: "wf_456",
operations: [{
type: "updateNode",
nodeId: "node_abc",
updates: { disabled: undefined }
}]
});
\`\`\`
### Nested Property Removal
Use dot notation to remove nested properties:
\`\`\`javascript
// Remove nested parameter
n8n_update_partial_workflow({
id: "wf_789",
operations: [{
type: "updateNode",
nodeName: "API Request",
updates: { "parameters.authentication": undefined }
}]
});
// Remove entire array property
n8n_update_partial_workflow({
id: "wf_012",
operations: [{
type: "updateNode",
nodeName: "HTTP Request",
updates: { "parameters.headers": undefined }
}]
});
\`\`\`
### Migrating from Deprecated Properties
Common scenario: replacing \`continueOnFail\` with \`onError\`:
\`\`\`javascript
// WRONG: Setting only the new property leaves the old one
n8n_update_partial_workflow({
id: "wf_123",
operations: [{
type: "updateNode",
nodeName: "HTTP Request",
updates: { onError: "continueErrorOutput" }
}]
});
// Error: continueOnFail and onError are mutually exclusive
// CORRECT: Remove the old property first
n8n_update_partial_workflow({
id: "wf_123",
operations: [{
type: "updateNode",
nodeName: "HTTP Request",
updates: {
continueOnFail: undefined,
onError: "continueErrorOutput"
}
}]
});
\`\`\`
### Batch Property Removal
Remove multiple properties in one operation:
\`\`\`javascript
n8n_update_partial_workflow({
id: "wf_345",
operations: [{
type: "updateNode",
nodeName: "Data Processor",
updates: {
continueOnFail: undefined,
alwaysOutputData: undefined,
"parameters.legacy_option": undefined
}
}]
});
\`\`\`
### When to Use undefined
- Removing deprecated properties during migration
- Cleaning up optional configuration flags
- Resolving mutual exclusivity validation errors
- Removing stale or unnecessary node metadata
- Simplifying node configuration`,
- Use \`validateOnly: true\` to preview effects before applying`,
parameters: {
id: { type: 'string', required: true, description: 'Workflow ID to update' },
operations: {
@@ -309,12 +195,10 @@ n8n_update_partial_workflow({
description: 'Array of diff operations. Each must have "type" field and operation-specific properties. Nodes can be referenced by ID or name.'
},
validateOnly: { type: 'boolean', description: 'If true, only validate operations without applying them' },
continueOnError: { type: 'boolean', description: 'If true, apply valid operations even if some fail (best-effort mode). Returns applied and failed operation indices. Default: false (atomic)' },
intent: { type: 'string', description: 'Intent of the change - helps to return better response. Include in every tool call. Example: "Add error handling for API failures".' }
continueOnError: { type: 'boolean', description: 'If true, apply valid operations even if some fail (best-effort mode). Returns applied and failed operation indices. Default: false (atomic)' }
},
returns: 'Updated workflow object or validation results if validateOnly=true',
examples: [
'// Include intent parameter for better responses\nn8n_update_partial_workflow({id: "abc", intent: "Add error handling for API failures", operations: [{type: "addConnection", source: "HTTP Request", target: "Error Handler"}]})',
'// Add a basic node (minimal configuration)\nn8n_update_partial_workflow({id: "abc", operations: [{type: "addNode", node: {name: "Process Data", type: "n8n-nodes-base.set", position: [400, 300], parameters: {}}}]})',
'// Add node with full configuration\nn8n_update_partial_workflow({id: "def", operations: [{type: "addNode", node: {name: "Send Slack Alert", type: "n8n-nodes-base.slack", position: [600, 300], typeVersion: 2, parameters: {resource: "message", operation: "post", channel: "#alerts", text: "Success!"}}}]})',
'// Add node AND connect it (common pattern)\nn8n_update_partial_workflow({id: "ghi", operations: [\n {type: "addNode", node: {name: "HTTP Request", type: "n8n-nodes-base.httpRequest", position: [400, 300], parameters: {url: "https://api.example.com", method: "GET"}}},\n {type: "addConnection", source: "Webhook", target: "HTTP Request"}\n]})',
@@ -339,13 +223,7 @@ n8n_update_partial_workflow({
'// Vector Store setup: Connect embeddings and documents\nn8n_update_partial_workflow({id: "ai7", operations: [\n {type: "addConnection", source: "Embeddings OpenAI", target: "Pinecone Vector Store", sourceOutput: "ai_embedding"},\n {type: "addConnection", source: "Default Data Loader", target: "Pinecone Vector Store", sourceOutput: "ai_document"}\n]})',
'// Connect Vector Store Tool to AI Agent (retrieval setup)\nn8n_update_partial_workflow({id: "ai8", operations: [\n {type: "addConnection", source: "Pinecone Vector Store", target: "Vector Store Tool", sourceOutput: "ai_vectorStore"},\n {type: "addConnection", source: "Vector Store Tool", target: "AI Agent", sourceOutput: "ai_tool"}\n]})',
'// Rewire AI Agent to use different language model\nn8n_update_partial_workflow({id: "ai9", operations: [{type: "rewireConnection", source: "AI Agent", from: "OpenAI Chat Model", to: "Anthropic Chat Model", sourceOutput: "ai_languageModel"}]})',
'// Replace all AI tools for an agent\nn8n_update_partial_workflow({id: "ai10", operations: [\n {type: "removeConnection", source: "Old Tool 1", target: "AI Agent", sourceOutput: "ai_tool"},\n {type: "removeConnection", source: "Old Tool 2", target: "AI Agent", sourceOutput: "ai_tool"},\n {type: "addConnection", source: "New HTTP Tool", target: "AI Agent", sourceOutput: "ai_tool"},\n {type: "addConnection", source: "New Code Tool", target: "AI Agent", sourceOutput: "ai_tool"}\n]})',
'\n// ============ REMOVING PROPERTIES EXAMPLES ============',
'// Remove a simple property\nn8n_update_partial_workflow({id: "rm1", operations: [{type: "updateNode", nodeName: "HTTP Request", updates: {onError: undefined}}]})',
'// Migrate from deprecated continueOnFail to onError\nn8n_update_partial_workflow({id: "rm2", operations: [{type: "updateNode", nodeName: "HTTP Request", updates: {continueOnFail: undefined, onError: "continueErrorOutput"}}]})',
'// Remove nested property\nn8n_update_partial_workflow({id: "rm3", operations: [{type: "updateNode", nodeName: "API Request", updates: {"parameters.authentication": undefined}}]})',
'// Remove multiple properties\nn8n_update_partial_workflow({id: "rm4", operations: [{type: "updateNode", nodeName: "Data Processor", updates: {continueOnFail: undefined, alwaysOutputData: undefined, "parameters.legacy_option": undefined}}]})',
'// Remove entire array property\nn8n_update_partial_workflow({id: "rm5", operations: [{type: "updateNode", nodeName: "HTTP Request", updates: {"parameters.headers": undefined}}]})'
'// Replace all AI tools for an agent\nn8n_update_partial_workflow({id: "ai10", operations: [\n {type: "removeConnection", source: "Old Tool 1", target: "AI Agent", sourceOutput: "ai_tool"},\n {type: "removeConnection", source: "Old Tool 2", target: "AI Agent", sourceOutput: "ai_tool"},\n {type: "addConnection", source: "New HTTP Tool", target: "AI Agent", sourceOutput: "ai_tool"},\n {type: "addConnection", source: "New Code Tool", target: "AI Agent", sourceOutput: "ai_tool"}\n]})'
],
useCases: [
'Rewire connections when replacing nodes',
@@ -367,7 +245,6 @@ n8n_update_partial_workflow({
],
performance: 'Very fast - typically 50-200ms. Much faster than full updates as only changes are processed.',
bestPractices: [
'Always include intent parameter - it helps provide better responses',
'Use rewireConnection instead of remove+add for changing targets',
'Use branch="true"/"false" for IF nodes instead of sourceIndex',
'Use case=N for Switch nodes instead of sourceIndex',
@@ -382,11 +259,7 @@ n8n_update_partial_workflow({
'Connect language model BEFORE adding AI Agent to ensure validation passes',
'Use targetIndex for fallback models (primary=0, fallback=1)',
'Batch AI component connections in a single operation for atomicity',
'Validate AI workflows after connection changes to catch configuration errors',
'To remove properties, set them to undefined (not null) in the updates object',
'When migrating from deprecated properties, remove the old property and add the new one in the same operation',
'Use undefined to resolve mutual exclusivity validation errors between properties',
'Batch multiple property removals in a single updateNode operation for efficiency'
'Validate AI workflows after connection changes to catch configuration errors'
],
pitfalls: [
'**REQUIRES N8N_API_URL and N8N_API_KEY environment variables** - will not work without n8n API access',
@@ -399,19 +272,12 @@ n8n_update_partial_workflow({
'Use "updates" property for updateNode operations: {type: "updateNode", updates: {...}}',
'Smart parameters (branch, case) only work with IF and Switch nodes - ignored for other node types',
'Explicit sourceIndex overrides smart parameters (branch, case) if both provided',
'**CRITICAL**: For If nodes, ALWAYS use branch="true"/"false" instead of sourceIndex. Using sourceIndex=0 for multiple connections will put them ALL on the TRUE branch (main[0]), breaking your workflow logic!',
'**CRITICAL**: For Switch nodes, ALWAYS use case=N instead of sourceIndex. Using same sourceIndex for multiple connections will put them on the same case output.',
'cleanStaleConnections removes ALL broken connections - cannot be selective',
'replaceConnections overwrites entire connections object - all previous connections lost',
'**Auto-sanitization behavior**: Binary operators (equals, contains) automatically have singleValue removed; unary operators (isEmpty, isNotEmpty) automatically get singleValue:true added',
'**Auto-sanitization runs on ALL nodes**: When ANY update is made, ALL nodes in the workflow are sanitized (not just modified ones)',
'**Auto-sanitization cannot fix everything**: It fixes operator structures and missing metadata, but cannot fix broken connections or branch mismatches',
'**Corrupted workflows beyond repair**: Workflows in paradoxical states (API returns corrupt, API rejects updates) cannot be fixed via API - must be recreated',
'Setting a property to null does NOT remove it - use undefined instead',
'When properties are mutually exclusive (e.g., continueOnFail and onError), setting only the new property will fail - you must remove the old one with undefined',
'Removing a required property may cause validation errors - check node documentation first',
'Nested property removal with dot notation only removes the specific nested field, not the entire parent object',
'Array index notation (e.g., "parameters.headers[0]") is not supported - remove the entire array property instead'
'**Corrupted workflows beyond repair**: Workflows in paradoxical states (API returns corrupt, API rejects updates) cannot be fixed via API - must be recreated'
],
relatedTools: ['n8n_update_full_workflow', 'n8n_get_workflow', 'validate_workflow', 'tools_documentation']
}

View File

@@ -84,16 +84,14 @@ When working with Code nodes, always start by calling the relevant guide:
## Standard Workflow Pattern
⚠️ **CRITICAL**: Always call get_node_essentials() FIRST before configuring any node!
1. **Find** the node you need:
- search_nodes({query: "slack"}) - Search by keyword
- list_nodes({category: "communication"}) - List by category
- list_ai_tools() - List AI-capable nodes
2. **Configure** the node (ALWAYS START WITH ESSENTIALS):
- get_node_essentials("nodes-base.slack") - Get essential properties FIRST (5KB, shows required fields)
- get_node_info("nodes-base.slack") - Get complete schema only if essentials insufficient (100KB+)
2. **Configure** the node:
- get_node_essentials("nodes-base.slack") - Get essential properties only (5KB)
- get_node_info("nodes-base.slack") - Get complete schema (100KB+)
- search_node_properties("nodes-base.slack", "auth") - Find specific properties
3. **Validate** before deployment:
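A hedged sketch of the full pattern in sequence (the Slack node is just the running example from this guide; the validation call is illustrative, with validate_workflow being the tool these docs reference elsewhere):

```typescript
// 1. Find the node
search_nodes({ query: "slack" });

// 2. Configure: essentials first (required fields, ~5KB); full schema only if that is not enough
get_node_essentials("nodes-base.slack");
// get_node_info("nodes-base.slack");

// 3. Validate before deployment
validate_workflow(workflow); // `workflow` is a placeholder for the assembled workflow object
```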
@@ -109,8 +107,8 @@ When working with Code nodes, always start by calling the relevant guide:
- list_ai_tools - List all AI-capable nodes with usage guidance
**Configuration Tools**
- get_node_essentials - ✅ CALL THIS FIRST! Returns 10-20 key properties with examples and required fields
- get_node_info - Returns complete node schema (only use if essentials is insufficient)
- get_node_essentials - Returns 10-20 key properties with examples
- get_node_info - Returns complete node schema with all properties
- search_node_properties - Search for specific properties within a node
- get_property_dependencies - Analyze property visibility dependencies

View File

@@ -75,15 +75,10 @@ async function fetchTemplatesRobust() {
// Fetch detail
const detail = await fetcher.fetchTemplateDetail(template.id);
if (detail !== null) {
// Save immediately
repository.saveTemplate(template, detail);
saved++;
} else {
errors++;
console.error(`\n❌ Failed to fetch template ${template.id} (${template.name}) after retries`);
}
// Save immediately
repository.saveTemplate(template, detail);
saved++;
// Rate limiting
await new Promise(resolve => setTimeout(resolve, 200));

View File

@@ -319,10 +319,6 @@ export class EnhancedConfigValidator extends ConfigValidator {
NodeSpecificValidators.validateMySQL(context);
break;
case 'nodes-langchain.agent':
NodeSpecificValidators.validateAIAgent(context);
break;
case 'nodes-base.set':
NodeSpecificValidators.validateSet(context);
break;
@@ -405,59 +401,7 @@ export class EnhancedConfigValidator extends ConfigValidator {
config: Record<string, any>,
result: EnhancedValidationResult
): void {
const url = String(config.url || '');
const options = config.options || {};
// 1. Suggest alwaysOutputData for better error handling (node-level property)
// Note: We can't check if it exists (it's node-level, not in parameters),
// but we can suggest it as a best practice
if (!result.suggestions.some(s => typeof s === 'string' && s.includes('alwaysOutputData'))) {
result.suggestions.push(
'Consider adding alwaysOutputData: true at node level (not in parameters) for better error handling. ' +
'This ensures the node produces output even when HTTP requests fail, allowing downstream error handling.'
);
}
// 2. Suggest responseFormat for API endpoints
const lowerUrl = url.toLowerCase();
const isApiEndpoint =
// Subdomain patterns (api.example.com)
/^https?:\/\/api\./i.test(url) ||
// Path patterns with word boundaries to prevent false positives like "therapist", "restaurant"
/\/api[\/\?]|\/api$/i.test(url) ||
/\/rest[\/\?]|\/rest$/i.test(url) ||
// Known API service domains
lowerUrl.includes('supabase.co') ||
lowerUrl.includes('firebase') ||
lowerUrl.includes('googleapis.com') ||
// Versioned API paths (e.g., example.com/v1, example.com/v2)
/\.com\/v\d+/i.test(url);
if (isApiEndpoint && !options.response?.response?.responseFormat) {
result.suggestions.push(
'API endpoints should explicitly set options.response.response.responseFormat to "json" or "text" ' +
'to prevent confusion about response parsing. Example: ' +
'{ "options": { "response": { "response": { "responseFormat": "json" } } } }'
);
}
// 3. Enhanced URL protocol validation for expressions
if (url && url.startsWith('=')) {
// Expression-based URL - check for common protocol issues
const expressionContent = url.slice(1); // Remove = prefix
const lowerExpression = expressionContent.toLowerCase();
// Check for missing protocol in expression (case-insensitive)
if (expressionContent.startsWith('www.') ||
(expressionContent.includes('{{') && !lowerExpression.includes('http'))) {
result.warnings.push({
type: 'invalid_value',
property: 'url',
message: 'URL expression appears to be missing http:// or https:// protocol',
suggestion: 'Include protocol in your expression. Example: ={{ "https://" + $json.domain + ".com" }}'
});
}
}
// Examples removed - validation provides error messages and fixes instead
}
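As a rough illustration of the API-endpoint heuristic above (URLs are made up), the responseFormat suggestion fires for API-looking URLs and stays quiet for ordinary pages:

```typescript
// Condensed restatement of the isApiEndpoint checks above, applied to sample URLs.
const looksLikeApi = (url: string): boolean =>
  /^https?:\/\/api\./i.test(url) ||              // api.* subdomain
  /\/api[\/\?]|\/api$/i.test(url) ||             // /api path segment (word-bounded)
  /\/rest[\/\?]|\/rest$/i.test(url) ||           // /rest path segment
  /supabase\.co|firebase|googleapis\.com/i.test(url) ||
  /\.com\/v\d+/i.test(url);                      // versioned paths like example.com/v1

looksLikeApi("https://api.github.com/repos");     // true  -> suggestion added
looksLikeApi("https://example.com/v2/items");     // true
looksLikeApi("https://mytherapist.com/booking");  // false -> "api" inside a word does not trigger it
```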
/**

View File

@@ -170,41 +170,10 @@ export class N8nApiClient {
}
}
async activateWorkflow(id: string): Promise<Workflow> {
try {
const response = await this.client.post(`/workflows/${id}/activate`);
return response.data;
} catch (error) {
throw handleN8nApiError(error);
}
}
async deactivateWorkflow(id: string): Promise<Workflow> {
try {
const response = await this.client.post(`/workflows/${id}/deactivate`);
return response.data;
} catch (error) {
throw handleN8nApiError(error);
}
}
/**
* Lists workflows from n8n instance.
*
* @param params - Query parameters for filtering and pagination
* @returns Paginated list of workflows
*
* @remarks
* This method handles two response formats for backwards compatibility:
* - Modern (n8n v0.200.0+): {data: Workflow[], nextCursor?: string}
* - Legacy (older versions): Workflow[] (wrapped automatically)
*
* @see https://github.com/czlonkowski/n8n-mcp/issues/349
*/
async listWorkflows(params: WorkflowListParams = {}): Promise<WorkflowListResponse> {
try {
const response = await this.client.get('/workflows', { params });
return this.validateListResponse<Workflow>(response.data, 'workflows');
return response.data;
} catch (error) {
throw handleN8nApiError(error);
}
@@ -222,23 +191,10 @@ export class N8nApiClient {
}
}
/**
* Lists executions from n8n instance.
*
* @param params - Query parameters for filtering and pagination
* @returns Paginated list of executions
*
* @remarks
* This method handles two response formats for backwards compatibility:
* - Modern (n8n v0.200.0+): {data: Execution[], nextCursor?: string}
* - Legacy (older versions): Execution[] (wrapped automatically)
*
* @see https://github.com/czlonkowski/n8n-mcp/issues/349
*/
async listExecutions(params: ExecutionListParams = {}): Promise<ExecutionListResponse> {
try {
const response = await this.client.get('/executions', { params });
return this.validateListResponse<Execution>(response.data, 'executions');
return response.data;
} catch (error) {
throw handleN8nApiError(error);
}
@@ -305,23 +261,10 @@ export class N8nApiClient {
}
// Credential Management
/**
* Lists credentials from n8n instance.
*
* @param params - Query parameters for filtering and pagination
* @returns Paginated list of credentials
*
* @remarks
* This method handles two response formats for backwards compatibility:
* - Modern (n8n v0.200.0+): {data: Credential[], nextCursor?: string}
* - Legacy (older versions): Credential[] (wrapped automatically)
*
* @see https://github.com/czlonkowski/n8n-mcp/issues/349
*/
async listCredentials(params: CredentialListParams = {}): Promise<CredentialListResponse> {
try {
const response = await this.client.get('/credentials', { params });
return this.validateListResponse<Credential>(response.data, 'credentials');
return response.data;
} catch (error) {
throw handleN8nApiError(error);
}
@@ -363,23 +306,10 @@ export class N8nApiClient {
}
// Tag Management
/**
* Lists tags from n8n instance.
*
* @param params - Query parameters for filtering and pagination
* @returns Paginated list of tags
*
* @remarks
* This method handles two response formats for backwards compatibility:
* - Modern (n8n v0.200.0+): {data: Tag[], nextCursor?: string}
* - Legacy (older versions): Tag[] (wrapped automatically)
*
* @see https://github.com/czlonkowski/n8n-mcp/issues/349
*/
async listTags(params: TagListParams = {}): Promise<TagListResponse> {
try {
const response = await this.client.get('/tags', { params });
return this.validateListResponse<Tag>(response.data, 'tags');
return response.data;
} catch (error) {
throw handleN8nApiError(error);
}
@@ -482,49 +412,4 @@ export class N8nApiClient {
throw handleN8nApiError(error);
}
}
/**
* Validates and normalizes n8n API list responses.
* Handles both modern format {data: [], nextCursor?: string} and legacy array format.
*
* @param responseData - Raw response data from n8n API
* @param resourceType - Resource type for error messages (e.g., 'workflows', 'executions')
* @returns Normalized response in modern format
* @throws Error if response structure is invalid
*/
private validateListResponse<T>(
responseData: any,
resourceType: string
): { data: T[]; nextCursor?: string | null } {
// Validate response structure
if (!responseData || typeof responseData !== 'object') {
throw new Error(`Invalid response from n8n API for ${resourceType}: response is not an object`);
}
// Handle legacy case where API returns array directly (older n8n versions)
if (Array.isArray(responseData)) {
logger.warn(
`n8n API returned array directly instead of {data, nextCursor} object for ${resourceType}. ` +
'Wrapping in expected format for backwards compatibility.'
);
return {
data: responseData,
nextCursor: null
};
}
// Validate expected format {data: [], nextCursor?: string}
if (!Array.isArray(responseData.data)) {
const keys = Object.keys(responseData).slice(0, 5);
const keysPreview = keys.length < Object.keys(responseData).length
? `${keys.join(', ')}...`
: keys.join(', ');
throw new Error(
`Invalid response from n8n API for ${resourceType}: expected {data: [], nextCursor?: string}, ` +
`got object with keys: [${keysPreview}]`
);
}
return responseData;
}
}
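A behavioral sketch of the normalization documented above (a standalone copy for illustration; the real logic is the private validateListResponse method):

```typescript
function normalizeList<T>(raw: unknown, resource: string): { data: T[]; nextCursor?: string | null } {
  if (Array.isArray(raw)) return { data: raw as T[], nextCursor: null };      // legacy array response
  if (raw && typeof raw === 'object' && Array.isArray((raw as any).data)) {
    return raw as { data: T[]; nextCursor?: string | null };                  // modern {data, nextCursor}
  }
  throw new Error(`Invalid response from n8n API for ${resource}`);           // anything else is rejected
}

normalizeList([{ id: '1' }], 'workflows');                                // -> { data: [{id:'1'}], nextCursor: null }
normalizeList({ data: [{ id: '1' }], nextCursor: 'abc' }, 'executions');  // -> returned unchanged
```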

View File

@@ -133,7 +133,6 @@ export function cleanWorkflowForUpdate(workflow: Workflow): Partial<Workflow> {
createdAt,
updatedAt,
versionId,
versionCounter, // Added: n8n 1.118.1+ returns this but rejects it in updates
meta,
staticData,
// Remove fields that cause API errors

View File

@@ -718,110 +718,9 @@ export class NodeSpecificValidators {
});
}
}
/**
* Validate AI Agent node configuration
* Note: This provides basic model connection validation at the node level.
* Full AI workflow validation (tools, memory, etc.) is handled by workflow-validator.
*/
static validateAIAgent(context: NodeValidationContext): void {
const { config, errors, warnings, suggestions, autofix } = context;
// Check for language model configuration
// AI Agent nodes receive model connections via ai_languageModel connection type
// We validate this during workflow validation, but provide hints here for common issues
// Check prompt type configuration
if (config.promptType === 'define') {
if (!config.text || (typeof config.text === 'string' && config.text.trim() === '')) {
errors.push({
type: 'missing_required',
property: 'text',
message: 'Custom prompt text is required when promptType is "define"',
fix: 'Provide a custom prompt in the text field, or change promptType to "auto"'
});
}
}
// Check system message (RECOMMENDED)
if (!config.systemMessage || (typeof config.systemMessage === 'string' && config.systemMessage.trim() === '')) {
suggestions.push('AI Agent works best with a system message that defines the agent\'s role, capabilities, and constraints. Set systemMessage to provide context.');
} else if (typeof config.systemMessage === 'string' && config.systemMessage.trim().length < 20) {
warnings.push({
type: 'inefficient',
property: 'systemMessage',
message: 'System message is very short (< 20 characters)',
suggestion: 'Consider a more detailed system message to guide the agent\'s behavior'
});
}
// Check output parser configuration
if (config.hasOutputParser === true) {
warnings.push({
type: 'best_practice',
property: 'hasOutputParser',
message: 'Output parser is enabled. Ensure an ai_outputParser connection is configured in the workflow.',
suggestion: 'Connect an output parser node (e.g., Structured Output Parser) via ai_outputParser connection type'
});
}
// Check fallback model configuration
if (config.needsFallback === true) {
warnings.push({
type: 'best_practice',
property: 'needsFallback',
message: 'Fallback model is enabled. Ensure 2 language models are connected via ai_languageModel connections.',
suggestion: 'Connect a primary model and a fallback model to handle failures gracefully'
});
}
// Check maxIterations
if (config.maxIterations !== undefined) {
const maxIter = Number(config.maxIterations);
if (isNaN(maxIter) || maxIter < 1) {
errors.push({
type: 'invalid_value',
property: 'maxIterations',
message: 'maxIterations must be a positive number',
fix: 'Set maxIterations to a value >= 1 (e.g., 10)'
});
} else if (maxIter > 50) {
warnings.push({
type: 'inefficient',
property: 'maxIterations',
message: `maxIterations is set to ${maxIter}. High values can lead to long execution times and high costs.`,
suggestion: 'Consider reducing maxIterations to 10-20 for most use cases'
});
}
}
// Error handling for AI operations
if (!config.onError && !config.retryOnFail && !config.continueOnFail) {
warnings.push({
type: 'best_practice',
property: 'errorHandling',
message: 'AI models can fail due to API limits, rate limits, or invalid responses',
suggestion: 'Add onError: "continueRegularOutput" with retryOnFail for resilience'
});
autofix.onError = 'continueRegularOutput';
autofix.retryOnFail = true;
autofix.maxTries = 2;
autofix.waitBetweenTries = 5000; // AI models may have rate limits
}
// Check for deprecated continueOnFail
if (config.continueOnFail !== undefined) {
warnings.push({
type: 'deprecated',
property: 'continueOnFail',
message: 'continueOnFail is deprecated. Use onError instead',
suggestion: 'Replace with onError: "continueRegularOutput" or "stopWorkflow"'
});
}
}
/**
* Validate MySQL node configuration
*/
static validateMySQL(context: NodeValidationContext): void {
const { config, errors, warnings, suggestions } = context;

View File

@@ -25,8 +25,6 @@ import {
UpdateNameOperation,
AddTagOperation,
RemoveTagOperation,
ActivateWorkflowOperation,
DeactivateWorkflowOperation,
CleanStaleConnectionsOperation,
ReplaceConnectionsOperation
} from '../types/workflow-diff';
@@ -34,15 +32,12 @@ import { Workflow, WorkflowNode, WorkflowConnection } from '../types/n8n-api';
import { Logger } from '../utils/logger';
import { validateWorkflowNode, validateWorkflowConnections } from './n8n-validation';
import { sanitizeNode, sanitizeWorkflowNodes } from './node-sanitizer';
import { isActivatableTrigger } from '../utils/node-type-utils';
const logger = new Logger({ prefix: '[WorkflowDiffEngine]' });
export class WorkflowDiffEngine {
// Track node name changes during operations for connection reference updates
private renameMap: Map<string, string> = new Map();
// Track warnings during operation processing
private warnings: WorkflowDiffValidationError[] = [];
/**
* Apply diff operations to a workflow
@@ -52,9 +47,8 @@ export class WorkflowDiffEngine {
request: WorkflowDiffRequest
): Promise<WorkflowDiffResult> {
try {
// Reset tracking for this diff operation
// Reset rename tracking for this diff operation
this.renameMap.clear();
this.warnings = [];
// Clone workflow to avoid modifying original
const workflowCopy = JSON.parse(JSON.stringify(workflow));
@@ -120,7 +114,6 @@ export class WorkflowDiffEngine {
? 'Validation successful. All operations are valid.'
: `Validation completed with ${errors.length} errors.`,
errors: errors.length > 0 ? errors : undefined,
warnings: this.warnings.length > 0 ? this.warnings : undefined,
applied: appliedIndices,
failed: failedIndices
};
@@ -133,7 +126,6 @@ export class WorkflowDiffEngine {
operationsApplied: appliedIndices.length,
message: `Applied ${appliedIndices.length} operations, ${failedIndices.length} failed (continueOnError mode)`,
errors: errors.length > 0 ? errors : undefined,
warnings: this.warnings.length > 0 ? this.warnings : undefined,
applied: appliedIndices,
failed: failedIndices
};
@@ -217,23 +209,11 @@ export class WorkflowDiffEngine {
}
const operationsApplied = request.operations.length;
// Extract activation flags from workflow object
const shouldActivate = (workflowCopy as any)._shouldActivate === true;
const shouldDeactivate = (workflowCopy as any)._shouldDeactivate === true;
// Clean up temporary flags
delete (workflowCopy as any)._shouldActivate;
delete (workflowCopy as any)._shouldDeactivate;
return {
success: true,
workflow: workflowCopy,
operationsApplied,
message: `Successfully applied ${operationsApplied} operations (${nodeOperations.length} node ops, ${otherOperations.length} other ops)`,
warnings: this.warnings.length > 0 ? this.warnings : undefined,
shouldActivate: shouldActivate || undefined,
shouldDeactivate: shouldDeactivate || undefined
message: `Successfully applied ${operationsApplied} operations (${nodeOperations.length} node ops, ${otherOperations.length} other ops)`
};
}
} catch (error) {
@@ -276,10 +256,6 @@ export class WorkflowDiffEngine {
case 'addTag':
case 'removeTag':
return null; // These are always valid
case 'activateWorkflow':
return this.validateActivateWorkflow(workflow, operation);
case 'deactivateWorkflow':
return this.validateDeactivateWorkflow(workflow, operation);
case 'cleanStaleConnections':
return this.validateCleanStaleConnections(workflow, operation);
case 'replaceConnections':
@@ -333,12 +309,6 @@ export class WorkflowDiffEngine {
case 'removeTag':
this.applyRemoveTag(workflow, operation);
break;
case 'activateWorkflow':
this.applyActivateWorkflow(workflow, operation);
break;
case 'deactivateWorkflow':
this.applyDeactivateWorkflow(workflow, operation);
break;
case 'cleanStaleConnections':
this.applyCleanStaleConnections(workflow, operation);
break;
@@ -397,17 +367,6 @@ export class WorkflowDiffEngine {
}
private validateUpdateNode(workflow: Workflow, operation: UpdateNodeOperation): string | null {
// Check for common parameter mistake: "changes" instead of "updates" (Issue #392)
const operationAny = operation as any;
if (operationAny.changes && !operation.updates) {
return `Invalid parameter 'changes'. The updateNode operation requires 'updates' (not 'changes'). Example: {type: "updateNode", nodeId: "abc", updates: {name: "New Name", "parameters.url": "https://example.com"}}`;
}
// Check for missing required parameter
if (!operation.updates) {
return `Missing required parameter 'updates'. The updateNode operation requires an 'updates' object containing properties to modify. Example: {type: "updateNode", nodeId: "abc", updates: {name: "New Name"}}`;
}
const node = this.findNode(workflow, operation.nodeId, operation.nodeName);
if (!node) {
return this.formatNodeNotFoundError(workflow, operation.nodeId || operation.nodeName || '', 'updateNode');
@@ -726,24 +685,6 @@ export class WorkflowDiffEngine {
sourceIndex = operation.case;
}
// Validation: Warn if using sourceIndex with If/Switch nodes without smart parameters
if (sourceNode && operation.sourceIndex !== undefined && operation.branch === undefined && operation.case === undefined) {
if (sourceNode.type === 'n8n-nodes-base.if') {
this.warnings.push({
operation: -1, // Not tied to specific operation index in request
message: `Connection to If node "${operation.source}" uses sourceIndex=${operation.sourceIndex}. ` +
`Consider using branch="true" or branch="false" for better clarity. ` +
`If node outputs: main[0]=TRUE branch, main[1]=FALSE branch.`
});
} else if (sourceNode.type === 'n8n-nodes-base.switch') {
this.warnings.push({
operation: -1, // Not tied to specific operation index in request
message: `Connection to Switch node "${operation.source}" uses sourceIndex=${operation.sourceIndex}. ` +
`Consider using case=N for better clarity (case=0 for first output, case=1 for second, etc.).`
});
}
}
return { sourceOutput, sourceIndex };
}
@@ -882,46 +823,13 @@ export class WorkflowDiffEngine {
private applyRemoveTag(workflow: Workflow, operation: RemoveTagOperation): void {
if (!workflow.tags) return;
const index = workflow.tags.indexOf(operation.tag);
if (index !== -1) {
workflow.tags.splice(index, 1);
}
}
// Workflow activation operation validators
private validateActivateWorkflow(workflow: Workflow, operation: ActivateWorkflowOperation): string | null {
// Check if workflow has at least one activatable trigger
// Issue #351: executeWorkflowTrigger cannot activate workflows
const activatableTriggers = workflow.nodes.filter(
node => !node.disabled && isActivatableTrigger(node.type)
);
if (activatableTriggers.length === 0) {
return 'Cannot activate workflow: No activatable trigger nodes found. Workflows must have at least one enabled trigger node (webhook, schedule, email, etc.). Note: executeWorkflowTrigger cannot activate workflows as they can only be invoked by other workflows.';
}
return null;
}
private validateDeactivateWorkflow(workflow: Workflow, operation: DeactivateWorkflowOperation): string | null {
// Deactivation is always valid - any workflow can be deactivated
return null;
}
// Workflow activation operation appliers
private applyActivateWorkflow(workflow: Workflow, operation: ActivateWorkflowOperation): void {
// Set flag in workflow object to indicate activation intent
// The handler will call the API method after workflow update
(workflow as any)._shouldActivate = true;
}
private applyDeactivateWorkflow(workflow: Workflow, operation: DeactivateWorkflowOperation): void {
// Set flag in workflow object to indicate deactivation intent
// The handler will call the API method after workflow update
(workflow as any)._shouldDeactivate = true;
}
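The flag-based approach above leaves the actual API call to the caller; a hedged sketch of how a handler might have consumed these flags (the handler itself is not part of this diff, so the names below are assumptions and the types are loosened for brevity):

```typescript
// Hypothetical caller-side consumption of shouldActivate/shouldDeactivate (sketch only).
async function applyAndToggle(engine: any, client: any, workflow: any, request: any): Promise<void> {
  const result = await engine.applyDiff(workflow, request);
  if (!result.success) return;
  if (result.shouldActivate) await client.activateWorkflow(workflow.id);     // client method shown (removed) earlier in this diff
  if (result.shouldDeactivate) await client.deactivateWorkflow(workflow.id);
}
```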
// Connection cleanup operation validators
private validateCleanStaleConnections(workflow: Workflow, operation: CleanStaleConnectionsOperation): string | null {
// This operation is always valid - it just cleans up what it finds

View File

@@ -3,7 +3,6 @@
* Validates complete workflow structure, connections, and node configurations
*/
import crypto from 'crypto';
import { NodeRepository } from '../database/node-repository';
import { EnhancedConfigValidator } from './enhanced-config-validator';
import { ExpressionValidator } from './expression-validator';
@@ -298,11 +297,8 @@ export class WorkflowValidator {
// Check for duplicate node names
const nodeNames = new Set<string>();
const nodeIds = new Set<string>();
const nodeIdToIndex = new Map<string, number>(); // Track which node index has which ID
for (let i = 0; i < workflow.nodes.length; i++) {
const node = workflow.nodes[i];
for (const node of workflow.nodes) {
if (nodeNames.has(node.name)) {
result.errors.push({
type: 'error',
@@ -314,18 +310,13 @@ export class WorkflowValidator {
nodeNames.add(node.name);
if (nodeIds.has(node.id)) {
const firstNodeIndex = nodeIdToIndex.get(node.id);
const firstNode = firstNodeIndex !== undefined ? workflow.nodes[firstNodeIndex] : undefined;
result.errors.push({
type: 'error',
nodeId: node.id,
message: `Duplicate node ID: "${node.id}". Node at index ${i} (name: "${node.name}", type: "${node.type}") conflicts with node at index ${firstNodeIndex} (name: "${firstNode?.name || 'unknown'}", type: "${firstNode?.type || 'unknown'}"). Each node must have a unique ID. Generate a new UUID using crypto.randomUUID() - Example: {id: "${crypto.randomUUID()}", name: "${node.name}", type: "${node.type}", ...}`
message: `Duplicate node ID: "${node.id}"`
});
} else {
nodeIds.add(node.id);
nodeIdToIndex.set(node.id, i);
}
nodeIds.add(node.id);
}
// Count trigger nodes using shared trigger detection

View File

@@ -4,36 +4,14 @@
*/
import { SupabaseClient } from '@supabase/supabase-js';
import { TelemetryEvent, WorkflowTelemetry, WorkflowMutationRecord, TELEMETRY_CONFIG, TelemetryMetrics } from './telemetry-types';
import { TelemetryEvent, WorkflowTelemetry, TELEMETRY_CONFIG, TelemetryMetrics } from './telemetry-types';
import { TelemetryError, TelemetryErrorType, TelemetryCircuitBreaker } from './telemetry-error';
import { logger } from '../utils/logger';
/**
* Convert camelCase object keys to snake_case
* Needed because Supabase PostgREST doesn't auto-convert
*/
function toSnakeCase(obj: any): any {
if (obj === null || obj === undefined) return obj;
if (Array.isArray(obj)) return obj.map(toSnakeCase);
if (typeof obj !== 'object') return obj;
const result: any = {};
for (const key in obj) {
if (obj.hasOwnProperty(key)) {
// Convert camelCase to snake_case
const snakeKey = key.replace(/[A-Z]/g, letter => `_${letter.toLowerCase()}`);
// Recursively convert nested objects
result[snakeKey] = toSnakeCase(obj[key]);
}
}
return result;
}
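A quick behavioral example of the helper above (keys and values are invented):

```typescript
toSnakeCase({ workflowId: 'abc', intentClassification: 'cleanup', operations: [{ nodeName: 'HTTP Request' }] });
// -> { workflow_id: 'abc', intent_classification: 'cleanup', operations: [{ node_name: 'HTTP Request' }] }
```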
export class TelemetryBatchProcessor {
private flushTimer?: NodeJS.Timeout;
private isFlushingEvents: boolean = false;
private isFlushingWorkflows: boolean = false;
private isFlushingMutations: boolean = false;
private circuitBreaker: TelemetryCircuitBreaker;
private metrics: TelemetryMetrics = {
eventsTracked: 0,
@@ -45,7 +23,7 @@ export class TelemetryBatchProcessor {
rateLimitHits: 0
};
private flushTimes: number[] = [];
private deadLetterQueue: (TelemetryEvent | WorkflowTelemetry | WorkflowMutationRecord)[] = [];
private deadLetterQueue: (TelemetryEvent | WorkflowTelemetry)[] = [];
private readonly maxDeadLetterSize = 100;
constructor(
@@ -98,15 +76,15 @@ export class TelemetryBatchProcessor {
}
/**
* Flush events, workflows, and mutations to Supabase
* Flush events and workflows to Supabase
*/
async flush(events?: TelemetryEvent[], workflows?: WorkflowTelemetry[], mutations?: WorkflowMutationRecord[]): Promise<void> {
async flush(events?: TelemetryEvent[], workflows?: WorkflowTelemetry[]): Promise<void> {
if (!this.isEnabled() || !this.supabase) return;
// Check circuit breaker
if (!this.circuitBreaker.shouldAllow()) {
logger.debug('Circuit breaker open - skipping flush');
this.metrics.eventsDropped += (events?.length || 0) + (workflows?.length || 0) + (mutations?.length || 0);
this.metrics.eventsDropped += (events?.length || 0) + (workflows?.length || 0);
return;
}
@@ -123,11 +101,6 @@ export class TelemetryBatchProcessor {
hasErrors = !(await this.flushWorkflows(workflows)) || hasErrors;
}
// Flush mutations if provided
if (mutations && mutations.length > 0) {
hasErrors = !(await this.flushMutations(mutations)) || hasErrors;
}
// Record flush time
const flushTime = Date.now() - startTime;
this.recordFlushTime(flushTime);
@@ -251,71 +224,6 @@ export class TelemetryBatchProcessor {
}
}
/**
* Flush workflow mutations with batching
*/
private async flushMutations(mutations: WorkflowMutationRecord[]): Promise<boolean> {
if (this.isFlushingMutations || mutations.length === 0) return true;
this.isFlushingMutations = true;
try {
// Batch mutations
const batches = this.createBatches(mutations, TELEMETRY_CONFIG.MAX_BATCH_SIZE);
for (const batch of batches) {
const result = await this.executeWithRetry(async () => {
// Convert camelCase to snake_case for Supabase
const snakeCaseBatch = batch.map(mutation => toSnakeCase(mutation));
const { error } = await this.supabase!
.from('workflow_mutations')
.insert(snakeCaseBatch);
if (error) {
// Enhanced error logging for mutation flushes
logger.error('Mutation insert error details:', {
code: (error as any).code,
message: (error as any).message,
details: (error as any).details,
hint: (error as any).hint,
fullError: String(error)
});
throw error;
}
logger.debug(`Flushed batch of ${batch.length} workflow mutations`);
return true;
}, 'Flush workflow mutations');
if (result) {
this.metrics.eventsTracked += batch.length;
this.metrics.batchesSent++;
} else {
this.metrics.eventsFailed += batch.length;
this.metrics.batchesFailed++;
this.addToDeadLetterQueue(batch);
return false;
}
}
return true;
} catch (error) {
logger.error('Failed to flush mutations with details:', {
errorMsg: error instanceof Error ? error.message : String(error),
errorType: error instanceof Error ? error.constructor.name : typeof error
});
throw new TelemetryError(
TelemetryErrorType.NETWORK_ERROR,
'Failed to flush workflow mutations',
{ error: error instanceof Error ? error.message : String(error) },
true
);
} finally {
this.isFlushingMutations = false;
}
}
/**
* Execute operation with exponential backoff retry
*/
@@ -397,7 +305,7 @@ export class TelemetryBatchProcessor {
/**
* Add failed items to dead letter queue
*/
private addToDeadLetterQueue(items: (TelemetryEvent | WorkflowTelemetry | WorkflowMutationRecord)[]): void {
private addToDeadLetterQueue(items: (TelemetryEvent | WorkflowTelemetry)[]): void {
for (const item of items) {
this.deadLetterQueue.push(item);

View File

@@ -4,7 +4,7 @@
* Now uses shared sanitization utilities to avoid code duplication
*/
import { TelemetryEvent, WorkflowTelemetry, WorkflowMutationRecord } from './telemetry-types';
import { TelemetryEvent, WorkflowTelemetry } from './telemetry-types';
import { WorkflowSanitizer } from './workflow-sanitizer';
import { TelemetryRateLimiter } from './rate-limiter';
import { TelemetryEventValidator } from './event-validator';
@@ -19,7 +19,6 @@ export class TelemetryEventTracker {
private validator: TelemetryEventValidator;
private eventQueue: TelemetryEvent[] = [];
private workflowQueue: WorkflowTelemetry[] = [];
private mutationQueue: WorkflowMutationRecord[] = [];
private previousTool?: string;
private previousToolTimestamp: number = 0;
private performanceMetrics: Map<string, number[]> = new Map();
@@ -326,13 +325,6 @@ export class TelemetryEventTracker {
return [...this.workflowQueue];
}
/**
* Get queued mutations
*/
getMutationQueue(): WorkflowMutationRecord[] {
return [...this.mutationQueue];
}
/**
* Clear event queue
*/
@@ -347,28 +339,6 @@ export class TelemetryEventTracker {
this.workflowQueue = [];
}
/**
* Clear mutation queue
*/
clearMutationQueue(): void {
this.mutationQueue = [];
}
/**
* Enqueue mutation for batch processing
*/
enqueueMutation(mutation: WorkflowMutationRecord): void {
if (!this.isEnabled()) return;
this.mutationQueue.push(mutation);
}
/**
* Get mutation queue size
*/
getMutationQueueSize(): number {
return this.mutationQueue.length;
}
/**
* Get tracking statistics
*/
@@ -378,7 +348,6 @@ export class TelemetryEventTracker {
validator: this.validator.getStats(),
eventQueueSize: this.eventQueue.length,
workflowQueueSize: this.workflowQueue.length,
mutationQueueSize: this.mutationQueue.length,
performanceMetrics: this.getPerformanceStats()
};
}

View File

@@ -1,243 +0,0 @@
/**
* Intent classifier for workflow mutations
* Analyzes operations to determine the intent/pattern of the mutation
*/
import { DiffOperation } from '../types/workflow-diff.js';
import { IntentClassification } from './mutation-types.js';
/**
* Classifies the intent of a workflow mutation based on operations performed
*/
export class IntentClassifier {
/**
* Classify mutation intent from operations and optional user intent text
*/
classify(operations: DiffOperation[], userIntent?: string): IntentClassification {
if (operations.length === 0) {
return IntentClassification.UNKNOWN;
}
// First, try to classify from user intent text if provided
if (userIntent) {
const textClassification = this.classifyFromText(userIntent);
if (textClassification !== IntentClassification.UNKNOWN) {
return textClassification;
}
}
// Fall back to operation pattern analysis
return this.classifyFromOperations(operations);
}
/**
* Classify from user intent text using keyword matching
*/
private classifyFromText(intent: string): IntentClassification {
const lowerIntent = intent.toLowerCase();
// Fix validation errors
if (
lowerIntent.includes('fix') ||
lowerIntent.includes('resolve') ||
lowerIntent.includes('correct') ||
lowerIntent.includes('repair') ||
lowerIntent.includes('error')
) {
return IntentClassification.FIX_VALIDATION;
}
// Add new functionality
if (
lowerIntent.includes('add') ||
lowerIntent.includes('create') ||
lowerIntent.includes('insert') ||
lowerIntent.includes('new node')
) {
return IntentClassification.ADD_FUNCTIONALITY;
}
// Modify configuration
if (
lowerIntent.includes('update') ||
lowerIntent.includes('change') ||
lowerIntent.includes('modify') ||
lowerIntent.includes('configure') ||
lowerIntent.includes('set')
) {
return IntentClassification.MODIFY_CONFIGURATION;
}
// Rewire logic
if (
lowerIntent.includes('connect') ||
lowerIntent.includes('reconnect') ||
lowerIntent.includes('rewire') ||
lowerIntent.includes('reroute') ||
lowerIntent.includes('link')
) {
return IntentClassification.REWIRE_LOGIC;
}
// Cleanup
if (
lowerIntent.includes('remove') ||
lowerIntent.includes('delete') ||
lowerIntent.includes('clean') ||
lowerIntent.includes('disable')
) {
return IntentClassification.CLEANUP;
}
return IntentClassification.UNKNOWN;
}
/**
* Classify from operation patterns
*/
private classifyFromOperations(operations: DiffOperation[]): IntentClassification {
const opTypes = operations.map((op) => op.type);
const opTypeSet = new Set(opTypes);
// Pattern: Adding nodes and connections (add functionality)
if (opTypeSet.has('addNode') && opTypeSet.has('addConnection')) {
return IntentClassification.ADD_FUNCTIONALITY;
}
// Pattern: Only adding nodes (add functionality)
if (opTypeSet.has('addNode') && !opTypeSet.has('removeNode')) {
return IntentClassification.ADD_FUNCTIONALITY;
}
// Pattern: Removing nodes or connections (cleanup)
if (opTypeSet.has('removeNode') || opTypeSet.has('removeConnection')) {
return IntentClassification.CLEANUP;
}
// Pattern: Disabling nodes (cleanup)
if (opTypeSet.has('disableNode')) {
return IntentClassification.CLEANUP;
}
// Pattern: Rewiring connections
if (
opTypeSet.has('rewireConnection') ||
opTypeSet.has('replaceConnections') ||
(opTypeSet.has('addConnection') && opTypeSet.has('removeConnection'))
) {
return IntentClassification.REWIRE_LOGIC;
}
// Pattern: Only updating nodes (modify configuration)
if (opTypeSet.has('updateNode') && opTypes.every((t) => t === 'updateNode')) {
return IntentClassification.MODIFY_CONFIGURATION;
}
// Pattern: Updating settings or metadata (modify configuration)
if (
opTypeSet.has('updateSettings') ||
opTypeSet.has('updateName') ||
opTypeSet.has('addTag') ||
opTypeSet.has('removeTag')
) {
return IntentClassification.MODIFY_CONFIGURATION;
}
// Pattern: Mix of updates with some additions/removals (modify configuration)
if (opTypeSet.has('updateNode')) {
return IntentClassification.MODIFY_CONFIGURATION;
}
// Pattern: Moving nodes (modify configuration)
if (opTypeSet.has('moveNode')) {
return IntentClassification.MODIFY_CONFIGURATION;
}
// Pattern: Enabling nodes (could be fixing)
if (opTypeSet.has('enableNode')) {
return IntentClassification.FIX_VALIDATION;
}
// Pattern: Clean stale connections (cleanup)
if (opTypeSet.has('cleanStaleConnections')) {
return IntentClassification.CLEANUP;
}
return IntentClassification.UNKNOWN;
}
/**
* Get confidence score for classification (0-1)
* Higher score means more confident in the classification
*/
getConfidence(
classification: IntentClassification,
operations: DiffOperation[],
userIntent?: string
): number {
// High confidence if user intent matches operation pattern
if (userIntent && this.classifyFromText(userIntent) === classification) {
return 0.9;
}
// Medium-high confidence for clear operation patterns
if (classification !== IntentClassification.UNKNOWN) {
const opTypes = new Set(operations.map((op) => op.type));
// Very clear patterns get high confidence
if (
classification === IntentClassification.ADD_FUNCTIONALITY &&
opTypes.has('addNode')
) {
return 0.8;
}
if (
classification === IntentClassification.CLEANUP &&
(opTypes.has('removeNode') || opTypes.has('removeConnection'))
) {
return 0.8;
}
if (
classification === IntentClassification.REWIRE_LOGIC &&
opTypes.has('rewireConnection')
) {
return 0.8;
}
// Other patterns get medium confidence
return 0.6;
}
// Low confidence for unknown classification
return 0.3;
}
/**
* Get human-readable description of the classification
*/
getDescription(classification: IntentClassification): string {
switch (classification) {
case IntentClassification.ADD_FUNCTIONALITY:
return 'Adding new nodes or functionality to the workflow';
case IntentClassification.MODIFY_CONFIGURATION:
return 'Modifying configuration of existing nodes';
case IntentClassification.REWIRE_LOGIC:
return 'Changing workflow execution flow by rewiring connections';
case IntentClassification.FIX_VALIDATION:
return 'Fixing validation errors or issues';
case IntentClassification.CLEANUP:
return 'Removing or disabling nodes and connections';
case IntentClassification.UNKNOWN:
return 'Unknown or complex mutation pattern';
default:
return 'Unclassified mutation';
}
}
}
/**
* Singleton instance for easy access
*/
export const intentClassifier = new IntentClassifier();
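A minimal usage sketch of the classifier above. The operation objects are illustrative (only `type` drives classifyFromOperations), and `classify()` itself is not visible in this hunk but is invoked by the mutation tracker below with this signature, so its existence is an assumption here.
import { intentClassifier } from './intent-classifier.js';
import { IntentClassification } from './mutation-types.js';
import type { DiffOperation } from '../types/workflow-diff.js';

// Hypothetical diff: add an HTTP Request node and wire it to the Webhook node.
const operations = [
  { type: 'addNode', node: { name: 'HTTP Request', type: 'n8n-nodes-base.httpRequest' } },
  { type: 'addConnection', source: 'Webhook', target: 'HTTP Request' },
] as unknown as DiffOperation[];

const classification = intentClassifier.classify(operations, 'Add an HTTP call after the webhook');
// addNode + addConnection => ADD_FUNCTIONALITY, confidence 0.8 (or 0.9 when the
// free-text intent also classifies as add_functionality).
console.log(classification === IntentClassification.ADD_FUNCTIONALITY);
console.log(intentClassifier.getConfidence(classification, operations));
console.log(intentClassifier.getDescription(classification));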

View File

@@ -1,187 +0,0 @@
/**
* Intent sanitizer for removing PII from user intent strings
* Ensures privacy by masking sensitive information
*/
/**
* Patterns for detecting and removing PII
*/
const PII_PATTERNS = {
// Email addresses
email: /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/gi,
// URLs with domains
url: /https?:\/\/[^\s]+/gi,
// IP addresses
ip: /\b(?:\d{1,3}\.){3}\d{1,3}\b/g,
// Phone numbers (various formats)
phone: /\b(?:\+?\d{1,3}[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b/g,
// Credit card-like numbers (groups of 4 digits)
creditCard: /\b\d{4}[-\s]?\d{4}[-\s]?\d{4}[-\s]?\d{4}\b/g,
// API keys and tokens (long alphanumeric strings)
apiKey: /\b[A-Za-z0-9_-]{32,}\b/g,
// UUIDs
uuid: /\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b/gi,
// File paths (Unix and Windows)
filePath: /(?:\/[\w.-]+)+\/?|(?:[A-Z]:\\(?:[\w.-]+\\)*[\w.-]+)/g,
// Potential passwords or secrets (common patterns)
secret: /\b(?:password|passwd|pwd|secret|token|key)[:=\s]+[^\s]+/gi,
};
/**
* Company/organization name patterns to anonymize
* These are common patterns that might appear in workflow intents
*/
const COMPANY_PATTERNS = {
// Company suffixes
companySuffix: /\b\w+(?:\s+(?:Inc|LLC|Corp|Corporation|Ltd|Limited|GmbH|AG)\.?)\b/gi,
// Common business terms that might indicate company names
businessContext: /\b(?:company|organization|client|customer)\s+(?:named?|called)\s+\w+/gi,
};
/**
* Sanitizes user intent by removing PII and sensitive information
*/
export class IntentSanitizer {
/**
* Sanitize user intent string
*/
sanitize(intent: string): string {
if (!intent) {
return intent;
}
let sanitized = intent;
// Remove email addresses
sanitized = sanitized.replace(PII_PATTERNS.email, '[EMAIL]');
// Remove URLs
sanitized = sanitized.replace(PII_PATTERNS.url, '[URL]');
// Remove IP addresses
sanitized = sanitized.replace(PII_PATTERNS.ip, '[IP_ADDRESS]');
// Remove phone numbers
sanitized = sanitized.replace(PII_PATTERNS.phone, '[PHONE]');
// Remove credit card numbers
sanitized = sanitized.replace(PII_PATTERNS.creditCard, '[CARD_NUMBER]');
// Remove API keys and long tokens
sanitized = sanitized.replace(PII_PATTERNS.apiKey, '[API_KEY]');
// Remove UUIDs
sanitized = sanitized.replace(PII_PATTERNS.uuid, '[UUID]');
// Remove file paths
sanitized = sanitized.replace(PII_PATTERNS.filePath, '[FILE_PATH]');
// Remove secrets/passwords
sanitized = sanitized.replace(PII_PATTERNS.secret, '[SECRET]');
// Anonymize company names
sanitized = sanitized.replace(COMPANY_PATTERNS.companySuffix, '[COMPANY]');
sanitized = sanitized.replace(COMPANY_PATTERNS.businessContext, '[COMPANY_CONTEXT]');
// Clean up multiple spaces
sanitized = sanitized.replace(/\s{2,}/g, ' ').trim();
return sanitized;
}
/**
* Check if intent contains potential PII
*/
containsPII(intent: string): boolean {
if (!intent) {
return false;
}
const detected = Object.values(PII_PATTERNS).some((pattern) => pattern.test(intent));
// Reset lastIndex so the global regexes stay stateless across calls
Object.values(PII_PATTERNS).forEach((pattern) => {
pattern.lastIndex = 0;
});
return detected;
}
/**
* Get list of PII types detected in the intent
*/
detectPIITypes(intent: string): string[] {
if (!intent) {
return [];
}
const detected: string[] = [];
if (PII_PATTERNS.email.test(intent)) detected.push('email');
if (PII_PATTERNS.url.test(intent)) detected.push('url');
if (PII_PATTERNS.ip.test(intent)) detected.push('ip_address');
if (PII_PATTERNS.phone.test(intent)) detected.push('phone');
if (PII_PATTERNS.creditCard.test(intent)) detected.push('credit_card');
if (PII_PATTERNS.apiKey.test(intent)) detected.push('api_key');
if (PII_PATTERNS.uuid.test(intent)) detected.push('uuid');
if (PII_PATTERNS.filePath.test(intent)) detected.push('file_path');
if (PII_PATTERNS.secret.test(intent)) detected.push('secret');
// Reset lastIndex for global regexes
Object.values(PII_PATTERNS).forEach((pattern) => {
pattern.lastIndex = 0;
});
return detected;
}
/**
* Truncate intent to maximum length while preserving meaning
*/
truncate(intent: string, maxLength: number = 1000): string {
if (!intent || intent.length <= maxLength) {
return intent;
}
// Try to truncate at sentence boundary
const truncated = intent.substring(0, maxLength);
const lastSentence = truncated.lastIndexOf('.');
const lastSpace = truncated.lastIndexOf(' ');
if (lastSentence > maxLength * 0.8) {
return truncated.substring(0, lastSentence + 1);
} else if (lastSpace > maxLength * 0.9) {
return truncated.substring(0, lastSpace) + '...';
}
return truncated + '...';
}
/**
* Validate intent is safe for telemetry
*/
isSafeForTelemetry(intent: string): boolean {
if (!intent) {
return true;
}
// Check length
if (intent.length > 5000) {
return false;
}
// Check for null bytes or control characters
if (/[\x00-\x08\x0B\x0C\x0E-\x1F]/.test(intent)) {
return false;
}
return true;
}
}
/**
* Singleton instance for easy access
*/
export const intentSanitizer = new IntentSanitizer();
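A short usage sketch of the sanitizer above; the input string and the expected placeholders are illustrative.
import { intentSanitizer } from './intent-sanitizer.js';

const raw = 'Email the weekly report to alice@example.com and post it to https://api.example.com/v1/reports';

if (intentSanitizer.containsPII(raw)) {
  // Likely ['email', 'url'] here; the URL path can also trip the file-path pattern.
  console.log(intentSanitizer.detectPIITypes(raw));
}

// => 'Email the weekly report to [EMAIL] and post it to [URL]'
const safe = intentSanitizer.truncate(intentSanitizer.sanitize(raw), 1000);
console.log(intentSanitizer.isSafeForTelemetry(safe)); // true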

View File

@@ -1,369 +0,0 @@
/**
* Core mutation tracker for workflow transformations
* Coordinates validation, classification, and metric calculation
*/
import { DiffOperation } from '../types/workflow-diff.js';
import {
WorkflowMutationData,
WorkflowMutationRecord,
MutationChangeMetrics,
MutationValidationMetrics,
IntentClassification,
} from './mutation-types.js';
import { intentClassifier } from './intent-classifier.js';
import { mutationValidator } from './mutation-validator.js';
import { intentSanitizer } from './intent-sanitizer.js';
import { WorkflowSanitizer } from './workflow-sanitizer.js';
import { logger } from '../utils/logger.js';
/**
* Tracks workflow mutations and prepares data for telemetry
*/
export class MutationTracker {
private recentMutations: Array<{
hashBefore: string;
hashAfter: string;
operations: DiffOperation[];
}> = [];
private readonly RECENT_MUTATIONS_LIMIT = 100;
/**
* Process and prepare mutation data for tracking
*/
async processMutation(data: WorkflowMutationData, userId: string): Promise<WorkflowMutationRecord | null> {
try {
// Validate data quality
if (!this.validateMutationData(data)) {
logger.debug('Mutation data validation failed');
return null;
}
// Sanitize workflows to remove credentials and sensitive data
const workflowBefore = this.sanitizeFullWorkflow(data.workflowBefore);
const workflowAfter = this.sanitizeFullWorkflow(data.workflowAfter);
// Sanitize user intent
const sanitizedIntent = intentSanitizer.sanitize(data.userIntent);
// Check if should be excluded
if (mutationValidator.shouldExclude(data)) {
logger.debug('Mutation excluded from tracking based on quality criteria');
return null;
}
// Check for duplicates
if (
mutationValidator.isDuplicate(
workflowBefore,
workflowAfter,
data.operations,
this.recentMutations
)
) {
logger.debug('Duplicate mutation detected, skipping tracking');
return null;
}
// Generate hashes
const hashBefore = mutationValidator.hashWorkflow(workflowBefore);
const hashAfter = mutationValidator.hashWorkflow(workflowAfter);
// Classify intent
const intentClassification = intentClassifier.classify(data.operations, sanitizedIntent);
// Calculate metrics
const changeMetrics = this.calculateChangeMetrics(data.operations);
const validationMetrics = this.calculateValidationMetrics(
data.validationBefore,
data.validationAfter
);
// Create mutation record
const record: WorkflowMutationRecord = {
userId,
sessionId: data.sessionId,
workflowBefore,
workflowAfter,
workflowHashBefore: hashBefore,
workflowHashAfter: hashAfter,
userIntent: sanitizedIntent,
intentClassification,
toolName: data.toolName,
operations: data.operations,
operationCount: data.operations.length,
operationTypes: this.extractOperationTypes(data.operations),
validationBefore: data.validationBefore,
validationAfter: data.validationAfter,
...validationMetrics,
...changeMetrics,
mutationSuccess: data.mutationSuccess,
mutationError: data.mutationError,
durationMs: data.durationMs,
};
// Store in recent mutations for deduplication
this.addToRecentMutations(hashBefore, hashAfter, data.operations);
return record;
} catch (error) {
logger.error('Error processing mutation:', error);
return null;
}
}
/**
* Validate mutation data
*/
private validateMutationData(data: WorkflowMutationData): boolean {
const validationResult = mutationValidator.validate(data);
if (!validationResult.valid) {
logger.warn('Mutation data validation failed:', validationResult.errors);
return false;
}
if (validationResult.warnings.length > 0) {
logger.debug('Mutation data validation warnings:', validationResult.warnings);
}
return true;
}
/**
* Calculate change metrics from operations
*/
private calculateChangeMetrics(operations: DiffOperation[]): MutationChangeMetrics {
const metrics: MutationChangeMetrics = {
nodesAdded: 0,
nodesRemoved: 0,
nodesModified: 0,
connectionsAdded: 0,
connectionsRemoved: 0,
propertiesChanged: 0,
};
for (const op of operations) {
switch (op.type) {
case 'addNode':
metrics.nodesAdded++;
break;
case 'removeNode':
metrics.nodesRemoved++;
break;
case 'updateNode':
metrics.nodesModified++;
if ('updates' in op && op.updates) {
metrics.propertiesChanged += Object.keys(op.updates as any).length;
}
break;
case 'addConnection':
metrics.connectionsAdded++;
break;
case 'removeConnection':
metrics.connectionsRemoved++;
break;
case 'rewireConnection':
// Rewiring is effectively removing + adding
metrics.connectionsRemoved++;
metrics.connectionsAdded++;
break;
case 'replaceConnections':
// A replacement counts as at least one removal plus one addition
if ('connections' in op && op.connections) {
metrics.connectionsRemoved++;
metrics.connectionsAdded++;
}
break;
case 'updateSettings':
if ('settings' in op && op.settings) {
metrics.propertiesChanged += Object.keys(op.settings as any).length;
}
break;
case 'moveNode':
case 'enableNode':
case 'disableNode':
case 'updateName':
case 'addTag':
case 'removeTag':
case 'activateWorkflow':
case 'deactivateWorkflow':
case 'cleanStaleConnections':
// These don't directly affect node/connection counts
// but count as property changes
metrics.propertiesChanged++;
break;
}
}
return metrics;
}
/**
* Sanitize a full workflow while preserving structure
* Removes credentials and sensitive data but keeps all nodes, connections, parameters
*/
private sanitizeFullWorkflow(workflow: any): any {
if (!workflow) return workflow;
// Deep clone to avoid modifying original
const sanitized = JSON.parse(JSON.stringify(workflow));
// Remove sensitive workflow-level fields
delete sanitized.credentials;
delete sanitized.sharedWorkflows;
delete sanitized.ownedBy;
delete sanitized.createdBy;
delete sanitized.updatedBy;
// Sanitize each node
if (sanitized.nodes && Array.isArray(sanitized.nodes)) {
sanitized.nodes = sanitized.nodes.map((node: any) => {
const sanitizedNode = { ...node };
// Remove credentials field
delete sanitizedNode.credentials;
// Sanitize parameters if present
if (sanitizedNode.parameters && typeof sanitizedNode.parameters === 'object') {
sanitizedNode.parameters = this.sanitizeParameters(sanitizedNode.parameters);
}
return sanitizedNode;
});
}
return sanitized;
}
/**
* Recursively sanitize parameters object
*/
private sanitizeParameters(params: any): any {
if (!params || typeof params !== 'object') return params;
const sensitiveKeys = [
'apiKey', 'api_key', 'token', 'secret', 'password', 'credential',
'auth', 'authorization', 'privateKey', 'accessToken', 'refreshToken'
];
const sanitized: any = Array.isArray(params) ? [] : {};
for (const [key, value] of Object.entries(params)) {
const lowerKey = key.toLowerCase();
// Check if key is sensitive
if (sensitiveKeys.some(sk => lowerKey.includes(sk.toLowerCase()))) {
sanitized[key] = '[REDACTED]';
} else if (typeof value === 'object' && value !== null) {
// Recursively sanitize nested objects
sanitized[key] = this.sanitizeParameters(value);
} else if (typeof value === 'string') {
// Sanitize string values that might contain sensitive data
sanitized[key] = this.sanitizeStringValue(value);
} else {
sanitized[key] = value;
}
}
return sanitized;
}
/**
* Sanitize string values that might contain sensitive data
*/
private sanitizeStringValue(value: string): string {
if (!value || typeof value !== 'string') return value;
let sanitized = value;
// Redact URLs with authentication
sanitized = sanitized.replace(/https?:\/\/[^:]+:[^@]+@[^\s/]+/g, '[REDACTED_URL_WITH_AUTH]');
// Redact long API keys/tokens (32+ alphanumeric chars)
sanitized = sanitized.replace(/\b[A-Za-z0-9_-]{32,}\b/g, '[REDACTED_TOKEN]');
// Redact OpenAI-style keys
sanitized = sanitized.replace(/\bsk-[A-Za-z0-9]{32,}\b/g, '[REDACTED_APIKEY]');
// Redact Bearer tokens
sanitized = sanitized.replace(/Bearer\s+[^\s]+/gi, 'Bearer [REDACTED]');
return sanitized;
}
/**
* Calculate validation improvement metrics
*/
private calculateValidationMetrics(
validationBefore: any,
validationAfter: any
): MutationValidationMetrics {
// If validation data is missing, return nulls
if (!validationBefore || !validationAfter) {
return {
validationImproved: null,
errorsResolved: 0,
errorsIntroduced: 0,
};
}
const errorsBefore = validationBefore.errors?.length || 0;
const errorsAfter = validationAfter.errors?.length || 0;
const errorsResolved = Math.max(0, errorsBefore - errorsAfter);
const errorsIntroduced = Math.max(0, errorsAfter - errorsBefore);
const validationImproved = errorsBefore > errorsAfter;
return {
validationImproved,
errorsResolved,
errorsIntroduced,
};
}
/**
* Extract unique operation types from operations
*/
private extractOperationTypes(operations: DiffOperation[]): string[] {
const types = new Set(operations.map((op) => op.type));
return Array.from(types);
}
/**
* Add mutation to recent list for deduplication
*/
private addToRecentMutations(
hashBefore: string,
hashAfter: string,
operations: DiffOperation[]
): void {
this.recentMutations.push({ hashBefore, hashAfter, operations });
// Keep only recent mutations
if (this.recentMutations.length > this.RECENT_MUTATIONS_LIMIT) {
this.recentMutations.shift();
}
}
/**
* Clear recent mutations (useful for testing)
*/
clearRecentMutations(): void {
this.recentMutations = [];
}
/**
* Get statistics about tracked mutations
*/
getRecentMutationsCount(): number {
return this.recentMutations.length;
}
}
/**
* Singleton instance for easy access
*/
export const mutationTracker = new MutationTracker();
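A sketch of how the tracker above might be driven from an update handler, assuming it runs inside an async context (e.g. an ES module with top-level await); the workflow fragments and operation fields are minimal illustrations, not real n8n payloads.
import { mutationTracker } from './mutation-tracker.js';
import { MutationToolName, WorkflowMutationData } from './mutation-types.js';

const data: WorkflowMutationData = {
  sessionId: 'session-123',
  toolName: MutationToolName.UPDATE_PARTIAL,
  userIntent: 'Rename the Slack node',
  operations: [
    { type: 'updateNode', nodeName: 'Slack', updates: { name: 'Slack Alerts' } } as any,
  ],
  workflowBefore: { nodes: [{ name: 'Slack' }], connections: {} },
  workflowAfter: { nodes: [{ name: 'Slack Alerts' }], connections: {} },
  mutationSuccess: true,
  durationMs: 120,
};

const record = await mutationTracker.processMutation(data, 'user-abc');
if (record) {
  // A pure updateNode diff typically classifies as modify_configuration,
  // with nodesModified = 1 and propertiesChanged = 1.
  console.log(record.intentClassification, record.nodesModified, record.propertiesChanged);
}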

View File

@@ -1,154 +0,0 @@
/**
* Types and interfaces for workflow mutation tracking
* Purpose: Track workflow transformations to improve partial updates tooling
*/
import { DiffOperation } from '../types/workflow-diff.js';
/**
* Intent classification for workflow mutations
*/
export enum IntentClassification {
ADD_FUNCTIONALITY = 'add_functionality',
MODIFY_CONFIGURATION = 'modify_configuration',
REWIRE_LOGIC = 'rewire_logic',
FIX_VALIDATION = 'fix_validation',
CLEANUP = 'cleanup',
UNKNOWN = 'unknown',
}
/**
* Tool names that perform workflow mutations
*/
export enum MutationToolName {
UPDATE_PARTIAL = 'n8n_update_partial_workflow',
UPDATE_FULL = 'n8n_update_full_workflow',
}
/**
* Validation result structure
*/
export interface ValidationResult {
valid: boolean;
errors: Array<{
type: string;
message: string;
severity?: string;
location?: string;
}>;
warnings?: Array<{
type: string;
message: string;
}>;
}
/**
* Change metrics calculated from workflow mutation
*/
export interface MutationChangeMetrics {
nodesAdded: number;
nodesRemoved: number;
nodesModified: number;
connectionsAdded: number;
connectionsRemoved: number;
propertiesChanged: number;
}
/**
* Validation improvement metrics
*/
export interface MutationValidationMetrics {
validationImproved: boolean | null;
errorsResolved: number;
errorsIntroduced: number;
}
/**
* Input data for tracking a workflow mutation
*/
export interface WorkflowMutationData {
sessionId: string;
toolName: MutationToolName;
userIntent: string;
operations: DiffOperation[];
workflowBefore: any;
workflowAfter: any;
validationBefore?: ValidationResult;
validationAfter?: ValidationResult;
mutationSuccess: boolean;
mutationError?: string;
durationMs: number;
}
/**
* Complete mutation record for database storage
*/
export interface WorkflowMutationRecord {
id?: string;
userId: string;
sessionId: string;
workflowBefore: any;
workflowAfter: any;
workflowHashBefore: string;
workflowHashAfter: string;
userIntent: string;
intentClassification: IntentClassification;
toolName: MutationToolName;
operations: DiffOperation[];
operationCount: number;
operationTypes: string[];
validationBefore?: ValidationResult;
validationAfter?: ValidationResult;
validationImproved: boolean | null;
errorsResolved: number;
errorsIntroduced: number;
nodesAdded: number;
nodesRemoved: number;
nodesModified: number;
connectionsAdded: number;
connectionsRemoved: number;
propertiesChanged: number;
mutationSuccess: boolean;
mutationError?: string;
durationMs: number;
createdAt?: Date;
}
/**
* Options for mutation tracking
*/
export interface MutationTrackingOptions {
/** Whether to track this mutation (default: true) */
enabled?: boolean;
/** Maximum workflow size in KB to track (default: 500) */
maxWorkflowSizeKb?: number;
/** Whether to validate data quality before tracking (default: true) */
validateQuality?: boolean;
/** Whether to sanitize workflows for PII (default: true) */
sanitize?: boolean;
}
/**
* Mutation tracking statistics for monitoring
*/
export interface MutationTrackingStats {
totalMutationsTracked: number;
successfulMutations: number;
failedMutations: number;
mutationsWithValidationImprovement: number;
averageDurationMs: number;
intentClassificationBreakdown: Record<IntentClassification, number>;
operationTypeBreakdown: Record<string, number>;
}
/**
* Data quality validation result
*/
export interface MutationDataQualityResult {
valid: boolean;
errors: string[];
warnings: string[];
}
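A small illustration of how these validation types feed the metrics; the error entries are made up, and the arithmetic mirrors calculateValidationMetrics in the tracker above.
import { ValidationResult, MutationValidationMetrics } from './mutation-types.js';

const validationBefore: ValidationResult = {
  valid: false,
  errors: [
    { type: 'connection', message: 'Node "Slack" has no incoming connection' },
    { type: 'required_field', message: 'Missing "channel" on node "Slack"' },
  ],
};
const validationAfter: ValidationResult = { valid: true, errors: [] };

const metrics: MutationValidationMetrics = {
  validationImproved: validationBefore.errors.length > validationAfter.errors.length,
  errorsResolved: Math.max(0, validationBefore.errors.length - validationAfter.errors.length),
  errorsIntroduced: Math.max(0, validationAfter.errors.length - validationBefore.errors.length),
};
// => { validationImproved: true, errorsResolved: 2, errorsIntroduced: 0 }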

View File

@@ -1,237 +0,0 @@
/**
* Data quality validator for workflow mutations
* Ensures mutation data meets quality standards before tracking
*/
import { createHash } from 'crypto';
import {
WorkflowMutationData,
MutationDataQualityResult,
MutationTrackingOptions,
} from './mutation-types.js';
/**
* Default options for mutation tracking
*/
export const DEFAULT_MUTATION_TRACKING_OPTIONS: Required<MutationTrackingOptions> = {
enabled: true,
maxWorkflowSizeKb: 500,
validateQuality: true,
sanitize: true,
};
/**
* Validates workflow mutation data quality
*/
export class MutationValidator {
private options: Required<MutationTrackingOptions>;
constructor(options: MutationTrackingOptions = {}) {
this.options = { ...DEFAULT_MUTATION_TRACKING_OPTIONS, ...options };
}
/**
* Validate mutation data quality
*/
validate(data: WorkflowMutationData): MutationDataQualityResult {
const errors: string[] = [];
const warnings: string[] = [];
// Check workflow structure
if (!this.isValidWorkflow(data.workflowBefore)) {
errors.push('Invalid workflow_before structure');
}
if (!this.isValidWorkflow(data.workflowAfter)) {
errors.push('Invalid workflow_after structure');
}
// Check workflow size
const beforeSizeKb = this.getWorkflowSizeKb(data.workflowBefore);
const afterSizeKb = this.getWorkflowSizeKb(data.workflowAfter);
if (beforeSizeKb > this.options.maxWorkflowSizeKb) {
errors.push(
`workflow_before size (${beforeSizeKb}KB) exceeds maximum (${this.options.maxWorkflowSizeKb}KB)`
);
}
if (afterSizeKb > this.options.maxWorkflowSizeKb) {
errors.push(
`workflow_after size (${afterSizeKb}KB) exceeds maximum (${this.options.maxWorkflowSizeKb}KB)`
);
}
// Check for meaningful change
if (!this.hasMeaningfulChange(data.workflowBefore, data.workflowAfter)) {
warnings.push('No meaningful change detected between before and after workflows');
}
// Check intent quality
if (!data.userIntent || data.userIntent.trim().length === 0) {
warnings.push('User intent is empty');
} else if (data.userIntent.trim().length < 5) {
warnings.push('User intent is too short (less than 5 characters)');
} else if (data.userIntent.length > 1000) {
warnings.push('User intent is very long (over 1000 characters)');
}
// Check operations
if (!data.operations || data.operations.length === 0) {
errors.push('No operations provided');
}
// Check validation data consistency
if (data.validationBefore && data.validationAfter) {
if (typeof data.validationBefore.valid !== 'boolean') {
warnings.push('Invalid validation_before structure');
}
if (typeof data.validationAfter.valid !== 'boolean') {
warnings.push('Invalid validation_after structure');
}
}
// Check duration sanity
if (data.durationMs !== undefined) {
if (data.durationMs < 0) {
errors.push('Duration cannot be negative');
}
if (data.durationMs > 300000) {
// 5 minutes
warnings.push('Duration is very long (over 5 minutes)');
}
}
return {
valid: errors.length === 0,
errors,
warnings,
};
}
/**
* Check if workflow has valid structure
*/
private isValidWorkflow(workflow: any): boolean {
if (!workflow || typeof workflow !== 'object') {
return false;
}
// Must have nodes array
if (!Array.isArray(workflow.nodes)) {
return false;
}
// Must have connections object
if (!workflow.connections || typeof workflow.connections !== 'object') {
return false;
}
return true;
}
/**
* Get workflow size in KB
*/
private getWorkflowSizeKb(workflow: any): number {
try {
const json = JSON.stringify(workflow);
return json.length / 1024;
} catch {
return 0;
}
}
/**
* Check if there's meaningful change between workflows
*/
private hasMeaningfulChange(workflowBefore: any, workflowAfter: any): boolean {
try {
// Compare hashes
const hashBefore = this.hashWorkflow(workflowBefore);
const hashAfter = this.hashWorkflow(workflowAfter);
return hashBefore !== hashAfter;
} catch {
return false;
}
}
/**
* Hash workflow for comparison
*/
hashWorkflow(workflow: any): string {
try {
const json = JSON.stringify(workflow);
return createHash('sha256').update(json).digest('hex').substring(0, 16);
} catch {
return '';
}
}
/**
* Check if mutation should be excluded from tracking
*/
shouldExclude(data: WorkflowMutationData): boolean {
// Exclude if not successful and no error message
if (!data.mutationSuccess && !data.mutationError) {
return true;
}
// Exclude if workflows are identical
if (!this.hasMeaningfulChange(data.workflowBefore, data.workflowAfter)) {
return true;
}
// Exclude if workflow size exceeds limits
const beforeSizeKb = this.getWorkflowSizeKb(data.workflowBefore);
const afterSizeKb = this.getWorkflowSizeKb(data.workflowAfter);
if (
beforeSizeKb > this.options.maxWorkflowSizeKb ||
afterSizeKb > this.options.maxWorkflowSizeKb
) {
return true;
}
return false;
}
/**
* Check for duplicate mutation (same hash + operations)
*/
isDuplicate(
workflowBefore: any,
workflowAfter: any,
operations: any[],
recentMutations: Array<{ hashBefore: string; hashAfter: string; operations: any[] }>
): boolean {
const hashBefore = this.hashWorkflow(workflowBefore);
const hashAfter = this.hashWorkflow(workflowAfter);
const operationsHash = this.hashOperations(operations);
return recentMutations.some(
(m) =>
m.hashBefore === hashBefore &&
m.hashAfter === hashAfter &&
this.hashOperations(m.operations) === operationsHash
);
}
/**
* Hash operations for deduplication
*/
private hashOperations(operations: any[]): string {
try {
const json = JSON.stringify(operations);
return createHash('sha256').update(json).digest('hex').substring(0, 16);
} catch {
return '';
}
}
}
/**
* Singleton instance for easy access
*/
export const mutationValidator = new MutationValidator();
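A usage sketch of the validator above, showing a stricter size cap than the default; the workflow fragments are minimal stand-ins.
import { MutationValidator } from './mutation-validator.js';
import { MutationToolName, WorkflowMutationData } from './mutation-types.js';

const validator = new MutationValidator({ maxWorkflowSizeKb: 100 });

const data: WorkflowMutationData = {
  sessionId: 's-1',
  toolName: MutationToolName.UPDATE_PARTIAL,
  userIntent: 'Add a Set node after the Webhook',
  operations: [{ type: 'addNode' } as any],
  workflowBefore: { nodes: [{ name: 'Webhook' }], connections: {} },
  workflowAfter: { nodes: [{ name: 'Webhook' }, { name: 'Set' }], connections: {} },
  mutationSuccess: true,
  durationMs: 50,
};

const result = validator.validate(data);
console.log(result.valid, result.errors, result.warnings); // true, [], []
console.log(validator.shouldExclude(data));                // false: successful and meaningfully changed
console.log(validator.hashWorkflow(data.workflowBefore));  // 16-char sha256 prefix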

View File

@@ -148,50 +148,6 @@ export class TelemetryManager {
}
}
/**
* Track workflow mutation from partial updates
*/
async trackWorkflowMutation(data: any): Promise<void> {
this.ensureInitialized();
if (!this.isEnabled()) {
logger.debug('Telemetry disabled, skipping mutation tracking');
return;
}
this.performanceMonitor.startOperation('trackWorkflowMutation');
try {
const { mutationTracker } = await import('./mutation-tracker.js');
const userId = this.configManager.getUserId();
const mutationRecord = await mutationTracker.processMutation(data, userId);
if (mutationRecord) {
// Queue for batch processing
this.eventTracker.enqueueMutation(mutationRecord);
// Auto-flush if queue reaches threshold
// Lower threshold (2) for mutations since they're less frequent than regular events
const queueSize = this.eventTracker.getMutationQueueSize();
if (queueSize >= 2) {
await this.flushMutations();
}
}
} catch (error) {
const telemetryError = error instanceof TelemetryError
? error
: new TelemetryError(
TelemetryErrorType.UNKNOWN_ERROR,
'Failed to track workflow mutation',
{ error: String(error) }
);
this.errorAggregator.record(telemetryError);
logger.debug('Error tracking workflow mutation:', error);
} finally {
this.performanceMonitor.endOperation('trackWorkflowMutation');
}
}
/**
* Track an error event
@@ -265,16 +221,14 @@ export class TelemetryManager {
// Get queued data from event tracker
const events = this.eventTracker.getEventQueue();
const workflows = this.eventTracker.getWorkflowQueue();
const mutations = this.eventTracker.getMutationQueue();
// Clear queues immediately to prevent duplicate processing
this.eventTracker.clearEventQueue();
this.eventTracker.clearWorkflowQueue();
this.eventTracker.clearMutationQueue();
try {
// Use batch processor to flush
await this.batchProcessor.flush(events, workflows, mutations);
await this.batchProcessor.flush(events, workflows);
} catch (error) {
const telemetryError = error instanceof TelemetryError
? error
@@ -294,21 +248,6 @@ export class TelemetryManager {
}
}
/**
* Flush queued mutations only
*/
async flushMutations(): Promise<void> {
this.ensureInitialized();
if (!this.isEnabled() || !this.supabase) return;
const mutations = this.eventTracker.getMutationQueue();
this.eventTracker.clearMutationQueue();
if (mutations.length > 0) {
await this.batchProcessor.flush([], [], mutations);
}
}
/**
* Check if telemetry is enabled

View File

@@ -131,9 +131,4 @@ export interface TelemetryErrorContext {
context?: Record<string, any>;
timestamp: number;
retryable: boolean;
}
/**
* Re-export workflow mutation types
*/
export type { WorkflowMutationRecord, WorkflowMutationData } from './mutation-types.js';
}

View File

@@ -40,37 +40,7 @@ export interface TemplateDetail {
export class TemplateFetcher {
private readonly baseUrl = 'https://api.n8n.io/api/templates';
private readonly pageSize = 250; // Maximum allowed by API
private readonly maxRetries = 3;
private readonly retryDelay = 1000; // 1 second base delay
/**
* Retry helper for API calls
*/
private async retryWithBackoff<T>(
fn: () => Promise<T>,
context: string,
maxRetries: number = this.maxRetries
): Promise<T | null> {
let lastError: any;
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
return await fn();
} catch (error: any) {
lastError = error;
if (attempt < maxRetries) {
const delay = this.retryDelay * attempt; // Exponential backoff
logger.warn(`${context} - Attempt ${attempt}/${maxRetries} failed, retrying in ${delay}ms...`);
await this.sleep(delay);
}
}
}
logger.error(`${context} - All ${maxRetries} attempts failed, skipping`, lastError);
return null;
}
/**
* Fetch all templates and filter to last 12 months
* This fetches ALL pages first, then applies date filter locally
@@ -103,105 +73,93 @@ export class TemplateFetcher {
let page = 1;
let hasMore = true;
let totalWorkflows = 0;
logger.info('Starting complete template fetch from n8n.io API');
while (hasMore) {
const result = await this.retryWithBackoff(
async () => {
const response = await axios.get(`${this.baseUrl}/search`, {
params: {
page,
rows: this.pageSize
// Note: sort_by parameter doesn't work, templates come in popularity order
}
});
return response.data;
},
`Fetching templates page ${page}`
);
if (result === null) {
// All retries failed for this page, skip it and continue
logger.warn(`Skipping page ${page} after ${this.maxRetries} failed attempts`);
try {
const response = await axios.get(`${this.baseUrl}/search`, {
params: {
page,
rows: this.pageSize
// Note: sort_by parameter doesn't work, templates come in popularity order
}
});
const { workflows } = response.data;
totalWorkflows = response.data.totalWorkflows || totalWorkflows;
allTemplates.push(...workflows);
// Calculate total pages for better progress reporting
const totalPages = Math.ceil(totalWorkflows / this.pageSize);
if (progressCallback) {
// Enhanced progress with page information
progressCallback(allTemplates.length, totalWorkflows);
}
logger.debug(`Fetched page ${page}/${totalPages}: ${workflows.length} templates (total so far: ${allTemplates.length}/${totalWorkflows})`);
// Check if there are more pages
if (workflows.length < this.pageSize) {
hasMore = false;
}
page++;
continue;
}
const { workflows } = result;
totalWorkflows = result.totalWorkflows || totalWorkflows;
allTemplates.push(...workflows);
// Calculate total pages for better progress reporting
const totalPages = Math.ceil(totalWorkflows / this.pageSize);
if (progressCallback) {
// Enhanced progress with page information
progressCallback(allTemplates.length, totalWorkflows);
}
logger.debug(`Fetched page ${page}/${totalPages}: ${workflows.length} templates (total so far: ${allTemplates.length}/${totalWorkflows})`);
// Check if there are more pages
if (workflows.length < this.pageSize) {
hasMore = false;
}
page++;
// Rate limiting - be nice to the API (slightly faster with 250 rows/page)
if (hasMore) {
await this.sleep(300); // 300ms between requests (was 500ms with 100 rows)
// Rate limiting - be nice to the API (slightly faster with 250 rows/page)
if (hasMore) {
await this.sleep(300); // 300ms between requests (was 500ms with 100 rows)
}
} catch (error) {
logger.error(`Error fetching templates page ${page}:`, error);
throw error;
}
}
logger.info(`Fetched all ${allTemplates.length} templates from n8n.io`);
return allTemplates;
}
async fetchTemplateDetail(workflowId: number): Promise<TemplateDetail | null> {
const result = await this.retryWithBackoff(
async () => {
const response = await axios.get(`${this.baseUrl}/workflows/${workflowId}`);
return response.data.workflow;
},
`Fetching template detail for workflow ${workflowId}`
);
return result;
async fetchTemplateDetail(workflowId: number): Promise<TemplateDetail> {
try {
const response = await axios.get(`${this.baseUrl}/workflows/${workflowId}`);
return response.data.workflow;
} catch (error) {
logger.error(`Error fetching template detail for ${workflowId}:`, error);
throw error;
}
}
async fetchAllTemplateDetails(
workflows: TemplateWorkflow[],
workflows: TemplateWorkflow[],
progressCallback?: (current: number, total: number) => void
): Promise<Map<number, TemplateDetail>> {
const details = new Map<number, TemplateDetail>();
let skipped = 0;
logger.info(`Fetching details for ${workflows.length} templates`);
for (let i = 0; i < workflows.length; i++) {
const workflow = workflows[i];
const detail = await this.fetchTemplateDetail(workflow.id);
if (detail !== null) {
try {
const detail = await this.fetchTemplateDetail(workflow.id);
details.set(workflow.id, detail);
} else {
skipped++;
logger.warn(`Skipped workflow ${workflow.id} after ${this.maxRetries} failed attempts`);
if (progressCallback) {
progressCallback(i + 1, workflows.length);
}
// Rate limiting (conservative to avoid API throttling)
await this.sleep(150); // 150ms between requests
} catch (error) {
logger.error(`Failed to fetch details for workflow ${workflow.id}:`, error);
// Continue with other templates
}
if (progressCallback) {
progressCallback(i + 1, workflows.length);
}
// Rate limiting (conservative to avoid API throttling)
await this.sleep(150); // 150ms between requests
}
logger.info(`Successfully fetched ${details.size} template details (${skipped} skipped)`);
logger.info(`Successfully fetched ${details.size} template details`);
return details;
}
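A brief usage sketch for the detail fetcher above, run inside an async context. The workflow IDs are hypothetical, and the cast assumes TemplateWorkflow is exported alongside TemplateDetail (its full shape is not visible in this hunk).
import { TemplateFetcher } from './template-fetcher.js';
import type { TemplateWorkflow } from './template-fetcher.js';

const fetcher = new TemplateFetcher();

// Hypothetical IDs; in practice these come from the paginated search loop above.
const workflows = [{ id: 2465 }, { id: 2466 }] as unknown as TemplateWorkflow[];

const details = await fetcher.fetchAllTemplateDetails(workflows, (current, total) => {
  console.log(`Details ${current}/${total}`);
});
console.log(`Fetched ${details.size} of ${workflows.length} template details`);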

View File

@@ -496,17 +496,10 @@ export class TemplateRepository {
// Count node usage
const nodeCount: Record<string, number> = {};
topNodes.forEach(t => {
if (!t.nodes_used) return;
try {
const nodes = JSON.parse(t.nodes_used);
if (Array.isArray(nodes)) {
nodes.forEach((n: string) => {
nodeCount[n] = (nodeCount[n] || 0) + 1;
});
}
} catch (error) {
logger.warn(`Failed to parse nodes_used for template stats:`, error);
}
const nodes = JSON.parse(t.nodes_used);
nodes.forEach((n: string) => {
nodeCount[n] = (nodeCount[n] || 0) + 1;
});
});
// Get top 10 most used nodes

View File

@@ -66,7 +66,6 @@ export interface Workflow {
updatedAt?: string;
createdAt?: string;
versionId?: string;
versionCounter?: number; // Added: n8n 1.118.1+ returns this in GET responses
meta?: {
instanceId?: string;
};
@@ -153,7 +152,6 @@ export interface WorkflowExport {
tags?: string[];
pinData?: Record<string, unknown>;
versionId?: string;
versionCounter?: number; // Added: n8n 1.118.1+
meta?: Record<string, unknown>;
}

View File

@@ -114,16 +114,6 @@ export interface RemoveTagOperation extends DiffOperation {
tag: string;
}
export interface ActivateWorkflowOperation extends DiffOperation {
type: 'activateWorkflow';
// No additional properties needed - just activates the workflow
}
export interface DeactivateWorkflowOperation extends DiffOperation {
type: 'deactivateWorkflow';
// No additional properties needed - just deactivates the workflow
}
// Connection Cleanup Operations
export interface CleanStaleConnectionsOperation extends DiffOperation {
type: 'cleanStaleConnections';
@@ -158,8 +148,6 @@ export type WorkflowDiffOperation =
| UpdateNameOperation
| AddTagOperation
| RemoveTagOperation
| ActivateWorkflowOperation
| DeactivateWorkflowOperation
| CleanStaleConnectionsOperation
| ReplaceConnectionsOperation;
@@ -182,14 +170,11 @@ export interface WorkflowDiffResult {
success: boolean;
workflow?: any; // Updated workflow if successful
errors?: WorkflowDiffValidationError[];
warnings?: WorkflowDiffValidationError[]; // Non-blocking warnings (e.g., parameter suggestions)
operationsApplied?: number;
message?: string;
applied?: number[]; // Indices of successfully applied operations (when continueOnError is true)
failed?: number[]; // Indices of failed operations (when continueOnError is true)
staleConnectionsRemoved?: Array<{ from: string; to: string }>; // For cleanStaleConnections operation
shouldActivate?: boolean; // Flag to activate workflow after update (for activateWorkflow operation)
shouldDeactivate?: boolean; // Flag to deactivate workflow after update (for deactivateWorkflow operation)
}
// Helper type for node reference (supports both ID and name)

View File

@@ -101,6 +101,7 @@ describe('Integration: handleListAvailableTools', () => {
// Common known limitations
const limitationsText = data.limitations.join(' ');
expect(limitationsText).toContain('Cannot activate');
expect(limitationsText).toContain('Cannot execute workflows directly');
});
});

View File

@@ -1,431 +0,0 @@
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { N8NDocumentationMCPServer } from '../../../src/mcp/server';
// Mock the database and dependencies
vi.mock('../../../src/database/database-adapter');
vi.mock('../../../src/database/node-repository');
vi.mock('../../../src/templates/template-service');
vi.mock('../../../src/utils/logger');
/**
* Test wrapper class that exposes private methods for unit testing.
* This pattern is preferred over modifying production code visibility
* or using reflection-based testing utilities.
*/
class TestableN8NMCPServer extends N8NDocumentationMCPServer {
/**
* Expose getDisabledTools() for testing environment variable parsing.
* @returns Set of disabled tool names from DISABLED_TOOLS env var
*/
public testGetDisabledTools(): Set<string> {
return (this as any).getDisabledTools();
}
/**
* Expose executeTool() for testing the defense-in-depth guard.
* @param name - Tool name to execute
* @param args - Tool arguments
* @returns Tool execution result
*/
public async testExecuteTool(name: string, args: any): Promise<any> {
return (this as any).executeTool(name, args);
}
}
describe('Disabled Tools Additional Coverage (Issue #410)', () => {
let server: TestableN8NMCPServer;
beforeEach(() => {
// Set environment variable to use in-memory database
process.env.NODE_DB_PATH = ':memory:';
});
afterEach(() => {
delete process.env.NODE_DB_PATH;
delete process.env.DISABLED_TOOLS;
delete process.env.ENABLE_MULTI_TENANT;
delete process.env.N8N_API_URL;
delete process.env.N8N_API_KEY;
});
describe('Error Response Structure Validation', () => {
it('should throw error with specific message format', async () => {
process.env.DISABLED_TOOLS = 'test_tool';
server = new TestableN8NMCPServer();
let thrownError: Error | null = null;
try {
await server.testExecuteTool('test_tool', {});
} catch (error) {
thrownError = error as Error;
}
// Verify error was thrown
expect(thrownError).not.toBeNull();
expect(thrownError?.message).toBe(
"Tool 'test_tool' is disabled via DISABLED_TOOLS environment variable"
);
});
it('should include tool name in error message', async () => {
const toolName = 'my_special_tool';
process.env.DISABLED_TOOLS = toolName;
server = new TestableN8NMCPServer();
let errorMessage = '';
try {
await server.testExecuteTool(toolName, {});
} catch (error: any) {
errorMessage = error.message;
}
expect(errorMessage).toContain(toolName);
expect(errorMessage).toContain('disabled via DISABLED_TOOLS');
});
it('should throw consistent error format for all disabled tools', async () => {
const tools = ['tool1', 'tool2', 'tool3'];
process.env.DISABLED_TOOLS = tools.join(',');
server = new TestableN8NMCPServer();
for (const tool of tools) {
let errorMessage = '';
try {
await server.testExecuteTool(tool, {});
} catch (error: any) {
errorMessage = error.message;
}
// Verify consistent error format
expect(errorMessage).toMatch(/^Tool '.*' is disabled via DISABLED_TOOLS environment variable$/);
expect(errorMessage).toContain(tool);
}
});
});
describe('Multi-Tenant Mode Interaction', () => {
it('should respect DISABLED_TOOLS in multi-tenant mode', () => {
process.env.ENABLE_MULTI_TENANT = 'true';
process.env.DISABLED_TOOLS = 'n8n_delete_workflow,n8n_update_full_workflow';
delete process.env.N8N_API_URL;
delete process.env.N8N_API_KEY;
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
// Even in multi-tenant mode, disabled tools should be filtered
expect(disabledTools.has('n8n_delete_workflow')).toBe(true);
expect(disabledTools.has('n8n_update_full_workflow')).toBe(true);
expect(disabledTools.size).toBe(2);
});
it('should parse DISABLED_TOOLS regardless of N8N_API_URL setting', () => {
process.env.DISABLED_TOOLS = 'tool1,tool2';
process.env.N8N_API_URL = 'http://localhost:5678';
process.env.N8N_API_KEY = 'test-key';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(2);
expect(disabledTools.has('tool1')).toBe(true);
expect(disabledTools.has('tool2')).toBe(true);
});
it('should work when only ENABLE_MULTI_TENANT is set', () => {
process.env.ENABLE_MULTI_TENANT = 'true';
process.env.DISABLED_TOOLS = 'restricted_tool';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has('restricted_tool')).toBe(true);
});
});
describe('Edge Cases - Special Characters and Unicode', () => {
it('should handle unicode tool names correctly', () => {
process.env.DISABLED_TOOLS = 'tool_测试,tool_münchen,tool_العربية';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(3);
expect(disabledTools.has('tool_测试')).toBe(true);
expect(disabledTools.has('tool_münchen')).toBe(true);
expect(disabledTools.has('tool_العربية')).toBe(true);
});
it('should handle emoji in tool names', () => {
process.env.DISABLED_TOOLS = 'tool_🎯,tool_✅,tool_❌';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(3);
expect(disabledTools.has('tool_🎯')).toBe(true);
expect(disabledTools.has('tool_✅')).toBe(true);
expect(disabledTools.has('tool_❌')).toBe(true);
});
it('should treat regex special characters as literals', () => {
process.env.DISABLED_TOOLS = 'tool.*,tool[0-9],tool(test)';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
// These should be treated as literal strings, not regex patterns
expect(disabledTools.has('tool.*')).toBe(true);
expect(disabledTools.has('tool[0-9]')).toBe(true);
expect(disabledTools.has('tool(test)')).toBe(true);
expect(disabledTools.size).toBe(3);
});
it('should handle tool names with dots and colons', () => {
process.env.DISABLED_TOOLS = 'org.example.tool,namespace:tool:v1';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has('org.example.tool')).toBe(true);
expect(disabledTools.has('namespace:tool:v1')).toBe(true);
});
it('should handle tool names with @ symbols', () => {
process.env.DISABLED_TOOLS = '@scope/tool,user@tool';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has('@scope/tool')).toBe(true);
expect(disabledTools.has('user@tool')).toBe(true);
});
});
describe('Performance and Scale', () => {
it('should handle 100 disabled tools efficiently', () => {
const manyTools = Array.from({ length: 100 }, (_, i) => `tool_${i}`);
process.env.DISABLED_TOOLS = manyTools.join(',');
const start = Date.now();
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
const duration = Date.now() - start;
expect(disabledTools.size).toBe(100);
expect(duration).toBeLessThan(50); // Should be very fast
});
it('should handle 1000 disabled tools efficiently and enforce 200 tool limit', () => {
const manyTools = Array.from({ length: 1000 }, (_, i) => `tool_${i}`);
process.env.DISABLED_TOOLS = manyTools.join(',');
const start = Date.now();
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
const duration = Date.now() - start;
// Safety limit: max 200 tools enforced
expect(disabledTools.size).toBe(200);
expect(duration).toBeLessThan(100); // Should still be fast
});
it('should efficiently check membership in large disabled set', () => {
const manyTools = Array.from({ length: 500 }, (_, i) => `tool_${i}`);
process.env.DISABLED_TOOLS = manyTools.join(',');
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
// Test membership check performance (Set.has() is O(1))
const start = Date.now();
for (let i = 0; i < 1000; i++) {
disabledTools.has(`tool_${i % 500}`);
}
const duration = Date.now() - start;
expect(duration).toBeLessThan(10); // Should be very fast
});
});
describe('Environment Variable Edge Cases', () => {
it('should handle very long tool names', () => {
const longToolName = 'tool_' + 'a'.repeat(500);
process.env.DISABLED_TOOLS = longToolName;
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has(longToolName)).toBe(true);
});
it('should handle newlines in tool names (after trim)', () => {
process.env.DISABLED_TOOLS = 'tool1\n,tool2\r\n,tool3\r';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
// Newlines should be trimmed
expect(disabledTools.has('tool1')).toBe(true);
expect(disabledTools.has('tool2')).toBe(true);
expect(disabledTools.has('tool3')).toBe(true);
});
it('should handle tabs in tool names (after trim)', () => {
process.env.DISABLED_TOOLS = '\ttool1\t,\ttool2\t';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has('tool1')).toBe(true);
expect(disabledTools.has('tool2')).toBe(true);
});
it('should handle mixed whitespace correctly', () => {
process.env.DISABLED_TOOLS = ' \t tool1 \n , tool2 \r\n, tool3 ';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(3);
expect(disabledTools.has('tool1')).toBe(true);
expect(disabledTools.has('tool2')).toBe(true);
expect(disabledTools.has('tool3')).toBe(true);
});
it('should enforce 10KB limit on DISABLED_TOOLS environment variable', () => {
// Create a very long env var (15KB) by repeating tool names
const longTools = Array.from({ length: 1500 }, (_, i) => `tool_${i}`);
const longValue = longTools.join(',');
// Verify we created >10KB string
expect(longValue.length).toBeGreaterThan(10000);
process.env.DISABLED_TOOLS = longValue;
server = new TestableN8NMCPServer();
// Should succeed and truncate to 10KB
const disabledTools = server.testGetDisabledTools();
// Should have parsed some tools (at least the first ones)
expect(disabledTools.size).toBeGreaterThan(0);
// First few tools should be present (they're in the first 10KB)
expect(disabledTools.has('tool_0')).toBe(true);
expect(disabledTools.has('tool_1')).toBe(true);
expect(disabledTools.has('tool_2')).toBe(true);
// Last tools should NOT be present (they were truncated)
expect(disabledTools.has('tool_1499')).toBe(false);
expect(disabledTools.has('tool_1498')).toBe(false);
});
});
describe('Defense in Depth - Multiple Layers', () => {
it('should prevent execution at executeTool level', async () => {
process.env.DISABLED_TOOLS = 'blocked_tool';
server = new TestableN8NMCPServer();
// The executeTool method should throw immediately
await expect(async () => {
await server.testExecuteTool('blocked_tool', {});
}).rejects.toThrow('disabled via DISABLED_TOOLS');
});
it('should be case-sensitive in tool name matching', async () => {
process.env.DISABLED_TOOLS = 'BlockedTool';
server = new TestableN8NMCPServer();
// 'blockedtool' should NOT be blocked (case-sensitive)
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has('BlockedTool')).toBe(true);
expect(disabledTools.has('blockedtool')).toBe(false);
});
it('should check disabled status on every executeTool call', async () => {
process.env.DISABLED_TOOLS = 'tool1';
server = new TestableN8NMCPServer();
// First call should fail
await expect(async () => {
await server.testExecuteTool('tool1', {});
}).rejects.toThrow('disabled');
// Second call should also fail (consistent behavior)
await expect(async () => {
await server.testExecuteTool('tool1', {});
}).rejects.toThrow('disabled');
// Non-disabled tool should work (or fail for other reasons)
try {
await server.testExecuteTool('other_tool', {});
} catch (error: any) {
// Should not be disabled error
expect(error.message).not.toContain('disabled via DISABLED_TOOLS');
}
});
it('should not leak list of disabled tools in error response', async () => {
// Set multiple disabled tools including some "secret" ones
process.env.DISABLED_TOOLS = 'secret_tool_1,secret_tool_2,secret_tool_3,attempted_tool';
server = new TestableN8NMCPServer();
// Try to execute one of the disabled tools
let errorMessage = '';
try {
await server.testExecuteTool('attempted_tool', {});
} catch (error: any) {
errorMessage = error.message;
}
// Error message should mention the attempted tool
expect(errorMessage).toContain('attempted_tool');
expect(errorMessage).toContain('disabled via DISABLED_TOOLS');
// Error message should NOT leak the other disabled tools
expect(errorMessage).not.toContain('secret_tool_1');
expect(errorMessage).not.toContain('secret_tool_2');
expect(errorMessage).not.toContain('secret_tool_3');
// Should not contain any arrays or lists
expect(errorMessage).not.toContain('[');
expect(errorMessage).not.toContain(']');
});
});
describe('Real-World Deployment Verification', () => {
it('should support common security hardening scenario', () => {
// Disable all write/delete operations in production
const dangerousTools = [
'n8n_delete_workflow',
'n8n_update_full_workflow',
'n8n_delete_execution',
];
process.env.DISABLED_TOOLS = dangerousTools.join(',');
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
dangerousTools.forEach(tool => {
expect(disabledTools.has(tool)).toBe(true);
});
});
it('should support staging environment scenario', () => {
// In staging, disable only production-specific tools
process.env.DISABLED_TOOLS = 'n8n_trigger_webhook_workflow';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has('n8n_trigger_webhook_workflow')).toBe(true);
expect(disabledTools.size).toBe(1);
});
it('should support development environment scenario', () => {
// In dev, maybe disable resource-intensive tools
process.env.DISABLED_TOOLS = 'search_templates_by_metadata,fetch_large_datasets';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(2);
});
});
});
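For reference, a minimal sketch of the parsing behavior these tests assert (comma splitting, trimming, a 10KB cap on the raw value, and a 200-tool cap); this is illustrative only, not the actual getDisabledTools() implementation in the server.
// Illustrative only: mirrors the behavior asserted by the tests above.
function parseDisabledTools(raw: string | undefined): Set<string> {
  if (!raw) return new Set();

  // 10KB safety cap on the raw environment variable value.
  const capped = raw.slice(0, 10_000);

  const names = capped
    .split(',')
    .map((name) => name.trim())       // strips spaces, tabs, and newlines
    .filter((name) => name.length > 0)
    .slice(0, 200);                   // safety limit of 200 disabled tools

  return new Set(names);
}

// parseDisabledTools(' n8n_diagnostic , ,n8n_health_check ')
//   => Set { 'n8n_diagnostic', 'n8n_health_check' }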

View File

@@ -1,311 +0,0 @@
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { N8NDocumentationMCPServer } from '../../../src/mcp/server';
import { n8nDocumentationToolsFinal } from '../../../src/mcp/tools';
import { n8nManagementTools } from '../../../src/mcp/tools-n8n-manager';
// Mock the database and dependencies
vi.mock('../../../src/database/database-adapter');
vi.mock('../../../src/database/node-repository');
vi.mock('../../../src/templates/template-service');
vi.mock('../../../src/utils/logger');
/**
* Test wrapper class that exposes private methods for unit testing.
* This pattern is preferred over modifying production code visibility
* or using reflection-based testing utilities.
*/
class TestableN8NMCPServer extends N8NDocumentationMCPServer {
/**
* Expose getDisabledTools() for testing environment variable parsing.
* @returns Set of disabled tool names from DISABLED_TOOLS env var
*/
public testGetDisabledTools(): Set<string> {
return (this as any).getDisabledTools();
}
/**
* Expose executeTool() for testing the defense-in-depth guard.
* @param name - Tool name to execute
* @param args - Tool arguments
* @returns Tool execution result
*/
public async testExecuteTool(name: string, args: any): Promise<any> {
return (this as any).executeTool(name, args);
}
}
describe('Disabled Tools Feature (Issue #410)', () => {
let server: TestableN8NMCPServer;
beforeEach(() => {
// Set environment variable to use in-memory database
process.env.NODE_DB_PATH = ':memory:';
});
afterEach(() => {
delete process.env.NODE_DB_PATH;
delete process.env.DISABLED_TOOLS;
});
describe('getDisabledTools() - Environment Variable Parsing', () => {
it('should return empty set when DISABLED_TOOLS is not set', () => {
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(0);
});
it('should return empty set when DISABLED_TOOLS is empty string', () => {
process.env.DISABLED_TOOLS = '';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(0);
});
it('should parse single disabled tool correctly', () => {
process.env.DISABLED_TOOLS = 'n8n_diagnostic';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(1);
expect(disabledTools.has('n8n_diagnostic')).toBe(true);
});
it('should parse multiple disabled tools correctly', () => {
process.env.DISABLED_TOOLS = 'n8n_diagnostic,n8n_health_check,list_nodes';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(3);
expect(disabledTools.has('n8n_diagnostic')).toBe(true);
expect(disabledTools.has('n8n_health_check')).toBe(true);
expect(disabledTools.has('list_nodes')).toBe(true);
});
it('should trim whitespace from tool names', () => {
process.env.DISABLED_TOOLS = ' n8n_diagnostic , n8n_health_check ';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(2);
expect(disabledTools.has('n8n_diagnostic')).toBe(true);
expect(disabledTools.has('n8n_health_check')).toBe(true);
});
it('should filter out empty entries from comma-separated list', () => {
process.env.DISABLED_TOOLS = 'n8n_diagnostic,,n8n_health_check,,,list_nodes';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(3);
expect(disabledTools.has('n8n_diagnostic')).toBe(true);
expect(disabledTools.has('n8n_health_check')).toBe(true);
expect(disabledTools.has('list_nodes')).toBe(true);
});
it('should handle single comma correctly', () => {
process.env.DISABLED_TOOLS = ',';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(0);
});
it('should handle multiple commas without values', () => {
process.env.DISABLED_TOOLS = ',,,';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(0);
});
});
describe('executeTool() - Disabled Tool Guard', () => {
it('should throw error when calling disabled tool', async () => {
process.env.DISABLED_TOOLS = 'tools_documentation';
server = new TestableN8NMCPServer();
await expect(async () => {
await server.testExecuteTool('tools_documentation', {});
}).rejects.toThrow("Tool 'tools_documentation' is disabled via DISABLED_TOOLS environment variable");
});
it('should allow calling enabled tool when others are disabled', async () => {
process.env.DISABLED_TOOLS = 'n8n_diagnostic,n8n_health_check';
server = new TestableN8NMCPServer();
// This should not throw - tools_documentation is not disabled
// The tool execution may fail for other reasons (like missing data),
// but it should NOT fail due to being disabled
try {
await server.testExecuteTool('tools_documentation', {});
} catch (error: any) {
// Ensure the error is NOT about the tool being disabled
expect(error.message).not.toContain('disabled via DISABLED_TOOLS');
}
});
it('should throw error for all disabled tools in list', async () => {
process.env.DISABLED_TOOLS = 'tool1,tool2,tool3';
server = new TestableN8NMCPServer();
for (const toolName of ['tool1', 'tool2', 'tool3']) {
await expect(async () => {
await server.testExecuteTool(toolName, {});
}).rejects.toThrow(`Tool '${toolName}' is disabled via DISABLED_TOOLS environment variable`);
}
});
});
describe('Tool Filtering - Documentation Tools', () => {
it('should filter disabled documentation tools from list', () => {
// Find a documentation tool to disable
const docTool = n8nDocumentationToolsFinal[0];
if (!docTool) {
throw new Error('No documentation tools available for testing');
}
process.env.DISABLED_TOOLS = docTool.name;
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has(docTool.name)).toBe(true);
expect(disabledTools.size).toBe(1);
});
it('should filter multiple disabled documentation tools', () => {
const tool1 = n8nDocumentationToolsFinal[0];
const tool2 = n8nDocumentationToolsFinal[1];
if (!tool1 || !tool2) {
throw new Error('Not enough documentation tools available for testing');
}
process.env.DISABLED_TOOLS = `${tool1.name},${tool2.name}`;
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has(tool1.name)).toBe(true);
expect(disabledTools.has(tool2.name)).toBe(true);
expect(disabledTools.size).toBe(2);
});
});
describe('Tool Filtering - Management Tools', () => {
it('should filter disabled management tools from list', () => {
// Find a management tool to disable
const mgmtTool = n8nManagementTools[0];
if (!mgmtTool) {
throw new Error('No management tools available for testing');
}
process.env.DISABLED_TOOLS = mgmtTool.name;
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has(mgmtTool.name)).toBe(true);
expect(disabledTools.size).toBe(1);
});
it('should filter multiple disabled management tools', () => {
const tool1 = n8nManagementTools[0];
const tool2 = n8nManagementTools[1];
if (!tool1 || !tool2) {
throw new Error('Not enough management tools available for testing');
}
process.env.DISABLED_TOOLS = `${tool1.name},${tool2.name}`;
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has(tool1.name)).toBe(true);
expect(disabledTools.has(tool2.name)).toBe(true);
expect(disabledTools.size).toBe(2);
});
});
describe('Tool Filtering - Mixed Tools', () => {
it('should filter disabled tools from both documentation and management lists', () => {
const docTool = n8nDocumentationToolsFinal[0];
const mgmtTool = n8nManagementTools[0];
if (!docTool || !mgmtTool) {
throw new Error('Tools not available for testing');
}
process.env.DISABLED_TOOLS = `${docTool.name},${mgmtTool.name}`;
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has(docTool.name)).toBe(true);
expect(disabledTools.has(mgmtTool.name)).toBe(true);
expect(disabledTools.size).toBe(2);
});
});
describe('Invalid Tool Names', () => {
it('should gracefully handle non-existent tool names', () => {
process.env.DISABLED_TOOLS = 'non_existent_tool,another_fake_tool';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
// Should still parse and store them, even if they don't exist
expect(disabledTools.size).toBe(2);
expect(disabledTools.has('non_existent_tool')).toBe(true);
expect(disabledTools.has('another_fake_tool')).toBe(true);
});
it('should handle special characters in tool names', () => {
process.env.DISABLED_TOOLS = 'tool-with-dashes,tool_with_underscores,tool.with.dots';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.size).toBe(3);
expect(disabledTools.has('tool-with-dashes')).toBe(true);
expect(disabledTools.has('tool_with_underscores')).toBe(true);
expect(disabledTools.has('tool.with.dots')).toBe(true);
});
});
describe('Real-World Use Cases', () => {
it('should support multi-tenant deployment use case - disable diagnostic tools', () => {
process.env.DISABLED_TOOLS = 'n8n_diagnostic,n8n_health_check';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has('n8n_diagnostic')).toBe(true);
expect(disabledTools.has('n8n_health_check')).toBe(true);
expect(disabledTools.size).toBe(2);
});
it('should support security hardening use case - disable management tools', () => {
// Disable potentially dangerous management tools
const dangerousTools = [
'n8n_delete_workflow',
'n8n_update_full_workflow'
];
process.env.DISABLED_TOOLS = dangerousTools.join(',');
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
dangerousTools.forEach(tool => {
expect(disabledTools.has(tool)).toBe(true);
});
expect(disabledTools.size).toBe(dangerousTools.length);
});
it('should support feature flag use case - disable experimental tools', () => {
// Example: Disable experimental or beta features
process.env.DISABLED_TOOLS = 'experimental_tool_1,beta_feature';
server = new TestableN8NMCPServer();
const disabledTools = server.testGetDisabledTools();
expect(disabledTools.has('experimental_tool_1')).toBe(true);
expect(disabledTools.has('beta_feature')).toBe(true);
expect(disabledTools.size).toBe(2);
});
});
});

View File

@@ -156,11 +156,9 @@ describe('handlers-workflow-diff', () => {
operationsApplied: 1,
workflowId: 'test-workflow-id',
workflowName: 'Test Workflow',
active: true,
applied: [0],
failed: [],
errors: [],
warnings: undefined,
},
});
@@ -190,7 +188,6 @@ describe('handlers-workflow-diff', () => {
operationsApplied: 1,
message: 'Validation successful',
errors: [],
warnings: []
});
const result = await handleUpdatePartialWorkflow(diffRequest, mockRepository);
@@ -202,9 +199,6 @@ describe('handlers-workflow-diff', () => {
valid: true,
operationsToApply: 1,
},
details: {
warnings: []
}
});
expect(mockApiClient.updateWorkflow).not.toHaveBeenCalled();
@@ -635,211 +629,5 @@ describe('handlers-workflow-diff', () => {
},
});
});
describe('Workflow Activation/Deactivation', () => {
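// These tests pin down the handler's two-phase behavior: applyDiff() reports
// shouldActivate / shouldDeactivate, the workflow is first persisted via
// updateWorkflow(), and only then is activateWorkflow() / deactivateWorkflow()
// called. A failure in that second phase surfaces as success: false together
// with details.workflowUpdated: true.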
it('should activate workflow after successful update', async () => {
const testWorkflow = createTestWorkflow({ active: false });
const updatedWorkflow = { ...testWorkflow, active: false };
const activatedWorkflow = { ...testWorkflow, active: true };
mockApiClient.getWorkflow.mockResolvedValue(testWorkflow);
mockDiffEngine.applyDiff.mockResolvedValue({
success: true,
workflow: updatedWorkflow,
operationsApplied: 1,
message: 'Success',
errors: [],
shouldActivate: true,
});
mockApiClient.updateWorkflow.mockResolvedValue(updatedWorkflow);
mockApiClient.activateWorkflow = vi.fn().mockResolvedValue(activatedWorkflow);
const result = await handleUpdatePartialWorkflow({
id: 'test-workflow-id',
operations: [{ type: 'activateWorkflow' }],
}, mockRepository);
expect(result.success).toBe(true);
expect(result.data).toEqual(activatedWorkflow);
expect(result.message).toContain('Workflow activated');
expect(result.details?.active).toBe(true);
expect(mockApiClient.activateWorkflow).toHaveBeenCalledWith('test-workflow-id');
});
it('should deactivate workflow after successful update', async () => {
const testWorkflow = createTestWorkflow({ active: true });
const updatedWorkflow = { ...testWorkflow, active: true };
const deactivatedWorkflow = { ...testWorkflow, active: false };
mockApiClient.getWorkflow.mockResolvedValue(testWorkflow);
mockDiffEngine.applyDiff.mockResolvedValue({
success: true,
workflow: updatedWorkflow,
operationsApplied: 1,
message: 'Success',
errors: [],
shouldDeactivate: true,
});
mockApiClient.updateWorkflow.mockResolvedValue(updatedWorkflow);
mockApiClient.deactivateWorkflow = vi.fn().mockResolvedValue(deactivatedWorkflow);
const result = await handleUpdatePartialWorkflow({
id: 'test-workflow-id',
operations: [{ type: 'deactivateWorkflow' }],
}, mockRepository);
expect(result.success).toBe(true);
expect(result.data).toEqual(deactivatedWorkflow);
expect(result.message).toContain('Workflow deactivated');
expect(result.details?.active).toBe(false);
expect(mockApiClient.deactivateWorkflow).toHaveBeenCalledWith('test-workflow-id');
});
it('should handle activation failure after successful update', async () => {
const testWorkflow = createTestWorkflow({ active: false });
const updatedWorkflow = { ...testWorkflow, active: false };
mockApiClient.getWorkflow.mockResolvedValue(testWorkflow);
mockDiffEngine.applyDiff.mockResolvedValue({
success: true,
workflow: updatedWorkflow,
operationsApplied: 1,
message: 'Success',
errors: [],
shouldActivate: true,
});
mockApiClient.updateWorkflow.mockResolvedValue(updatedWorkflow);
mockApiClient.activateWorkflow = vi.fn().mockRejectedValue(new Error('Activation failed: No trigger nodes'));
const result = await handleUpdatePartialWorkflow({
id: 'test-workflow-id',
operations: [{ type: 'activateWorkflow' }],
}, mockRepository);
expect(result.success).toBe(false);
expect(result.error).toBe('Workflow updated successfully but activation failed');
expect(result.details).toEqual({
workflowUpdated: true,
activationError: 'Activation failed: No trigger nodes',
});
});
it('should handle deactivation failure after successful update', async () => {
const testWorkflow = createTestWorkflow({ active: true });
const updatedWorkflow = { ...testWorkflow, active: true };
mockApiClient.getWorkflow.mockResolvedValue(testWorkflow);
mockDiffEngine.applyDiff.mockResolvedValue({
success: true,
workflow: updatedWorkflow,
operationsApplied: 1,
message: 'Success',
errors: [],
shouldDeactivate: true,
});
mockApiClient.updateWorkflow.mockResolvedValue(updatedWorkflow);
mockApiClient.deactivateWorkflow = vi.fn().mockRejectedValue(new Error('Deactivation failed'));
const result = await handleUpdatePartialWorkflow({
id: 'test-workflow-id',
operations: [{ type: 'deactivateWorkflow' }],
}, mockRepository);
expect(result.success).toBe(false);
expect(result.error).toBe('Workflow updated successfully but deactivation failed');
expect(result.details).toEqual({
workflowUpdated: true,
deactivationError: 'Deactivation failed',
});
});
it('should update workflow without activation when shouldActivate is false', async () => {
const testWorkflow = createTestWorkflow({ active: false });
const updatedWorkflow = { ...testWorkflow, active: false };
mockApiClient.getWorkflow.mockResolvedValue(testWorkflow);
mockDiffEngine.applyDiff.mockResolvedValue({
success: true,
workflow: updatedWorkflow,
operationsApplied: 1,
message: 'Success',
errors: [],
shouldActivate: false,
shouldDeactivate: false,
});
mockApiClient.updateWorkflow.mockResolvedValue(updatedWorkflow);
mockApiClient.activateWorkflow = vi.fn();
mockApiClient.deactivateWorkflow = vi.fn();
const result = await handleUpdatePartialWorkflow({
id: 'test-workflow-id',
operations: [{ type: 'updateName', name: 'Updated' }],
}, mockRepository);
expect(result.success).toBe(true);
expect(result.message).not.toContain('activated');
expect(result.message).not.toContain('deactivated');
expect(mockApiClient.activateWorkflow).not.toHaveBeenCalled();
expect(mockApiClient.deactivateWorkflow).not.toHaveBeenCalled();
});
it('should handle non-Error activation failures', async () => {
const testWorkflow = createTestWorkflow({ active: false });
const updatedWorkflow = { ...testWorkflow, active: false };
mockApiClient.getWorkflow.mockResolvedValue(testWorkflow);
mockDiffEngine.applyDiff.mockResolvedValue({
success: true,
workflow: updatedWorkflow,
operationsApplied: 1,
message: 'Success',
errors: [],
shouldActivate: true,
});
mockApiClient.updateWorkflow.mockResolvedValue(updatedWorkflow);
mockApiClient.activateWorkflow = vi.fn().mockRejectedValue('String error');
const result = await handleUpdatePartialWorkflow({
id: 'test-workflow-id',
operations: [{ type: 'activateWorkflow' }],
}, mockRepository);
expect(result.success).toBe(false);
expect(result.error).toBe('Workflow updated successfully but activation failed');
expect(result.details).toEqual({
workflowUpdated: true,
activationError: 'Unknown error',
});
});
it('should handle non-Error deactivation failures', async () => {
const testWorkflow = createTestWorkflow({ active: true });
const updatedWorkflow = { ...testWorkflow, active: true };
mockApiClient.getWorkflow.mockResolvedValue(testWorkflow);
mockDiffEngine.applyDiff.mockResolvedValue({
success: true,
workflow: updatedWorkflow,
operationsApplied: 1,
message: 'Success',
errors: [],
shouldDeactivate: true,
});
mockApiClient.updateWorkflow.mockResolvedValue(updatedWorkflow);
mockApiClient.deactivateWorkflow = vi.fn().mockRejectedValue({ code: 'UNKNOWN' });
const result = await handleUpdatePartialWorkflow({
id: 'test-workflow-id',
operations: [{ type: 'deactivateWorkflow' }],
}, mockRepository);
expect(result.success).toBe(false);
expect(result.error).toBe('Workflow updated successfully but deactivation failed');
expect(result.details).toEqual({
workflowUpdated: true,
deactivationError: 'Unknown error',
});
});
});
});
});

View File

@@ -14,8 +14,7 @@ vi.mock('@/services/node-specific-validators', () => ({
validateMongoDB: vi.fn(),
validateWebhook: vi.fn(),
validatePostgres: vi.fn(),
validateMySQL: vi.fn(),
validateAIAgent: vi.fn()
validateMySQL: vi.fn()
}
}));
@@ -803,369 +802,4 @@ describe('EnhancedConfigValidator', () => {
expect(result.errors[0].property).toBe('test');
});
});
describe('enhanceHttpRequestValidation', () => {
it('should suggest alwaysOutputData for HTTP Request nodes', () => {
const nodeType = 'nodes-base.httpRequest';
const config = {
url: 'https://api.example.com/data',
method: 'GET'
};
const properties = [
{ name: 'url', type: 'string', required: true },
{ name: 'method', type: 'options', required: false }
];
const result = EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
expect(result.valid).toBe(true);
expect(result.suggestions).toContainEqual(
expect.stringContaining('alwaysOutputData: true at node level')
);
expect(result.suggestions).toContainEqual(
expect.stringContaining('ensures the node produces output even when HTTP requests fail')
);
});
it('should suggest responseFormat for API endpoint URLs', () => {
const nodeType = 'nodes-base.httpRequest';
const config = {
url: 'https://api.example.com/data',
method: 'GET',
options: {} // Empty options, no responseFormat
};
const properties = [
{ name: 'url', type: 'string', required: true },
{ name: 'method', type: 'options', required: false },
{ name: 'options', type: 'collection', required: false }
];
const result = EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
expect(result.valid).toBe(true);
expect(result.suggestions).toContainEqual(
expect.stringContaining('responseFormat')
);
expect(result.suggestions).toContainEqual(
expect.stringContaining('options.response.response.responseFormat')
);
});
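// Applied to a workflow node, the two suggestions above would look roughly like
// this (a sketch; the exact node JSON shape is not asserted by these tests):
//
//   {
//     parameters: {
//       url: 'https://api.example.com/data',
//       method: 'GET',
//       options: { response: { response: { responseFormat: 'json' } } }
//     },
//     alwaysOutputData: true
//   }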
it('should suggest responseFormat for Supabase URLs', () => {
const nodeType = 'nodes-base.httpRequest';
const config = {
url: 'https://xxciwnthnnywanbplqwg.supabase.co/rest/v1/messages',
method: 'GET',
options: {}
};
const properties = [
{ name: 'url', type: 'string', required: true }
];
const result = EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
expect(result.suggestions).toContainEqual(
expect.stringContaining('responseFormat')
);
});
it('should NOT suggest responseFormat when already configured', () => {
const nodeType = 'nodes-base.httpRequest';
const config = {
url: 'https://api.example.com/data',
method: 'GET',
options: {
response: {
response: {
responseFormat: 'json'
}
}
}
};
const properties = [
{ name: 'url', type: 'string', required: true },
{ name: 'options', type: 'collection', required: false }
];
const result = EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
const responseFormatSuggestion = result.suggestions.find(
(s: string) => s.includes('responseFormat')
);
expect(responseFormatSuggestion).toBeUndefined();
});
it('should warn about missing protocol in expression-based URLs', () => {
const nodeType = 'nodes-base.httpRequest';
const config = {
url: '=www.{{ $json.domain }}.com',
method: 'GET'
};
const properties = [
{ name: 'url', type: 'string', required: true }
];
const result = EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
expect(result.warnings).toContainEqual(
expect.objectContaining({
type: 'invalid_value',
property: 'url',
message: expect.stringContaining('missing http:// or https://')
})
);
});
it('should warn about missing protocol in expressions with template markers', () => {
const nodeType = 'nodes-base.httpRequest';
const config = {
url: '={{ $json.domain }}/api/data',
method: 'GET'
};
const properties = [
{ name: 'url', type: 'string', required: true }
];
const result = EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
expect(result.warnings).toContainEqual(
expect.objectContaining({
type: 'invalid_value',
property: 'url',
message: expect.stringContaining('missing http:// or https://')
})
);
});
it('should NOT warn when expression includes http protocol', () => {
const nodeType = 'nodes-base.httpRequest';
const config = {
url: '={{ "https://" + $json.domain + ".com" }}',
method: 'GET'
};
const properties = [
{ name: 'url', type: 'string', required: true }
];
const result = EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
const urlWarning = result.warnings.find(
(w: any) => w.property === 'url' && w.message.includes('protocol')
);
expect(urlWarning).toBeUndefined();
});
it('should NOT suggest responseFormat for non-API URLs', () => {
const nodeType = 'nodes-base.httpRequest';
const config = {
url: 'https://example.com/page.html',
method: 'GET',
options: {}
};
const properties = [
{ name: 'url', type: 'string', required: true }
];
const result = EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
const responseFormatSuggestion = result.suggestions.find(
(s: string) => s.includes('responseFormat')
);
expect(responseFormatSuggestion).toBeUndefined();
});
it('should detect missing protocol in expressions with uppercase HTTP', () => {
const nodeType = 'nodes-base.httpRequest';
const config = {
url: '={{ "HTTP://" + $json.domain + ".com" }}',
method: 'GET'
};
const properties = [
{ name: 'url', type: 'string', required: true }
];
const result = EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
// Should NOT warn because HTTP:// is present (case-insensitive)
expect(result.warnings).toHaveLength(0);
});
it('should NOT suggest responseFormat for false positive URLs', () => {
const nodeType = 'nodes-base.httpRequest';
const testUrls = [
'https://example.com/therapist-directory',
'https://restaurant-bookings.com/reserve',
'https://forest-management.org/data'
];
testUrls.forEach(url => {
const config = {
url,
method: 'GET',
options: {}
};
const properties = [
{ name: 'url', type: 'string', required: true }
];
const result = EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
const responseFormatSuggestion = result.suggestions.find(
(s: string) => s.includes('responseFormat')
);
expect(responseFormatSuggestion).toBeUndefined();
});
});
it('should suggest responseFormat for case-insensitive API paths', () => {
const nodeType = 'nodes-base.httpRequest';
const testUrls = [
'https://example.com/API/users',
'https://example.com/Rest/data',
'https://example.com/REST/v1/items'
];
testUrls.forEach(url => {
const config = {
url,
method: 'GET',
options: {}
};
const properties = [
{ name: 'url', type: 'string', required: true }
];
const result = EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
expect(result.suggestions).toContainEqual(
expect.stringContaining('responseFormat')
);
});
});
it('should handle null and undefined URLs gracefully', () => {
const nodeType = 'nodes-base.httpRequest';
const testConfigs = [
{ url: null, method: 'GET' },
{ url: undefined, method: 'GET' },
{ url: '', method: 'GET' }
];
testConfigs.forEach(config => {
const properties = [
{ name: 'url', type: 'string', required: true }
];
expect(() => {
EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
}).not.toThrow();
});
});
describe('AI Agent node validation', () => {
it('should call validateAIAgent for AI Agent nodes', () => {
const nodeType = 'nodes-langchain.agent';
const config = {
promptType: 'define',
text: 'You are a helpful assistant'
};
const properties = [
{ name: 'promptType', type: 'options', required: true },
{ name: 'text', type: 'string', required: false }
];
EnhancedConfigValidator.validateWithMode(
nodeType,
config,
properties,
'operation',
'ai-friendly'
);
// Verify the validator was called (fix for issue where it wasn't being called at all)
expect(NodeSpecificValidators.validateAIAgent).toHaveBeenCalledTimes(1);
// Verify it was called with a context object containing our config
const callArgs = (NodeSpecificValidators.validateAIAgent as any).mock.calls[0][0];
expect(callArgs).toHaveProperty('config');
expect(callArgs.config).toEqual(config);
expect(callArgs).toHaveProperty('errors');
expect(callArgs).toHaveProperty('warnings');
expect(callArgs).toHaveProperty('suggestions');
expect(callArgs).toHaveProperty('autofix');
});
});
});
});

View File

@@ -362,19 +362,19 @@ describe('N8nApiClient', () => {
it('should delete workflow successfully', async () => {
mockAxiosInstance.delete.mockResolvedValue({ data: {} });
await client.deleteWorkflow('123');
expect(mockAxiosInstance.delete).toHaveBeenCalledWith('/workflows/123');
});
it('should handle deletion error', async () => {
const error = {
message: 'Request failed',
response: { status: 404, data: { message: 'Not found' } }
};
await mockAxiosInstance.simulateError('delete', error);
try {
await client.deleteWorkflow('123');
expect.fail('Should have thrown an error');
@@ -386,178 +386,6 @@ describe('N8nApiClient', () => {
});
});
describe('activateWorkflow', () => {
beforeEach(() => {
client = new N8nApiClient(defaultConfig);
});
it('should activate workflow successfully', async () => {
const workflow = { id: '123', name: 'Test', active: false, nodes: [], connections: {} };
const activatedWorkflow = { ...workflow, active: true };
mockAxiosInstance.post.mockResolvedValue({ data: activatedWorkflow });
const result = await client.activateWorkflow('123');
expect(mockAxiosInstance.post).toHaveBeenCalledWith('/workflows/123/activate');
expect(result).toEqual(activatedWorkflow);
expect(result.active).toBe(true);
});
it('should handle activation error - no trigger nodes', async () => {
const error = {
message: 'Request failed',
response: { status: 400, data: { message: 'Workflow must have at least one trigger node' } }
};
await mockAxiosInstance.simulateError('post', error);
try {
await client.activateWorkflow('123');
expect.fail('Should have thrown an error');
} catch (err) {
expect(err).toBeInstanceOf(N8nValidationError);
expect((err as N8nValidationError).message).toContain('trigger node');
expect((err as N8nValidationError).statusCode).toBe(400);
}
});
it('should handle activation error - workflow not found', async () => {
const error = {
message: 'Request failed',
response: { status: 404, data: { message: 'Workflow not found' } }
};
await mockAxiosInstance.simulateError('post', error);
try {
await client.activateWorkflow('non-existent');
expect.fail('Should have thrown an error');
} catch (err) {
expect(err).toBeInstanceOf(N8nNotFoundError);
expect((err as N8nNotFoundError).message).toContain('not found');
expect((err as N8nNotFoundError).statusCode).toBe(404);
}
});
it('should handle activation error - workflow already active', async () => {
const error = {
message: 'Request failed',
response: { status: 400, data: { message: 'Workflow is already active' } }
};
await mockAxiosInstance.simulateError('post', error);
try {
await client.activateWorkflow('123');
expect.fail('Should have thrown an error');
} catch (err) {
expect(err).toBeInstanceOf(N8nValidationError);
expect((err as N8nValidationError).message).toContain('already active');
expect((err as N8nValidationError).statusCode).toBe(400);
}
});
it('should handle server error during activation', async () => {
const error = {
message: 'Request failed',
response: { status: 500, data: { message: 'Internal server error' } }
};
await mockAxiosInstance.simulateError('post', error);
try {
await client.activateWorkflow('123');
expect.fail('Should have thrown an error');
} catch (err) {
expect(err).toBeInstanceOf(N8nServerError);
expect((err as N8nServerError).message).toBe('Internal server error');
expect((err as N8nServerError).statusCode).toBe(500);
}
});
});
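// Status-code to error-class mapping exercised in this suite and the one below:
// 400 → N8nValidationError, 401 → N8nAuthenticationError,
// 404 → N8nNotFoundError, 500 → N8nServerError.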
describe('deactivateWorkflow', () => {
beforeEach(() => {
client = new N8nApiClient(defaultConfig);
});
it('should deactivate workflow successfully', async () => {
const workflow = { id: '123', name: 'Test', active: true, nodes: [], connections: {} };
const deactivatedWorkflow = { ...workflow, active: false };
mockAxiosInstance.post.mockResolvedValue({ data: deactivatedWorkflow });
const result = await client.deactivateWorkflow('123');
expect(mockAxiosInstance.post).toHaveBeenCalledWith('/workflows/123/deactivate');
expect(result).toEqual(deactivatedWorkflow);
expect(result.active).toBe(false);
});
it('should handle deactivation error - workflow not found', async () => {
const error = {
message: 'Request failed',
response: { status: 404, data: { message: 'Workflow not found' } }
};
await mockAxiosInstance.simulateError('post', error);
try {
await client.deactivateWorkflow('non-existent');
expect.fail('Should have thrown an error');
} catch (err) {
expect(err).toBeInstanceOf(N8nNotFoundError);
expect((err as N8nNotFoundError).message).toContain('not found');
expect((err as N8nNotFoundError).statusCode).toBe(404);
}
});
it('should handle deactivation error - workflow already inactive', async () => {
const error = {
message: 'Request failed',
response: { status: 400, data: { message: 'Workflow is already inactive' } }
};
await mockAxiosInstance.simulateError('post', error);
try {
await client.deactivateWorkflow('123');
expect.fail('Should have thrown an error');
} catch (err) {
expect(err).toBeInstanceOf(N8nValidationError);
expect((err as N8nValidationError).message).toContain('already inactive');
expect((err as N8nValidationError).statusCode).toBe(400);
}
});
it('should handle server error during deactivation', async () => {
const error = {
message: 'Request failed',
response: { status: 500, data: { message: 'Internal server error' } }
};
await mockAxiosInstance.simulateError('post', error);
try {
await client.deactivateWorkflow('123');
expect.fail('Should have thrown an error');
} catch (err) {
expect(err).toBeInstanceOf(N8nServerError);
expect((err as N8nServerError).message).toBe('Internal server error');
expect((err as N8nServerError).statusCode).toBe(500);
}
});
it('should handle authentication error during deactivation', async () => {
const error = {
message: 'Request failed',
response: { status: 401, data: { message: 'Invalid API key' } }
};
await mockAxiosInstance.simulateError('post', error);
try {
await client.deactivateWorkflow('123');
expect.fail('Should have thrown an error');
} catch (err) {
expect(err).toBeInstanceOf(N8nAuthenticationError);
expect((err as N8nAuthenticationError).message).toBe('Invalid API key');
expect((err as N8nAuthenticationError).statusCode).toBe(401);
}
});
});
describe('listWorkflows', () => {
beforeEach(() => {
client = new N8nApiClient(defaultConfig);
@@ -585,242 +413,6 @@ describe('N8nApiClient', () => {
});
});
describe('Response Format Validation (PR #367)', () => {
beforeEach(() => {
client = new N8nApiClient(defaultConfig);
});
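// Contract exercised by the four list* suites below: a modern
// { data: [], nextCursor?: string } envelope passes through unchanged, a bare
// array (legacy format) is wrapped as { data, nextCursor: null } with a logged
// warning, and any other shape throws an "Invalid response from n8n API" error.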
describe('listWorkflows - validation', () => {
it('should handle modern format with data and nextCursor', async () => {
const response = { data: [{ id: '1', name: 'Test' }], nextCursor: 'abc123' };
mockAxiosInstance.get.mockResolvedValue({ data: response });
const result = await client.listWorkflows();
expect(result).toEqual(response);
expect(result.data).toHaveLength(1);
expect(result.nextCursor).toBe('abc123');
});
it('should wrap legacy array format and log warning', async () => {
const workflows = [{ id: '1', name: 'Test' }];
mockAxiosInstance.get.mockResolvedValue({ data: workflows });
const result = await client.listWorkflows();
expect(result).toEqual({ data: workflows, nextCursor: null });
expect(logger.warn).toHaveBeenCalledWith(
expect.stringContaining('n8n API returned array directly')
);
expect(logger.warn).toHaveBeenCalledWith(
expect.stringContaining('workflows')
);
});
it('should throw error on null response', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: null });
await expect(client.listWorkflows()).rejects.toThrow(
'Invalid response from n8n API for workflows: response is not an object'
);
});
it('should throw error on undefined response', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: undefined });
await expect(client.listWorkflows()).rejects.toThrow(
'Invalid response from n8n API for workflows: response is not an object'
);
});
it('should throw error on string response', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: 'invalid' });
await expect(client.listWorkflows()).rejects.toThrow(
'Invalid response from n8n API for workflows: response is not an object'
);
});
it('should throw error on number response', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: 42 });
await expect(client.listWorkflows()).rejects.toThrow(
'Invalid response from n8n API for workflows: response is not an object'
);
});
it('should throw error on invalid structure with different keys', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: { items: [], total: 10 } });
await expect(client.listWorkflows()).rejects.toThrow(
'Invalid response from n8n API for workflows: expected {data: [], nextCursor?: string}, got object with keys: [items, total]'
);
});
it('should throw error when data is not an array', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: { data: 'invalid' } });
await expect(client.listWorkflows()).rejects.toThrow(
'Invalid response from n8n API for workflows: expected {data: [], nextCursor?: string}'
);
});
it('should limit exposed keys to first 5 when many keys present', async () => {
const manyKeys = { items: [], total: 10, page: 1, limit: 20, hasMore: true, metadata: {} };
mockAxiosInstance.get.mockResolvedValue({ data: manyKeys });
try {
await client.listWorkflows();
expect.fail('Should have thrown error');
} catch (error: any) {
expect(error.message).toContain('items, total, page, limit, hasMore...');
expect(error.message).not.toContain('metadata');
}
});
});
describe('listExecutions - validation', () => {
it('should handle modern format with data and nextCursor', async () => {
const response = { data: [{ id: '1' }], nextCursor: 'abc123' };
mockAxiosInstance.get.mockResolvedValue({ data: response });
const result = await client.listExecutions();
expect(result).toEqual(response);
});
it('should wrap legacy array format and log warning', async () => {
const executions = [{ id: '1' }];
mockAxiosInstance.get.mockResolvedValue({ data: executions });
const result = await client.listExecutions();
expect(result).toEqual({ data: executions, nextCursor: null });
expect(logger.warn).toHaveBeenCalledWith(
expect.stringContaining('executions')
);
});
it('should throw error on null response', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: null });
await expect(client.listExecutions()).rejects.toThrow(
'Invalid response from n8n API for executions: response is not an object'
);
});
it('should throw error on invalid structure', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: { items: [] } });
await expect(client.listExecutions()).rejects.toThrow(
'Invalid response from n8n API for executions'
);
});
it('should throw error when data is not an array', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: { data: 'invalid' } });
await expect(client.listExecutions()).rejects.toThrow(
'Invalid response from n8n API for executions'
);
});
});
describe('listCredentials - validation', () => {
it('should handle modern format with data and nextCursor', async () => {
const response = { data: [{ id: '1' }], nextCursor: 'abc123' };
mockAxiosInstance.get.mockResolvedValue({ data: response });
const result = await client.listCredentials();
expect(result).toEqual(response);
});
it('should wrap legacy array format and log warning', async () => {
const credentials = [{ id: '1' }];
mockAxiosInstance.get.mockResolvedValue({ data: credentials });
const result = await client.listCredentials();
expect(result).toEqual({ data: credentials, nextCursor: null });
expect(logger.warn).toHaveBeenCalledWith(
expect.stringContaining('credentials')
);
});
it('should throw error on null response', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: null });
await expect(client.listCredentials()).rejects.toThrow(
'Invalid response from n8n API for credentials: response is not an object'
);
});
it('should throw error on invalid structure', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: { items: [] } });
await expect(client.listCredentials()).rejects.toThrow(
'Invalid response from n8n API for credentials'
);
});
it('should throw error when data is not an array', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: { data: 'invalid' } });
await expect(client.listCredentials()).rejects.toThrow(
'Invalid response from n8n API for credentials'
);
});
});
describe('listTags - validation', () => {
it('should handle modern format with data and nextCursor', async () => {
const response = { data: [{ id: '1' }], nextCursor: 'abc123' };
mockAxiosInstance.get.mockResolvedValue({ data: response });
const result = await client.listTags();
expect(result).toEqual(response);
});
it('should wrap legacy array format and log warning', async () => {
const tags = [{ id: '1' }];
mockAxiosInstance.get.mockResolvedValue({ data: tags });
const result = await client.listTags();
expect(result).toEqual({ data: tags, nextCursor: null });
expect(logger.warn).toHaveBeenCalledWith(
expect.stringContaining('tags')
);
});
it('should throw error on null response', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: null });
await expect(client.listTags()).rejects.toThrow(
'Invalid response from n8n API for tags: response is not an object'
);
});
it('should throw error on invalid structure', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: { items: [] } });
await expect(client.listTags()).rejects.toThrow(
'Invalid response from n8n API for tags'
);
});
it('should throw error when data is not an array', async () => {
mockAxiosInstance.get.mockResolvedValue({ data: { data: 'invalid' } });
await expect(client.listTags()).rejects.toThrow(
'Invalid response from n8n API for tags'
);
});
});
});
describe('getExecution', () => {
beforeEach(() => {
client = new N8nApiClient(defaultConfig);

View File

@@ -313,7 +313,6 @@ describe('n8n-validation', () => {
createdAt: '2023-01-01',
updatedAt: '2023-01-01',
versionId: 'v123',
versionCounter: 5, // n8n 1.118.1+ field
meta: { test: 'data' },
staticData: { some: 'data' },
pinData: { pin: 'data' },
@@ -334,7 +333,6 @@ describe('n8n-validation', () => {
expect(cleaned).not.toHaveProperty('createdAt');
expect(cleaned).not.toHaveProperty('updatedAt');
expect(cleaned).not.toHaveProperty('versionId');
expect(cleaned).not.toHaveProperty('versionCounter'); // n8n 1.118.1+ compatibility
expect(cleaned).not.toHaveProperty('meta');
expect(cleaned).not.toHaveProperty('staticData');
expect(cleaned).not.toHaveProperty('pinData');
@@ -351,22 +349,6 @@ describe('n8n-validation', () => {
expect(cleaned.settings).toEqual({ executionOrder: 'v1' });
});
it('should exclude versionCounter for n8n 1.118.1+ compatibility', () => {
const workflow = {
name: 'Test Workflow',
nodes: [],
connections: {},
versionId: 'v123',
versionCounter: 5, // n8n 1.118.1 returns this but rejects it in PUT
} as any;
const cleaned = cleanWorkflowForUpdate(workflow);
expect(cleaned).not.toHaveProperty('versionCounter');
expect(cleaned).not.toHaveProperty('versionId');
expect(cleaned.name).toBe('Test Workflow');
});
it('should add empty settings object for cloud API compatibility', () => {
const workflow = {
name: 'Test Workflow',

View File

@@ -2303,416 +2303,9 @@ return [{"json": {"result": result}}]
message: 'Code nodes can throw errors - consider error handling',
suggestion: 'Add onError: "continueRegularOutput" to handle errors gracefully'
});
expect(context.autofix.onError).toBe('continueRegularOutput');
});
});
});
describe('validateAIAgent', () => {
let context: NodeValidationContext;
beforeEach(() => {
context = {
config: {},
errors: [],
warnings: [],
suggestions: [],
autofix: {}
};
});
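// For reference, a configuration that satisfies the individual checks below
// (combined here for illustration; the combination itself is not asserted):
//
//   {
//     promptType: 'define',
//     text: 'You are a helpful assistant that analyzes data.',
//     systemMessage: 'You are a helpful assistant that analyzes customer feedback.',
//     maxIterations: 15,
//     onError: 'continueRegularOutput',
//     retryOnFail: true
//   }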
describe('prompt configuration', () => {
it('should require text when promptType is "define"', () => {
context.config.promptType = 'define';
context.config.text = '';
NodeSpecificValidators.validateAIAgent(context);
expect(context.errors).toContainEqual({
type: 'missing_required',
property: 'text',
message: 'Custom prompt text is required when promptType is "define"',
fix: 'Provide a custom prompt in the text field, or change promptType to "auto"'
});
});
it('should not require text when promptType is "auto"', () => {
context.config.promptType = 'auto';
NodeSpecificValidators.validateAIAgent(context);
const textErrors = context.errors.filter(e => e.property === 'text');
expect(textErrors).toHaveLength(0);
});
it('should accept valid text with promptType "define"', () => {
context.config.promptType = 'define';
context.config.text = 'You are a helpful assistant that analyzes data.';
NodeSpecificValidators.validateAIAgent(context);
const textErrors = context.errors.filter(e => e.property === 'text');
expect(textErrors).toHaveLength(0);
});
it('should reject whitespace-only text with promptType "define"', () => {
// Edge case: Text is only whitespace
context.config.promptType = 'define';
context.config.text = ' \n\t ';
NodeSpecificValidators.validateAIAgent(context);
expect(context.errors).toContainEqual({
type: 'missing_required',
property: 'text',
message: 'Custom prompt text is required when promptType is "define"',
fix: 'Provide a custom prompt in the text field, or change promptType to "auto"'
});
});
it('should accept very long text with promptType "define"', () => {
// Edge case: Very long prompt text (common for complex AI agents)
context.config.promptType = 'define';
context.config.text = 'You are a helpful assistant. '.repeat(100); // ~2,900 characters
NodeSpecificValidators.validateAIAgent(context);
const textErrors = context.errors.filter(e => e.property === 'text');
expect(textErrors).toHaveLength(0);
});
it('should handle undefined text with promptType "define"', () => {
// Edge case: Text is undefined
context.config.promptType = 'define';
context.config.text = undefined;
NodeSpecificValidators.validateAIAgent(context);
expect(context.errors).toContainEqual({
type: 'missing_required',
property: 'text',
message: 'Custom prompt text is required when promptType is "define"',
fix: 'Provide a custom prompt in the text field, or change promptType to "auto"'
});
});
it('should handle null text with promptType "define"', () => {
// Edge case: Text is null
context.config.promptType = 'define';
context.config.text = null;
NodeSpecificValidators.validateAIAgent(context);
expect(context.errors).toContainEqual({
type: 'missing_required',
property: 'text',
message: 'Custom prompt text is required when promptType is "define"',
fix: 'Provide a custom prompt in the text field, or change promptType to "auto"'
});
});
});
describe('system message validation', () => {
it('should suggest adding system message when missing', () => {
context.config = {};
NodeSpecificValidators.validateAIAgent(context);
// Should contain a suggestion about system message
const hasSysMessageSuggestion = context.suggestions.some(s =>
s.toLowerCase().includes('system message')
);
expect(hasSysMessageSuggestion).toBe(true);
});
it('should warn when system message is too short', () => {
context.config.systemMessage = 'Help';
NodeSpecificValidators.validateAIAgent(context);
expect(context.warnings).toContainEqual({
type: 'inefficient',
property: 'systemMessage',
message: 'System message is very short (< 20 characters)',
suggestion: 'Consider a more detailed system message to guide the agent\'s behavior'
});
});
it('should accept adequate system message', () => {
context.config.systemMessage = 'You are a helpful assistant that analyzes customer feedback.';
NodeSpecificValidators.validateAIAgent(context);
const systemWarnings = context.warnings.filter(w => w.property === 'systemMessage');
expect(systemWarnings).toHaveLength(0);
});
it('should suggest adding system message when empty string', () => {
// Edge case: Empty string system message
context.config.systemMessage = '';
NodeSpecificValidators.validateAIAgent(context);
// Should contain a suggestion about system message
const hasSysMessageSuggestion = context.suggestions.some(s =>
s.toLowerCase().includes('system message')
);
expect(hasSysMessageSuggestion).toBe(true);
});
it('should suggest adding system message when whitespace only', () => {
// Edge case: Whitespace-only system message
context.config.systemMessage = ' \n\t ';
NodeSpecificValidators.validateAIAgent(context);
// Should contain a suggestion about system message
const hasSysMessageSuggestion = context.suggestions.some(s =>
s.toLowerCase().includes('system message')
);
expect(hasSysMessageSuggestion).toBe(true);
});
it('should accept very long system messages', () => {
// Edge case: Very long system message (>1000 chars) for complex agents
context.config.systemMessage = 'You are a highly specialized assistant. '.repeat(30); // ~1,200 chars
NodeSpecificValidators.validateAIAgent(context);
const systemWarnings = context.warnings.filter(w => w.property === 'systemMessage');
expect(systemWarnings).toHaveLength(0);
});
it('should handle system messages with special characters', () => {
// Edge case: System message with special characters, emojis, unicode
context.config.systemMessage = 'You are an assistant 🤖 that handles data with special chars: @#$%^&*(){}[]|\\/<>~`';
NodeSpecificValidators.validateAIAgent(context);
const systemWarnings = context.warnings.filter(w => w.property === 'systemMessage');
expect(systemWarnings).toHaveLength(0);
});
it('should handle system messages with newlines and formatting', () => {
// Edge case: Multi-line system message with formatting
context.config.systemMessage = `You are a helpful assistant.
Your responsibilities include:
1. Analyzing customer feedback
2. Generating reports
3. Providing insights
Always be professional and concise.`;
NodeSpecificValidators.validateAIAgent(context);
const systemWarnings = context.warnings.filter(w => w.property === 'systemMessage');
expect(systemWarnings).toHaveLength(0);
});
it('should warn about exactly 19 character system message', () => {
// Edge case: Just under the 20 character threshold
context.config.systemMessage = 'Be a good assistant'; // 19 chars
NodeSpecificValidators.validateAIAgent(context);
expect(context.warnings).toContainEqual({
type: 'inefficient',
property: 'systemMessage',
message: 'System message is very short (< 20 characters)',
suggestion: 'Consider a more detailed system message to guide the agent\'s behavior'
});
});
it('should not warn about exactly 20 character system message', () => {
// Edge case: Exactly at the 20 character threshold
context.config.systemMessage = 'Be a great assistant'; // 20 chars
NodeSpecificValidators.validateAIAgent(context);
const systemWarnings = context.warnings.filter(w => w.property === 'systemMessage');
expect(systemWarnings).toHaveLength(0);
});
});
describe('maxIterations validation', () => {
it('should reject invalid maxIterations values', () => {
context.config.maxIterations = -5;
NodeSpecificValidators.validateAIAgent(context);
expect(context.errors).toContainEqual({
type: 'invalid_value',
property: 'maxIterations',
message: 'maxIterations must be a positive number',
fix: 'Set maxIterations to a value >= 1 (e.g., 10)'
});
});
it('should warn about very high maxIterations', () => {
context.config.maxIterations = 100;
NodeSpecificValidators.validateAIAgent(context);
expect(context.warnings).toContainEqual(
expect.objectContaining({
type: 'inefficient',
property: 'maxIterations'
})
);
});
it('should accept reasonable maxIterations', () => {
context.config.maxIterations = 15;
NodeSpecificValidators.validateAIAgent(context);
const maxIterErrors = context.errors.filter(e => e.property === 'maxIterations');
expect(maxIterErrors).toHaveLength(0);
});
it('should reject maxIterations of 0', () => {
// Edge case: Zero iterations is invalid
context.config.maxIterations = 0;
NodeSpecificValidators.validateAIAgent(context);
expect(context.errors).toContainEqual({
type: 'invalid_value',
property: 'maxIterations',
message: 'maxIterations must be a positive number',
fix: 'Set maxIterations to a value >= 1 (e.g., 10)'
});
});
it('should accept maxIterations of 1', () => {
// Edge case: Minimum valid value
context.config.maxIterations = 1;
NodeSpecificValidators.validateAIAgent(context);
const maxIterErrors = context.errors.filter(e => e.property === 'maxIterations');
expect(maxIterErrors).toHaveLength(0);
});
it('should warn about maxIterations of 51', () => {
// Edge case: Just above the threshold (50)
context.config.maxIterations = 51;
NodeSpecificValidators.validateAIAgent(context);
expect(context.warnings).toContainEqual(
expect.objectContaining({
type: 'inefficient',
property: 'maxIterations',
message: expect.stringContaining('51')
})
);
});
it('should handle extreme maxIterations values', () => {
// Edge case: Very large number
context.config.maxIterations = Number.MAX_SAFE_INTEGER;
NodeSpecificValidators.validateAIAgent(context);
expect(context.warnings).toContainEqual(
expect.objectContaining({
type: 'inefficient',
property: 'maxIterations'
})
);
});
it('should reject NaN maxIterations', () => {
// Edge case: Not a number
context.config.maxIterations = 'invalid';
NodeSpecificValidators.validateAIAgent(context);
expect(context.errors).toContainEqual({
type: 'invalid_value',
property: 'maxIterations',
message: 'maxIterations must be a positive number',
fix: 'Set maxIterations to a value >= 1 (e.g., 10)'
});
});
it('should reject negative decimal maxIterations', () => {
// Edge case: Negative decimal
context.config.maxIterations = -0.5;
NodeSpecificValidators.validateAIAgent(context);
expect(context.errors).toContainEqual({
type: 'invalid_value',
property: 'maxIterations',
message: 'maxIterations must be a positive number',
fix: 'Set maxIterations to a value >= 1 (e.g., 10)'
});
});
});
describe('error handling', () => {
it('should suggest error handling when not configured', () => {
context.config = {};
NodeSpecificValidators.validateAIAgent(context);
expect(context.warnings).toContainEqual({
type: 'best_practice',
property: 'errorHandling',
message: 'AI models can fail due to API limits, rate limits, or invalid responses',
suggestion: 'Add onError: "continueRegularOutput" with retryOnFail for resilience'
});
expect(context.autofix).toMatchObject({
onError: 'continueRegularOutput',
retryOnFail: true,
maxTries: 2,
waitBetweenTries: 5000
});
});
it('should warn about deprecated continueOnFail', () => {
context.config.continueOnFail = true;
NodeSpecificValidators.validateAIAgent(context);
expect(context.warnings).toContainEqual({
type: 'deprecated',
property: 'continueOnFail',
message: 'continueOnFail is deprecated. Use onError instead',
suggestion: 'Replace with onError: "continueRegularOutput" or "stopWorkflow"'
});
});
});
describe('output parser and fallback warnings', () => {
it('should warn when output parser is enabled', () => {
context.config.hasOutputParser = true;
NodeSpecificValidators.validateAIAgent(context);
expect(context.warnings).toContainEqual(
expect.objectContaining({
property: 'hasOutputParser'
})
);
});
it('should warn when fallback model is enabled', () => {
context.config.needsFallback = true;
NodeSpecificValidators.validateAIAgent(context);
expect(context.warnings).toContainEqual(
expect.objectContaining({
property: 'needsFallback'
})
);
});
});
});
});

View File

@@ -380,52 +380,10 @@ describe('WorkflowDiffEngine', () => {
};
const result = await diffEngine.applyDiff(baseWorkflow, request);
expect(result.success).toBe(false);
expect(result.errors![0].message).toContain('Node not found');
});
it('should provide helpful error when using "changes" instead of "updates" (Issue #392)', async () => {
// Simulate the common mistake of using "changes" instead of "updates"
const operation: any = {
type: 'updateNode',
nodeId: 'http-1',
changes: { // Wrong property name
'parameters.url': 'https://example.com'
}
};
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations: [operation]
};
const result = await diffEngine.applyDiff(baseWorkflow, request);
expect(result.success).toBe(false);
expect(result.errors![0].message).toContain('Invalid parameter \'changes\'');
expect(result.errors![0].message).toContain('requires \'updates\'');
expect(result.errors![0].message).toContain('Example:');
});
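// The accepted shape, mirroring the Example embedded in the error text
// (the example wording itself is not asserted here):
//   { type: 'updateNode', nodeId: 'http-1', updates: { 'parameters.url': 'https://example.com' } }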
it('should provide helpful error when "updates" parameter is missing', async () => {
const operation: any = {
type: 'updateNode',
nodeId: 'http-1'
// Missing "updates" property
};
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations: [operation]
};
const result = await diffEngine.applyDiff(baseWorkflow, request);
expect(result.success).toBe(false);
expect(result.errors![0].message).toContain('Missing required parameter \'updates\'');
expect(result.errors![0].message).toContain('Example:');
});
});
describe('MoveNode Operation', () => {
@@ -1460,113 +1418,6 @@ describe('WorkflowDiffEngine', () => {
expect(result.workflow!.connections['Switch']['main'][2][0].node).toBe('Handler');
expect(result.workflow!.connections['Switch']['main'][1]).toEqual([]);
});
it('should warn when using sourceIndex with If node (issue #360)', async () => {
const addIF: any = {
type: 'addNode',
node: {
name: 'Check Condition',
type: 'n8n-nodes-base.if',
position: [400, 300]
}
};
const addSuccess: any = {
type: 'addNode',
node: {
name: 'Success Handler',
type: 'n8n-nodes-base.set',
position: [600, 200]
}
};
const addError: any = {
type: 'addNode',
node: {
name: 'Error Handler',
type: 'n8n-nodes-base.set',
position: [600, 400]
}
};
// BAD: Using sourceIndex with If node (reproduces issue #360)
const connectSuccess: any = {
type: 'addConnection',
source: 'Check Condition',
target: 'Success Handler',
sourceIndex: 0 // Should use branch="true" instead
};
const connectError: any = {
type: 'addConnection',
source: 'Check Condition',
target: 'Error Handler',
sourceIndex: 0 // Should use branch="false" instead - both will end up in main[0]!
};
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations: [addIF, addSuccess, addError, connectSuccess, connectError]
};
const result = await diffEngine.applyDiff(baseWorkflow, request);
expect(result.success).toBe(true);
// Should produce warnings
expect(result.warnings).toBeDefined();
expect(result.warnings!.length).toBe(2);
expect(result.warnings![0].message).toContain('Consider using branch="true" or branch="false"');
expect(result.warnings![0].message).toContain('If node outputs: main[0]=TRUE branch, main[1]=FALSE branch');
expect(result.warnings![1].message).toContain('Consider using branch="true" or branch="false"');
// Both connections end up in main[0] (the bug behavior)
expect(result.workflow!.connections['Check Condition']['main'][0].length).toBe(2);
expect(result.workflow!.connections['Check Condition']['main'][0][0].node).toBe('Success Handler');
expect(result.workflow!.connections['Check Condition']['main'][0][1].node).toBe('Error Handler');
});
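// The form the warning steers toward, so each branch lands in its own output slot
// (property name taken from the warning text; the operation schema is not asserted here):
//   { type: 'addConnection', source: 'Check Condition', target: 'Success Handler', branch: 'true' }
//   { type: 'addConnection', source: 'Check Condition', target: 'Error Handler', branch: 'false' }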
it('should warn when using sourceIndex with Switch node', async () => {
const addSwitch: any = {
type: 'addNode',
node: {
name: 'Switch',
type: 'n8n-nodes-base.switch',
position: [400, 300]
}
};
const addHandler: any = {
type: 'addNode',
node: {
name: 'Handler',
type: 'n8n-nodes-base.set',
position: [600, 300]
}
};
// BAD: Using sourceIndex with Switch node
const connect: any = {
type: 'addConnection',
source: 'Switch',
target: 'Handler',
sourceIndex: 1 // Should use case=1 instead
};
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations: [addSwitch, addHandler, connect]
};
const result = await diffEngine.applyDiff(baseWorkflow, request);
expect(result.success).toBe(true);
// Should produce warning
expect(result.warnings).toBeDefined();
expect(result.warnings!.length).toBe(1);
expect(result.warnings![0].message).toContain('Consider using case=N for better clarity');
});
});
describe('AddConnection with sourceIndex (Phase 0 Fix)', () => {
@@ -4311,358 +4162,4 @@ describe('WorkflowDiffEngine', () => {
expect(result.workflow.connections["When clicking 'Execute workflow'"]).toBeDefined();
});
});
describe('Workflow Activation/Deactivation Operations', () => {
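// Note: the diff engine only records intent via shouldActivate / shouldDeactivate;
// the actual activate/deactivate API calls happen in the handler (covered by the
// handlers-workflow-diff tests earlier in this diff).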
it('should activate workflow with activatable trigger nodes', async () => {
// Create workflow with webhook trigger (activatable)
const workflowWithTrigger = createWorkflow('Test Workflow')
.addWebhookNode({ id: 'webhook-1', name: 'Webhook Trigger' })
.addHttpRequestNode({ id: 'http-1', name: 'HTTP Request' })
.connect('webhook-1', 'http-1')
.build() as Workflow;
// Fix connections to use node names
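// (n8n keys the connections map by node name, while the builder above keyed it
// by node id, so ids are remapped to names before applying the diff. The same
// remapping loop repeats in the tests that follow.)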
const newConnections: any = {};
for (const [nodeId, outputs] of Object.entries(workflowWithTrigger.connections)) {
const node = workflowWithTrigger.nodes.find((n: any) => n.id === nodeId);
if (node) {
newConnections[node.name] = {};
for (const [outputName, connections] of Object.entries(outputs)) {
newConnections[node.name][outputName] = (connections as any[]).map((conns: any) =>
conns.map((conn: any) => {
const targetNode = workflowWithTrigger.nodes.find((n: any) => n.id === conn.node);
return { ...conn, node: targetNode ? targetNode.name : conn.node };
})
);
}
}
}
workflowWithTrigger.connections = newConnections;
const operation: any = {
type: 'activateWorkflow'
};
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations: [operation]
};
const result = await diffEngine.applyDiff(workflowWithTrigger, request);
expect(result.success).toBe(true);
expect(result.shouldActivate).toBe(true);
expect((result.workflow as any)._shouldActivate).toBeUndefined(); // Flag should be cleaned up
});
it('should reject activation if no activatable trigger nodes', async () => {
// Create workflow with no trigger nodes at all
const workflowWithoutActivatableTrigger = createWorkflow('Test Workflow')
.addNode({
id: 'set-1',
name: 'Set Node',
type: 'n8n-nodes-base.set',
typeVersion: 1,
position: [100, 100],
parameters: {}
})
.addHttpRequestNode({ id: 'http-1', name: 'HTTP Request' })
.connect('set-1', 'http-1')
.build() as Workflow;
// Fix connections to use node names
const newConnections: any = {};
for (const [nodeId, outputs] of Object.entries(workflowWithoutActivatableTrigger.connections)) {
const node = workflowWithoutActivatableTrigger.nodes.find((n: any) => n.id === nodeId);
if (node) {
newConnections[node.name] = {};
for (const [outputName, connections] of Object.entries(outputs)) {
newConnections[node.name][outputName] = (connections as any[]).map((conns: any) =>
conns.map((conn: any) => {
const targetNode = workflowWithoutActivatableTrigger.nodes.find((n: any) => n.id === conn.node);
return { ...conn, node: targetNode ? targetNode.name : conn.node };
})
);
}
}
}
workflowWithoutActivatableTrigger.connections = newConnections;
const operation: any = {
type: 'activateWorkflow'
};
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations: [operation]
};
const result = await diffEngine.applyDiff(workflowWithoutActivatableTrigger, request);
expect(result.success).toBe(false);
expect(result.errors).toBeDefined();
expect(result.errors![0].message).toContain('No activatable trigger nodes found');
expect(result.errors![0].message).toContain('executeWorkflowTrigger cannot activate workflows');
});
it('should reject activation if all trigger nodes are disabled', async () => {
// Create workflow with disabled webhook trigger
const workflowWithDisabledTrigger = createWorkflow('Test Workflow')
.addWebhookNode({ id: 'webhook-1', name: 'Webhook Trigger', disabled: true })
.addHttpRequestNode({ id: 'http-1', name: 'HTTP Request' })
.connect('webhook-1', 'http-1')
.build() as Workflow;
// Fix connections to use node names
const newConnections: any = {};
for (const [nodeId, outputs] of Object.entries(workflowWithDisabledTrigger.connections)) {
const node = workflowWithDisabledTrigger.nodes.find((n: any) => n.id === nodeId);
if (node) {
newConnections[node.name] = {};
for (const [outputName, connections] of Object.entries(outputs)) {
newConnections[node.name][outputName] = (connections as any[]).map((conns: any) =>
conns.map((conn: any) => {
const targetNode = workflowWithDisabledTrigger.nodes.find((n: any) => n.id === conn.node);
return { ...conn, node: targetNode ? targetNode.name : conn.node };
})
);
}
}
}
workflowWithDisabledTrigger.connections = newConnections;
const operation: any = {
type: 'activateWorkflow'
};
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations: [operation]
};
const result = await diffEngine.applyDiff(workflowWithDisabledTrigger, request);
expect(result.success).toBe(false);
expect(result.errors).toBeDefined();
expect(result.errors![0].message).toContain('No activatable trigger nodes found');
});
it('should activate workflow with schedule trigger', async () => {
// Create workflow with schedule trigger (activatable)
const workflowWithSchedule = createWorkflow('Test Workflow')
.addNode({
id: 'schedule-1',
name: 'Schedule',
type: 'n8n-nodes-base.scheduleTrigger',
typeVersion: 1,
position: [100, 100],
parameters: { rule: { interval: [{ field: 'hours', hoursInterval: 1 }] } }
})
.addHttpRequestNode({ id: 'http-1', name: 'HTTP Request' })
.connect('schedule-1', 'http-1')
.build() as Workflow;
// Fix connections
const newConnections: any = {};
for (const [nodeId, outputs] of Object.entries(workflowWithSchedule.connections)) {
const node = workflowWithSchedule.nodes.find((n: any) => n.id === nodeId);
if (node) {
newConnections[node.name] = {};
for (const [outputName, connections] of Object.entries(outputs)) {
newConnections[node.name][outputName] = (connections as any[]).map((conns: any) =>
conns.map((conn: any) => {
const targetNode = workflowWithSchedule.nodes.find((n: any) => n.id === conn.node);
return { ...conn, node: targetNode ? targetNode.name : conn.node };
})
);
}
}
}
workflowWithSchedule.connections = newConnections;
const operation: any = {
type: 'activateWorkflow'
};
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations: [operation]
};
const result = await diffEngine.applyDiff(workflowWithSchedule, request);
expect(result.success).toBe(true);
expect(result.shouldActivate).toBe(true);
});
it('should deactivate workflow successfully', async () => {
// Any workflow can be deactivated
const operation: any = {
type: 'deactivateWorkflow'
};
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations: [operation]
};
const result = await diffEngine.applyDiff(baseWorkflow, request);
expect(result.success).toBe(true);
expect(result.shouldDeactivate).toBe(true);
expect((result.workflow as any)._shouldDeactivate).toBeUndefined(); // Flag should be cleaned up
});
it('should deactivate workflow without trigger nodes', async () => {
// Create workflow without any trigger nodes
const workflowWithoutTrigger = createWorkflow('Test Workflow')
.addHttpRequestNode({ id: 'http-1', name: 'HTTP Request' })
.addNode({
id: 'set-1',
name: 'Set',
type: 'n8n-nodes-base.set',
typeVersion: 1,
position: [300, 100],
parameters: {}
})
.connect('http-1', 'set-1')
.build() as Workflow;
// Fix connections
const newConnections: any = {};
for (const [nodeId, outputs] of Object.entries(workflowWithoutTrigger.connections)) {
const node = workflowWithoutTrigger.nodes.find((n: any) => n.id === nodeId);
if (node) {
newConnections[node.name] = {};
for (const [outputName, connections] of Object.entries(outputs)) {
newConnections[node.name][outputName] = (connections as any[]).map((conns: any) =>
conns.map((conn: any) => {
const targetNode = workflowWithoutTrigger.nodes.find((n: any) => n.id === conn.node);
return { ...conn, node: targetNode ? targetNode.name : conn.node };
})
);
}
}
}
workflowWithoutTrigger.connections = newConnections;
const operation: any = {
type: 'deactivateWorkflow'
};
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations: [operation]
};
const result = await diffEngine.applyDiff(workflowWithoutTrigger, request);
expect(result.success).toBe(true);
expect(result.shouldDeactivate).toBe(true);
});
it('should combine activation with other operations', async () => {
// Create workflow with webhook trigger
const workflowWithTrigger = createWorkflow('Test Workflow')
.addWebhookNode({ id: 'webhook-1', name: 'Webhook Trigger' })
.addHttpRequestNode({ id: 'http-1', name: 'HTTP Request' })
.connect('webhook-1', 'http-1')
.build() as Workflow;
// Fix connections
const newConnections: any = {};
for (const [nodeId, outputs] of Object.entries(workflowWithTrigger.connections)) {
const node = workflowWithTrigger.nodes.find((n: any) => n.id === nodeId);
if (node) {
newConnections[node.name] = {};
for (const [outputName, connections] of Object.entries(outputs)) {
newConnections[node.name][outputName] = (connections as any[]).map((conns: any) =>
conns.map((conn: any) => {
const targetNode = workflowWithTrigger.nodes.find((n: any) => n.id === conn.node);
return { ...conn, node: targetNode ? targetNode.name : conn.node };
})
);
}
}
}
workflowWithTrigger.connections = newConnections;
const operations: any[] = [
{
type: 'updateName',
name: 'Updated Workflow Name'
},
{
type: 'addTag',
tag: 'production'
},
{
type: 'activateWorkflow'
}
];
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations
};
const result = await diffEngine.applyDiff(workflowWithTrigger, request);
expect(result.success).toBe(true);
expect(result.operationsApplied).toBe(3);
expect(result.workflow!.name).toBe('Updated Workflow Name');
expect(result.workflow!.tags).toContain('production');
expect(result.shouldActivate).toBe(true);
});
it('should reject activation if workflow has executeWorkflowTrigger only', async () => {
// Create workflow with executeWorkflowTrigger (not activatable - Issue #351)
const workflowWithExecuteTrigger = createWorkflow('Test Workflow')
.addNode({
id: 'execute-1',
name: 'Execute Workflow Trigger',
type: 'n8n-nodes-base.executeWorkflowTrigger',
typeVersion: 1,
position: [100, 100],
parameters: {}
})
.addHttpRequestNode({ id: 'http-1', name: 'HTTP Request' })
.connect('execute-1', 'http-1')
.build() as Workflow;
// Fix connections
const newConnections: any = {};
for (const [nodeId, outputs] of Object.entries(workflowWithExecuteTrigger.connections)) {
const node = workflowWithExecuteTrigger.nodes.find((n: any) => n.id === nodeId);
if (node) {
newConnections[node.name] = {};
for (const [outputName, connections] of Object.entries(outputs)) {
newConnections[node.name][outputName] = (connections as any[]).map((conns: any) =>
conns.map((conn: any) => {
const targetNode = workflowWithExecuteTrigger.nodes.find((n: any) => n.id === conn.node);
return { ...conn, node: targetNode ? targetNode.name : conn.node };
})
);
}
}
}
workflowWithExecuteTrigger.connections = newConnections;
const operation: any = {
type: 'activateWorkflow'
};
const request: WorkflowDiffRequest = {
id: 'test-workflow',
operations: [operation]
};
const result = await diffEngine.applyDiff(workflowWithExecuteTrigger, request);
expect(result.success).toBe(false);
expect(result.errors).toBeDefined();
expect(result.errors![0].message).toContain('No activatable trigger nodes found');
expect(result.errors![0].message).toContain('executeWorkflowTrigger cannot activate workflows');
});
});
});
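
The id-to-name connection remapping loop is repeated in several of the activation tests above. A minimal helper sketch (hypothetical name, not part of the test file) that performs the same transformation in one place:

function remapConnectionsToNodeNames(workflow: any): void {
  // Build an id -> name lookup, then rewrite the connection map so both the
  // source keys and the target references use node names instead of ids.
  const idToName = new Map<string, string>(
    workflow.nodes.map((n: any) => [n.id, n.name] as [string, string])
  );
  const remapped: any = {};
  for (const [sourceId, outputs] of Object.entries(workflow.connections)) {
    const sourceName = idToName.get(sourceId);
    if (!sourceName) continue;
    remapped[sourceName] = {};
    for (const [outputName, groups] of Object.entries(outputs as Record<string, any[]>)) {
      remapped[sourceName][outputName] = groups.map(group =>
        group.map((conn: any) => ({ ...conn, node: idToName.get(conn.node) ?? conn.node }))
      );
    }
  }
  workflow.connections = remapped;
}

// Inside a test this would replace the inline loop, e.g.:
// remapConnectionsToNodeNames(workflowWithDisabledTrigger);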

View File

@@ -278,297 +278,9 @@ describe('WorkflowValidator', () => {
describe('validation options', () => {
it('should support profiles when different validation levels are needed', () => {
const profiles = ['minimal', 'runtime', 'ai-friendly', 'strict'];
expect(profiles).toContain('minimal');
expect(profiles).toContain('runtime');
});
});
describe('duplicate node ID validation', () => {
it('should detect duplicate node IDs and provide helpful context', () => {
const workflow = {
name: 'Test Workflow with Duplicate IDs',
nodes: [
{
id: 'abc123',
name: 'First Node',
type: 'n8n-nodes-base.httpRequest',
typeVersion: 3,
position: [250, 300],
parameters: {}
},
{
id: 'abc123', // Duplicate ID
name: 'Second Node',
type: 'n8n-nodes-base.set',
typeVersion: 2,
position: [450, 300],
parameters: {}
}
],
connections: {}
};
// Simulate validation logic
const nodeIds = new Set<string>();
const nodeIdToIndex = new Map<string, number>();
const errors: Array<{ message: string }> = [];
for (let i = 0; i < workflow.nodes.length; i++) {
const node = workflow.nodes[i];
if (nodeIds.has(node.id)) {
const firstNodeIndex = nodeIdToIndex.get(node.id);
const firstNode = firstNodeIndex !== undefined ? workflow.nodes[firstNodeIndex] : undefined;
errors.push({
message: `Duplicate node ID: "${node.id}". Node at index ${i} (name: "${node.name}", type: "${node.type}") conflicts with node at index ${firstNodeIndex} (name: "${firstNode?.name || 'unknown'}", type: "${firstNode?.type || 'unknown'}")`
});
} else {
nodeIds.add(node.id);
nodeIdToIndex.set(node.id, i);
}
}
expect(errors).toHaveLength(1);
expect(errors[0].message).toContain('Duplicate node ID: "abc123"');
expect(errors[0].message).toContain('index 1');
expect(errors[0].message).toContain('Second Node');
expect(errors[0].message).toContain('n8n-nodes-base.set');
expect(errors[0].message).toContain('index 0');
expect(errors[0].message).toContain('First Node');
});
it('should include UUID generation example in error message context', () => {
const workflow = {
name: 'Test',
nodes: [
{ id: 'dup', name: 'A', type: 'n8n-nodes-base.webhook', typeVersion: 1, position: [0, 0], parameters: {} },
{ id: 'dup', name: 'B', type: 'n8n-nodes-base.webhook', typeVersion: 1, position: [0, 0], parameters: {} }
],
connections: {}
};
// Error message should contain UUID example pattern
const expectedPattern = /crypto\.randomUUID\(\)/;
// Sanity-check of the pattern itself only; it does not exercise the validator's actual error message
expect(expectedPattern.test('crypto.randomUUID()')).toBe(true);
});
it('should detect multiple nodes with the same duplicate ID', () => {
// Edge case: Three or more nodes with the same ID
const workflow = {
name: 'Test Workflow with Multiple Duplicates',
nodes: [
{
id: 'shared-id',
name: 'First Node',
type: 'n8n-nodes-base.httpRequest',
typeVersion: 3,
position: [250, 300],
parameters: {}
},
{
id: 'shared-id', // Duplicate 1
name: 'Second Node',
type: 'n8n-nodes-base.set',
typeVersion: 2,
position: [450, 300],
parameters: {}
},
{
id: 'shared-id', // Duplicate 2
name: 'Third Node',
type: 'n8n-nodes-base.code',
typeVersion: 1,
position: [650, 300],
parameters: {}
}
],
connections: {}
};
// Simulate validation logic
const nodeIds = new Set<string>();
const nodeIdToIndex = new Map<string, number>();
const errors: Array<{ message: string }> = [];
for (let i = 0; i < workflow.nodes.length; i++) {
const node = workflow.nodes[i];
if (nodeIds.has(node.id)) {
const firstNodeIndex = nodeIdToIndex.get(node.id);
const firstNode = firstNodeIndex !== undefined ? workflow.nodes[firstNodeIndex] : undefined;
errors.push({
message: `Duplicate node ID: "${node.id}". Node at index ${i} (name: "${node.name}", type: "${node.type}") conflicts with node at index ${firstNodeIndex} (name: "${firstNode?.name || 'unknown'}", type: "${firstNode?.type || 'unknown'}")`
});
} else {
nodeIds.add(node.id);
nodeIdToIndex.set(node.id, i);
}
}
// Should report 2 errors (nodes at index 1 and 2 both conflict with node at index 0)
expect(errors).toHaveLength(2);
expect(errors[0].message).toContain('index 1');
expect(errors[0].message).toContain('Second Node');
expect(errors[1].message).toContain('index 2');
expect(errors[1].message).toContain('Third Node');
});
it('should handle duplicate IDs with same node type', () => {
// Edge case: Both nodes are the same type
const workflow = {
name: 'Test Workflow with Same Type Duplicates',
nodes: [
{
id: 'duplicate-slack',
name: 'Slack Send 1',
type: 'n8n-nodes-base.slack',
typeVersion: 2,
position: [250, 300],
parameters: {}
},
{
id: 'duplicate-slack',
name: 'Slack Send 2',
type: 'n8n-nodes-base.slack',
typeVersion: 2,
position: [450, 300],
parameters: {}
}
],
connections: {}
};
// Simulate validation logic
const nodeIds = new Set<string>();
const nodeIdToIndex = new Map<string, number>();
const errors: Array<{ message: string }> = [];
for (let i = 0; i < workflow.nodes.length; i++) {
const node = workflow.nodes[i];
if (nodeIds.has(node.id)) {
const firstNodeIndex = nodeIdToIndex.get(node.id);
const firstNode = firstNodeIndex !== undefined ? workflow.nodes[firstNodeIndex] : undefined;
errors.push({
message: `Duplicate node ID: "${node.id}". Node at index ${i} (name: "${node.name}", type: "${node.type}") conflicts with node at index ${firstNodeIndex} (name: "${firstNode?.name || 'unknown'}", type: "${firstNode?.type || 'unknown'}")`
});
} else {
nodeIds.add(node.id);
nodeIdToIndex.set(node.id, i);
}
}
expect(errors).toHaveLength(1);
expect(errors[0].message).toContain('Duplicate node ID: "duplicate-slack"');
expect(errors[0].message).toContain('Slack Send 2');
expect(errors[0].message).toContain('Slack Send 1');
// Both should show the same type
expect(errors[0].message).toMatch(/n8n-nodes-base\.slack.*n8n-nodes-base\.slack/s);
});
it('should handle duplicate IDs with empty node names gracefully', () => {
// Edge case: Empty string node names
const workflow = {
name: 'Test Workflow with Empty Names',
nodes: [
{
id: 'empty-name-id',
name: '',
type: 'n8n-nodes-base.httpRequest',
typeVersion: 3,
position: [250, 300],
parameters: {}
},
{
id: 'empty-name-id',
name: '',
type: 'n8n-nodes-base.set',
typeVersion: 2,
position: [450, 300],
parameters: {}
}
],
connections: {}
};
// Simulate validation logic with safe fallback
const nodeIds = new Set<string>();
const nodeIdToIndex = new Map<string, number>();
const errors: Array<{ message: string }> = [];
for (let i = 0; i < workflow.nodes.length; i++) {
const node = workflow.nodes[i];
if (nodeIds.has(node.id)) {
const firstNodeIndex = nodeIdToIndex.get(node.id);
const firstNode = firstNodeIndex !== undefined ? workflow.nodes[firstNodeIndex] : undefined;
errors.push({
message: `Duplicate node ID: "${node.id}". Node at index ${i} (name: "${node.name}", type: "${node.type}") conflicts with node at index ${firstNodeIndex} (name: "${firstNode?.name || 'unknown'}", type: "${firstNode?.type || 'unknown'}")`
});
} else {
nodeIds.add(node.id);
nodeIdToIndex.set(node.id, i);
}
}
// Should not crash and should use empty string in message
expect(errors).toHaveLength(1);
expect(errors[0].message).toContain('Duplicate node ID');
expect(errors[0].message).toContain('name: ""');
});
it('should handle duplicate IDs with missing node properties', () => {
// Edge case: Node with undefined type or name
const workflow = {
name: 'Test Workflow with Missing Properties',
nodes: [
{
id: 'missing-props',
name: 'Valid Node',
type: 'n8n-nodes-base.httpRequest',
typeVersion: 3,
position: [250, 300],
parameters: {}
},
{
id: 'missing-props',
name: undefined as any,
type: undefined as any,
typeVersion: 2,
position: [450, 300],
parameters: {}
}
],
connections: {}
};
// Simulate validation logic with safe fallbacks
const nodeIds = new Set<string>();
const nodeIdToIndex = new Map<string, number>();
const errors: Array<{ message: string }> = [];
for (let i = 0; i < workflow.nodes.length; i++) {
const node = workflow.nodes[i];
if (nodeIds.has(node.id)) {
const firstNodeIndex = nodeIdToIndex.get(node.id);
const firstNode = firstNodeIndex !== undefined ? workflow.nodes[firstNodeIndex] : undefined;
errors.push({
message: `Duplicate node ID: "${node.id}". Node at index ${i} (name: "${node.name}", type: "${node.type}") conflicts with node at index ${firstNodeIndex} (name: "${firstNode?.name || 'unknown'}", type: "${firstNode?.type || 'unknown'}")`
});
} else {
nodeIds.add(node.id);
nodeIdToIndex.set(node.id, i);
}
}
// Should use fallback values without crashing
expect(errors).toHaveLength(1);
expect(errors[0].message).toContain('Duplicate node ID: "missing-props"');
expect(errors[0].message).toContain('name: "undefined"');
expect(errors[0].message).toContain('type: "undefined"');
});
});
});
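
Each duplicate-ID test above re-implements the same detection loop inline. The shared logic reads as one small function; a sketch (hypothetical helper mirroring the inline loops, not the real WorkflowValidator implementation):

function findDuplicateNodeIds(
  nodes: Array<{ id: string; name?: string; type?: string }>
): Array<{ message: string }> {
  const firstIndexById = new Map<string, number>();
  const errors: Array<{ message: string }> = [];
  nodes.forEach((node, i) => {
    const firstIndex = firstIndexById.get(node.id);
    if (firstIndex === undefined) {
      // First occurrence of this id: remember where it appeared.
      firstIndexById.set(node.id, i);
      return;
    }
    const firstNode = nodes[firstIndex];
    errors.push({
      message: `Duplicate node ID: "${node.id}". Node at index ${i} (name: "${node.name}", type: "${node.type}") conflicts with node at index ${firstIndex} (name: "${firstNode?.name || 'unknown'}", type: "${firstNode?.type || 'unknown'}")`
    });
  });
  return errors;
}

// Each test then reduces to: expect(findDuplicateNodeIds(workflow.nodes)).toHaveLength(expectedCount);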

View File

@@ -1,577 +0,0 @@
/**
* Unit tests for MutationTracker - Sanitization and Processing
*/
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { MutationTracker } from '../../../src/telemetry/mutation-tracker';
import { WorkflowMutationData, MutationToolName } from '../../../src/telemetry/mutation-types';
describe('MutationTracker', () => {
let tracker: MutationTracker;
beforeEach(() => {
tracker = new MutationTracker();
tracker.clearRecentMutations();
});
describe('Workflow Sanitization', () => {
it('should remove credentials from workflow level', async () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test sanitization',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {},
credentials: { apiKey: 'secret-key-123' },
sharedWorkflows: ['user1', 'user2'],
ownedBy: { id: 'user1', email: 'user@example.com' }
},
workflowAfter: {
id: 'wf1',
name: 'Test Updated',
nodes: [],
connections: {},
credentials: { apiKey: 'secret-key-456' }
},
mutationSuccess: true,
durationMs: 100
};
const result = await tracker.processMutation(data, 'test-user');
expect(result).toBeTruthy();
expect(result!.workflowBefore).toBeDefined();
expect(result!.workflowBefore.credentials).toBeUndefined();
expect(result!.workflowBefore.sharedWorkflows).toBeUndefined();
expect(result!.workflowBefore.ownedBy).toBeUndefined();
expect(result!.workflowAfter.credentials).toBeUndefined();
});
it('should remove credentials from node level', async () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test node credentials',
operations: [{ type: 'addNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [
{
id: 'node1',
name: 'HTTP Request',
type: 'n8n-nodes-base.httpRequest',
position: [100, 100],
credentials: {
httpBasicAuth: {
id: 'cred-123',
name: 'My Auth'
}
},
parameters: {
url: 'https://api.example.com'
}
}
],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [
{
id: 'node1',
name: 'HTTP Request',
type: 'n8n-nodes-base.httpRequest',
position: [100, 100],
credentials: {
httpBasicAuth: {
id: 'cred-456',
name: 'Updated Auth'
}
},
parameters: {
url: 'https://api.example.com'
}
}
],
connections: {}
},
mutationSuccess: true,
durationMs: 150
};
const result = await tracker.processMutation(data, 'test-user');
expect(result).toBeTruthy();
expect(result!.workflowBefore.nodes[0].credentials).toBeUndefined();
expect(result!.workflowAfter.nodes[0].credentials).toBeUndefined();
});
it('should redact API keys in parameters', async () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test API key redaction',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [
{
id: 'node1',
name: 'OpenAI',
type: 'n8n-nodes-base.openAi',
position: [100, 100],
parameters: {
apiKeyField: 'sk-1234567890abcdef1234567890abcdef',
tokenField: 'Bearer abc123def456',
config: {
passwordField: 'secret-password-123'
}
}
}
],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [
{
id: 'node1',
name: 'OpenAI',
type: 'n8n-nodes-base.openAi',
position: [100, 100],
parameters: {
apiKeyField: 'sk-newkey567890abcdef1234567890abcdef'
}
}
],
connections: {}
},
mutationSuccess: true,
durationMs: 200
};
const result = await tracker.processMutation(data, 'test-user');
expect(result).toBeTruthy();
const params = result!.workflowBefore.nodes[0].parameters;
// Fields with sensitive key names are redacted
expect(params.apiKeyField).toBe('[REDACTED]');
expect(params.tokenField).toBe('[REDACTED]');
expect(params.config.passwordField).toBe('[REDACTED]');
});
it('should redact URLs with authentication', async () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test URL redaction',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [
{
id: 'node1',
name: 'HTTP Request',
type: 'n8n-nodes-base.httpRequest',
position: [100, 100],
parameters: {
url: 'https://user:password@api.example.com/endpoint',
webhookUrl: 'http://admin:secret@webhook.example.com'
}
}
],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = await tracker.processMutation(data, 'test-user');
expect(result).toBeTruthy();
const params = result!.workflowBefore.nodes[0].parameters;
// URL auth is redacted but path is preserved
expect(params.url).toBe('[REDACTED_URL_WITH_AUTH]/endpoint');
expect(params.webhookUrl).toBe('[REDACTED_URL_WITH_AUTH]');
});
it('should redact long tokens (32+ characters)', async () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test token redaction',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [
{
id: 'node1',
name: 'Slack',
type: 'n8n-nodes-base.slack',
position: [100, 100],
parameters: {
message: 'Token: test-token-1234567890-1234567890123-abcdefghijklmnopqrstuvwx'
}
}
],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = await tracker.processMutation(data, 'test-user');
expect(result).toBeTruthy();
const message = result!.workflowBefore.nodes[0].parameters.message;
expect(message).toContain('[REDACTED_TOKEN]');
});
it('should redact OpenAI-style keys', async () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test OpenAI key redaction',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [
{
id: 'node1',
name: 'Code',
type: 'n8n-nodes-base.code',
position: [100, 100],
parameters: {
code: 'const apiKey = "sk-proj-abcd1234efgh5678ijkl9012mnop3456";'
}
}
],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = await tracker.processMutation(data, 'test-user');
expect(result).toBeTruthy();
const code = result!.workflowBefore.nodes[0].parameters.code;
// The 32+ char regex runs before OpenAI-specific regex, so it becomes [REDACTED_TOKEN]
expect(code).toContain('[REDACTED_TOKEN]');
});
it('should redact Bearer tokens', async () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test Bearer token redaction',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [
{
id: 'node1',
name: 'HTTP Request',
type: 'n8n-nodes-base.httpRequest',
position: [100, 100],
parameters: {
headerParameters: {
parameter: [
{
name: 'Authorization',
value: 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c'
}
]
}
}
}
],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = await tracker.processMutation(data, 'test-user');
expect(result).toBeTruthy();
const authValue = result!.workflowBefore.nodes[0].parameters.headerParameters.parameter[0].value;
expect(authValue).toBe('Bearer [REDACTED]');
});
it('should preserve workflow structure while sanitizing', async () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test structure preservation',
operations: [{ type: 'addNode' }],
workflowBefore: {
id: 'wf1',
name: 'My Workflow',
nodes: [
{
id: 'node1',
name: 'Start',
type: 'n8n-nodes-base.start',
position: [100, 100],
parameters: {}
},
{
id: 'node2',
name: 'HTTP',
type: 'n8n-nodes-base.httpRequest',
position: [300, 100],
parameters: {
url: 'https://api.example.com',
apiKey: 'secret-key'
}
}
],
connections: {
Start: {
main: [[{ node: 'HTTP', type: 'main', index: 0 }]]
}
},
active: true,
credentials: { apiKey: 'workflow-secret' }
},
workflowAfter: {
id: 'wf1',
name: 'My Workflow',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 150
};
const result = await tracker.processMutation(data, 'test-user');
expect(result).toBeTruthy();
// Check structure preserved
expect(result!.workflowBefore.id).toBe('wf1');
expect(result!.workflowBefore.name).toBe('My Workflow');
expect(result!.workflowBefore.nodes).toHaveLength(2);
expect(result!.workflowBefore.connections).toBeDefined();
expect(result!.workflowBefore.active).toBe(true);
// Check credentials removed
expect(result!.workflowBefore.credentials).toBeUndefined();
// Check node parameters sanitized
expect(result!.workflowBefore.nodes[1].parameters.apiKey).toBe('[REDACTED]');
// Check connections preserved
expect(result!.workflowBefore.connections.Start).toBeDefined();
});
it('should handle nested objects recursively', async () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test nested sanitization',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [
{
id: 'node1',
name: 'Complex Node',
type: 'n8n-nodes-base.httpRequest',
position: [100, 100],
parameters: {
authentication: {
type: 'oauth2',
// Use 'settings' instead of 'credentials' since 'credentials' is a sensitive key
settings: {
clientId: 'safe-client-id',
clientSecret: 'very-secret-key',
nested: {
apiKeyValue: 'deep-secret-key',
tokenValue: 'nested-token'
}
}
}
}
}
],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = await tracker.processMutation(data, 'test-user');
expect(result).toBeTruthy();
const auth = result!.workflowBefore.nodes[0].parameters.authentication;
// The key 'authentication' contains 'auth' which is sensitive, so entire object is redacted
expect(auth).toBe('[REDACTED]');
});
});
describe('Deduplication', () => {
it('should detect and skip duplicate mutations', async () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'First mutation',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test Updated',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
// First mutation should succeed
const result1 = await tracker.processMutation(data, 'test-user');
expect(result1).toBeTruthy();
// Exact duplicate should be skipped
const result2 = await tracker.processMutation(data, 'test-user');
expect(result2).toBeNull();
});
it('should allow mutations with different workflows', async () => {
const data1: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'First mutation',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test 1',
nodes: [],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test 1 Updated',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const data2: WorkflowMutationData = {
...data1,
workflowBefore: {
id: 'wf2',
name: 'Test 2',
nodes: [],
connections: {}
},
workflowAfter: {
id: 'wf2',
name: 'Test 2 Updated',
nodes: [],
connections: {}
}
};
const result1 = await tracker.processMutation(data1, 'test-user');
const result2 = await tracker.processMutation(data2, 'test-user');
expect(result1).toBeTruthy();
expect(result2).toBeTruthy();
});
});
describe('Statistics', () => {
it('should track recent mutations count', async () => {
expect(tracker.getRecentMutationsCount()).toBe(0);
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test counting',
operations: [{ type: 'updateNode' }],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test Updated', nodes: [], connections: {} },
mutationSuccess: true,
durationMs: 100
};
await tracker.processMutation(data, 'test-user');
expect(tracker.getRecentMutationsCount()).toBe(1);
// Process another with different workflow
const data2 = { ...data, workflowBefore: { ...data.workflowBefore, id: 'wf2' } };
await tracker.processMutation(data2, 'test-user');
expect(tracker.getRecentMutationsCount()).toBe(2);
});
it('should clear recent mutations', async () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test clearing',
operations: [{ type: 'updateNode' }],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test Updated', nodes: [], connections: {} },
mutationSuccess: true,
durationMs: 100
};
await tracker.processMutation(data, 'test-user');
expect(tracker.getRecentMutationsCount()).toBe(1);
tracker.clearRecentMutations();
expect(tracker.getRecentMutationsCount()).toBe(0);
});
});
});
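
The sanitization cases above all reduce to two ideas: strip known credential fields, and redact values whose key names or contents look sensitive. A rough sketch of that key- and pattern-based redaction (illustration only; the real MutationTracker's patterns, ordering, and URL-auth handling are more involved):

const SENSITIVE_KEY = /key|token|secret|password|auth|credential/i;

function redactSensitiveValues(value: unknown, keyName = ''): unknown {
  // Any field whose name looks sensitive is replaced wholesale, including nested objects.
  if (keyName && SENSITIVE_KEY.test(keyName)) return '[REDACTED]';
  if (Array.isArray(value)) return value.map(item => redactSensitiveValues(item));
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [k, redactSensitiveValues(v, k)])
    );
  }
  if (typeof value === 'string') {
    // Pattern-based redaction inside free-form strings (Bearer tokens, long opaque tokens).
    return value
      .replace(/Bearer\s+\S+/g, 'Bearer [REDACTED]')
      .replace(/[A-Za-z0-9_-]{32,}/g, '[REDACTED_TOKEN]');
  }
  return value;
}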

View File

@@ -1,557 +0,0 @@
/**
* Unit tests for MutationValidator - Data Quality Validation
*/
import { describe, it, expect, beforeEach } from 'vitest';
import { MutationValidator } from '../../../src/telemetry/mutation-validator';
import { WorkflowMutationData, MutationToolName } from '../../../src/telemetry/mutation-types';
import type { UpdateNodeOperation } from '../../../src/types/workflow-diff';
describe('MutationValidator', () => {
let validator: MutationValidator;
beforeEach(() => {
validator = new MutationValidator();
});
describe('Workflow Structure Validation', () => {
it('should accept valid workflow structure', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Valid mutation',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test Updated',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.valid).toBe(true);
expect(result.errors).toHaveLength(0);
});
it('should reject workflow without nodes array', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Invalid mutation',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
connections: {}
} as any,
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.valid).toBe(false);
expect(result.errors).toContain('Invalid workflow_before structure');
});
it('should reject workflow without connections object', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Invalid mutation',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: []
} as any,
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.valid).toBe(false);
expect(result.errors).toContain('Invalid workflow_before structure');
});
it('should reject null workflow', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Invalid mutation',
operations: [{ type: 'updateNode' }],
workflowBefore: null as any,
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.valid).toBe(false);
expect(result.errors).toContain('Invalid workflow_before structure');
});
});
describe('Workflow Size Validation', () => {
it('should accept workflows within size limit', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Size test',
operations: [{ type: 'addNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [{
id: 'node1',
name: 'Start',
type: 'n8n-nodes-base.start',
position: [100, 100],
parameters: {}
}],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.valid).toBe(true);
expect(result.errors.some(err => err.includes('size'))).toBe(false);
});
it('should reject oversized workflows', () => {
// Create a very large workflow (over 500KB default limit)
// 600KB string = 600,000 characters
const largeArray = new Array(600000).fill('x').join('');
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Oversized test',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [{
id: 'node1',
name: 'Large',
type: 'n8n-nodes-base.code',
position: [100, 100],
parameters: {
code: largeArray
}
}],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.valid).toBe(false);
expect(result.errors.some(err => err.includes('size') && err.includes('exceeds'))).toBe(true);
});
it('should respect custom size limit', () => {
const customValidator = new MutationValidator({ maxWorkflowSizeKb: 1 });
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Custom size test',
operations: [{ type: 'addNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [{
id: 'node1',
name: 'Medium',
type: 'n8n-nodes-base.code',
position: [100, 100],
parameters: {
code: 'x'.repeat(2000) // ~2KB
}
}],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = customValidator.validate(data);
expect(result.valid).toBe(false);
expect(result.errors.some(err => err.includes('exceeds maximum (1KB)'))).toBe(true);
});
});
describe('Intent Validation', () => {
it('should warn about empty intent', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: '',
operations: [{ type: 'updateNode' }],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.warnings).toContain('User intent is empty');
});
it('should warn about very short intent', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'fix',
operations: [{ type: 'updateNode' }],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.warnings).toContain('User intent is too short (less than 5 characters)');
});
it('should warn about very long intent', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'x'.repeat(1001),
operations: [{ type: 'updateNode' }],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.warnings).toContain('User intent is very long (over 1000 characters)');
});
it('should accept good intent length', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Add error handling to API nodes',
operations: [{ type: 'updateNode' }],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.warnings.some(w => w.includes('intent'))).toBe(false);
});
});
describe('Operations Validation', () => {
it('should reject empty operations array', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test',
operations: [],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.valid).toBe(false);
expect(result.errors).toContain('No operations provided');
});
it('should accept operations array with items', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test',
operations: [
{ type: 'addNode' },
{ type: 'addConnection' }
],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.valid).toBe(true);
expect(result.errors).not.toContain('No operations provided');
});
});
describe('Duration Validation', () => {
it('should reject negative duration', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test',
operations: [{ type: 'updateNode' }],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
mutationSuccess: true,
durationMs: -100
};
const result = validator.validate(data);
expect(result.valid).toBe(false);
expect(result.errors).toContain('Duration cannot be negative');
});
it('should warn about very long duration', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test',
operations: [{ type: 'updateNode' }],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
mutationSuccess: true,
durationMs: 400000 // Over 5 minutes
};
const result = validator.validate(data);
expect(result.warnings).toContain('Duration is very long (over 5 minutes)');
});
it('should accept reasonable duration', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test',
operations: [{ type: 'updateNode' }],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
mutationSuccess: true,
durationMs: 150
};
const result = validator.validate(data);
expect(result.valid).toBe(true);
expect(result.warnings.some(w => w.includes('Duration'))).toBe(false);
});
});
describe('Meaningful Change Detection', () => {
it('should warn when workflows are identical', () => {
const workflow = {
id: 'wf1',
name: 'Test',
nodes: [
{
id: 'node1',
name: 'Start',
type: 'n8n-nodes-base.start',
position: [100, 100],
parameters: {}
}
],
connections: {}
};
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'No actual change',
operations: [{ type: 'updateNode' }],
workflowBefore: workflow,
workflowAfter: JSON.parse(JSON.stringify(workflow)), // Deep clone
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.warnings).toContain('No meaningful change detected between before and after workflows');
});
it('should not warn when workflows are different', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Real change',
operations: [{ type: 'updateNode' }],
workflowBefore: {
id: 'wf1',
name: 'Test',
nodes: [],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'Test Updated',
nodes: [],
connections: {}
},
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.warnings.some(w => w.includes('meaningful change'))).toBe(false);
});
});
describe('Validation Data Consistency', () => {
it('should warn about invalid validation structure', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test',
operations: [{ type: 'updateNode' }],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
validationBefore: { valid: 'yes' } as any, // Invalid structure
validationAfter: { valid: true, errors: [] },
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.warnings).toContain('Invalid validation_before structure');
});
it('should accept valid validation structure', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Test',
operations: [{ type: 'updateNode' }],
workflowBefore: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
workflowAfter: { id: 'wf1', name: 'Test', nodes: [], connections: {} },
validationBefore: { valid: false, errors: [{ type: 'test_error', message: 'Error 1' }] },
validationAfter: { valid: true, errors: [] },
mutationSuccess: true,
durationMs: 100
};
const result = validator.validate(data);
expect(result.warnings.some(w => w.includes('validation'))).toBe(false);
});
});
describe('Comprehensive Validation', () => {
it('should collect multiple errors and warnings', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: '', // Empty - warning
operations: [], // Empty - error
workflowBefore: null as any, // Invalid - error
workflowAfter: { nodes: [] } as any, // Missing connections - error
mutationSuccess: true,
durationMs: -50 // Negative - error
};
const result = validator.validate(data);
expect(result.valid).toBe(false);
expect(result.errors.length).toBeGreaterThan(0);
expect(result.warnings.length).toBeGreaterThan(0);
});
it('should pass validation with all criteria met', () => {
const data: WorkflowMutationData = {
sessionId: 'test-session-123',
toolName: MutationToolName.UPDATE_PARTIAL,
userIntent: 'Add error handling to HTTP Request nodes',
operations: [
{ type: 'updateNode', nodeName: 'node1', updates: { onError: 'continueErrorOutput' } } as UpdateNodeOperation
],
workflowBefore: {
id: 'wf1',
name: 'API Workflow',
nodes: [
{
id: 'node1',
name: 'HTTP Request',
type: 'n8n-nodes-base.httpRequest',
position: [300, 200],
parameters: {
url: 'https://api.example.com',
method: 'GET'
}
}
],
connections: {}
},
workflowAfter: {
id: 'wf1',
name: 'API Workflow',
nodes: [
{
id: 'node1',
name: 'HTTP Request',
type: 'n8n-nodes-base.httpRequest',
position: [300, 200],
parameters: {
url: 'https://api.example.com',
method: 'GET'
},
onError: 'continueErrorOutput'
}
],
connections: {}
},
validationBefore: { valid: true, errors: [] },
validationAfter: { valid: true, errors: [] },
mutationSuccess: true,
durationMs: 245
};
const result = validator.validate(data);
expect(result.valid).toBe(true);
expect(result.errors).toHaveLength(0);
});
});
});
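
Taken together, these tests pin down the validator's public shape. A small usage sketch, assuming only the constructor option and result fields exercised above (mutationData being a WorkflowMutationData built as in the tests):

const sizeLimitedValidator = new MutationValidator({ maxWorkflowSizeKb: 100 });
const { valid, errors, warnings } = sizeLimitedValidator.validate(mutationData);
if (!valid) {
  // Hard failures: invalid structure, empty operations, negative duration, oversized workflow.
  console.warn('Mutation rejected:', errors);
} else if (warnings.length > 0) {
  // Soft issues: empty/short/very long intent, no meaningful change, malformed validation payloads.
  console.info('Mutation accepted with warnings:', warnings);
}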

View File

@@ -70,18 +70,13 @@ describe('TelemetryManager', () => {
updateToolSequence: vi.fn(),
getEventQueue: vi.fn().mockReturnValue([]),
getWorkflowQueue: vi.fn().mockReturnValue([]),
getMutationQueue: vi.fn().mockReturnValue([]),
clearEventQueue: vi.fn(),
clearWorkflowQueue: vi.fn(),
clearMutationQueue: vi.fn(),
enqueueMutation: vi.fn(),
getMutationQueueSize: vi.fn().mockReturnValue(0),
getStats: vi.fn().mockReturnValue({
rateLimiter: { currentEvents: 0, droppedEvents: 0 },
validator: { successes: 0, errors: 0 },
eventQueueSize: 0,
workflowQueueSize: 0,
mutationQueueSize: 0,
performanceMetrics: {}
})
};
@@ -322,21 +317,17 @@ describe('TelemetryManager', () => {
it('should flush events and workflows', async () => {
const mockEvents = [{ user_id: 'user1', event: 'test', properties: {} }];
const mockWorkflows = [{ user_id: 'user1', workflow_hash: 'hash1' }];
const mockMutations: any[] = [];
mockEventTracker.getEventQueue.mockReturnValue(mockEvents);
mockEventTracker.getWorkflowQueue.mockReturnValue(mockWorkflows);
mockEventTracker.getMutationQueue.mockReturnValue(mockMutations);
await manager.flush();
expect(mockEventTracker.getEventQueue).toHaveBeenCalled();
expect(mockEventTracker.getWorkflowQueue).toHaveBeenCalled();
expect(mockEventTracker.getMutationQueue).toHaveBeenCalled();
expect(mockEventTracker.clearEventQueue).toHaveBeenCalled();
expect(mockEventTracker.clearWorkflowQueue).toHaveBeenCalled();
expect(mockEventTracker.clearMutationQueue).toHaveBeenCalled();
expect(mockBatchProcessor.flush).toHaveBeenCalledWith(mockEvents, mockWorkflows, mockMutations);
expect(mockBatchProcessor.flush).toHaveBeenCalledWith(mockEvents, mockWorkflows);
});
it('should not flush when disabled', async () => {