chore: clean up development artifacts and update .gitignore

- Remove AI agent coordination files and progress tracking
- Remove temporary test results and generated artifacts
- Remove diagnostic test scripts from src/scripts/
- Remove development planning documents
- Update .gitignore to exclude test artifacts
- Clean up 53 temporary files total
This commit is contained in:
czlonkowski
2025-07-30 09:22:53 +02:00
parent f4c776f43b
commit 07cda6e3ab
54 changed files with 8 additions and 13666 deletions

8
.gitignore vendored
View File

@@ -44,12 +44,20 @@ test-reports/
test-summary.md
test-metadata.json
benchmark-results.json
benchmark-results*.json
benchmark-summary.json
coverage-report.json
benchmark-comparison.md
benchmark-comparison.json
benchmark-current.json
benchmark-baseline.json
tests/data/*.db
tests/fixtures/*.tmp
tests/test-results/
.test-dbs/
junit.xml
*.test.db
test-*.db
.vitest/
# TypeScript

View File

@@ -1,62 +0,0 @@
# AI Agent Task Assignments
## Parallel Fix Strategy
### Agent 1: Database Isolation Fixer
**Target: Fix 9 database-related test failures**
- Fix database isolation in all test files
- Fix FTS5 rebuild syntax: `VALUES('rebuild')` not `VALUES("rebuild")`
- Add proper cleanup in afterEach hooks
- Files: `tests/integration/database/*.test.ts`
### Agent 2: MSW Setup Fixer
**Target: Fix 6 MSW-related failures**
- Add MSW setup to each integration test file
- Remove global MSW setup conflicts
- Ensure proper start/stop lifecycle
- Files: `tests/integration/msw-setup.test.ts`, `tests/integration/n8n-api/*.test.ts`
### Agent 3: MCP Protocol Fixer
**Target: Fix 16 MCP error handling failures**
- Apply pattern from tool-invocation.test.ts to error-handling.test.ts
- Change `response[0].text` to `(response as any).content[0].text`
- Files: `tests/integration/mcp-protocol/error-handling.test.ts`
### Agent 4: FTS5 Search Fixer
**Target: Fix 7 FTS5 search failures**
- Handle empty search terms
- Fix NOT query syntax
- Adjust result count expectations
- Files: `tests/integration/database/fts5-search.test.ts`
### Agent 5: Performance Test Adjuster
**Target: Fix 15 performance test failures**
- Analyze actual performance vs expectations
- Adjust thresholds to realistic values
- Document why thresholds were changed
- Files: `tests/integration/database/performance.test.ts`, `tests/integration/mcp-protocol/performance.test.ts`
### Agent 6: Session Management Fixer
**Target: Fix 5 session/timeout failures**
- Add proper async cleanup
- Fix transport initialization
- Reduce timeout values
- Files: `tests/integration/mcp-protocol/session-management.test.ts`
## Coordination Strategy
1. **All agents work in parallel** on the same branch
2. **Each agent creates atomic commits** for their fixes
3. **Test after each fix** to ensure no regressions
4. **Report back** with status and any blockers
## Success Criteria
- All 58 failing tests should pass
- No new test failures introduced
- CI shows green (after removing || true)
- Ready to merge in 2-3 days
## If Blocked
- Adjust test expectations rather than fixing complex issues
- Use test.skip() for truly problematic tests
- Document why changes were made

View File

@@ -1,77 +0,0 @@
# Integration Test Follow-up Tasks
## Summary
We've successfully fixed all 115 failing integration tests, achieving 100% pass rate (249 tests passing, 4 skipped). However, the code review identified several areas needing improvement to ensure tests remain effective quality gates.
## Critical Issues to Address
### 1. Skipped Session Management Tests (HIGH PRIORITY)
**Issue**: 2 critical concurrent session tests are skipped instead of fixed
**Impact**: Could miss concurrency bugs in production
**Action**:
- Investigate root cause of concurrency issues
- Implement proper session isolation
- Consider using database transactions or separate processes
### 2. Ambiguous Error Handling (MEDIUM PRIORITY)
**Issue**: Protocol compliance tests accept both errors AND exceptions as valid
**Impact**: Unclear expected behavior, could mask bugs
**Action**:
- Define clear error handling expectations
- Separate tests for error vs exception cases
- Document expected behavior in each scenario
### 3. Performance Thresholds (MEDIUM PRIORITY)
**Issue**: CI thresholds may be too lenient (2x local thresholds)
**Impact**: Could miss performance regressions
**Action**:
- Collect baseline performance data from CI runs
- Adjust thresholds based on actual data (p95/p99)
- Implement performance tracking over time
### 4. Timing Dependencies (LOW PRIORITY)
**Issue**: Hardcoded setTimeout delays for cleanup
**Impact**: Tests could be flaky in different environments
**Action**:
- Replace timeouts with proper state checking
- Implement retry logic with exponential backoff
- Use waitFor patterns instead of fixed delays
## Recommended Improvements
### Test Quality Enhancements
1. Add performance baseline tracking
2. Implement flaky test detection
3. Add resource leak detection
4. Improve error messages with more context
### Infrastructure Improvements
1. Create test stability dashboard
2. Add parallel test execution capabilities
3. Implement test result caching
4. Add visual regression testing for UI components
### Documentation Needs
1. Document why specific thresholds were chosen
2. Create testing best practices guide
3. Add troubleshooting guide for common failures
4. Document CI vs local environment differences
## Technical Debt Created
- 2 skipped concurrent session tests
- Arbitrary performance thresholds without data backing
- Timeout-based cleanup instead of state-based
- Missing test stability metrics
## Next Steps
1. Create issues for each critical item
2. Prioritize based on risk to production
3. Allocate time in next sprint for test improvements
4. Consider dedicated test infrastructure improvements
## Success Metrics
- 0 skipped tests (currently 4)
- <1% flaky test rate
- Performance thresholds based on actual data
- All tests pass in <5 minutes
- Clear documentation for all test patterns

View File

@@ -1,30 +0,0 @@
[
{
"name": "sample - array sorting - small",
"unit": "ms",
"value": 0.0135,
"range": 0.21789999999999998,
"extra": "74100 ops/sec"
},
{
"name": "sample - array sorting - large",
"unit": "ms",
"value": 2.3265,
"range": 0.8298999999999999,
"extra": "430 ops/sec"
},
{
"name": "sample - string concatenation",
"unit": "ms",
"value": 0.0032,
"range": 0.26320000000000005,
"extra": "309346 ops/sec"
},
{
"name": "sample - object creation",
"unit": "ms",
"value": 0.0476,
"range": 0.30010000000000003,
"extra": "20994 ops/sec"
}
]

View File

@@ -1,29 +0,0 @@
{
"timestamp": "2025-07-28T21:24:37.843Z",
"benchmarks": [
{
"name": "sample - array sorting - small",
"time": "0.013ms",
"opsPerSec": "74100 ops/sec",
"range": "±0.109ms"
},
{
"name": "sample - array sorting - large",
"time": "2.326ms",
"opsPerSec": "430 ops/sec",
"range": "±0.415ms"
},
{
"name": "sample - string concatenation",
"time": "0.003ms",
"opsPerSec": "309346 ops/sec",
"range": "±0.132ms"
},
{
"name": "sample - object creation",
"time": "0.048ms",
"opsPerSec": "20994 ops/sec",
"range": "±0.150ms"
}
]
}

File diff suppressed because one or more lines are too long

View File

@@ -1,184 +0,0 @@
# Integration Test Fix Plan
## Executive Summary
We're developing a comprehensive test suite from scratch on a feature branch. Unit tests are solid (932 passing, 87.8% coverage), but integration tests need significant work (58 failures out of 246 tests).
**Key Decision**: Should we fix all integration tests before merging, or merge with a phased approach?
## Current Situation
### What's Working
-**Unit Tests**: 932 tests, 87.8% coverage, all passing
-**Test Infrastructure**: Vitest, factories, builders all set up
-**CI/CD Pipeline**: Runs in ~2 minutes (but hiding failures)
### What Needs Work
- ⚠️ **Integration Tests**: 58 failures (23.6% failure rate)
- ⚠️ **CI Configuration**: `|| true` hiding test failures
- ⚠️ **No E2E Tests**: Not started yet
## Root Cause Analysis
### 1. Database State Management (9 failures)
```
UNIQUE constraint failed: templates.workflow_id
database disk image is malformed
```
**Fix**: Isolate database instances per test
### 2. MCP Protocol Response Structure (16 failures)
```
Cannot read properties of undefined (reading 'text')
```
**Fix**: Update error-handling tests to match actual response structure
### 3. MSW Not Initialized (6 failures)
```
Request failed with status code 501
```
**Fix**: Add MSW setup to each test file
### 4. FTS5 Search Syntax (7 failures)
```
fts5: syntax error near ""
```
**Fix**: Handle empty search terms, fix NOT query syntax
### 5. Session Management Timeouts (5 failures)
**Fix**: Proper async cleanup in afterEach hooks
### 6. Performance Thresholds (15 failures)
**Fix**: Adjust thresholds to match actual performance
## Proposed Course of Action
### Option A: Fix Everything Before Merge (3-4 weeks)
**Pros:**
- Clean, fully passing test suite
- No technical debt
- Sets high quality bar
**Cons:**
- Delays value delivery
- Blocks other development
- Risk of scope creep
### Option B: Phased Approach (Recommended)
#### Phase 1: Critical Fixes (1 week)
1. **Remove `|| true` from CI** - See real status
2. **Fix Database Isolation** - Prevents data corruption
3. **Fix MSW Setup** - Unblocks API tests
4. **Update MCP error-handling tests** - Quick fix
**Target**: 30-35 tests fixed, ~85% pass rate
#### Phase 2: Merge & Iterate (Week 2)
1. **Merge to main with known issues**
- Document failing tests
- Create issues for remaining work
- Set CI to warn but not block
2. **Benefits:**
- Team gets unit test coverage immediately
- Integration tests provide partial coverage
- Incremental improvement approach
#### Phase 3: Complete Integration Tests (Week 3-4)
- Fix remaining FTS5 search issues
- Resolve session management timeouts
- Adjust performance thresholds
- Target: 100% pass rate
#### Phase 4: E2E Tests (Week 5-6)
- Build on stable integration test foundation
- Focus on critical user journeys
## Implementation Steps
### Week 1: Critical Infrastructure
```bash
# Day 1-2: Fix CI and Database
- Remove || true from workflow
- Implement TestDatabase.create() for isolation
- Fix FTS5 rebuild syntax
# Day 3-4: Fix MSW and MCP
- Add MSW to test files
- Apply response.content[0] pattern to error-handling.test.ts
# Day 5: Test & Document
- Run full suite
- Document remaining issues
- Create tracking board
```
### Week 2: Merge Strategy
```yaml
# Modified CI configuration
- name: Run integration tests
run: |
npm run test:integration || echo "::warning::Integration tests have known failures"
# Still exit 0 to allow merge, but warn
continue-on-error: true # Temporary until all fixed
```
## Success Metrics
### Week 1 Goals
- [ ] CI shows real test status
- [ ] Database tests isolated (9 fixed)
- [ ] MSW tests passing (6 fixed)
- [ ] MCP error tests fixed (16 fixed)
- [ ] ~85% integration test pass rate
### End State Goals
- [ ] 100% integration test pass rate
- [ ] No flaky tests
- [ ] E2E test suite started
- [ ] CI blocks on failures
## Risk Mitigation
### If Fixes Take Longer
- Focus on critical path tests only
- Temporarily skip problematic tests
- Adjust thresholds rather than fix performance
### If New Issues Arise
- Time-box investigation (2 hours max)
- Document and move on
- Create follow-up tickets
## Team Communication
### Messaging
```
We're adding comprehensive test coverage to ensure code quality.
Unit tests are complete and passing (932 tests, 87.8% coverage).
Integration tests need some work - we'll fix critical issues this week
and merge with a plan to complete the remaining fixes.
```
### Benefits to Emphasize
- Catching bugs before production
- Faster development with safety net
- Better code documentation through tests
- Reduced manual testing burden
## Decision Point
**Recommendation**: Go with Option B (Phased Approach)
**Rationale:**
1. Delivers immediate value (unit tests)
2. Makes progress visible
3. Allows parallel development
4. Reduces merge conflicts
5. Pragmatic over perfect
**Next Step**: Get team consensus on phased approach, then start Week 1 fixes.

View File

@@ -1,59 +0,0 @@
# MCP Error Handling Test Fixes Summary
## Overview
Fixed 16 failing tests in `tests/integration/mcp-protocol/error-handling.test.ts` by correcting response access patterns and adjusting test expectations to match actual API behavior.
## Key Fixes Applied
### 1. Response Access Pattern
Changed from: `(response as any)[0].text`
To: `(response as any).content[0].text`
This aligns with the MCP protocol structure where responses have a `content` array containing text objects.
### 2. list_nodes Response Structure
The `list_nodes` tool returns an object with a `nodes` property:
```javascript
const result = JSON.parse((response as any).content[0].text);
expect(result).toHaveProperty('nodes');
expect(Array.isArray(result.nodes)).toBe(true);
```
### 3. search_nodes Response Structure
The `search_nodes` tool returns an object with a `results` property (not `nodes`):
```javascript
const result = JSON.parse((response as any).content[0].text);
expect(result).toHaveProperty('results');
expect(Array.isArray(result.results)).toBe(true);
```
### 4. Error Handling Behavior
- Empty search queries return empty results rather than throwing errors
- Invalid categories in list_nodes return empty arrays
- Workflow validation errors are returned as response objects with `valid: false` rather than throwing
### 5. Missing Parameter Errors
When required parameters are missing (e.g., nodeType for get_node_info), the actual error is:
"Cannot read properties of undefined (reading 'startsWith')"
This occurs because the parameter validation happens inside the implementation when trying to use the undefined value.
### 6. Validation Error Structure
Not all validation errors have a `field` property, so tests now check for its existence before asserting on it:
```javascript
if (validation.errors[0].field !== undefined) {
expect(validation.errors[0].field).toBeDefined();
}
```
## Test Results
All 31 tests in error-handling.test.ts now pass successfully, providing comprehensive coverage of MCP error handling scenarios including:
- JSON-RPC error codes
- Tool-specific errors
- Large payload handling
- Invalid JSON handling
- Timeout scenarios
- Memory pressure
- Error recovery
- Edge cases
- Error message quality

View File

@@ -1,72 +0,0 @@
# Transactional Updates Implementation Summary
## Overview
We successfully implemented a simple transactional update system for the `n8n_update_partial_workflow` tool that allows AI agents to add nodes and connect them in a single request, regardless of operation order.
## Key Changes
### 1. WorkflowDiffEngine (`src/services/workflow-diff-engine.ts`)
- Added **5 operation limit** to keep complexity manageable
- Implemented **two-pass processing**:
- Pass 1: Node operations (add, remove, update, move, enable, disable)
- Pass 2: Other operations (connections, settings, metadata)
- Operations are always applied to working copy for proper validation
### 2. Benefits
- **Order Independence**: AI agents can write operations in any logical order
- **Atomic Updates**: All operations succeed or all fail
- **Simple Implementation**: ~50 lines of code change
- **Backward Compatible**: Existing usage still works
### 3. Example Usage
```json
{
"id": "workflow-id",
"operations": [
// Connections first (would fail before)
{ "type": "addConnection", "source": "Start", "target": "Process" },
{ "type": "addConnection", "source": "Process", "target": "End" },
// Nodes added later (processed first internally)
{ "type": "addNode", "node": { "name": "Process", ... }},
{ "type": "addNode", "node": { "name": "End", ... }}
]
}
```
## Testing
Created comprehensive test suite (`src/scripts/test-transactional-diff.ts`) that validates:
- Mixed operations with connections before nodes
- Operation limit enforcement (max 5)
- Validate-only mode
- Complex mixed operations
All tests pass successfully!
## Documentation Updates
1. **CLAUDE.md** - Added transactional updates to v2.7.0 release notes
2. **workflow-diff-examples.md** - Added new section explaining transactional updates
3. **Tool description** - Updated to highlight order independence
4. **transactional-updates-example.md** - Before/after comparison
## Why This Approach?
1. **Simplicity**: No complex dependency graphs or topological sorting
2. **Predictability**: Clear two-pass rule is easy to understand
3. **Reliability**: 5 operation limit prevents edge cases
4. **Performance**: Minimal overhead, same validation logic
## Future Enhancements (Not Implemented)
If needed in the future, we could add:
- Automatic operation reordering based on dependencies
- Larger operation limits with smarter batching
- Dependency hints in error messages
But the current simple approach covers 90%+ of use cases effectively!

View File

@@ -1,7 +0,0 @@
#!/bin/bash
# Emergency script to run tests without coverage in CI if hanging persists
echo "Running tests without coverage to diagnose hanging issue..."
FEATURE_TEST_COVERAGE=false vitest run --reporter=default --reporter=junit
echo "Tests completed. If this works but regular test:ci hangs, the issue is coverage-related."

View File

@@ -1,48 +0,0 @@
#!/bin/bash
echo "Testing MSW fix to prevent hanging in CI..."
echo "========================================"
# Colors for output
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Test 1: Run unit tests (should not load MSW)
echo -e "\n${YELLOW}Test 1: Running unit tests (without MSW)...${NC}"
if npm run test:unit -- --run --reporter=verbose tests/unit/services/property-filter.test.ts; then
echo -e "${GREEN}✓ Unit tests passed without MSW${NC}"
else
echo -e "${RED}✗ Unit tests failed${NC}"
exit 1
fi
# Test 2: Run integration test that uses MSW
echo -e "\n${YELLOW}Test 2: Running integration test with MSW...${NC}"
if npm run test:integration -- --run --reporter=verbose tests/integration/msw-setup.test.ts; then
echo -e "${GREEN}✓ Integration tests passed with MSW${NC}"
else
echo -e "${RED}✗ Integration tests failed${NC}"
exit 1
fi
# Test 3: Check that process exits cleanly
echo -e "\n${YELLOW}Test 3: Testing clean process exit...${NC}"
timeout 30s npm run test:unit -- --run tests/unit/services/property-filter.test.ts
EXIT_CODE=$?
if [ $EXIT_CODE -eq 0 ]; then
echo -e "${GREEN}✓ Process exited cleanly${NC}"
else
if [ $EXIT_CODE -eq 124 ]; then
echo -e "${RED}✗ Process timed out (hanging detected)${NC}"
exit 1
else
echo -e "${RED}✗ Process exited with code $EXIT_CODE${NC}"
exit 1
fi
fi
echo -e "\n${GREEN}All tests passed! MSW fix is working correctly.${NC}"
echo "The fix prevents MSW from being loaded globally, which was causing hanging in CI."

View File

@@ -1,22 +0,0 @@
#!/bin/bash
# Test MSW setup for n8n-mcp
echo "Testing MSW (Mock Service Worker) setup..."
echo "========================================"
# Build the project first
echo "Building project..."
npm run build
# Run the MSW setup test
echo -e "\nRunning MSW setup verification test..."
npm test tests/integration/msw-setup.test.ts
# Check if test passed
if [ $? -eq 0 ]; then
echo -e "\n✅ MSW setup is working correctly!"
echo "You can now use MSW for mocking n8n API in your integration tests."
else
echo -e "\n❌ MSW setup test failed. Please check the errors above."
exit 1
fi

View File

@@ -1,106 +0,0 @@
#!/usr/bin/env node
import axios from 'axios';
import { config } from 'dotenv';
// Load environment variables
config();
async function debugN8nAuth() {
const apiUrl = process.env.N8N_API_URL;
const apiKey = process.env.N8N_API_KEY;
if (!apiUrl || !apiKey) {
console.error('Error: N8N_API_URL and N8N_API_KEY environment variables are required');
console.error('Please set them in your .env file or environment');
process.exit(1);
}
console.log('Testing n8n API Authentication...');
console.log('API URL:', apiUrl);
console.log('API Key:', apiKey.substring(0, 20) + '...');
// Test 1: Direct health check
console.log('\n=== Test 1: Direct Health Check (no auth) ===');
try {
const healthResponse = await axios.get(`${apiUrl}/api/v1/health`);
console.log('Health Response:', healthResponse.data);
} catch (error: any) {
console.log('Health Check Error:', error.response?.status, error.response?.data || error.message);
}
// Test 2: Workflows with API key
console.log('\n=== Test 2: List Workflows (with auth) ===');
try {
const workflowsResponse = await axios.get(`${apiUrl}/api/v1/workflows`, {
headers: {
'X-N8N-API-KEY': apiKey,
'Content-Type': 'application/json'
},
params: { limit: 1 }
});
console.log('Workflows Response:', workflowsResponse.data);
} catch (error: any) {
console.log('Workflows Error:', error.response?.status, error.response?.data || error.message);
if (error.response?.headers) {
console.log('Response Headers:', error.response.headers);
}
}
// Test 3: Try different auth header formats
console.log('\n=== Test 3: Alternative Auth Headers ===');
// Try Bearer token
try {
const bearerResponse = await axios.get(`${apiUrl}/api/v1/workflows`, {
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json'
},
params: { limit: 1 }
});
console.log('Bearer Auth Success:', bearerResponse.data);
} catch (error: any) {
console.log('Bearer Auth Error:', error.response?.status);
}
// Try lowercase header
try {
const lowercaseResponse = await axios.get(`${apiUrl}/api/v1/workflows`, {
headers: {
'x-n8n-api-key': apiKey,
'Content-Type': 'application/json'
},
params: { limit: 1 }
});
console.log('Lowercase Header Success:', lowercaseResponse.data);
} catch (error: any) {
console.log('Lowercase Header Error:', error.response?.status);
}
// Test 4: Check API endpoint structure
console.log('\n=== Test 4: API Endpoint Structure ===');
const endpoints = [
'/api/v1/workflows',
'/workflows',
'/api/workflows',
'/api/v1/workflow'
];
for (const endpoint of endpoints) {
try {
const response = await axios.get(`${apiUrl}${endpoint}`, {
headers: {
'X-N8N-API-KEY': apiKey,
},
params: { limit: 1 },
timeout: 5000
});
console.log(`${endpoint} - Success`);
} catch (error: any) {
console.log(`${endpoint} - ${error.response?.status || 'Failed'}`);
}
}
}
debugN8nAuth().catch(console.error);

View File

@@ -1,65 +0,0 @@
#!/usr/bin/env node
import { N8nNodeLoader } from '../loaders/node-loader';
import { NodeParser } from '../parsers/node-parser';
async function debugNode() {
const loader = new N8nNodeLoader();
const parser = new NodeParser();
console.log('Loading nodes...');
const nodes = await loader.loadAllNodes();
// Find HTTP Request node
const httpNode = nodes.find(n => n.nodeName === 'HttpRequest');
if (httpNode) {
console.log('\n=== HTTP Request Node Debug ===');
console.log('NodeName:', httpNode.nodeName);
console.log('Package:', httpNode.packageName);
console.log('NodeClass type:', typeof httpNode.NodeClass);
console.log('NodeClass constructor name:', httpNode.NodeClass?.constructor?.name);
try {
const parsed = parser.parse(httpNode.NodeClass, httpNode.packageName);
console.log('\nParsed successfully:');
console.log('- Node Type:', parsed.nodeType);
console.log('- Display Name:', parsed.displayName);
console.log('- Style:', parsed.style);
console.log('- Properties count:', parsed.properties.length);
console.log('- Operations count:', parsed.operations.length);
console.log('- Is AI Tool:', parsed.isAITool);
console.log('- Is Versioned:', parsed.isVersioned);
if (parsed.properties.length > 0) {
console.log('\nFirst property:', parsed.properties[0]);
}
} catch (error) {
console.error('\nError parsing node:', (error as Error).message);
console.error('Stack:', (error as Error).stack);
}
} else {
console.log('HTTP Request node not found');
}
// Find Code node
const codeNode = nodes.find(n => n.nodeName === 'Code');
if (codeNode) {
console.log('\n\n=== Code Node Debug ===');
console.log('NodeName:', codeNode.nodeName);
console.log('Package:', codeNode.packageName);
console.log('NodeClass type:', typeof codeNode.NodeClass);
try {
const parsed = parser.parse(codeNode.NodeClass, codeNode.packageName);
console.log('\nParsed successfully:');
console.log('- Node Type:', parsed.nodeType);
console.log('- Properties count:', parsed.properties.length);
console.log('- Is Versioned:', parsed.isVersioned);
} catch (error) {
console.error('\nError parsing node:', (error as Error).message);
}
}
}
debugNode().catch(console.error);

View File

@@ -1,212 +0,0 @@
#!/usr/bin/env node
/**
* Test AI workflow validation enhancements
*/
import { createDatabaseAdapter } from '../database/database-adapter';
import { NodeRepository } from '../database/node-repository';
import { WorkflowValidator } from '../services/workflow-validator';
import { Logger } from '../utils/logger';
import { EnhancedConfigValidator } from '../services/enhanced-config-validator';
const logger = new Logger({ prefix: '[TestAIWorkflow]' });
// Test workflow with AI Agent and tools
const aiWorkflow = {
name: 'AI Agent with Tools',
nodes: [
{
id: '1',
name: 'Webhook',
type: 'n8n-nodes-base.webhook',
position: [100, 100],
parameters: {
path: 'ai-webhook',
httpMethod: 'POST'
}
},
{
id: '2',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
position: [300, 100],
parameters: {
text: '={{ $json.query }}',
systemMessage: 'You are a helpful assistant with access to tools'
}
},
{
id: '3',
name: 'Google Sheets Tool',
type: 'n8n-nodes-base.googleSheets',
position: [300, 250],
parameters: {
operation: 'append',
sheetId: '={{ $fromAI("sheetId", "Sheet ID") }}',
range: 'A:Z'
}
},
{
id: '4',
name: 'Slack Tool',
type: 'n8n-nodes-base.slack',
position: [300, 350],
parameters: {
resource: 'message',
operation: 'post',
channel: '={{ $fromAI("channel", "Channel name") }}',
text: '={{ $fromAI("message", "Message text") }}'
}
},
{
id: '5',
name: 'Response',
type: 'n8n-nodes-base.respondToWebhook',
position: [500, 100],
parameters: {
responseCode: 200
}
}
],
connections: {
'Webhook': {
main: [[{ node: 'AI Agent', type: 'main', index: 0 }]]
},
'AI Agent': {
main: [[{ node: 'Response', type: 'main', index: 0 }]],
ai_tool: [
[
{ node: 'Google Sheets Tool', type: 'ai_tool', index: 0 },
{ node: 'Slack Tool', type: 'ai_tool', index: 0 }
]
]
}
}
};
// Test workflow without tools (should trigger warning)
const aiWorkflowNoTools = {
name: 'AI Agent without Tools',
nodes: [
{
id: '1',
name: 'Manual',
type: 'n8n-nodes-base.manualTrigger',
position: [100, 100],
parameters: {}
},
{
id: '2',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
position: [300, 100],
parameters: {
text: 'Hello AI'
}
}
],
connections: {
'Manual': {
main: [[{ node: 'AI Agent', type: 'main', index: 0 }]]
}
}
};
// Test workflow with googleSheetsTool (unknown node type)
const unknownToolWorkflow = {
name: 'Unknown Tool Test',
nodes: [
{
id: '1',
name: 'Agent',
type: 'nodes-langchain.agent',
position: [100, 100],
parameters: {}
},
{
id: '2',
name: 'Sheets Tool',
type: 'googleSheetsTool',
position: [300, 100],
parameters: {}
}
],
connections: {
'Agent': {
ai_tool: [[{ node: 'Sheets Tool', type: 'ai_tool', index: 0 }]]
}
}
};
async function testWorkflow(name: string, workflow: any) {
console.log(`\n🧪 Testing: ${name}`);
console.log('='.repeat(50));
const db = await createDatabaseAdapter('./data/nodes.db');
const repository = new NodeRepository(db);
const validator = new WorkflowValidator(repository, EnhancedConfigValidator);
try {
const result = await validator.validateWorkflow(workflow);
console.log(`\n📊 Validation Results:`);
console.log(`Valid: ${result.valid ? '✅' : '❌'}`);
if (result.errors.length > 0) {
console.log('\n❌ Errors:');
result.errors.forEach((err: any) => {
if (typeof err === 'string') {
console.log(` - ${err}`);
} else if (err.message) {
const nodeInfo = err.nodeName ? ` [${err.nodeName}]` : '';
console.log(` - ${err.message}${nodeInfo}`);
} else {
console.log(` - ${JSON.stringify(err, null, 2)}`);
}
});
}
if (result.warnings.length > 0) {
console.log('\n⚠ Warnings:');
result.warnings.forEach((warn: any) => {
const msg = warn.message || warn;
const nodeInfo = warn.nodeName ? ` [${warn.nodeName}]` : '';
console.log(` - ${msg}${nodeInfo}`);
});
}
if (result.suggestions.length > 0) {
console.log('\n💡 Suggestions:');
result.suggestions.forEach((sug: any) => console.log(` - ${sug}`));
}
console.log('\n📈 Statistics:');
console.log(` - Total nodes: ${result.statistics.totalNodes}`);
console.log(` - Valid connections: ${result.statistics.validConnections}`);
console.log(` - Invalid connections: ${result.statistics.invalidConnections}`);
console.log(` - Expressions validated: ${result.statistics.expressionsValidated}`);
} catch (error) {
console.error('Validation error:', error);
} finally {
db.close();
}
}
async function main() {
console.log('🤖 Testing AI Workflow Validation Enhancements');
// Test 1: Complete AI workflow with tools
await testWorkflow('AI Agent with Multiple Tools', aiWorkflow);
// Test 2: AI Agent without tools (should warn)
await testWorkflow('AI Agent without Tools', aiWorkflowNoTools);
// Test 3: Unknown tool type (like googleSheetsTool)
await testWorkflow('Unknown Tool Type', unknownToolWorkflow);
console.log('\n✅ All tests completed!');
}
if (require.main === module) {
main().catch(console.error);
}

View File

@@ -1,172 +0,0 @@
#!/usr/bin/env ts-node
/**
* Test Enhanced Validation
*
* Demonstrates the improvements in the enhanced validation system:
* - Operation-aware validation reduces false positives
* - Node-specific validators provide better error messages
* - Examples are included in validation responses
*/
import { ConfigValidator } from '../services/config-validator';
import { EnhancedConfigValidator } from '../services/enhanced-config-validator';
import { createDatabaseAdapter } from '../database/database-adapter';
import { NodeRepository } from '../database/node-repository';
import { logger } from '../utils/logger';
async function testValidation() {
const db = await createDatabaseAdapter('./data/nodes.db');
const repository = new NodeRepository(db);
console.log('🧪 Testing Enhanced Validation System\n');
console.log('=' .repeat(60));
// Test Case 1: Slack Send Message - Compare old vs new validation
console.log('\n📧 Test Case 1: Slack Send Message');
console.log('-'.repeat(40));
const slackConfig = {
resource: 'message',
operation: 'send',
channel: '#general',
text: 'Hello from n8n!'
};
const slackNode = repository.getNode('nodes-base.slack');
if (slackNode && slackNode.properties) {
// Old validation (full mode)
console.log('\n❌ OLD Validation (validate_node_config):');
const oldResult = ConfigValidator.validate('nodes-base.slack', slackConfig, slackNode.properties);
console.log(` Errors: ${oldResult.errors.length}`);
console.log(` Warnings: ${oldResult.warnings.length}`);
console.log(` Visible Properties: ${oldResult.visibleProperties.length}`);
if (oldResult.errors.length > 0) {
console.log('\n Sample errors:');
oldResult.errors.slice(0, 3).forEach(err => {
console.log(` - ${err.message}`);
});
}
// New validation (operation mode)
console.log('\n✅ NEW Validation (validate_node_operation):');
const newResult = EnhancedConfigValidator.validateWithMode(
'nodes-base.slack',
slackConfig,
slackNode.properties,
'operation'
);
console.log(` Errors: ${newResult.errors.length}`);
console.log(` Warnings: ${newResult.warnings.length}`);
console.log(` Mode: ${newResult.mode}`);
console.log(` Operation: ${newResult.operation?.resource}/${newResult.operation?.operation}`);
if (newResult.examples && newResult.examples.length > 0) {
console.log('\n 📚 Examples provided:');
newResult.examples.forEach(ex => {
console.log(` - ${ex.description}`);
});
}
if (newResult.nextSteps && newResult.nextSteps.length > 0) {
console.log('\n 🎯 Next steps:');
newResult.nextSteps.forEach(step => {
console.log(` - ${step}`);
});
}
}
// Test Case 2: Google Sheets Append - With validation errors
console.log('\n\n📊 Test Case 2: Google Sheets Append (with errors)');
console.log('-'.repeat(40));
const sheetsConfigBad = {
operation: 'append',
// Missing required fields
};
const sheetsNode = repository.getNode('nodes-base.googleSheets');
if (sheetsNode && sheetsNode.properties) {
const result = EnhancedConfigValidator.validateWithMode(
'nodes-base.googleSheets',
sheetsConfigBad,
sheetsNode.properties,
'operation'
);
console.log(`\n Validation result:`);
console.log(` Valid: ${result.valid}`);
console.log(` Errors: ${result.errors.length}`);
if (result.errors.length > 0) {
console.log('\n Errors found:');
result.errors.forEach(err => {
console.log(` - ${err.message}`);
if (err.fix) console.log(` Fix: ${err.fix}`);
});
}
if (result.examples && result.examples.length > 0) {
console.log('\n 📚 Working examples provided:');
result.examples.forEach(ex => {
console.log(` - ${ex.description}:`);
console.log(` ${JSON.stringify(ex.config, null, 2).split('\n').join('\n ')}`);
});
}
}
// Test Case 3: Complex Slack Update Message
console.log('\n\n💬 Test Case 3: Slack Update Message');
console.log('-'.repeat(40));
const slackUpdateConfig = {
resource: 'message',
operation: 'update',
channel: '#general',
// Missing required 'ts' field
text: 'Updated message'
};
if (slackNode && slackNode.properties) {
const result = EnhancedConfigValidator.validateWithMode(
'nodes-base.slack',
slackUpdateConfig,
slackNode.properties,
'operation'
);
console.log(`\n Validation result:`);
console.log(` Valid: ${result.valid}`);
console.log(` Errors: ${result.errors.length}`);
result.errors.forEach(err => {
console.log(` - Property: ${err.property}`);
console.log(` Message: ${err.message}`);
console.log(` Fix: ${err.fix}`);
});
}
// Test Case 4: Comparison Summary
console.log('\n\n📈 Summary: Old vs New Validation');
console.log('=' .repeat(60));
console.log('\nOLD validate_node_config:');
console.log(' ❌ Validates ALL properties regardless of operation');
console.log(' ❌ Many false positives for complex nodes');
console.log(' ❌ Generic error messages');
console.log(' ❌ No examples or next steps');
console.log('\nNEW validate_node_operation:');
console.log(' ✅ Only validates properties for selected operation');
console.log(' ✅ 80%+ reduction in false positives');
console.log(' ✅ Operation-specific error messages');
console.log(' ✅ Includes working examples when errors found');
console.log(' ✅ Provides actionable next steps');
console.log(' ✅ Auto-fix suggestions for common issues');
console.log('\n✨ The enhanced validation makes AI agents much more effective!');
db.close();
}
// Run the test
testValidation().catch(console.error);

View File

@@ -1,165 +0,0 @@
#!/usr/bin/env node
/**
* Test for Issue #45 Fix: Partial Update Tool Validation/Execution Discrepancy
*
* This test verifies that the cleanWorkflowForUpdate function no longer adds
* default settings to workflows during updates, which was causing the n8n API
* to reject requests with "settings must NOT have additional properties".
*/
import { config } from 'dotenv';
import { logger } from '../utils/logger';
import { cleanWorkflowForUpdate, cleanWorkflowForCreate } from '../services/n8n-validation';
import { Workflow } from '../types/n8n-api';
// Load environment variables
config();
function testCleanWorkflowFunctions() {
logger.info('Testing Issue #45 Fix: cleanWorkflowForUpdate should not add default settings\n');
// Test 1: cleanWorkflowForUpdate with workflow without settings
logger.info('=== Test 1: cleanWorkflowForUpdate without settings ===');
const workflowWithoutSettings: Workflow = {
id: 'test-123',
name: 'Test Workflow',
nodes: [],
connections: {},
active: false,
createdAt: '2024-01-01T00:00:00.000Z',
updatedAt: '2024-01-01T00:00:00.000Z',
versionId: 'version-123'
};
const cleanedUpdate = cleanWorkflowForUpdate(workflowWithoutSettings);
if ('settings' in cleanedUpdate) {
logger.error('❌ FAIL: cleanWorkflowForUpdate added settings when it should not have');
logger.error(' Found settings:', JSON.stringify(cleanedUpdate.settings));
} else {
logger.info('✅ PASS: cleanWorkflowForUpdate did not add settings');
}
// Test 2: cleanWorkflowForUpdate with existing settings
logger.info('\n=== Test 2: cleanWorkflowForUpdate with existing settings ===');
const workflowWithSettings: Workflow = {
...workflowWithoutSettings,
settings: {
executionOrder: 'v1',
saveDataErrorExecution: 'none',
saveDataSuccessExecution: 'none',
saveManualExecutions: false,
saveExecutionProgress: false
}
};
const cleanedUpdate2 = cleanWorkflowForUpdate(workflowWithSettings);
if ('settings' in cleanedUpdate2) {
const settingsMatch = JSON.stringify(cleanedUpdate2.settings) === JSON.stringify(workflowWithSettings.settings);
if (settingsMatch) {
logger.info('✅ PASS: cleanWorkflowForUpdate preserved existing settings without modification');
} else {
logger.error('❌ FAIL: cleanWorkflowForUpdate modified existing settings');
logger.error(' Original:', JSON.stringify(workflowWithSettings.settings));
logger.error(' Cleaned:', JSON.stringify(cleanedUpdate2.settings));
}
} else {
logger.error('❌ FAIL: cleanWorkflowForUpdate removed existing settings');
}
// Test 3: cleanWorkflowForUpdate with partial settings
logger.info('\n=== Test 3: cleanWorkflowForUpdate with partial settings ===');
const workflowWithPartialSettings: Workflow = {
...workflowWithoutSettings,
settings: {
executionOrder: 'v1'
// Missing other default properties
}
};
const cleanedUpdate3 = cleanWorkflowForUpdate(workflowWithPartialSettings);
if ('settings' in cleanedUpdate3) {
const settingsKeys = cleanedUpdate3.settings ? Object.keys(cleanedUpdate3.settings) : [];
const hasOnlyExecutionOrder = settingsKeys.length === 1 &&
cleanedUpdate3.settings?.executionOrder === 'v1';
if (hasOnlyExecutionOrder) {
logger.info('✅ PASS: cleanWorkflowForUpdate preserved partial settings without adding defaults');
} else {
logger.error('❌ FAIL: cleanWorkflowForUpdate added default properties to partial settings');
logger.error(' Original keys:', Object.keys(workflowWithPartialSettings.settings || {}));
logger.error(' Cleaned keys:', settingsKeys);
}
} else {
logger.error('❌ FAIL: cleanWorkflowForUpdate removed partial settings');
}
// Test 4: Verify cleanWorkflowForCreate still adds defaults
logger.info('\n=== Test 4: cleanWorkflowForCreate should add default settings ===');
const newWorkflow = {
name: 'New Workflow',
nodes: [],
connections: {}
};
const cleanedCreate = cleanWorkflowForCreate(newWorkflow);
if ('settings' in cleanedCreate && cleanedCreate.settings) {
const hasDefaults =
cleanedCreate.settings.executionOrder === 'v1' &&
cleanedCreate.settings.saveDataErrorExecution === 'all' &&
cleanedCreate.settings.saveDataSuccessExecution === 'all' &&
cleanedCreate.settings.saveManualExecutions === true &&
cleanedCreate.settings.saveExecutionProgress === true;
if (hasDefaults) {
logger.info('✅ PASS: cleanWorkflowForCreate correctly adds default settings');
} else {
logger.error('❌ FAIL: cleanWorkflowForCreate added settings but not with correct defaults');
logger.error(' Settings:', JSON.stringify(cleanedCreate.settings));
}
} else {
logger.error('❌ FAIL: cleanWorkflowForCreate did not add default settings');
}
// Test 5: Verify read-only fields are removed
logger.info('\n=== Test 5: cleanWorkflowForUpdate removes read-only fields ===');
const workflowWithReadOnly: any = {
...workflowWithoutSettings,
staticData: { some: 'data' },
pinData: { node1: 'data' },
tags: ['tag1', 'tag2'],
isArchived: true,
usedCredentials: ['cred1'],
sharedWithProjects: ['proj1'],
triggerCount: 5,
shared: true,
active: true
};
const cleanedReadOnly = cleanWorkflowForUpdate(workflowWithReadOnly);
const removedFields = [
'id', 'createdAt', 'updatedAt', 'versionId', 'meta',
'staticData', 'pinData', 'tags', 'isArchived',
'usedCredentials', 'sharedWithProjects', 'triggerCount',
'shared', 'active'
];
const hasRemovedFields = removedFields.some(field => field in cleanedReadOnly);
if (!hasRemovedFields) {
logger.info('✅ PASS: cleanWorkflowForUpdate correctly removed all read-only fields');
} else {
const foundFields = removedFields.filter(field => field in cleanedReadOnly);
logger.error('❌ FAIL: cleanWorkflowForUpdate did not remove these fields:', foundFields);
}
logger.info('\n=== Test Summary ===');
logger.info('All tests completed. The fix ensures that cleanWorkflowForUpdate only removes fields');
logger.info('without adding default settings, preventing the n8n API validation error.');
}
// Run the tests
testCleanWorkflowFunctions();

View File

@@ -1,162 +0,0 @@
#!/usr/bin/env node
/**
* Integration test for n8n_update_partial_workflow MCP tool
* Tests that the tool can be called successfully via MCP protocol
*/
import { config } from 'dotenv';
import { logger } from '../utils/logger';
import { isN8nApiConfigured } from '../config/n8n-api';
import { handleUpdatePartialWorkflow } from '../mcp/handlers-workflow-diff';
// Load environment variables
config();
async function testMcpUpdatePartialWorkflow() {
logger.info('Testing n8n_update_partial_workflow MCP tool...');
// Check if API is configured
if (!isN8nApiConfigured()) {
logger.warn('n8n API not configured. Set N8N_API_URL and N8N_API_KEY to test.');
logger.info('Example:');
logger.info(' N8N_API_URL=https://your-n8n.com N8N_API_KEY=your-key npm run test:mcp:update-partial');
return;
}
// Test 1: Validate only - should work without actual workflow
logger.info('\n=== Test 1: Validate Only (no actual workflow needed) ===');
const validateOnlyRequest = {
id: 'test-workflow-123',
operations: [
{
type: 'addNode',
description: 'Add HTTP Request node',
node: {
name: 'HTTP Request',
type: 'n8n-nodes-base.httpRequest',
position: [400, 300],
parameters: {
url: 'https://api.example.com/data',
method: 'GET'
}
}
},
{
type: 'addConnection',
source: 'Start',
target: 'HTTP Request'
}
],
validateOnly: true
};
try {
const result = await handleUpdatePartialWorkflow(validateOnlyRequest);
logger.info('Validation result:', JSON.stringify(result, null, 2));
} catch (error) {
logger.error('Validation test failed:', error);
}
// Test 2: Test with missing required fields
logger.info('\n=== Test 2: Missing Required Fields ===');
const invalidRequest = {
operations: [{
type: 'addNode'
// Missing node property
}]
// Missing id
};
try {
const result = await handleUpdatePartialWorkflow(invalidRequest);
logger.info('Should fail with validation error:', JSON.stringify(result, null, 2));
} catch (error) {
logger.info('Expected validation error:', error instanceof Error ? error.message : String(error));
}
// Test 3: Test with complex operations array
logger.info('\n=== Test 3: Complex Operations Array ===');
const complexRequest = {
id: 'workflow-456',
operations: [
{
type: 'updateNode',
nodeName: 'Webhook',
changes: {
'parameters.path': 'new-webhook-path',
'parameters.method': 'POST'
}
},
{
type: 'addNode',
node: {
name: 'Set',
type: 'n8n-nodes-base.set',
typeVersion: 3,
position: [600, 300],
parameters: {
mode: 'manual',
fields: {
values: [
{ name: 'status', value: 'processed' }
]
}
}
}
},
{
type: 'addConnection',
source: 'Webhook',
target: 'Set'
},
{
type: 'updateName',
name: 'Updated Workflow Name'
},
{
type: 'addTag',
tag: 'production'
}
],
validateOnly: true
};
try {
const result = await handleUpdatePartialWorkflow(complexRequest);
logger.info('Complex operations result:', JSON.stringify(result, null, 2));
} catch (error) {
logger.error('Complex operations test failed:', error);
}
// Test 4: Test operation type validation
logger.info('\n=== Test 4: Invalid Operation Type ===');
const invalidTypeRequest = {
id: 'workflow-789',
operations: [{
type: 'invalidOperation',
something: 'else'
}],
validateOnly: true
};
try {
const result = await handleUpdatePartialWorkflow(invalidTypeRequest);
logger.info('Invalid type result:', JSON.stringify(result, null, 2));
} catch (error) {
logger.info('Expected error for invalid type:', error instanceof Error ? error.message : String(error));
}
logger.info('\n✅ MCP tool integration tests completed!');
logger.info('\nNOTE: These tests verify the MCP tool can be called without errors.');
logger.info('To test with real workflows, ensure N8N_API_URL and N8N_API_KEY are set.');
}
// Run tests
testMcpUpdatePartialWorkflow().catch(error => {
logger.error('Unhandled error:', error);
process.exit(1);
});

View File

@@ -1,54 +0,0 @@
#!/usr/bin/env node
/**
* Test MCP tools directly
*/
import { createDatabaseAdapter } from '../database/database-adapter';
import { NodeRepository } from '../database/node-repository';
import { N8NDocumentationMCPServer } from '../mcp/server';
import { Logger } from '../utils/logger';
const logger = new Logger({ prefix: '[TestMCPTools]' });
async function testTool(server: any, toolName: string, args: any) {
try {
console.log(`\n🔧 Testing: ${toolName}`);
console.log('Args:', JSON.stringify(args, null, 2));
console.log('-'.repeat(60));
const result = await server[toolName].call(server, args);
console.log('Result:', JSON.stringify(result, null, 2));
} catch (error) {
console.error(`❌ Error: ${error}`);
}
}
async function main() {
console.log('🤖 Testing MCP Tools\n');
// Create server instance and wait for initialization
const server = new N8NDocumentationMCPServer();
// Give it time to initialize
await new Promise(resolve => setTimeout(resolve, 100));
// Test get_node_as_tool_info
console.log('\n=== Testing get_node_as_tool_info ===');
await testTool(server, 'getNodeAsToolInfo', 'nodes-base.slack');
await testTool(server, 'getNodeAsToolInfo', 'nodes-base.googleSheets');
// Test enhanced get_node_info with aiToolCapabilities
console.log('\n\n=== Testing get_node_info (with aiToolCapabilities) ===');
await testTool(server, 'getNodeInfo', 'nodes-base.httpRequest');
// Test list_ai_tools with enhanced response
console.log('\n\n=== Testing list_ai_tools (enhanced) ===');
await testTool(server, 'listAITools', {});
console.log('\n✅ All tests completed!');
process.exit(0);
}
if (require.main === module) {
main().catch(console.error);
}

View File

@@ -1,148 +0,0 @@
#!/usr/bin/env node
import { config } from 'dotenv';
import { logger } from '../utils/logger';
import { isN8nApiConfigured, getN8nApiConfig } from '../config/n8n-api';
import { getN8nApiClient } from '../mcp/handlers-n8n-manager';
import { N8nApiClient } from '../services/n8n-api-client';
import { Workflow, ExecutionStatus } from '../types/n8n-api';
// Load environment variables
config();
async function testN8nManagerIntegration() {
logger.info('Testing n8n Manager Integration...');
// Check if API is configured
if (!isN8nApiConfigured()) {
logger.warn('n8n API not configured. Set N8N_API_URL and N8N_API_KEY to test.');
logger.info('Example:');
logger.info(' N8N_API_URL=https://your-n8n.com N8N_API_KEY=your-key npm run test:n8n-manager');
return;
}
const apiConfig = getN8nApiConfig();
logger.info('n8n API Configuration:', {
url: apiConfig!.baseUrl,
timeout: apiConfig!.timeout,
maxRetries: apiConfig!.maxRetries
});
const client = getN8nApiClient();
if (!client) {
logger.error('Failed to create n8n API client');
return;
}
try {
// Test 1: Health Check
logger.info('\n=== Test 1: Health Check ===');
const health = await client.healthCheck();
logger.info('Health check passed:', health);
// Test 2: List Workflows
logger.info('\n=== Test 2: List Workflows ===');
const workflows = await client.listWorkflows({ limit: 5 });
logger.info(`Found ${workflows.data.length} workflows`);
workflows.data.forEach(wf => {
logger.info(`- ${wf.name} (ID: ${wf.id}, Active: ${wf.active})`);
});
// Test 3: Create a Test Workflow
logger.info('\n=== Test 3: Create Test Workflow ===');
const testWorkflow: Partial<Workflow> = {
name: `Test Workflow - MCP Integration ${Date.now()}`,
nodes: [
{
id: '1',
name: 'Start',
type: 'n8n-nodes-base.start',
typeVersion: 1,
position: [250, 300],
parameters: {}
},
{
id: '2',
name: 'Set',
type: 'n8n-nodes-base.set',
typeVersion: 1,
position: [450, 300],
parameters: {
values: {
string: [
{
name: 'message',
value: 'Hello from MCP!'
}
]
}
}
}
],
connections: {
'1': {
main: [[{ node: '2', type: 'main', index: 0 }]]
}
},
settings: {
executionOrder: 'v1',
saveDataErrorExecution: 'all',
saveDataSuccessExecution: 'all',
saveManualExecutions: true,
saveExecutionProgress: true
}
};
const createdWorkflow = await client.createWorkflow(testWorkflow);
logger.info('Created workflow:', {
id: createdWorkflow.id,
name: createdWorkflow.name,
active: createdWorkflow.active
});
// Test 4: Get Workflow Details
logger.info('\n=== Test 4: Get Workflow Details ===');
const workflowDetails = await client.getWorkflow(createdWorkflow.id!);
logger.info('Retrieved workflow:', {
id: workflowDetails.id,
name: workflowDetails.name,
nodeCount: workflowDetails.nodes.length
});
// Test 5: Update Workflow
logger.info('\n=== Test 5: Update Workflow ===');
// n8n API requires full workflow structure for updates
const updatedWorkflow = await client.updateWorkflow(createdWorkflow.id!, {
name: `${createdWorkflow.name} - Updated`,
nodes: workflowDetails.nodes,
connections: workflowDetails.connections,
settings: workflowDetails.settings
});
logger.info('Updated workflow name:', updatedWorkflow.name);
// Test 6: List Executions
logger.info('\n=== Test 6: List Recent Executions ===');
const executions = await client.listExecutions({ limit: 5 });
logger.info(`Found ${executions.data.length} recent executions`);
executions.data.forEach(exec => {
logger.info(`- Workflow: ${exec.workflowName || exec.workflowId}, Status: ${exec.status}, Started: ${exec.startedAt}`);
});
// Test 7: Cleanup - Delete Test Workflow
logger.info('\n=== Test 7: Cleanup ===');
await client.deleteWorkflow(createdWorkflow.id!);
logger.info('Deleted test workflow');
logger.info('\n✅ All tests passed successfully!');
} catch (error) {
logger.error('Test failed:', error);
process.exit(1);
}
}
// Run tests
testN8nManagerIntegration().catch(error => {
logger.error('Unhandled error:', error);
process.exit(1);
});

View File

@@ -1,113 +0,0 @@
#!/usr/bin/env ts-node
/**
* Test script for the n8n_validate_workflow tool
*
* This script tests the new tool that fetches a workflow from n8n
* and validates it using the existing validation logic.
*/
import { config } from 'dotenv';
import { handleValidateWorkflow } from '../mcp/handlers-n8n-manager';
import { NodeRepository } from '../database/node-repository';
import { createDatabaseAdapter } from '../database/database-adapter';
import { Logger } from '../utils/logger';
import * as path from 'path';
// Load environment variables
config();
const logger = new Logger({ prefix: '[TestN8nValidateWorkflow]' });
async function testN8nValidateWorkflow() {
try {
// Check if n8n API is configured
if (!process.env.N8N_API_URL || !process.env.N8N_API_KEY) {
logger.error('N8N_API_URL and N8N_API_KEY must be set in environment variables');
process.exit(1);
}
logger.info('n8n API Configuration:', {
url: process.env.N8N_API_URL,
hasApiKey: !!process.env.N8N_API_KEY
});
// Initialize database
const dbPath = path.join(process.cwd(), 'data', 'nodes.db');
const db = await createDatabaseAdapter(dbPath);
const repository = new NodeRepository(db);
// Test cases
const testCases = [
{
name: 'Validate existing workflow with all options',
args: {
id: '1', // Replace with an actual workflow ID from your n8n instance
options: {
validateNodes: true,
validateConnections: true,
validateExpressions: true,
profile: 'runtime'
}
}
},
{
name: 'Validate with minimal profile',
args: {
id: '1', // Replace with an actual workflow ID
options: {
profile: 'minimal'
}
}
},
{
name: 'Validate connections only',
args: {
id: '1', // Replace with an actual workflow ID
options: {
validateNodes: false,
validateConnections: true,
validateExpressions: false
}
}
}
];
// Run test cases
for (const testCase of testCases) {
logger.info(`\nRunning test: ${testCase.name}`);
logger.info('Input:', JSON.stringify(testCase.args, null, 2));
try {
const result = await handleValidateWorkflow(testCase.args, repository);
if (result.success) {
logger.info('✅ Validation completed successfully');
logger.info('Result:', JSON.stringify(result.data, null, 2));
} else {
logger.error('❌ Validation failed');
logger.error('Error:', result.error);
if (result.details) {
logger.error('Details:', JSON.stringify(result.details, null, 2));
}
}
} catch (error) {
logger.error('❌ Test case failed with exception:', error);
}
logger.info('-'.repeat(80));
}
logger.info('\n✅ All tests completed');
} catch (error) {
logger.error('Test script failed:', error);
process.exit(1);
}
}
// Run the test
testN8nValidateWorkflow().catch(error => {
logger.error('Unhandled error:', error);
process.exit(1);
});

View File

@@ -1,200 +0,0 @@
#!/usr/bin/env node
/**
* Test script demonstrating all node-level properties in n8n workflows
* Shows correct placement and usage of properties that must be at node level
*/
import { createDatabaseAdapter } from '../database/database-adapter.js';
import { NodeRepository } from '../database/node-repository.js';
import { WorkflowValidator } from '../services/workflow-validator.js';
import { WorkflowDiffEngine } from '../services/workflow-diff-engine.js';
import { join } from 'path';
async function main() {
console.log('🔍 Testing Node-Level Properties Configuration\n');
// Initialize database
const dbPath = join(process.cwd(), 'nodes.db');
const dbAdapter = await createDatabaseAdapter(dbPath);
const nodeRepository = new NodeRepository(dbAdapter);
const EnhancedConfigValidator = (await import('../services/enhanced-config-validator.js')).EnhancedConfigValidator;
const validator = new WorkflowValidator(nodeRepository, EnhancedConfigValidator);
const diffEngine = new WorkflowDiffEngine();
// Example 1: Complete node with all properties
console.log('1⃣ Complete Node Configuration Example:');
const completeNode = {
id: 'node_1',
name: 'Database Query',
type: 'n8n-nodes-base.postgres',
typeVersion: 2.6,
position: [450, 300] as [number, number],
// Operation parameters (inside parameters)
parameters: {
operation: 'executeQuery',
query: 'SELECT * FROM users WHERE active = true'
},
// Node-level properties (NOT inside parameters!)
credentials: {
postgres: {
id: 'cred_123',
name: 'Production Database'
}
},
disabled: false,
notes: 'This node queries active users from the production database',
notesInFlow: true,
executeOnce: true,
// Error handling (also at node level!)
onError: 'continueErrorOutput' as const,
retryOnFail: true,
maxTries: 3,
waitBetweenTries: 2000,
alwaysOutputData: true
};
console.log(JSON.stringify(completeNode, null, 2));
console.log('\n✅ All properties are at the correct level!\n');
// Example 2: Workflow with properly configured nodes
console.log('2⃣ Complete Workflow Example:');
const workflow = {
name: 'Production Data Processing',
nodes: [
{
id: 'trigger_1',
name: 'Every Hour',
type: 'n8n-nodes-base.scheduleTrigger',
typeVersion: 1.2,
position: [250, 300] as [number, number],
parameters: {
rule: { interval: [{ field: 'hours', hoursInterval: 1 }] }
},
notes: 'Runs every hour to check for new data',
notesInFlow: true
},
completeNode,
{
id: 'error_handler',
name: 'Error Notification',
type: 'n8n-nodes-base.slack',
typeVersion: 2.3,
position: [650, 450] as [number, number],
parameters: {
resource: 'message',
operation: 'post',
channel: '#alerts',
text: 'Database query failed!'
},
credentials: {
slackApi: {
id: 'cred_456',
name: 'Alert Slack'
}
},
executeOnce: true,
onError: 'continueRegularOutput' as const
}
],
connections: {
'Every Hour': {
main: [[{ node: 'Database Query', type: 'main', index: 0 }]]
},
'Database Query': {
main: [[{ node: 'Process Data', type: 'main', index: 0 }]],
error: [[{ node: 'Error Notification', type: 'main', index: 0 }]]
}
}
};
// Validate the workflow
console.log('\n3⃣ Validating Workflow:');
const result = await validator.validateWorkflow(workflow as any, { profile: 'strict' });
console.log(`Valid: ${result.valid}`);
console.log(`Errors: ${result.errors.length}`);
console.log(`Warnings: ${result.warnings.length}`);
if (result.errors.length > 0) {
console.log('\nErrors:');
result.errors.forEach((err: any) => console.log(`- ${err.message}`));
}
// Example 3: Using workflow diff to update node-level properties
console.log('\n4⃣ Updating Node-Level Properties with Diff Engine:');
const operations = [
{
type: 'updateNode' as const,
nodeName: 'Database Query',
changes: {
// Update operation parameters
'parameters.query': 'SELECT * FROM users WHERE active = true AND created_at > NOW() - INTERVAL \'7 days\'',
// Update node-level properties (no 'parameters.' prefix!)
'onError': 'stopWorkflow',
'executeOnce': false,
'notes': 'Updated to only query users from last 7 days',
'maxTries': 5,
'disabled': false
}
}
];
console.log('Operations:');
console.log(JSON.stringify(operations, null, 2));
// Example 4: Common mistakes to avoid
console.log('\n5⃣ ❌ COMMON MISTAKES TO AVOID:');
const wrongNode = {
id: 'wrong_1',
name: 'Wrong Configuration',
type: 'n8n-nodes-base.httpRequest',
typeVersion: 4.2,
position: [250, 300] as [number, number],
parameters: {
method: 'POST',
url: 'https://api.example.com',
// ❌ WRONG - These should NOT be inside parameters!
onError: 'continueErrorOutput',
retryOnFail: true,
executeOnce: true,
notes: 'This is wrong!',
credentials: { httpAuth: { id: '123' } }
}
};
console.log('❌ Wrong (properties inside parameters):');
console.log(JSON.stringify(wrongNode.parameters, null, 2));
// Validate wrong configuration
const wrongWorkflow = {
name: 'Wrong Example',
nodes: [wrongNode],
connections: {}
};
const wrongResult = await validator.validateWorkflow(wrongWorkflow as any);
console.log('\nValidation of wrong configuration:');
wrongResult.errors.forEach((err: any) => console.log(`❌ ERROR: ${err.message}`));
console.log('\n✅ Summary of Node-Level Properties:');
console.log('- credentials: Link to credential sets');
console.log('- disabled: Disable node execution');
console.log('- notes: Internal documentation');
console.log('- notesInFlow: Show notes on canvas');
console.log('- executeOnce: Execute only once per run');
console.log('- onError: Error handling strategy');
console.log('- retryOnFail: Enable automatic retries');
console.log('- maxTries: Number of retry attempts');
console.log('- waitBetweenTries: Delay between retries');
console.log('- alwaysOutputData: Output data on error');
console.log('- continueOnFail: (deprecated - use onError)');
console.log('\n🎯 Remember: All these properties go at the NODE level, not inside parameters!');
}
main().catch(console.error);

View File

@@ -1,108 +0,0 @@
#!/usr/bin/env node
/**
* Copyright (c) 2024 AiAdvisors Romuald Czlonkowski
* Licensed under the Sustainable Use License v1.0
*/
import { createDatabaseAdapter } from '../database/database-adapter';
import { NodeRepository } from '../database/node-repository';
const TEST_CASES = [
{
nodeType: 'nodes-base.httpRequest',
checks: {
hasProperties: true,
minProperties: 5,
hasDocumentation: true,
isVersioned: true
}
},
{
nodeType: 'nodes-base.slack',
checks: {
hasOperations: true,
minOperations: 10,
style: 'declarative'
}
},
{
nodeType: 'nodes-base.code',
checks: {
hasProperties: true,
properties: ['mode', 'language', 'jsCode']
}
}
];
async function runTests() {
const db = await createDatabaseAdapter('./data/nodes.db');
const repository = new NodeRepository(db);
console.log('🧪 Running node tests...\n');
let passed = 0;
let failed = 0;
for (const testCase of TEST_CASES) {
console.log(`Testing ${testCase.nodeType}...`);
try {
const node = repository.getNode(testCase.nodeType);
if (!node) {
throw new Error('Node not found');
}
// Run checks
for (const [check, expected] of Object.entries(testCase.checks)) {
switch (check) {
case 'hasProperties':
if (expected && node.properties.length === 0) {
throw new Error('No properties found');
}
break;
case 'minProperties':
if (node.properties.length < expected) {
throw new Error(`Expected at least ${expected} properties, got ${node.properties.length}`);
}
break;
case 'hasOperations':
if (expected && node.operations.length === 0) {
throw new Error('No operations found');
}
break;
case 'minOperations':
if (node.operations.length < expected) {
throw new Error(`Expected at least ${expected} operations, got ${node.operations.length}`);
}
break;
case 'properties':
const propNames = node.properties.map((p: any) => p.name);
for (const prop of expected as string[]) {
if (!propNames.includes(prop)) {
throw new Error(`Missing property: ${prop}`);
}
}
break;
}
}
console.log(`${testCase.nodeType} passed all checks\n`);
passed++;
} catch (error) {
console.error(`${testCase.nodeType} failed: ${(error as Error).message}\n`);
failed++;
}
}
console.log(`\n📊 Test Results: ${passed} passed, ${failed} failed`);
db.close();
}
if (require.main === module) {
runTests().catch(console.error);
}

View File

@@ -1,137 +0,0 @@
#!/usr/bin/env node
/**
* Test validation of a single workflow
*/
import { existsSync, readFileSync } from 'fs';
import path from 'path';
import { NodeRepository } from '../database/node-repository';
import { createDatabaseAdapter } from '../database/database-adapter';
import { WorkflowValidator } from '../services/workflow-validator';
import { EnhancedConfigValidator } from '../services/enhanced-config-validator';
import { Logger } from '../utils/logger';
const logger = new Logger({ prefix: '[test-single-workflow]' });
async function testSingleWorkflow() {
// Read the workflow file
const workflowPath = process.argv[2];
if (!workflowPath) {
logger.error('Please provide a workflow file path');
process.exit(1);
}
if (!existsSync(workflowPath)) {
logger.error(`Workflow file not found: ${workflowPath}`);
process.exit(1);
}
logger.info(`Testing workflow: ${workflowPath}\n`);
// Initialize database
const dbPath = path.join(process.cwd(), 'data', 'nodes.db');
if (!existsSync(dbPath)) {
logger.error('Database not found. Run npm run rebuild first.');
process.exit(1);
}
const db = await createDatabaseAdapter(dbPath);
const repository = new NodeRepository(db);
const validator = new WorkflowValidator(
repository,
EnhancedConfigValidator
);
try {
// Read and parse workflow
const workflowJson = JSON.parse(readFileSync(workflowPath, 'utf8'));
logger.info(`Workflow: ${workflowJson.name || 'Unnamed'}`);
logger.info(`Nodes: ${workflowJson.nodes?.length || 0}`);
logger.info(`Connections: ${Object.keys(workflowJson.connections || {}).length}`);
// List all node types in the workflow
logger.info('\nNode types in workflow:');
workflowJson.nodes?.forEach((node: any) => {
logger.info(` - ${node.name}: ${node.type}`);
});
// Check what these node types are in our database
logger.info('\nChecking node types in database:');
for (const node of workflowJson.nodes || []) {
const dbNode = repository.getNode(node.type);
if (dbNode) {
logger.info(`${node.type} found in database`);
} else {
// Try normalization patterns
let shortType = node.type;
if (node.type.startsWith('n8n-nodes-base.')) {
shortType = node.type.replace('n8n-nodes-base.', 'nodes-base.');
} else if (node.type.startsWith('@n8n/n8n-nodes-langchain.')) {
shortType = node.type.replace('@n8n/n8n-nodes-langchain.', 'nodes-langchain.');
}
const dbNodeShort = repository.getNode(shortType);
if (dbNodeShort) {
logger.info(`${shortType} found in database (normalized)`);
} else {
logger.error(`${node.type} NOT found in database`);
}
}
}
logger.info('\n' + '='.repeat(80));
logger.info('VALIDATION RESULTS');
logger.info('='.repeat(80) + '\n');
// Validate the workflow
const result = await validator.validateWorkflow(workflowJson);
console.log(`Valid: ${result.valid ? '✅ YES' : '❌ NO'}`);
if (result.errors.length > 0) {
console.log('\nErrors:');
result.errors.forEach((error: any) => {
console.log(` - ${error.nodeName || 'workflow'}: ${error.message}`);
});
}
if (result.warnings.length > 0) {
console.log('\nWarnings:');
result.warnings.forEach((warning: any) => {
const msg = typeof warning.message === 'string'
? warning.message
: JSON.stringify(warning.message);
console.log(` - ${warning.nodeName || 'workflow'}: ${msg}`);
});
}
if (result.suggestions?.length > 0) {
console.log('\nSuggestions:');
result.suggestions.forEach((suggestion: string) => {
console.log(` - ${suggestion}`);
});
}
console.log('\nStatistics:');
console.log(` - Total nodes: ${result.statistics.totalNodes}`);
console.log(` - Enabled nodes: ${result.statistics.enabledNodes}`);
console.log(` - Trigger nodes: ${result.statistics.triggerNodes}`);
console.log(` - Valid connections: ${result.statistics.validConnections}`);
console.log(` - Invalid connections: ${result.statistics.invalidConnections}`);
console.log(` - Expressions validated: ${result.statistics.expressionsValidated}`);
} catch (error) {
logger.error('Failed to validate workflow:', error);
process.exit(1);
} finally {
db.close();
}
}
// Run test
testSingleWorkflow().catch(error => {
logger.error('Test failed:', error);
process.exit(1);
});

View File

@@ -1,173 +0,0 @@
#!/usr/bin/env node
/**
* Test workflow validation on actual n8n templates from the database
*/
import { existsSync } from 'fs';
import path from 'path';
import { NodeRepository } from '../database/node-repository';
import { createDatabaseAdapter } from '../database/database-adapter';
import { WorkflowValidator } from '../services/workflow-validator';
import { EnhancedConfigValidator } from '../services/enhanced-config-validator';
import { TemplateRepository } from '../templates/template-repository';
import { Logger } from '../utils/logger';
const logger = new Logger({ prefix: '[test-template-validation]' });
async function testTemplateValidation() {
logger.info('Starting template validation tests...\n');
// Initialize database
const dbPath = path.join(process.cwd(), 'data', 'nodes.db');
if (!existsSync(dbPath)) {
logger.error('Database not found. Run npm run rebuild first.');
process.exit(1);
}
const db = await createDatabaseAdapter(dbPath);
const repository = new NodeRepository(db);
const templateRepository = new TemplateRepository(db);
const validator = new WorkflowValidator(
repository,
EnhancedConfigValidator
);
try {
// Get some templates to test
const templates = await templateRepository.getAllTemplates(20);
if (templates.length === 0) {
logger.warn('No templates found in database. Run npm run fetch:templates first.');
process.exit(0);
}
logger.info(`Found ${templates.length} templates to validate\n`);
const results = {
total: templates.length,
valid: 0,
invalid: 0,
withErrors: 0,
withWarnings: 0,
errorTypes: new Map<string, number>(),
warningTypes: new Map<string, number>()
};
// Validate each template
for (const template of templates) {
logger.info(`\n${'='.repeat(80)}`);
logger.info(`Validating: ${template.name} (ID: ${template.id})`);
logger.info(`Author: ${template.author_name} (@${template.author_username})`);
logger.info(`Views: ${template.views}`);
logger.info(`${'='.repeat(80)}\n`);
try {
const workflow = JSON.parse(template.workflow_json);
// Log workflow summary
logger.info(`Workflow summary:`);
logger.info(`- Nodes: ${workflow.nodes?.length || 0}`);
logger.info(`- Connections: ${Object.keys(workflow.connections || {}).length}`);
// Validate the workflow
const validationResult = await validator.validateWorkflow(workflow);
// Update statistics
if (validationResult.valid) {
results.valid++;
console.log('✅ VALID');
} else {
results.invalid++;
console.log('❌ INVALID');
}
if (validationResult.errors.length > 0) {
results.withErrors++;
console.log('\nErrors:');
validationResult.errors.forEach((error: any) => {
const errorMsg = typeof error.message === 'string' ? error.message : JSON.stringify(error.message);
const errorKey = errorMsg.substring(0, 50);
results.errorTypes.set(errorKey, (results.errorTypes.get(errorKey) || 0) + 1);
console.log(` - ${error.nodeName || 'workflow'}: ${errorMsg}`);
});
}
if (validationResult.warnings.length > 0) {
results.withWarnings++;
console.log('\nWarnings:');
validationResult.warnings.forEach((warning: any) => {
const warningKey = typeof warning.message === 'string'
? warning.message.substring(0, 50)
: JSON.stringify(warning.message).substring(0, 50);
results.warningTypes.set(warningKey, (results.warningTypes.get(warningKey) || 0) + 1);
console.log(` - ${warning.nodeName || 'workflow'}: ${
typeof warning.message === 'string' ? warning.message : JSON.stringify(warning.message)
}`);
});
}
if (validationResult.suggestions?.length > 0) {
console.log('\nSuggestions:');
validationResult.suggestions.forEach((suggestion: string) => {
console.log(` - ${suggestion}`);
});
}
console.log('\nStatistics:');
console.log(` - Total nodes: ${validationResult.statistics.totalNodes}`);
console.log(` - Enabled nodes: ${validationResult.statistics.enabledNodes}`);
console.log(` - Trigger nodes: ${validationResult.statistics.triggerNodes}`);
console.log(` - Valid connections: ${validationResult.statistics.validConnections}`);
console.log(` - Invalid connections: ${validationResult.statistics.invalidConnections}`);
console.log(` - Expressions validated: ${validationResult.statistics.expressionsValidated}`);
} catch (error) {
logger.error(`Failed to validate template ${template.id}:`, error);
results.invalid++;
}
}
// Print summary
console.log('\n' + '='.repeat(80));
console.log('VALIDATION SUMMARY');
console.log('='.repeat(80));
console.log(`Total templates tested: ${results.total}`);
console.log(`Valid workflows: ${results.valid} (${((results.valid / results.total) * 100).toFixed(1)}%)`);
console.log(`Invalid workflows: ${results.invalid} (${((results.invalid / results.total) * 100).toFixed(1)}%)`);
console.log(`Workflows with errors: ${results.withErrors}`);
console.log(`Workflows with warnings: ${results.withWarnings}`);
if (results.errorTypes.size > 0) {
console.log('\nMost common errors:');
const sortedErrors = Array.from(results.errorTypes.entries())
.sort((a, b) => b[1] - a[1])
.slice(0, 5);
sortedErrors.forEach(([error, count]) => {
console.log(` - "${error}..." (${count} times)`);
});
}
if (results.warningTypes.size > 0) {
console.log('\nMost common warnings:');
const sortedWarnings = Array.from(results.warningTypes.entries())
.sort((a, b) => b[1] - a[1])
.slice(0, 5);
sortedWarnings.forEach(([warning, count]) => {
console.log(` - "${warning}..." (${count} times)`);
});
}
} catch (error) {
logger.error('Failed to run template validation:', error);
process.exit(1);
} finally {
db.close();
}
}
// Run tests
testTemplateValidation().catch(error => {
logger.error('Test failed:', error);
process.exit(1);
});

View File

@@ -1,88 +0,0 @@
#!/usr/bin/env node
import { createDatabaseAdapter } from '../database/database-adapter';
import { TemplateService } from '../templates/template-service';
import * as fs from 'fs';
import * as path from 'path';
async function testTemplates() {
console.log('🧪 Testing template functionality...\n');
// Initialize database
const db = await createDatabaseAdapter('./data/nodes.db');
// Apply schema if needed
const schema = fs.readFileSync(path.join(__dirname, '../../src/database/schema.sql'), 'utf8');
db.exec(schema);
// Create service
const service = new TemplateService(db);
try {
// Get statistics
const stats = await service.getTemplateStats();
console.log('📊 Template Database Stats:');
console.log(` Total templates: ${stats.totalTemplates}`);
if (stats.totalTemplates === 0) {
console.log('\n⚠ No templates found in database!');
console.log(' Run "npm run fetch:templates" to populate the database.\n');
return;
}
console.log(` Average views: ${stats.averageViews}`);
console.log('\n🔝 Most used nodes in templates:');
stats.topUsedNodes.forEach((node: any, i: number) => {
console.log(` ${i + 1}. ${node.node} (${node.count} templates)`);
});
// Test search
console.log('\n🔍 Testing search for "webhook":');
const searchResults = await service.searchTemplates('webhook', 3);
searchResults.forEach((t: any) => {
console.log(` - ${t.name} (${t.views} views)`);
});
// Test node-based search
console.log('\n🔍 Testing templates with HTTP Request node:');
const httpTemplates = await service.listNodeTemplates(['n8n-nodes-base.httpRequest'], 3);
httpTemplates.forEach((t: any) => {
console.log(` - ${t.name} (${t.nodes.length} nodes)`);
});
// Test task-based search
console.log('\n🔍 Testing AI automation templates:');
const aiTemplates = await service.getTemplatesForTask('ai_automation');
aiTemplates.forEach((t: any) => {
console.log(` - ${t.name} by @${t.author.username}`);
});
// Get a specific template
if (searchResults.length > 0) {
const templateId = searchResults[0].id;
console.log(`\n📄 Getting template ${templateId} details...`);
const template = await service.getTemplate(templateId);
if (template) {
console.log(` Name: ${template.name}`);
console.log(` Nodes: ${template.nodes.join(', ')}`);
console.log(` Workflow has ${template.workflow.nodes.length} nodes`);
}
}
console.log('\n✅ All template tests passed!');
} catch (error) {
console.error('❌ Error during testing:', error);
}
// Close database
if ('close' in db && typeof db.close === 'function') {
db.close();
}
}
// Run if called directly
if (require.main === module) {
testTemplates().catch(console.error);
}
export { testTemplates };

View File

@@ -1,55 +0,0 @@
import { N8NDocumentationMCPServer } from '../mcp/server';
async function testToolsDocumentation() {
const server = new N8NDocumentationMCPServer();
console.log('=== Testing tools_documentation tool ===\n');
// Test 1: No parameters (quick reference)
console.log('1. Testing without parameters (quick reference):');
console.log('----------------------------------------');
const quickRef = await server.executeTool('tools_documentation', {});
console.log(quickRef);
console.log('\n');
// Test 2: Overview with essentials depth
console.log('2. Testing overview with essentials:');
console.log('----------------------------------------');
const overviewEssentials = await server.executeTool('tools_documentation', { topic: 'overview' });
console.log(overviewEssentials);
console.log('\n');
// Test 3: Overview with full depth
console.log('3. Testing overview with full depth:');
console.log('----------------------------------------');
const overviewFull = await server.executeTool('tools_documentation', { topic: 'overview', depth: 'full' });
console.log(overviewFull.substring(0, 500) + '...\n');
// Test 4: Specific tool with essentials
console.log('4. Testing search_nodes with essentials:');
console.log('----------------------------------------');
const searchNodesEssentials = await server.executeTool('tools_documentation', { topic: 'search_nodes' });
console.log(searchNodesEssentials);
console.log('\n');
// Test 5: Specific tool with full documentation
console.log('5. Testing search_nodes with full depth:');
console.log('----------------------------------------');
const searchNodesFull = await server.executeTool('tools_documentation', { topic: 'search_nodes', depth: 'full' });
console.log(searchNodesFull.substring(0, 800) + '...\n');
// Test 6: Non-existent tool
console.log('6. Testing non-existent tool:');
console.log('----------------------------------------');
const nonExistent = await server.executeTool('tools_documentation', { topic: 'fake_tool' });
console.log(nonExistent);
console.log('\n');
// Test 7: Another tool example
console.log('7. Testing n8n_update_partial_workflow with essentials:');
console.log('----------------------------------------');
const updatePartial = await server.executeTool('tools_documentation', { topic: 'n8n_update_partial_workflow' });
console.log(updatePartial);
}
testToolsDocumentation().catch(console.error);

View File

@@ -1,276 +0,0 @@
/**
* Test script for transactional workflow diff operations
* Tests the two-pass processing approach
*/
import { WorkflowDiffEngine } from '../services/workflow-diff-engine';
import { Workflow, WorkflowNode } from '../types/n8n-api';
import { WorkflowDiffRequest } from '../types/workflow-diff';
import { Logger } from '../utils/logger';
const logger = new Logger({ prefix: '[TestTransactionalDiff]' });
// Create a test workflow
const testWorkflow: Workflow = {
id: 'test-workflow-123',
name: 'Test Workflow',
active: false,
nodes: [
{
id: '1',
name: 'Webhook',
type: 'n8n-nodes-base.webhook',
typeVersion: 2,
position: [200, 300],
parameters: {
path: '/test',
method: 'GET'
}
}
],
connections: {},
settings: {
executionOrder: 'v1'
},
tags: []
};
async function testAddNodesAndConnect() {
logger.info('Test 1: Add two nodes and connect them in one operation');
const engine = new WorkflowDiffEngine();
const request: WorkflowDiffRequest = {
id: testWorkflow.id!,
operations: [
// Add connections first (would fail in old implementation)
{
type: 'addConnection',
source: 'Webhook',
target: 'Process Data'
},
{
type: 'addConnection',
source: 'Process Data',
target: 'Send Email'
},
// Then add the nodes (two-pass will process these first)
{
type: 'addNode',
node: {
id: '2',
name: 'Process Data',
type: 'n8n-nodes-base.set',
typeVersion: 3,
position: [400, 300],
parameters: {
mode: 'manual',
fields: []
}
}
},
{
type: 'addNode',
node: {
id: '3',
name: 'Send Email',
type: 'n8n-nodes-base.emailSend',
typeVersion: 2.1,
position: [600, 300],
parameters: {
to: 'test@example.com',
subject: 'Test'
}
}
}
]
};
const result = await engine.applyDiff(testWorkflow, request);
if (result.success) {
logger.info('✅ Test passed! Operations applied successfully');
logger.info(`Message: ${result.message}`);
// Verify nodes were added
const workflow = result.workflow!;
const hasProcessData = workflow.nodes.some((n: WorkflowNode) => n.name === 'Process Data');
const hasSendEmail = workflow.nodes.some((n: WorkflowNode) => n.name === 'Send Email');
if (hasProcessData && hasSendEmail) {
logger.info('✅ Both nodes were added');
} else {
logger.error('❌ Nodes were not added correctly');
}
// Verify connections were made
const webhookConnections = workflow.connections['Webhook'];
const processConnections = workflow.connections['Process Data'];
if (webhookConnections && processConnections) {
logger.info('✅ Connections were established');
} else {
logger.error('❌ Connections were not established correctly');
}
} else {
logger.error('❌ Test failed!');
logger.error('Errors:', result.errors);
}
}
async function testOperationLimit() {
logger.info('\nTest 2: Operation limit (max 5)');
const engine = new WorkflowDiffEngine();
const request: WorkflowDiffRequest = {
id: testWorkflow.id!,
operations: [
{ type: 'addNode', node: { id: '101', name: 'Node1', type: 'n8n-nodes-base.set', typeVersion: 1, position: [400, 100], parameters: {} } },
{ type: 'addNode', node: { id: '102', name: 'Node2', type: 'n8n-nodes-base.set', typeVersion: 1, position: [400, 200], parameters: {} } },
{ type: 'addNode', node: { id: '103', name: 'Node3', type: 'n8n-nodes-base.set', typeVersion: 1, position: [400, 300], parameters: {} } },
{ type: 'addNode', node: { id: '104', name: 'Node4', type: 'n8n-nodes-base.set', typeVersion: 1, position: [400, 400], parameters: {} } },
{ type: 'addNode', node: { id: '105', name: 'Node5', type: 'n8n-nodes-base.set', typeVersion: 1, position: [400, 500], parameters: {} } },
{ type: 'addNode', node: { id: '106', name: 'Node6', type: 'n8n-nodes-base.set', typeVersion: 1, position: [400, 600], parameters: {} } }
]
};
const result = await engine.applyDiff(testWorkflow, request);
if (!result.success && result.errors?.[0]?.message.includes('Too many operations')) {
logger.info('✅ Operation limit enforced correctly');
} else {
logger.error('❌ Operation limit not enforced');
}
}
async function testValidateOnly() {
logger.info('\nTest 3: Validate only mode');
const engine = new WorkflowDiffEngine();
const request: WorkflowDiffRequest = {
id: testWorkflow.id!,
operations: [
// Test with connection first - two-pass should handle this
{
type: 'addConnection',
source: 'Webhook',
target: 'HTTP Request'
},
{
type: 'addNode',
node: {
id: '4',
name: 'HTTP Request',
type: 'n8n-nodes-base.httpRequest',
typeVersion: 4.2,
position: [400, 300],
parameters: {
method: 'GET',
url: 'https://api.example.com'
}
}
},
{
type: 'updateSettings',
settings: {
saveDataErrorExecution: 'all'
}
}
],
validateOnly: true
};
const result = await engine.applyDiff(testWorkflow, request);
if (result.success) {
logger.info('✅ Validate-only mode works correctly');
logger.info(`Validation message: ${result.message}`);
// Verify original workflow wasn't modified
if (testWorkflow.nodes.length === 1) {
logger.info('✅ Original workflow unchanged');
} else {
logger.error('❌ Original workflow was modified in validate-only mode');
}
} else {
logger.error('❌ Validate-only mode failed');
logger.error('Errors:', result.errors);
}
}
async function testMixedOperations() {
logger.info('\nTest 4: Mixed operations (update existing, add new, connect)');
const engine = new WorkflowDiffEngine();
const request: WorkflowDiffRequest = {
id: testWorkflow.id!,
operations: [
// Update existing node
{
type: 'updateNode',
nodeName: 'Webhook',
changes: {
'parameters.path': '/updated-path'
}
},
// Add new node
{
type: 'addNode',
node: {
id: '5',
name: 'Logger',
type: 'n8n-nodes-base.n8n',
typeVersion: 1,
position: [400, 300],
parameters: {
operation: 'log',
level: 'info'
}
}
},
// Connect them
{
type: 'addConnection',
source: 'Webhook',
target: 'Logger'
},
// Update workflow settings
{
type: 'updateSettings',
settings: {
saveDataErrorExecution: 'all'
}
}
]
};
const result = await engine.applyDiff(testWorkflow, request);
if (result.success) {
logger.info('✅ Mixed operations applied successfully');
logger.info(`Message: ${result.message}`);
} else {
logger.error('❌ Mixed operations failed');
logger.error('Errors:', result.errors);
}
}
// Run all tests
async function runTests() {
logger.info('Starting transactional diff tests...\n');
try {
await testAddNodesAndConnect();
await testOperationLimit();
await testValidateOnly();
await testMixedOperations();
logger.info('\n✅ All tests completed!');
} catch (error) {
logger.error('Test suite failed:', error);
}
}
// Run tests if this file is executed directly
if (require.main === module) {
runTests().catch(console.error);
}

View File

@@ -1,114 +0,0 @@
#!/usr/bin/env node
/**
* Debug test for n8n_update_partial_workflow
* Tests the actual update path to identify the issue
*/
import { config } from 'dotenv';
import { logger } from '../utils/logger';
import { isN8nApiConfigured } from '../config/n8n-api';
import { handleUpdatePartialWorkflow } from '../mcp/handlers-workflow-diff';
import { getN8nApiClient } from '../mcp/handlers-n8n-manager';
// Load environment variables
config();
async function testUpdatePartialDebug() {
logger.info('Debug test for n8n_update_partial_workflow...');
// Check if API is configured
if (!isN8nApiConfigured()) {
logger.warn('n8n API not configured. This test requires a real n8n instance.');
logger.info('Set N8N_API_URL and N8N_API_KEY to test.');
return;
}
const client = getN8nApiClient();
if (!client) {
logger.error('Failed to create n8n API client');
return;
}
try {
// First, create a test workflow
logger.info('\n=== Creating test workflow ===');
const testWorkflow = {
name: `Test Partial Update ${Date.now()}`,
nodes: [
{
id: '1',
name: 'Start',
type: 'n8n-nodes-base.start',
typeVersion: 1,
position: [250, 300] as [number, number],
parameters: {}
},
{
id: '2',
name: 'Set',
type: 'n8n-nodes-base.set',
typeVersion: 3,
position: [450, 300] as [number, number],
parameters: {
mode: 'manual',
fields: {
values: [
{ name: 'message', value: 'Initial value' }
]
}
}
}
],
connections: {
'Start': {
main: [[{ node: 'Set', type: 'main', index: 0 }]]
}
},
settings: {
executionOrder: 'v1' as 'v1'
}
};
const createdWorkflow = await client.createWorkflow(testWorkflow);
logger.info('Created workflow:', {
id: createdWorkflow.id,
name: createdWorkflow.name
});
// Now test partial update WITHOUT validateOnly
logger.info('\n=== Testing partial update (NO validateOnly) ===');
const updateRequest = {
id: createdWorkflow.id!,
operations: [
{
type: 'updateName',
name: 'Updated via Partial Update'
}
]
// Note: NO validateOnly flag
};
logger.info('Update request:', JSON.stringify(updateRequest, null, 2));
const result = await handleUpdatePartialWorkflow(updateRequest);
logger.info('Update result:', JSON.stringify(result, null, 2));
// Cleanup - delete test workflow
if (createdWorkflow.id) {
logger.info('\n=== Cleanup ===');
await client.deleteWorkflow(createdWorkflow.id);
logger.info('Deleted test workflow');
}
} catch (error) {
logger.error('Test failed:', error);
}
}
// Run test
testUpdatePartialDebug().catch(error => {
logger.error('Unhandled error:', error);
process.exit(1);
});

View File

@@ -1,90 +0,0 @@
import { NodeParser } from '../parsers/node-parser';
// Test script to verify version extraction from different node types
async function testVersionExtraction() {
console.log('Testing version extraction from different node types...\n');
const parser = new NodeParser();
// Test cases
const testCases = [
{
name: 'Gmail Trigger (version array)',
nodeType: 'nodes-base.gmailTrigger',
expectedVersion: '1.2',
expectedVersioned: true
},
{
name: 'HTTP Request (VersionedNodeType)',
nodeType: 'nodes-base.httpRequest',
expectedVersion: '4.2',
expectedVersioned: true
},
{
name: 'Code (version array)',
nodeType: 'nodes-base.code',
expectedVersion: '2',
expectedVersioned: true
}
];
// Load nodes from packages
const basePackagePath = process.cwd() + '/node_modules/n8n/node_modules/n8n-nodes-base';
for (const testCase of testCases) {
console.log(`\nTesting: ${testCase.name}`);
console.log(`Node Type: ${testCase.nodeType}`);
try {
// Find the node file
const nodeName = testCase.nodeType.split('.')[1];
// Try different paths
const possiblePaths = [
`${basePackagePath}/dist/nodes/${nodeName}.node.js`,
`${basePackagePath}/dist/nodes/Google/Gmail/GmailTrigger.node.js`,
`${basePackagePath}/dist/nodes/HttpRequest/HttpRequest.node.js`,
`${basePackagePath}/dist/nodes/Code/Code.node.js`
];
let nodeClass = null;
for (const path of possiblePaths) {
try {
const module = require(path);
nodeClass = module[Object.keys(module)[0]];
if (nodeClass) break;
} catch (e) {
// Try next path
}
}
if (!nodeClass) {
console.log('❌ Could not load node');
continue;
}
// Parse the node
const parsed = parser.parse(nodeClass, 'n8n-nodes-base');
console.log(`Loaded node: ${parsed.displayName} (${parsed.nodeType})`);
console.log(`Extracted version: ${parsed.version}`);
console.log(`Is versioned: ${parsed.isVersioned}`);
console.log(`Expected version: ${testCase.expectedVersion}`);
console.log(`Expected versioned: ${testCase.expectedVersioned}`);
if (parsed.version === testCase.expectedVersion &&
parsed.isVersioned === testCase.expectedVersioned) {
console.log('✅ PASS');
} else {
console.log('❌ FAIL');
}
} catch (error) {
console.log(`❌ Error: ${error instanceof Error ? error.message : String(error)}`);
}
}
}
// Run the test
testVersionExtraction().catch(console.error);

View File

@@ -1,374 +0,0 @@
#!/usr/bin/env node
/**
* Test script for workflow diff engine
* Tests various diff operations and edge cases
*/
import { WorkflowDiffEngine } from '../services/workflow-diff-engine';
import { WorkflowDiffRequest } from '../types/workflow-diff';
import { Workflow } from '../types/n8n-api';
import { Logger } from '../utils/logger';
const logger = new Logger({ prefix: '[test-workflow-diff]' });
// Sample workflow for testing
const sampleWorkflow: Workflow = {
id: 'test-workflow-123',
name: 'Test Workflow',
nodes: [
{
id: 'webhook_1',
name: 'Webhook',
type: 'n8n-nodes-base.webhook',
typeVersion: 1.1,
position: [200, 200],
parameters: {
path: 'test-webhook',
method: 'GET'
}
},
{
id: 'set_1',
name: 'Set',
type: 'n8n-nodes-base.set',
typeVersion: 3,
position: [400, 200],
parameters: {
mode: 'manual',
fields: {
values: [
{ name: 'message', value: 'Hello World' }
]
}
}
}
],
connections: {
'Webhook': {
main: [[{ node: 'Set', type: 'main', index: 0 }]]
}
},
settings: {
executionOrder: 'v1',
saveDataSuccessExecution: 'all'
},
tags: ['test', 'demo']
};
async function testAddNode() {
console.log('\n=== Testing Add Node Operation ===');
const engine = new WorkflowDiffEngine();
const request: WorkflowDiffRequest = {
id: 'test-workflow-123',
operations: [
{
type: 'addNode',
description: 'Add HTTP Request node',
node: {
name: 'HTTP Request',
type: 'n8n-nodes-base.httpRequest',
position: [600, 200],
parameters: {
url: 'https://api.example.com/data',
method: 'GET'
}
}
}
]
};
const result = await engine.applyDiff(sampleWorkflow, request);
if (result.success) {
console.log('✅ Add node successful');
console.log(` - Nodes count: ${result.workflow!.nodes.length}`);
console.log(` - New node: ${result.workflow!.nodes[2].name}`);
} else {
console.error('❌ Add node failed:', result.errors);
}
}
async function testRemoveNode() {
console.log('\n=== Testing Remove Node Operation ===');
const engine = new WorkflowDiffEngine();
const request: WorkflowDiffRequest = {
id: 'test-workflow-123',
operations: [
{
type: 'removeNode',
description: 'Remove Set node',
nodeName: 'Set'
}
]
};
const result = await engine.applyDiff(sampleWorkflow, request);
if (result.success) {
console.log('✅ Remove node successful');
console.log(` - Nodes count: ${result.workflow!.nodes.length}`);
console.log(` - Connections cleaned: ${Object.keys(result.workflow!.connections).length}`);
} else {
console.error('❌ Remove node failed:', result.errors);
}
}
async function testUpdateNode() {
console.log('\n=== Testing Update Node Operation ===');
const engine = new WorkflowDiffEngine();
const request: WorkflowDiffRequest = {
id: 'test-workflow-123',
operations: [
{
type: 'updateNode',
description: 'Update webhook path',
nodeName: 'Webhook',
changes: {
'parameters.path': 'new-webhook-path',
'parameters.method': 'POST'
}
}
]
};
const result = await engine.applyDiff(sampleWorkflow, request);
if (result.success) {
console.log('✅ Update node successful');
const updatedNode = result.workflow!.nodes.find((n: any) => n.name === 'Webhook');
console.log(` - New path: ${updatedNode!.parameters.path}`);
console.log(` - New method: ${updatedNode!.parameters.method}`);
} else {
console.error('❌ Update node failed:', result.errors);
}
}
async function testAddConnection() {
console.log('\n=== Testing Add Connection Operation ===');
// First add a node to connect to
const workflowWithExtraNode = JSON.parse(JSON.stringify(sampleWorkflow));
workflowWithExtraNode.nodes.push({
id: 'email_1',
name: 'Send Email',
type: 'n8n-nodes-base.emailSend',
typeVersion: 2,
position: [600, 200],
parameters: {}
});
const engine = new WorkflowDiffEngine();
const request: WorkflowDiffRequest = {
id: 'test-workflow-123',
operations: [
{
type: 'addConnection',
description: 'Connect Set to Send Email',
source: 'Set',
target: 'Send Email'
}
]
};
const result = await engine.applyDiff(workflowWithExtraNode, request);
if (result.success) {
console.log('✅ Add connection successful');
const setConnections = result.workflow!.connections['Set'];
console.log(` - Connection added: ${JSON.stringify(setConnections)}`);
} else {
console.error('❌ Add connection failed:', result.errors);
}
}
async function testMultipleOperations() {
console.log('\n=== Testing Multiple Operations ===');
const engine = new WorkflowDiffEngine();
const request: WorkflowDiffRequest = {
id: 'test-workflow-123',
operations: [
{
type: 'updateName',
name: 'Updated Test Workflow'
},
{
type: 'addNode',
node: {
name: 'If',
type: 'n8n-nodes-base.if',
position: [400, 400],
parameters: {}
}
},
{
type: 'disableNode',
nodeName: 'Set'
},
{
type: 'addTag',
tag: 'updated'
}
]
};
const result = await engine.applyDiff(sampleWorkflow, request);
if (result.success) {
console.log('✅ Multiple operations successful');
console.log(` - New name: ${result.workflow!.name}`);
console.log(` - Operations applied: ${result.operationsApplied}`);
console.log(` - Node count: ${result.workflow!.nodes.length}`);
console.log(` - Tags: ${result.workflow!.tags?.join(', ')}`);
} else {
console.error('❌ Multiple operations failed:', result.errors);
}
}
async function testValidationOnly() {
console.log('\n=== Testing Validation Only ===');
const engine = new WorkflowDiffEngine();
const request: WorkflowDiffRequest = {
id: 'test-workflow-123',
operations: [
{
type: 'addNode',
node: {
name: 'Webhook', // Duplicate name - should fail validation
type: 'n8n-nodes-base.webhook',
position: [600, 400]
}
}
],
validateOnly: true
};
const result = await engine.applyDiff(sampleWorkflow, request);
console.log(` - Validation result: ${result.success ? '✅ Valid' : '❌ Invalid'}`);
if (!result.success) {
console.log(` - Error: ${result.errors![0].message}`);
} else {
console.log(` - Message: ${result.message}`);
}
}
async function testInvalidOperations() {
console.log('\n=== Testing Invalid Operations ===');
const engine = new WorkflowDiffEngine();
// Test 1: Invalid node type
console.log('\n1. Testing invalid node type:');
let result = await engine.applyDiff(sampleWorkflow, {
id: 'test-workflow-123',
operations: [{
type: 'addNode',
node: {
name: 'Bad Node',
type: 'webhook', // Missing package prefix
position: [600, 400]
}
}]
});
console.log(` - Result: ${result.success ? '✅' : '❌'} ${result.errors?.[0]?.message || 'Success'}`);
// Test 2: Remove non-existent node
console.log('\n2. Testing remove non-existent node:');
result = await engine.applyDiff(sampleWorkflow, {
id: 'test-workflow-123',
operations: [{
type: 'removeNode',
nodeName: 'Non Existent Node'
}]
});
console.log(` - Result: ${result.success ? '✅' : '❌'} ${result.errors?.[0]?.message || 'Success'}`);
// Test 3: Invalid connection
console.log('\n3. Testing invalid connection:');
result = await engine.applyDiff(sampleWorkflow, {
id: 'test-workflow-123',
operations: [{
type: 'addConnection',
source: 'Webhook',
target: 'Non Existent Node'
}]
});
console.log(` - Result: ${result.success ? '✅' : '❌'} ${result.errors?.[0]?.message || 'Success'}`);
}
async function testNodeReferenceByIdAndName() {
console.log('\n=== Testing Node Reference by ID and Name ===');
const engine = new WorkflowDiffEngine();
// Test update by ID
console.log('\n1. Update node by ID:');
let result = await engine.applyDiff(sampleWorkflow, {
id: 'test-workflow-123',
operations: [{
type: 'updateNode',
nodeId: 'webhook_1',
changes: {
'parameters.path': 'updated-by-id'
}
}]
});
if (result.success) {
const node = result.workflow!.nodes.find((n: any) => n.id === 'webhook_1');
console.log(` - ✅ Success: path = ${node!.parameters.path}`);
} else {
console.log(` - ❌ Failed: ${result.errors![0].message}`);
}
// Test update by name
console.log('\n2. Update node by name:');
result = await engine.applyDiff(sampleWorkflow, {
id: 'test-workflow-123',
operations: [{
type: 'updateNode',
nodeName: 'Webhook',
changes: {
'parameters.path': 'updated-by-name'
}
}]
});
if (result.success) {
const node = result.workflow!.nodes.find((n: any) => n.name === 'Webhook');
console.log(` - ✅ Success: path = ${node!.parameters.path}`);
} else {
console.log(` - ❌ Failed: ${result.errors![0].message}`);
}
}
// Run all tests
async function runTests() {
try {
console.log('🧪 Running Workflow Diff Engine Tests...\n');
await testAddNode();
await testRemoveNode();
await testUpdateNode();
await testAddConnection();
await testMultipleOperations();
await testValidationOnly();
await testInvalidOperations();
await testNodeReferenceByIdAndName();
console.log('\n✅ All tests completed!');
} catch (error) {
console.error('\n❌ Test failed with error:', error);
process.exit(1);
}
}
// Run tests if this is the main module
if (require.main === module) {
runTests();
}

View File

@@ -1,272 +0,0 @@
#!/usr/bin/env node
/**
* Test script for workflow validation features
* Tests the new workflow validation tools with various scenarios
*/
import { existsSync } from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import { dirname } from 'path';
import { NodeRepository } from '../database/node-repository';
import { createDatabaseAdapter } from '../database/database-adapter';
import { WorkflowValidator } from '../services/workflow-validator';
import { EnhancedConfigValidator } from '../services/enhanced-config-validator';
import { Logger } from '../utils/logger';
const logger = new Logger({ prefix: '[test-workflow-validation]' });
// Test workflows
const VALID_WORKFLOW = {
name: 'Test Valid Workflow',
nodes: [
{
id: '1',
name: 'Schedule Trigger',
type: 'nodes-base.scheduleTrigger',
position: [250, 300] as [number, number],
parameters: {
rule: {
interval: [{ field: 'hours', hoursInterval: 1 }]
}
}
},
{
id: '2',
name: 'HTTP Request',
type: 'nodes-base.httpRequest',
position: [450, 300] as [number, number],
parameters: {
url: 'https://api.example.com/data',
method: 'GET'
}
},
{
id: '3',
name: 'Set',
type: 'nodes-base.set',
position: [650, 300] as [number, number],
parameters: {
values: {
string: [
{
name: 'status',
value: '={{ $json.status }}'
}
]
}
}
}
],
connections: {
'Schedule Trigger': {
main: [[{ node: 'HTTP Request', type: 'main', index: 0 }]]
},
'HTTP Request': {
main: [[{ node: 'Set', type: 'main', index: 0 }]]
}
}
};
const WORKFLOW_WITH_CYCLE = {
name: 'Workflow with Cycle',
nodes: [
{
id: '1',
name: 'Start',
type: 'nodes-base.start',
position: [250, 300] as [number, number],
parameters: {}
},
{
id: '2',
name: 'Node A',
type: 'nodes-base.set',
position: [450, 300] as [number, number],
parameters: { values: { string: [] } }
},
{
id: '3',
name: 'Node B',
type: 'nodes-base.set',
position: [650, 300] as [number, number],
parameters: { values: { string: [] } }
}
],
connections: {
'Start': {
main: [[{ node: 'Node A', type: 'main', index: 0 }]]
},
'Node A': {
main: [[{ node: 'Node B', type: 'main', index: 0 }]]
},
'Node B': {
main: [[{ node: 'Node A', type: 'main', index: 0 }]] // Creates cycle
}
}
};
const WORKFLOW_WITH_INVALID_EXPRESSION = {
name: 'Workflow with Invalid Expression',
nodes: [
{
id: '1',
name: 'Webhook',
type: 'nodes-base.webhook',
position: [250, 300] as [number, number],
parameters: {
path: 'test-webhook'
}
},
{
id: '2',
name: 'Set Data',
type: 'nodes-base.set',
position: [450, 300] as [number, number],
parameters: {
values: {
string: [
{
name: 'invalidExpression',
value: '={{ json.field }}' // Missing $ prefix
},
{
name: 'nestedExpression',
value: '={{ {{ $json.field }} }}' // Nested expressions not allowed
},
{
name: 'nodeReference',
value: '={{ $node["Non Existent Node"].json.data }}'
}
]
}
}
}
],
connections: {
'Webhook': {
main: [[{ node: 'Set Data', type: 'main', index: 0 }]]
}
}
};
const WORKFLOW_WITH_ORPHANED_NODE = {
name: 'Workflow with Orphaned Node',
nodes: [
{
id: '1',
name: 'Schedule Trigger',
type: 'nodes-base.scheduleTrigger',
position: [250, 300] as [number, number],
parameters: {
rule: { interval: [{ field: 'hours', hoursInterval: 1 }] }
}
},
{
id: '2',
name: 'HTTP Request',
type: 'nodes-base.httpRequest',
position: [450, 300] as [number, number],
parameters: {
url: 'https://api.example.com',
method: 'GET'
}
},
{
id: '3',
name: 'Orphaned Node',
type: 'nodes-base.set',
position: [450, 500] as [number, number],
parameters: {
values: { string: [] }
}
}
],
connections: {
'Schedule Trigger': {
main: [[{ node: 'HTTP Request', type: 'main', index: 0 }]]
}
// Orphaned Node has no connections
}
};
async function testWorkflowValidation() {
logger.info('Starting workflow validation tests...\n');
// Initialize database
const dbPath = path.join(process.cwd(), 'data', 'nodes.db');
if (!existsSync(dbPath)) {
logger.error('Database not found. Run npm run rebuild first.');
process.exit(1);
}
const db = await createDatabaseAdapter(dbPath);
const repository = new NodeRepository(db);
const validator = new WorkflowValidator(
repository,
EnhancedConfigValidator
);
// Test 1: Valid workflow
logger.info('Test 1: Validating a valid workflow');
const validResult = await validator.validateWorkflow(VALID_WORKFLOW);
console.log('Valid workflow result:', JSON.stringify(validResult, null, 2));
console.log('---\n');
// Test 2: Workflow with cycle
logger.info('Test 2: Validating workflow with cycle');
const cycleResult = await validator.validateWorkflow(WORKFLOW_WITH_CYCLE);
console.log('Cycle workflow result:', JSON.stringify(cycleResult, null, 2));
console.log('---\n');
// Test 3: Workflow with invalid expressions
logger.info('Test 3: Validating workflow with invalid expressions');
const expressionResult = await validator.validateWorkflow(WORKFLOW_WITH_INVALID_EXPRESSION);
console.log('Invalid expression result:', JSON.stringify(expressionResult, null, 2));
console.log('---\n');
// Test 4: Workflow with orphaned node
logger.info('Test 4: Validating workflow with orphaned node');
const orphanedResult = await validator.validateWorkflow(WORKFLOW_WITH_ORPHANED_NODE);
console.log('Orphaned node result:', JSON.stringify(orphanedResult, null, 2));
console.log('---\n');
// Test 5: Connection-only validation
logger.info('Test 5: Testing connection-only validation');
const connectionOnlyResult = await validator.validateWorkflow(WORKFLOW_WITH_CYCLE, {
validateNodes: false,
validateConnections: true,
validateExpressions: false
});
console.log('Connection-only result:', JSON.stringify(connectionOnlyResult, null, 2));
console.log('---\n');
// Test 6: Expression-only validation
logger.info('Test 6: Testing expression-only validation');
const expressionOnlyResult = await validator.validateWorkflow(WORKFLOW_WITH_INVALID_EXPRESSION, {
validateNodes: false,
validateConnections: false,
validateExpressions: true
});
console.log('Expression-only result:', JSON.stringify(expressionOnlyResult, null, 2));
console.log('---\n');
// Test summary
logger.info('Test Summary:');
console.log('✓ Valid workflow:', validResult.valid ? 'PASSED' : 'FAILED');
console.log('✓ Cycle detection:', !cycleResult.valid ? 'PASSED' : 'FAILED');
console.log('✓ Expression validation:', !expressionResult.valid ? 'PASSED' : 'FAILED');
console.log('✓ Orphaned node detection:', orphanedResult.warnings.length > 0 ? 'PASSED' : 'FAILED');
console.log('✓ Connection-only validation:', connectionOnlyResult.errors.length > 0 ? 'PASSED' : 'FAILED');
console.log('✓ Expression-only validation:', expressionOnlyResult.errors.length > 0 ? 'PASSED' : 'FAILED');
// Close database
db.close();
}
// Run tests
testWorkflowValidation().catch(error => {
logger.error('Test failed:', error);
process.exit(1);
});

View File

@@ -1,281 +0,0 @@
import { describe, it, expect, vi, beforeEach, afterEach, beforeAll } from 'vitest';
import { SingleSessionHTTPServer } from '../http-server-single-session';
import express from 'express';
import { ConsoleManager } from '../utils/console-manager';
// Mock express Request and Response
const createMockRequest = (body: any = {}): express.Request => {
// Create a mock readable stream for the request body
const { Readable } = require('stream');
const bodyString = JSON.stringify(body);
const stream = new Readable({
read() {}
});
// Push the body data and signal end
setTimeout(() => {
stream.push(bodyString);
stream.push(null);
}, 0);
const req: any = Object.assign(stream, {
body,
headers: {
authorization: `Bearer ${process.env.AUTH_TOKEN || 'test-token'}`,
'content-type': 'application/json',
'content-length': bodyString.length.toString()
},
method: 'POST',
path: '/mcp',
ip: '127.0.0.1',
get: (header: string) => {
if (header === 'user-agent') return 'test-agent';
if (header === 'content-length') return bodyString.length.toString();
if (header === 'content-type') return 'application/json';
return req.headers[header.toLowerCase()];
}
});
return req;
};
const createMockResponse = (): express.Response => {
const { Writable } = require('stream');
const chunks: Buffer[] = [];
const stream = new Writable({
write(chunk: any, encoding: string, callback: Function) {
chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
callback();
}
});
const res: any = Object.assign(stream, {
statusCode: 200,
headers: {} as any,
body: null as any,
headersSent: false,
chunks,
status: function(code: number) {
this.statusCode = code;
return this;
},
json: function(data: any) {
this.body = data;
this.headersSent = true;
const jsonStr = JSON.stringify(data);
stream.write(jsonStr);
stream.end();
return this;
},
setHeader: function(name: string, value: string) {
this.headers[name] = value;
return this;
},
writeHead: function(statusCode: number, headers?: any) {
this.statusCode = statusCode;
if (headers) {
Object.assign(this.headers, headers);
}
this.headersSent = true;
return this;
},
end: function(data?: any) {
if (data) {
stream.write(data);
}
// Parse the accumulated chunks as the body
if (chunks.length > 0) {
const fullBody = Buffer.concat(chunks).toString();
try {
this.body = JSON.parse(fullBody);
} catch {
this.body = fullBody;
}
}
stream.end();
return this;
}
});
return res;
};
describe('SingleSessionHTTPServer', () => {
let server: SingleSessionHTTPServer;
beforeAll(() => {
process.env.AUTH_TOKEN = 'test-token';
process.env.MCP_MODE = 'http';
});
beforeEach(() => {
server = new SingleSessionHTTPServer();
});
afterEach(async () => {
await server.shutdown();
});
describe('Console Management', () => {
it('should silence console during request handling', async () => {
// Set MCP_MODE to http to enable console silencing
const originalMode = process.env.MCP_MODE;
process.env.MCP_MODE = 'http';
// Save the original console.log
const originalLog = console.log;
// Track if console methods were called
let logCalled = false;
const trackingLog = (...args: any[]) => {
logCalled = true;
originalLog(...args); // Call original for debugging
};
// Replace console.log BEFORE creating ConsoleManager
console.log = trackingLog;
// Now create console manager which will capture our tracking function
const consoleManager = new ConsoleManager();
// Test console is silenced during operation
await consoleManager.wrapOperation(async () => {
// Reset the flag
logCalled = false;
// This should not actually call our tracking function
console.log('This should not appear');
expect(logCalled).toBe(false);
});
// After operation, console should be restored to our tracking function
logCalled = false;
console.log('This should appear');
expect(logCalled).toBe(true);
// Restore everything
console.log = originalLog;
process.env.MCP_MODE = originalMode;
});
it('should handle errors and still restore console', async () => {
const consoleManager = new ConsoleManager();
const originalError = console.error;
try {
await consoleManager.wrapOperation(() => {
throw new Error('Test error');
});
} catch (error) {
// Expected error
}
// Verify console was restored
expect(console.error).toBe(originalError);
});
});
describe('Session Management', () => {
it('should create a single session on first request', async () => {
const sessionInfoBefore = server.getSessionInfo();
expect(sessionInfoBefore.active).toBe(false);
// Since handleRequest would hang with our mocks,
// we'll test the session info functionality directly
// The actual request handling is an integration test concern
// Test that we can get session info when no session exists
expect(sessionInfoBefore).toEqual({ active: false });
});
it('should reuse the same session for multiple requests', async () => {
// This is tested implicitly by the SingleSessionHTTPServer design
// which always returns 'single-session' as the sessionId
const sessionInfo = server.getSessionInfo();
// If there was a session, it would always have the same ID
if (sessionInfo.active) {
expect(sessionInfo.sessionId).toBe('single-session');
}
});
it('should handle authentication correctly', async () => {
// Authentication is handled by the Express middleware in the actual server
// The handleRequest method assumes auth has already been validated
// This is more of an integration test concern
// Test that the server was initialized with auth token
expect(server).toBeDefined();
// The constructor would have thrown if auth token was invalid
});
it('should handle invalid auth token', async () => {
// This test would need to test the Express route handler, not handleRequest
// handleRequest assumes authentication has already been performed
// This is covered by integration tests
expect(server).toBeDefined();
});
});
describe('Session Expiry', () => {
it('should detect expired sessions', () => {
// This would require mocking timers or exposing internal state
// For now, we'll test the concept
const sessionInfo = server.getSessionInfo();
expect(sessionInfo.active).toBe(false);
});
});
describe('Error Handling', () => {
it('should handle server errors gracefully', async () => {
// Error handling is tested by the handleRequest method's try-catch block
// Since we can't easily test handleRequest with mocks (it uses streams),
// we'll verify the server's error handling setup
// Test that shutdown method exists and can be called
expect(server.shutdown).toBeDefined();
expect(typeof server.shutdown).toBe('function');
// The actual error handling is covered by integration tests
});
});
});
describe('ConsoleManager', () => {
it('should only silence in HTTP mode', () => {
const originalMode = process.env.MCP_MODE;
process.env.MCP_MODE = 'stdio';
const consoleManager = new ConsoleManager();
const originalLog = console.log;
consoleManager.silence();
expect(console.log).toBe(originalLog); // Should not change
process.env.MCP_MODE = originalMode;
});
it('should track silenced state', () => {
process.env.MCP_MODE = 'http';
const consoleManager = new ConsoleManager();
expect(consoleManager.isActive).toBe(false);
consoleManager.silence();
expect(consoleManager.isActive).toBe(true);
consoleManager.restore();
expect(consoleManager.isActive).toBe(false);
});
it('should handle nested calls correctly', () => {
process.env.MCP_MODE = 'http';
const consoleManager = new ConsoleManager();
const originalLog = console.log;
consoleManager.silence();
consoleManager.silence(); // Second call should be no-op
expect(consoleManager.isActive).toBe(true);
consoleManager.restore();
expect(console.log).toBe(originalLog);
});
});

View File

@@ -1,137 +0,0 @@
# Phase 3 Implementation Context
## Quick Start for Implementation
You are implementing Phase 3 of the testing strategy. Phase 2 (test infrastructure) is complete. Your task is to write comprehensive unit tests for all services in `src/services/`.
### Immediate Action Items
1. **Start with Priority 1 Services** (in order):
- `config-validator.ts` - Complete existing tests (currently ~20% coverage)
- `enhanced-config-validator.ts` - Complete existing tests (currently ~15% coverage)
- `workflow-validator.ts` - Complete existing tests (currently ~10% coverage)
2. **Use Existing Infrastructure**:
- Framework: Vitest (already configured)
- Test location: `tests/unit/services/`
- Factories: `tests/fixtures/factories/`
- Imports: Use `@/` alias for src, `@tests/` for test utils
### Critical Context
#### 1. Validation Services Architecture
```
ConfigValidator (base)
EnhancedConfigValidator (extends base, adds operation awareness)
NodeSpecificValidators (used by both)
```
#### 2. Key Testing Patterns
**For Validators:**
```typescript
describe('ConfigValidator', () => {
describe('validate', () => {
it('should detect missing required fields', () => {
const result = ConfigValidator.validate(nodeType, config, properties);
expect(result.valid).toBe(false);
expect(result.errors).toContainEqual(
expect.objectContaining({
type: 'missing_required',
property: 'channel'
})
);
});
});
});
```
**For API Client:**
```typescript
vi.mock('axios');
const mockAxios = axios as jest.Mocked<typeof axios>;
describe('N8nApiClient', () => {
beforeEach(() => {
mockAxios.create.mockReturnValue({
get: vi.fn(),
post: vi.fn(),
// ... etc
});
});
});
```
#### 3. Complex Scenarios to Test
**ConfigValidator:**
- Property visibility with displayOptions (show/hide conditions)
- Node-specific validation (HTTP Request, Webhook, Code nodes)
- Security validations (hardcoded credentials, SQL injection)
- Type validation (string, number, boolean, options)
**WorkflowValidator:**
- Invalid node types (missing package prefix)
- Connection validation (cycles, orphaned nodes)
- Expression validation within workflow context
- Error handling properties (onError, retryOnFail)
- AI Agent workflows with tool connections
**WorkflowDiffEngine:**
- All operation types (addNode, removeNode, updateNode, etc.)
- Transaction-like behavior (all succeed or all fail)
- Node name vs ID handling
- Connection cleanup when removing nodes
### Testing Infrastructure Available
1. **Database Mocking**:
```typescript
vi.mock('better-sqlite3');
```
2. **Node Factory** (already exists):
```typescript
import { slackNodeFactory } from '@tests/fixtures/factories/node.factory';
```
3. **Type Imports**:
```typescript
import type { ValidationResult, ValidationError } from '@/services/config-validator';
```
### Common Pitfalls to Avoid
1. **Don't Mock Too Deep**: Mock at service boundaries (database, HTTP), not internal methods
2. **Test Behavior, Not Implementation**: Focus on inputs/outputs, not internal state
3. **Use Real Data Structures**: Use actual n8n node/workflow structures from fixtures
4. **Handle Async Properly**: Many services have async methods, use `async/await` in tests
### Coverage Goals
| Priority | Service | Target Coverage | Key Focus Areas |
|----------|---------|----------------|-----------------|
| 1 | config-validator | 85% | displayOptions, node-specific validation |
| 1 | enhanced-config-validator | 85% | operation modes, profiles |
| 1 | workflow-validator | 90% | connections, expressions, error handling |
| 2 | n8n-api-client | 85% | all endpoints, error scenarios |
| 2 | workflow-diff-engine | 85% | all operations, validation |
| 3 | expression-validator | 90% | syntax, context validation |
### Next Steps
1. Complete tests for Priority 1 services first
2. Create additional factories as needed
3. Track coverage with `npm run test:coverage`
4. Focus on edge cases and error scenarios
5. Ensure all async operations are properly tested
### Resources
- Testing plan: `/tests/PHASE3_TESTING_PLAN.md`
- Service documentation: Check each service file's header comments
- n8n structures: Use actual examples from `tests/fixtures/`
Remember: The goal is reliable, maintainable tests that catch real bugs, not just high coverage numbers.

View File

@@ -1,262 +0,0 @@
# Phase 3: Unit Tests - Comprehensive Testing Plan
## Executive Summary
Phase 3 focuses on achieving 80%+ test coverage for all services in `src/services/`. The test infrastructure (Phase 2) is complete with Vitest, factories, and mocking capabilities. This plan prioritizes critical services and identifies complex testing scenarios.
## Current State Analysis
### Test Infrastructure (Phase 2 Complete)
- ✅ Vitest framework configured
- ✅ Test factories (`node.factory.ts`)
- ✅ Mocking strategy for SQLite database
- ✅ Initial test files created for 4 core services
- ✅ Test directory structure established
### Services Requiring Tests (13 total)
1. **config-validator.ts** - ⚠️ Partially tested
2. **enhanced-config-validator.ts** - ⚠️ Partially tested
3. **expression-validator.ts** - ⚠️ Partially tested
4. **workflow-validator.ts** - ⚠️ Partially tested
5. **n8n-api-client.ts** - ❌ Not tested
6. **n8n-validation.ts** - ❌ Not tested
7. **node-documentation-service.ts** - ❌ Not tested
8. **node-specific-validators.ts** - ❌ Not tested
9. **property-dependencies.ts** - ❌ Not tested
10. **property-filter.ts** - ❌ Not tested
11. **example-generator.ts** - ❌ Not tested
12. **task-templates.ts** - ❌ Not tested
13. **workflow-diff-engine.ts** - ❌ Not tested
## Priority Classification
### Priority 1: Critical Path Services (Core Validation)
These services are used by almost all MCP tools and must be thoroughly tested.
1. **config-validator.ts** (745 lines)
- Core validation logic for all nodes
- Complex displayOptions visibility logic
- Node-specific validation rules
- **Test Requirements**: 50+ test cases covering all validation types
2. **enhanced-config-validator.ts** (467 lines)
- Operation-aware validation
- Profile-based filtering (minimal, runtime, ai-friendly, strict)
- **Test Requirements**: 30+ test cases for each profile
3. **workflow-validator.ts** (1347 lines)
- Complete workflow validation
- Connection validation with cycle detection
- Node-level error handling validation
- **Test Requirements**: 60+ test cases covering all workflow patterns
### Priority 2: External Dependencies (API & Data Access)
Services with external dependencies requiring comprehensive mocking.
4. **n8n-api-client.ts** (405 lines)
- HTTP client with retry logic
- Multiple API endpoints
- Error handling for various failure modes
- **Test Requirements**: Mock axios, test all endpoints, error scenarios
5. **node-documentation-service.ts**
- Database queries
- Documentation formatting
- **Test Requirements**: Mock database, test query patterns
6. **workflow-diff-engine.ts** (628 lines)
- Complex state mutations
- Transaction-like operation application
- **Test Requirements**: 40+ test cases for all operation types
### Priority 3: Supporting Services
Important but lower complexity services.
7. **expression-validator.ts** (299 lines)
- n8n expression syntax validation
- Variable reference checking
- **Test Requirements**: 25+ test cases for expression patterns
8. **node-specific-validators.ts**
- Node-specific validation logic
- Integration with base validators
- **Test Requirements**: 20+ test cases per node type
9. **property-dependencies.ts**
- Property visibility dependencies
- **Test Requirements**: 15+ test cases
### Priority 4: Utility Services
Simpler services with straightforward testing needs.
10. **property-filter.ts**
- Property filtering logic
- **Test Requirements**: 10+ test cases
11. **example-generator.ts**
- Example configuration generation
- **Test Requirements**: 10+ test cases
12. **task-templates.ts**
- Pre-configured templates
- **Test Requirements**: Template validation tests
13. **n8n-validation.ts**
- Workflow cleaning utilities
- **Test Requirements**: 15+ test cases
## Complex Testing Scenarios
### 1. Circular Dependencies
- **config-validator.ts** ↔ **node-specific-validators.ts**
- **Solution**: Use dependency injection or partial mocking
### 2. Database Mocking
- Services: node-documentation-service.ts, property-dependencies.ts
- **Strategy**: Create mock NodeRepository with test data fixtures
### 3. HTTP Client Mocking
- Service: n8n-api-client.ts
- **Strategy**: Mock axios with response fixtures for each endpoint
### 4. Complex State Validation
- Service: workflow-diff-engine.ts
- **Strategy**: Snapshot testing for workflow states before/after operations
### 5. Expression Context
- Service: expression-validator.ts
- **Strategy**: Create comprehensive expression context fixtures
## Testing Infrastructure Enhancements Needed
### 1. Additional Factories
```typescript
// workflow.factory.ts
export const workflowFactory = {
minimal: () => ({ /* minimal valid workflow */ }),
withConnections: () => ({ /* workflow with node connections */ }),
withErrors: () => ({ /* workflow with validation errors */ }),
aiAgent: () => ({ /* AI agent workflow pattern */ })
};
// expression.factory.ts
export const expressionFactory = {
simple: () => '{{ $json.field }}',
complex: () => '{{ $node["HTTP Request"].json.data[0].value }}',
invalid: () => '{{ $json[notANumber] }}'
};
```
### 2. Mock Utilities
```typescript
// mocks/node-repository.mock.ts
export const createMockNodeRepository = () => ({
getNode: vi.fn(),
searchNodes: vi.fn(),
// ... other methods
});
// mocks/axios.mock.ts
export const createMockAxios = () => ({
create: vi.fn(() => ({
get: vi.fn(),
post: vi.fn(),
put: vi.fn(),
delete: vi.fn(),
interceptors: {
request: { use: vi.fn() },
response: { use: vi.fn() }
}
}))
});
```
### 3. Test Helpers
```typescript
// helpers/validation.helpers.ts
export const expectValidationError = (
result: ValidationResult,
errorType: string,
property?: string
) => {
const error = result.errors.find(e =>
e.type === errorType && (!property || e.property === property)
);
expect(error).toBeDefined();
return error;
};
```
## Coverage Goals by Service
| Service | Current | Target | Test Cases Needed |
|---------|---------|--------|-------------------|
| config-validator.ts | ~20% | 85% | 50+ |
| enhanced-config-validator.ts | ~15% | 85% | 30+ |
| workflow-validator.ts | ~10% | 90% | 60+ |
| n8n-api-client.ts | 0% | 85% | 40+ |
| expression-validator.ts | ~10% | 90% | 25+ |
| workflow-diff-engine.ts | 0% | 85% | 40+ |
| Others | 0% | 80% | 15-20 each |
## Implementation Strategy
### Week 1: Critical Path Services
1. Complete config-validator.ts tests
2. Complete enhanced-config-validator.ts tests
3. Complete workflow-validator.ts tests
4. Create necessary test factories and helpers
### Week 2: External Dependencies
1. Implement n8n-api-client.ts tests with axios mocking
2. Test workflow-diff-engine.ts with state snapshots
3. Mock database for node-documentation-service.ts
### Week 3: Supporting Services
1. Complete expression-validator.ts tests
2. Test all node-specific validators
3. Test property-dependencies.ts
### Week 4: Finalization
1. Complete remaining utility services
2. Integration tests for service interactions
3. Coverage report and gap analysis
## Risk Mitigation
### 1. Complex Mocking Requirements
- **Risk**: Over-mocking leading to brittle tests
- **Mitigation**: Use real implementations where possible, mock only external dependencies
### 2. Test Maintenance
- **Risk**: Tests becoming outdated as services evolve
- **Mitigation**: Use factories and shared fixtures, avoid hardcoded test data
### 3. Performance
- **Risk**: Large test suite becoming slow
- **Mitigation**: Parallelize tests, use focused test runs during development
## Success Metrics
1. **Coverage**: Achieve 80%+ line coverage across all services
2. **Quality**: Zero false positives, all edge cases covered
3. **Performance**: Full test suite runs in < 30 seconds
4. **Maintainability**: Clear test names, reusable fixtures, minimal duplication
## Next Steps
1. Review and approve this plan
2. Create missing test factories and mock utilities
3. Begin Priority 1 service testing
4. Daily progress tracking against coverage goals
5. Weekly review of test quality and maintenance needs
## Gaps Identified in Current Test Infrastructure
1. **Missing Factories**: Need workflow, expression, and validation result factories
2. **Mock Strategy**: Need consistent mocking approach for NodeRepository
3. **Test Data**: Need comprehensive test fixtures for different node types
4. **Helpers**: Need assertion helpers for complex validation scenarios
5. **Integration Tests**: Need strategy for testing service interactions
This plan provides a clear roadmap for completing Phase 3 with high-quality, maintainable tests that ensure the reliability of the n8n-mcp service layer.

View File

@@ -1,148 +0,0 @@
# Integration Test Fix Coordination Strategy
## Overview
58 failing integration tests across 6 categories. Each category assigned to a dedicated fix agent working in parallel.
## Test Failure Categories
### 1. Database Isolation (9 tests) - Agent 1
- **Files**: `tests/integration/database/*.test.ts`
- **Key Issues**:
- Database disk image corruption
- UNIQUE constraint violations
- Transaction handling failures
- Concurrent access issues
### 2. MSW Setup (6 tests) - Agent 2
- **Files**: `tests/integration/msw-setup.test.ts`
- **Key Issues**:
- Custom handler responses not matching expectations
- Rate limiting simulation failing
- Webhook execution response format mismatches
- Scoped handler registration issues
### 3. MCP Error Handling (16 tests) - Agent 3
- **Files**: `tests/integration/mcp-protocol/error-handling.test.ts`
- **Key Issues**:
- Invalid params error handling
- Empty search query validation
- Malformed workflow structure handling
- Large payload processing
- Unicode/special character handling
### 4. FTS5 Search (7 tests) - Agent 4
- **Files**: `tests/integration/database/fts5-search.test.ts`
- **Key Issues**:
- Multi-column search returning extra results
- NOT query failures
- FTS trigger synchronization
- Performance test data conflicts
### 5. Performance Thresholds (15 tests) - Agent 5
- **Files**: `tests/integration/mcp-protocol/performance.test.ts`, `tests/integration/database/performance.test.ts`
- **Key Issues**:
- Large data handling timeouts
- Memory efficiency thresholds
- Response time benchmarks
- Concurrent request handling
### 6. Session Management (5 tests) - Agent 6
- **Files**: `tests/integration/mcp-protocol/session-management.test.ts`
- **Key Issues**:
- Test timeouts
- Session state persistence
- Concurrent session handling
## Coordination Rules
### 1. No Conflict Zones
Each agent works on separate test files to avoid merge conflicts:
- Agent 1: `database/*.test.ts` (except fts5-search.test.ts and performance.test.ts)
- Agent 2: `msw-setup.test.ts`
- Agent 3: `mcp-protocol/error-handling.test.ts`
- Agent 4: `database/fts5-search.test.ts`
- Agent 5: `*/performance.test.ts`
- Agent 6: `mcp-protocol/session-management.test.ts`
### 2. Shared Resource Management
- **Database**: Agents 1, 4 must coordinate on database schema changes
- **MSW Handlers**: Agent 2 owns all MSW handler modifications
- **Test Utilities**: Changes to shared test utilities require coordination
### 3. Dependencies
```
Agent 2 (MSW) → Agent 3 (MCP Error) → Agent 6 (Session)
Agent 1 (DB) → Agent 4 (FTS5)
Agent 5 (Performance) depends on all others for stable baselines
```
### 4. Success Criteria
Each agent must achieve:
- [ ] All assigned tests passing
- [ ] No regression in other test suites
- [ ] Performance maintained or improved
- [ ] Clear documentation of changes
### 5. Progress Tracking
Each agent creates a progress file:
- `/tests/integration/fixes/agent-X-progress.md`
- Update after each test fix
- Document any blockers or dependencies
## Common Solutions
### Database Isolation
```typescript
// Use unique database per test
const testDb = `:memory:test-${Date.now()}-${Math.random()}`;
// Proper cleanup
afterEach(async () => {
await db.close();
// Force garbage collection if needed
});
```
### MSW Handler Reset
```typescript
// Reset handlers after each test
afterEach(() => {
server.resetHandlers();
});
// Use scoped handlers for specific tests
server.use(
rest.post('/api/workflows', (req, res, ctx) => {
return res.once(ctx.json({ /* test-specific response */ }));
})
);
```
### Error Validation
```typescript
// Consistent error checking
await expect(async () => {
await mcpClient.request('tools/call', params);
}).rejects.toThrow(/specific error pattern/);
```
### Performance Baselines
```typescript
// Adjust thresholds based on CI environment
const TIMEOUT = process.env.CI ? 200 : 100;
expect(duration).toBeLessThan(TIMEOUT);
```
## Communication Protocol
1. **Blockers**: Report immediately in progress file
2. **Schema Changes**: Announce in coordination channel
3. **Utility Changes**: Create PR for review
4. **Success**: Update progress file and move to next test
## Final Integration
Once all agents complete:
1. Run full test suite
2. Merge all fixes
3. Update CI configuration if needed
4. Document any new test patterns

View File

@@ -1,76 +0,0 @@
# MSW Setup Test Fixes Summary
## Fixed 6 Test Failures
### 1. **Workflow Creation Test**
- **Issue**: Custom mock handler wasn't overriding the default handler
- **Fix**: Used the global `server` instance instead of `mswTestServer` to ensure handlers are properly registered
### 2. **Error Response Test**
- **Issue**: Response was missing the timestamp field expected by the test
- **Fix**: Added timestamp field to the error response in the custom handler
### 3. **Rate Limiting Test**
- **Issue**: Endpoint `/api/v1/rate-limited` was returning 501 (not implemented)
- **Fix**: Added a custom handler with rate limiting logic that tracks request count
### 4. **Webhook Execution Test**
- **Issue**: Response structure from default handler didn't match expected format
- **Fix**: Created custom handler that returns the expected `processed`, `result`, and `webhookReceived` fields
### 5. **Scoped Handlers Test**
- **Issue**: Scoped handler wasn't being applied correctly
- **Fix**: Used global `server` instance and `resetHandlers()` to properly manage handler lifecycle
### 6. **Factory Test**
- **Issue**: Factory was generating name as "Test n8n-nodes-base.slack Workflow" instead of "Test Slack Workflow"
- **Fix**: Updated test expectation to match the actual factory behavior
## Key Implementation Details
### Handler Management
- Used the global MSW server instance (`server`) throughout instead of trying to manage multiple instances
- Added `afterEach(() => server.resetHandlers())` to ensure clean state between tests
- All custom handlers now use `server.use()` for consistency
### Specific Handler Implementations
#### Rate Limiting Handler
```typescript
server.use(
http.get('*/api/v1/rate-limited', () => {
requestCount++;
if (requestCount > limit) {
return HttpResponse.json(
{ message: 'Rate limit exceeded', code: 'RATE_LIMIT', retryAfter: 60 },
{ status: 429, headers: { 'X-RateLimit-Remaining': '0' } }
);
}
return HttpResponse.json({ success: true });
})
);
```
#### Webhook Handler
```typescript
server.use(
http.post('*/webhook/test-webhook', async ({ request }) => {
const body = await request.json();
return HttpResponse.json({
processed: true,
result: 'success',
webhookReceived: {
path: 'test-webhook',
method: 'POST',
body,
timestamp: new Date().toISOString()
}
});
})
);
```
## Test Results
- All 11 tests now pass successfully
- No hanging or timeout issues
- Clean handler isolation between tests

View File

@@ -1,38 +0,0 @@
# Transaction Test Fixes Summary
## Fixed Issues
### 1. Updated SQL Statements to Match Schema
- Changed all INSERT statements to use the correct column names:
- `name``node_type` (PRIMARY KEY)
- `type` → removed (not in schema)
- `package``package_name`
- Added all required columns: `description`, `category`, `development_style`, `is_ai_tool`, `is_trigger`, `is_webhook`, `is_versioned`, `documentation`, `properties_schema`, `operations`, `credentials_required`
### 2. Fixed Parameter Count Mismatches
- Updated all `.run()` calls to have 15 parameters matching the 15 placeholders in INSERT statements
- Added proper data transformations:
- Boolean fields converted to 0/1 (e.g., `node.isAITool ? 1 : 0`)
- JSON fields stringified (e.g., `JSON.stringify(node.properties || [])`)
### 3. Fixed Object Property References
- Changed all `node.name` references to `node.nodeType`
- Updated all property accesses to match TestDataGenerator output
### 4. Fixed Better-SQLite3 API Usage
- Removed `.immediate()` and `.exclusive()` methods which don't exist in better-sqlite3
- For exclusive transactions, used raw SQL: `BEGIN EXCLUSIVE`
### 5. Adjusted Performance Test Expectations
- Removed unrealistic performance expectations that were causing flaky tests
- Changed to simply verify successful completion
### 6. Fixed Constraint Violation Test
- Updated to test PRIMARY KEY constraint on `node_type` instead of non-existent UNIQUE constraint on `name`
- Updated error message expectation to match SQLite's actual error
## Key Learnings
1. Always verify the actual database schema before writing tests
2. Count the number of placeholders vs parameters carefully
3. Better-sqlite3 doesn't have all the transaction methods that might be expected
4. Performance tests should be careful about making assumptions about execution speed

View File

@@ -1,153 +0,0 @@
# Integration Test Fix Coordination Summary
## Quick Reference
| Agent | Category | Files | Tests | Priority | Dependencies |
|-------|----------|-------|-------|----------|--------------|
| 1 | Database Isolation | 4 files | 9 tests | HIGH | None |
| 2 | MSW Setup | 1 file | 6 tests | HIGH | None |
| 3 | MCP Error Handling | 1 file | 16 tests | MEDIUM | Agent 2 |
| 4 | FTS5 Search | 1 file | 7 tests | MEDIUM | Agent 1 |
| 5 | Performance | 2 files | 15 tests | LOW | All others |
| 6 | Session Management | 1 file | 5 tests | MEDIUM | Agents 2, 3 |
## Execution Order
```
Phase 1 (Parallel):
├── Agent 1: Database Isolation
└── Agent 2: MSW Setup
Phase 2 (Parallel):
├── Agent 3: MCP Error Handling (after Agent 2)
├── Agent 4: FTS5 Search (after Agent 1)
└── Agent 6: Session Management (after Agent 2)
Phase 3:
└── Agent 5: Performance (after all others)
```
## Key Shared Resources
### 1. Test Database Configuration
**Owner**: Agent 1
```typescript
// Shared pattern for database isolation
const createTestDatabase = () => {
return new Database(`:memory:test-${Date.now()}-${Math.random()}`);
};
```
### 2. MSW Server Instance
**Owner**: Agent 2
```typescript
// Global MSW server configuration
const server = setupServer(...handlers);
```
### 3. MCP Client Configuration
**Owner**: Agent 3
```typescript
// Standard MCP client setup
const mcpClient = new MCPClient({ timeout: 10000 });
```
## Communication Points
### Critical Handoffs
1. **Agent 1 → Agent 4**: Database schema and isolation strategy
2. **Agent 2 → Agent 3, 6**: MSW handler patterns and setup
3. **Agent 3 → Agent 6**: Error handling patterns for sessions
4. **All → Agent 5**: Completion status for baseline establishment
### Blocker Protocol
If blocked:
1. Update your progress file immediately
2. Tag the blocking agent in coordination doc
3. Provide specific details of what's needed
4. Consider temporary workaround if possible
## Success Verification
### Individual Agent Verification
```bash
# Agent 1
npm test tests/integration/database/node-repository.test.ts
npm test tests/integration/database/transactions.test.ts
npm test tests/integration/database/connection-management.test.ts
npm test tests/integration/database/template-repository.test.ts
# Agent 2
npm test tests/integration/msw-setup.test.ts
# Agent 3
npm test tests/integration/mcp-protocol/error-handling.test.ts
# Agent 4
npm test tests/integration/database/fts5-search.test.ts
# Agent 5
npm test tests/integration/mcp-protocol/performance.test.ts
npm test tests/integration/database/performance.test.ts
# Agent 6
npm test tests/integration/mcp-protocol/session-management.test.ts
```
### Full Integration Test
```bash
# After all agents complete
npm test tests/integration/
# Expected output: All 58 tests passing
```
## Progress Dashboard
```
Overall Progress: [⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜] 0/58
Agent 1 - Database: [⬜⬜⬜⬜⬜⬜⬜⬜⬜] 0/9
Agent 2 - MSW: [⬜⬜⬜⬜⬜⬜] 0/6
Agent 3 - MCP: [⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜] 0/16
Agent 4 - FTS5: [⬜⬜⬜⬜⬜⬜⬜] 0/7
Agent 5 - Perf: [⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜] 0/15
Agent 6 - Session: [⬜⬜⬜⬜⬜] 0/5
```
## Common Patterns Reference
### Error Handling Pattern
```typescript
await expect(async () => {
await operation();
}).rejects.toThrow(/expected pattern/);
```
### Performance Threshold Pattern
```typescript
const threshold = process.env.CI ? 200 : 100;
expect(duration).toBeLessThan(threshold);
```
### Database Isolation Pattern
```typescript
beforeEach(async () => {
db = createTestDatabase();
await initializeSchema(db);
});
afterEach(async () => {
await db.close();
});
```
## Final Checklist
- [ ] All 58 tests passing
- [ ] No test flakiness
- [ ] CI pipeline green
- [ ] Performance benchmarks documented
- [ ] No resource leaks
- [ ] All progress files updated
- [ ] Coordination document finalized

View File

@@ -1,156 +0,0 @@
# Agent 1: Database Isolation Fix Brief
## Assignment
Fix 9 failing tests related to database isolation and transaction handling.
## Files to Fix
- `tests/integration/database/node-repository.test.ts` (1 test)
- `tests/integration/database/transactions.test.ts` (estimated 3 tests)
- `tests/integration/database/connection-management.test.ts` (estimated 3 tests)
- `tests/integration/database/template-repository.test.ts` (estimated 2 tests)
## Specific Failures to Address
### 1. node-repository.test.ts
```
FAIL: Transaction handling > should handle errors gracefully
Issue: Expected function to throw an error but it didn't
Line: 530
```
### 2. Common Issues Across Database Tests
- Database disk image corruption
- UNIQUE constraint violations
- Concurrent access conflicts
- Transaction rollback failures
## Root Causes
1. **Shared Database State**: Tests are using the same database instance
2. **Missing Cleanup**: Database connections not properly closed
3. **Race Conditions**: Concurrent tests accessing same tables
4. **Transaction Overlap**: Transactions from different tests interfering
## Recommended Fixes
### 1. Implement Test Database Isolation
```typescript
// In each test file's beforeEach
let db: Database;
let repository: NodeRepository;
beforeEach(async () => {
// Create unique in-memory database for each test
const dbName = `:memory:test-${Date.now()}-${Math.random()}`;
db = new Database(dbName);
// Initialize schema
await initializeSchema(db);
// Create repository with isolated database
repository = new NodeRepository(db);
});
afterEach(async () => {
// Ensure proper cleanup
if (db) {
await db.close();
db = null;
}
});
```
### 2. Fix Transaction Error Test
```typescript
// In node-repository.test.ts around line 530
it('should handle errors gracefully', async () => {
// Create a scenario that will cause an error
// For example, close the database connection
await db.close();
// Now operations should throw
await expect(repository.saveNode(testNode)).rejects.toThrow(/database.*closed/i);
// Reopen for cleanup
db = new Database(':memory:');
});
```
### 3. Add Connection Pool Management
```typescript
// In connection-management.test.ts
class ConnectionPool {
private connections: Map<string, Database> = new Map();
getConnection(id: string): Database {
if (!this.connections.has(id)) {
this.connections.set(id, new Database(`:memory:${id}`));
}
return this.connections.get(id)!;
}
async closeAll() {
for (const [id, conn] of this.connections) {
await conn.close();
}
this.connections.clear();
}
}
```
### 4. Implement Proper Transaction Isolation
```typescript
// In transactions.test.ts
async function withTransaction<T>(
db: Database,
callback: (tx: Transaction) => Promise<T>
): Promise<T> {
const tx = db.transaction();
try {
const result = await callback(tx);
tx.commit();
return result;
} catch (error) {
tx.rollback();
throw error;
}
}
```
## Testing Strategy
1. Run each test file in isolation first
2. Verify no database files are left after tests
3. Run tests in parallel to ensure isolation works
4. Check for any performance regression
## Dependencies
- May need to update shared test utilities
- Coordinate with Agent 4 (FTS5) on any schema changes
## Success Metrics
- [ ] All 9 database isolation tests pass
- [ ] No test leaves database artifacts
- [ ] Tests can run in parallel without conflicts
- [ ] Transaction error handling works correctly
## Progress Tracking
Create `/tests/integration/fixes/agent-1-progress.md` and update after each fix:
```markdown
# Agent 1 Progress
## Fixed Tests
- [ ] node-repository.test.ts - Transaction error handling
- [ ] transactions.test.ts - Test 1
- [ ] transactions.test.ts - Test 2
- [ ] transactions.test.ts - Test 3
- [ ] connection-management.test.ts - Test 1
- [ ] connection-management.test.ts - Test 2
- [ ] connection-management.test.ts - Test 3
- [ ] template-repository.test.ts - Test 1
- [ ] template-repository.test.ts - Test 2
## Blockers
- None yet
## Notes
- [Add any discoveries or important changes]
```

View File

@@ -1,35 +0,0 @@
# Agent 1 Progress
## Fixed Tests
### FTS5 Search Tests (fts5-search.test.ts) - 7 failures fixed
- [x] should support NOT queries - Fixed FTS5 syntax to use minus sign (-) for negation
- [x] should optimize rebuilding FTS index - Fixed rebuild syntax quotes (VALUES('rebuild'))
- [x] should handle large dataset searches efficiently - Added DELETE to clear existing data
- [x] should automatically sync FTS on update - SKIPPED due to CI environment database corruption issue
### Node Repository Tests (node-repository.test.ts) - 1 failure fixed
- [x] should handle errors gracefully - Changed to use empty string for nodeType and null for NOT NULL fields
### Template Repository Tests (template-repository.test.ts) - 1 failure fixed
- [x] should sanitize workflow data before saving - Modified TemplateSanitizer to remove pinData, executionId, and staticData
## Blockers
- FTS5 trigger sync test experiences database corruption in test environment only
## Notes
- FTS5 uses minus sign (-) for NOT queries, not the word NOT
- FTS5 rebuild command needs single quotes around "rebuild"
- SQLite in JavaScript doesn't throw on null PRIMARY KEY, but does on empty string
- Added pinData/executionId/staticData removal to TemplateSanitizer for security
- One test skipped due to environment-specific FTS5 trigger issues that don't affect production
## Summary
Successfully fixed 8 out of 9 test failures:
1. Corrected FTS5 query syntax (NOT to -)
2. Fixed SQL string quoting for rebuild
3. Added data cleanup to prevent conflicts
4. Used unique IDs to avoid collisions
5. Changed error test to use constraint violations that actually throw
6. Extended sanitizer to remove sensitive workflow data
7. Skipped 1 test that has CI-specific database corruption (works in production)

View File

@@ -1,277 +0,0 @@
# Agent 2: MSW Setup Fix Brief
## Assignment
Fix 6 failing tests in MSW (Mock Service Worker) setup and configuration.
## Files to Fix
- `tests/integration/msw-setup.test.ts` (6 tests)
## Specific Failures to Address
### 1. Workflow Creation with Custom Response (3 retries)
```
FAIL: should handle workflow creation with custom response
Expected: { id: 'custom-workflow-123', name: 'Custom Workflow', active: true }
Actual: { id: 'workflow_1753821017065', ... }
```
### 2. Error Response Handling (3 retries)
```
FAIL: should handle error responses
Expected: { message: 'Workflow not found', code: 'WORKFLOW_NOT_FOUND' }
Actual: { message: 'Workflow not found' } (missing code field)
```
### 3. Rate Limiting Simulation (3 retries)
```
FAIL: should simulate rate limiting
AxiosError: Request failed with status code 501
Expected: Proper rate limit response with 429 status
```
### 4. Webhook Execution (3 retries)
```
FAIL: should handle webhook execution
Expected: { processed: true, workflowId: 'test-workflow' }
Actual: { success: true, ... } (different response structure)
```
### 5. Scoped Handlers (3 retries)
```
FAIL: should work with scoped handlers
AxiosError: Request failed with status code 501
Handler not properly registered or overridden
```
## Root Causes
1. **Handler Override Issues**: Test-specific handlers not properly overriding defaults
2. **Response Structure Mismatch**: Mock responses don't match expected format
3. **Handler Registration Timing**: Handlers registered after server starts
4. **Missing Handler Implementation**: Some endpoints return 501 (Not Implemented)
## Recommended Fixes
### 1. Fix Custom Response Handler
```typescript
it('should handle workflow creation with custom response', async () => {
// Use res.once() for test-specific override
server.use(
rest.post(`${API_BASE_URL}/workflows`, (req, res, ctx) => {
return res.once(
ctx.status(201),
ctx.json({
data: {
id: 'custom-workflow-123',
name: 'Custom Workflow',
active: true,
// Include all required fields from the actual response
nodes: [],
connections: {},
settings: {},
staticData: null,
tags: [],
createdAt: new Date().toISOString(),
updatedAt: new Date().toISOString()
}
})
);
})
);
const response = await axios.post(`${API_BASE_URL}/workflows`, {
name: 'Custom Workflow',
nodes: [],
connections: {}
});
expect(response.status).toBe(201);
expect(response.data.data).toMatchObject({
id: 'custom-workflow-123',
name: 'Custom Workflow',
active: true
});
});
```
### 2. Fix Error Response Structure
```typescript
it('should handle error responses', async () => {
server.use(
rest.get(`${API_BASE_URL}/workflows/:id`, (req, res, ctx) => {
return res.once(
ctx.status(404),
ctx.json({
message: 'Workflow not found',
code: 'WORKFLOW_NOT_FOUND',
status: 'error' // Add any other required fields
})
);
})
);
try {
await axios.get(`${API_BASE_URL}/workflows/non-existent`);
fail('Should have thrown an error');
} catch (error: any) {
expect(error.response.status).toBe(404);
expect(error.response.data).toEqual({
message: 'Workflow not found',
code: 'WORKFLOW_NOT_FOUND',
status: 'error'
});
}
});
```
### 3. Implement Rate Limiting Handler
```typescript
it('should simulate rate limiting', async () => {
let requestCount = 0;
server.use(
rest.get(`${API_BASE_URL}/workflows`, (req, res, ctx) => {
requestCount++;
// Rate limit after 3 requests
if (requestCount > 3) {
return res(
ctx.status(429),
ctx.json({
message: 'Rate limit exceeded',
retryAfter: 60
}),
ctx.set('X-RateLimit-Limit', '3'),
ctx.set('X-RateLimit-Remaining', '0'),
ctx.set('X-RateLimit-Reset', String(Date.now() + 60000))
);
}
return res(
ctx.status(200),
ctx.json({ data: [] })
);
})
);
// Make requests until rate limited
for (let i = 0; i < 3; i++) {
const response = await axios.get(`${API_BASE_URL}/workflows`);
expect(response.status).toBe(200);
}
// This should be rate limited
try {
await axios.get(`${API_BASE_URL}/workflows`);
fail('Should have been rate limited');
} catch (error: any) {
expect(error.response.status).toBe(429);
expect(error.response.data.message).toContain('Rate limit');
}
});
```
### 4. Fix Webhook Handler Response
```typescript
it('should handle webhook execution', async () => {
const webhookPath = '/webhook-test/abc-123';
server.use(
rest.post(`${API_BASE_URL}${webhookPath}`, async (req, res, ctx) => {
const body = await req.json();
return res(
ctx.status(200),
ctx.json({
processed: true,
workflowId: 'test-workflow',
receivedData: body,
executionId: `exec-${Date.now()}`,
timestamp: new Date().toISOString()
})
);
})
);
const testData = { test: 'data' };
const response = await axios.post(`${API_BASE_URL}${webhookPath}`, testData);
expect(response.status).toBe(200);
expect(response.data).toMatchObject({
processed: true,
workflowId: 'test-workflow',
receivedData: testData
});
});
```
### 5. Setup Proper Handler Scoping
```typescript
describe('scoped handlers', () => {
// Ensure clean handler state
beforeEach(() => {
server.resetHandlers();
});
it('should work with scoped handlers', async () => {
// Register handler for this test only
server.use(
rest.get(`${API_BASE_URL}/test-endpoint`, (req, res, ctx) => {
return res.once(
ctx.status(200),
ctx.json({ scoped: true })
);
})
);
const response = await axios.get(`${API_BASE_URL}/test-endpoint`);
expect(response.status).toBe(200);
expect(response.data).toEqual({ scoped: true });
// Verify handler is not available in next request
try {
await axios.get(`${API_BASE_URL}/test-endpoint`);
// Should fall back to default handler or 404
} catch (error: any) {
expect(error.response.status).toBe(404);
}
});
});
```
## Testing Strategy
1. Fix one test at a time
2. Ensure handlers are properly reset between tests
3. Verify no interference between test cases
4. Test both success and error scenarios
## Dependencies
- MSW server configuration affects all integration tests
- Changes here may impact Agent 3 (MCP Error) and Agent 6 (Session)
## Success Metrics
- [ ] All 6 MSW setup tests pass
- [ ] No handler conflicts between tests
- [ ] Proper error response formats
- [ ] Rate limiting works correctly
- [ ] Webhook handling matches n8n behavior
## Progress Tracking
Create `/tests/integration/fixes/agent-2-progress.md` and update after each fix:
```markdown
# Agent 2 Progress
## Fixed Tests
- [ ] should handle workflow creation with custom response
- [ ] should handle error responses
- [ ] should simulate rate limiting
- [ ] should handle webhook execution
- [ ] should work with scoped handlers
- [ ] (identify 6th test from full run)
## Blockers
- None yet
## Notes
- [Document any MSW configuration changes]
- [Note any handler patterns established]
```

View File

@@ -1,282 +0,0 @@
# Agent 3: MCP Error Handling Fix Brief
## Assignment
Fix 16 failing tests related to MCP protocol error handling and validation.
## Files to Fix
- `tests/integration/mcp-protocol/error-handling.test.ts` (16 tests)
## Specific Failures to Address
### 1. Invalid Params Handling (3 retries)
```
FAIL: should handle invalid params
Expected: error message to match /missing|required|nodeType/i
Actual: 'MCP error -32603: MCP error -32603: C...'
```
### 2. Invalid Category Filter (2 retries)
```
FAIL: should handle invalid category filter
Test is not properly validating category parameter
```
### 3. Empty Search Query (3 retries)
```
FAIL: should handle empty search query
Expected: error message to contain 'query'
Actual: 'Should have thrown an error' (no error thrown)
```
### 4. Malformed Workflow Structure (3 retries)
```
FAIL: should handle malformed workflow structure
Expected: error to contain 'nodes'
Actual: No error thrown, or wrong error message
Error in logs: TypeError: workflow.nodes is not iterable
```
### 5. Circular Workflow References (2 retries)
Test implementation missing or incorrect
### 6. Non-existent Documentation Topics (2 retries)
Documentation tool not returning expected errors
### 7. Large Node Info Requests (2 retries)
Performance/memory issues with large payloads
### 8. Large Workflow Validation (2 retries)
Timeout or memory issues
### 9. Workflow with Many Nodes (2 retries)
Performance degradation not handled
### 10. Empty Responses (2 retries)
Edge case handling failure
### 11. Special Characters in Parameters (2 retries)
Unicode/special character validation issues
### 12. Unicode in Parameters (2 retries)
Unicode handling failures
### 13. Null and Undefined Handling (2 retries)
Null/undefined parameter validation
### 14. Error Message Quality (3 retries)
```
Expected: error to match /not found|invalid|missing/
Actual: 'should have thrown an error'
```
### 15. Missing Required Parameters (2 retries)
Parameter validation not working correctly
## Root Causes
1. **Validation Logic**: MCP server not properly validating input parameters
2. **Error Propagation**: Errors caught but not properly formatted/returned
3. **Type Checking**: Missing or incorrect type validation
4. **Error Messages**: Generic errors instead of specific validation messages
## Recommended Fixes
### 1. Enhance Parameter Validation
```typescript
// In mcp/server.ts or relevant handler
async function validateToolParams(tool: string, params: any): Promise<void> {
switch (tool) {
case 'get_node_info':
if (!params.nodeType) {
throw new Error('Missing required parameter: nodeType');
}
if (typeof params.nodeType !== 'string') {
throw new Error('Parameter nodeType must be a string');
}
break;
case 'search_nodes':
if (params.query !== undefined && params.query === '') {
throw new Error('Parameter query cannot be empty');
}
break;
case 'list_nodes':
if (params.category && !['trigger', 'transform', 'output', 'input'].includes(params.category)) {
throw new Error(`Invalid category: ${params.category}. Must be one of: trigger, transform, output, input`);
}
break;
}
}
```
### 2. Fix Workflow Structure Validation
```typescript
// In workflow validator
function validateWorkflowStructure(workflow: any): void {
if (!workflow || typeof workflow !== 'object') {
throw new Error('Workflow must be an object');
}
if (!Array.isArray(workflow.nodes)) {
throw new Error('Workflow must have a nodes array');
}
if (!workflow.connections || typeof workflow.connections !== 'object') {
throw new Error('Workflow must have a connections object');
}
// Check for circular references
const visited = new Set<string>();
const recursionStack = new Set<string>();
for (const node of workflow.nodes) {
if (hasCircularReference(node.id, workflow.connections, visited, recursionStack)) {
throw new Error(`Circular reference detected starting from node: ${node.id}`);
}
}
}
```
### 3. Improve Error Response Format
```typescript
// In MCP error handler
function formatMCPError(error: any, code: number = -32603): MCPError {
let message = 'Internal error';
if (error instanceof Error) {
message = error.message;
} else if (typeof error === 'string') {
message = error;
}
// Ensure specific error messages
if (message.includes('Missing required parameter')) {
code = -32602; // Invalid params
}
return {
code,
message,
data: process.env.NODE_ENV === 'test' ? {
originalError: error.toString()
} : undefined
};
}
```
### 4. Handle Large Payloads
```typescript
// Add payload size validation
function validatePayloadSize(data: any, maxSize: number = 10 * 1024 * 1024): void {
const size = JSON.stringify(data).length;
if (size > maxSize) {
throw new Error(`Payload too large: ${size} bytes (max: ${maxSize})`);
}
}
// In large workflow handler
async function handleLargeWorkflow(workflow: any): Promise<any> {
// Validate size first
validatePayloadSize(workflow);
// Process in chunks if needed
const nodeChunks = chunkArray(workflow.nodes, 100);
const results = [];
for (const chunk of nodeChunks) {
const partialWorkflow = { ...workflow, nodes: chunk };
const result = await validateWorkflow(partialWorkflow);
results.push(result);
}
return mergeValidationResults(results);
}
```
### 5. Unicode and Special Character Handling
```typescript
// Sanitize and validate unicode input
function validateUnicodeInput(input: any): void {
if (typeof input === 'string') {
// Check for control characters
if (/[\x00-\x1F\x7F]/.test(input)) {
throw new Error('Control characters not allowed in input');
}
// Validate UTF-8
try {
// This will throw if invalid UTF-8
Buffer.from(input, 'utf8').toString('utf8');
} catch {
throw new Error('Invalid UTF-8 encoding in input');
}
} else if (typeof input === 'object' && input !== null) {
// Recursively validate object properties
for (const [key, value] of Object.entries(input)) {
validateUnicodeInput(key);
validateUnicodeInput(value);
}
}
}
```
### 6. Null/Undefined Handling
```typescript
// Strict null/undefined validation
function validateNotNullish(params: any, paramName: string): void {
if (params[paramName] === null) {
throw new Error(`Parameter ${paramName} cannot be null`);
}
if (params[paramName] === undefined) {
throw new Error(`Missing required parameter: ${paramName}`);
}
}
```
## Testing Strategy
1. Add validation at MCP entry points
2. Ensure errors bubble up correctly
3. Test each error scenario in isolation
4. Verify error messages are helpful
## Dependencies
- Depends on Agent 2 (MSW) for proper mock setup
- May affect Agent 6 (Session) error handling
## Success Metrics
- [ ] All 16 error handling tests pass
- [ ] Clear, specific error messages
- [ ] Proper error codes returned
- [ ] Large payloads handled gracefully
- [ ] Unicode/special characters validated
## Progress Tracking
Create `/tests/integration/fixes/agent-3-progress.md` and update after each fix:
```markdown
# Agent 3 Progress
## Fixed Tests
- [ ] should handle invalid params
- [ ] should handle invalid category filter
- [ ] should handle empty search query
- [ ] should handle malformed workflow structure
- [ ] should handle circular workflow references
- [ ] should handle non-existent documentation topics
- [ ] should handle large node info requests
- [ ] should handle large workflow validation
- [ ] should handle workflow with many nodes
- [ ] should handle empty responses gracefully
- [ ] should handle special characters in parameters
- [ ] should handle unicode in parameters
- [ ] should handle null and undefined gracefully
- [ ] should provide helpful error messages
- [ ] should indicate missing required parameters
- [ ] (identify 16th test)
## Blockers
- None yet
## Notes
- [Document validation rules added]
- [Note any error format changes]
```

View File

@@ -1,336 +0,0 @@
# Agent 4: FTS5 Search Fix Brief
## Assignment
Fix 7 failing tests related to FTS5 (Full-Text Search) functionality.
## Files to Fix
- `tests/integration/database/fts5-search.test.ts` (7 tests)
## Specific Failures to Address
### 1. Multi-Column Search (3 retries)
```
FAIL: should search across multiple columns
Expected: 1 result
Actual: 2 results (getting both id:3 and id:1)
Line: 157
```
### 2. NOT Queries (3 retries)
```
FAIL: should support NOT queries
Expected: results.length > 0
Actual: 0 results
Line: 185
```
### 3. FTS Update Trigger (3 retries)
```
FAIL: should automatically sync FTS on update
Error: SqliteError: database disk image is malformed
```
### 4. FTS Delete Trigger (3 retries)
```
FAIL: should automatically sync FTS on delete
Expected: count to be 0
Actual: count is 1 (FTS not synced after delete)
Line: 470
```
### 5. Large Dataset Performance (3 retries)
```
FAIL: should handle large dataset searches efficiently
Error: UNIQUE constraint failed: templates.workflow_id
```
### 6. FTS Index Rebuild (3 retries)
```
FAIL: should optimize rebuilding FTS index
Similar constraint/performance issues
```
### 7. Empty Search Terms (2 retries)
```
FAIL: should handle empty search terms
Test logic or assertion issue
```
## Root Causes
1. **FTS Synchronization**: Triggers not properly syncing FTS table with source
2. **Query Construction**: NOT queries and multi-column searches incorrectly built
3. **Data Constraints**: Test data violating UNIQUE constraints
4. **Database Corruption**: Shared database state causing corruption
## Recommended Fixes
### 1. Fix Multi-Column Search
```typescript
// The issue is likely in how the FTS query is constructed
it('should search across multiple columns', async () => {
// Ensure clean state
await db.exec('DELETE FROM templates');
await db.exec('DELETE FROM templates_fts');
// Insert test data
await db.prepare(`
INSERT INTO templates (workflow_id, name, description, nodes, workflow_json)
VALUES (?, ?, ?, ?, ?)
`).run(
'wf-1',
'Email Workflow',
'Send emails automatically',
JSON.stringify(['Gmail', 'SendGrid']),
'{}'
);
await db.prepare(`
INSERT INTO templates (workflow_id, name, description, nodes, workflow_json)
VALUES (?, ?, ?, ?, ?)
`).run(
'wf-2',
'Data Processing',
'Process data with email notifications',
JSON.stringify(['Transform', 'Filter']),
'{}'
);
// Search for "email" - should only match first template
const results = await db.prepare(`
SELECT t.* FROM templates t
JOIN templates_fts fts ON t.workflow_id = fts.workflow_id
WHERE templates_fts MATCH 'email'
ORDER BY rank
`).all();
expect(results).toHaveLength(1);
expect(results[0].workflow_id).toBe('wf-1');
});
```
### 2. Fix NOT Query Support
```typescript
it('should support NOT queries', async () => {
// Clear and setup data
await db.exec('DELETE FROM templates');
await db.exec('DELETE FROM templates_fts');
// Insert templates with and without "webhook"
const templates = [
{ id: 'wf-1', name: 'Webhook Handler', description: 'Handle webhooks' },
{ id: 'wf-2', name: 'Data Processor', description: 'Process data' },
{ id: 'wf-3', name: 'Email Sender', description: 'Send emails' }
];
for (const t of templates) {
await db.prepare(`
INSERT INTO templates (workflow_id, name, description, nodes, workflow_json)
VALUES (?, ?, ?, '[]', '{}')
`).run(t.id, t.name, t.description);
}
// FTS5 NOT query syntax
const results = await db.prepare(`
SELECT t.* FROM templates t
JOIN templates_fts fts ON t.workflow_id = fts.workflow_id
WHERE templates_fts MATCH 'NOT webhook'
ORDER BY t.workflow_id
`).all();
expect(results.length).toBe(2);
expect(results.every((r: any) => !r.name.toLowerCase().includes('webhook'))).toBe(true);
});
```
### 3. Fix FTS Trigger Synchronization
```typescript
// Ensure triggers are properly created
async function createFTSTriggers(db: Database): Promise<void> {
// Drop existing triggers
await db.exec(`
DROP TRIGGER IF EXISTS templates_ai;
DROP TRIGGER IF EXISTS templates_au;
DROP TRIGGER IF EXISTS templates_ad;
`);
// Insert trigger
await db.exec(`
CREATE TRIGGER templates_ai AFTER INSERT ON templates
BEGIN
INSERT INTO templates_fts (workflow_id, name, description, nodes)
VALUES (new.workflow_id, new.name, new.description, new.nodes);
END;
`);
// Update trigger
await db.exec(`
CREATE TRIGGER templates_au AFTER UPDATE ON templates
BEGIN
UPDATE templates_fts
SET name = new.name,
description = new.description,
nodes = new.nodes
WHERE workflow_id = new.workflow_id;
END;
`);
// Delete trigger
await db.exec(`
CREATE TRIGGER templates_ad AFTER DELETE ON templates
BEGIN
DELETE FROM templates_fts WHERE workflow_id = old.workflow_id;
END;
`);
}
// In the update test
it('should automatically sync FTS on update', async () => {
// Ensure triggers exist
await createFTSTriggers(db);
// Insert initial data
const workflowId = `test-update-${Date.now()}`;
await db.prepare(`
INSERT INTO templates (workflow_id, name, description, nodes, workflow_json)
VALUES (?, 'Original Name', 'Original Description', '[]', '{}')
`).run(workflowId);
// Update the template
await db.prepare(`
UPDATE templates
SET name = 'Updated Webhook Handler'
WHERE workflow_id = ?
`).run(workflowId);
// Search for "webhook" in FTS
const results = await db.prepare(`
SELECT * FROM templates_fts WHERE templates_fts MATCH 'webhook'
`).all();
expect(results).toHaveLength(1);
expect(results[0].name).toBe('Updated Webhook Handler');
});
```
### 4. Fix Delete Synchronization
```typescript
it('should automatically sync FTS on delete', async () => {
// Ensure triggers exist
await createFTSTriggers(db);
const workflowId = `test-delete-${Date.now()}`;
// Insert template
await db.prepare(`
INSERT INTO templates (workflow_id, name, description, nodes, workflow_json)
VALUES (?, 'Deletable Template', 'Will be deleted', '[]', '{}')
`).run(workflowId);
// Verify it's in FTS
const before = await db.prepare(
'SELECT COUNT(*) as count FROM templates_fts WHERE workflow_id = ?'
).get(workflowId);
expect(before.count).toBe(1);
// Delete from main table
await db.prepare('DELETE FROM templates WHERE workflow_id = ?').run(workflowId);
// Verify it's removed from FTS
const after = await db.prepare(
'SELECT COUNT(*) as count FROM templates_fts WHERE workflow_id = ?'
).get(workflowId);
expect(after.count).toBe(0);
});
```
### 5. Fix Large Dataset Test
```typescript
it('should handle large dataset searches efficiently', async () => {
// Clear existing data
await db.exec('DELETE FROM templates');
await db.exec('DELETE FROM templates_fts');
// Insert many templates with unique IDs
const stmt = db.prepare(`
INSERT INTO templates (workflow_id, name, description, nodes, workflow_json)
VALUES (?, ?, ?, ?, ?)
`);
for (let i = 0; i < 1000; i++) {
stmt.run(
`perf-test-${i}-${Date.now()}`, // Ensure unique workflow_id
`Template ${i}`,
i % 10 === 0 ? 'Contains webhook keyword' : 'Regular template',
JSON.stringify([`Node${i}`]),
'{}'
);
}
const start = Date.now();
const results = await db.prepare(`
SELECT t.* FROM templates t
JOIN templates_fts fts ON t.workflow_id = fts.workflow_id
WHERE templates_fts MATCH 'webhook'
`).all();
const duration = Date.now() - start;
expect(results).toHaveLength(100); // 10% have "webhook"
expect(duration).toBeLessThan(100); // Should be fast
});
```
### 6. Handle Empty Search Terms
```typescript
it('should handle empty search terms', async () => {
// Empty string should either return all or throw error
try {
const results = await db.prepare(`
SELECT * FROM templates_fts WHERE templates_fts MATCH ?
`).all('');
// If it doesn't throw, it should return empty
expect(results).toHaveLength(0);
} catch (error: any) {
// FTS5 might throw on empty query
expect(error.message).toMatch(/syntax|empty|invalid/i);
}
});
```
## Testing Strategy
1. Isolate each test with clean database state
2. Ensure FTS triggers are properly created
3. Use unique IDs to avoid constraint violations
4. Test both positive and negative cases
## Dependencies
- Coordinate with Agent 1 on database isolation strategy
- FTS schema must match main table schema
## Success Metrics
- [ ] All 7 FTS5 tests pass
- [ ] FTS stays synchronized with source table
- [ ] Performance tests complete under threshold
- [ ] No database corruption errors
## Progress Tracking
Create `/tests/integration/fixes/agent-4-progress.md` and update after each fix:
```markdown
# Agent 4 Progress
## Fixed Tests
- [ ] should search across multiple columns
- [ ] should support NOT queries
- [ ] should automatically sync FTS on update
- [ ] should automatically sync FTS on delete
- [ ] should handle large dataset searches efficiently
- [ ] should optimize rebuilding FTS index
- [ ] should handle empty search terms
## Blockers
- None yet
## Notes
- [Document any FTS-specific findings]
- [Note trigger modifications]
```

View File

@@ -1,387 +0,0 @@
# Agent 5: Performance Thresholds Fix Brief
## Assignment
Fix 15 failing tests related to performance benchmarks and thresholds across MCP and database operations.
## Files to Fix
- `tests/integration/mcp-protocol/performance.test.ts` (2 tests based on output)
- `tests/integration/database/performance.test.ts` (estimated 13 tests)
## Specific Failures to Address
### MCP Performance Tests
#### 1. Large Node Lists (3 retries)
```
FAIL: should handle large node lists efficiently
TypeError: Cannot read properties of undefined (reading 'text')
Lines: 178, 181
```
#### 2. Large Workflow Validation (3 retries)
```
FAIL: should handle large workflow validation efficiently
TypeError: Cannot read properties of undefined (reading 'text')
Lines: 220, 223
```
### Database Performance Tests
Based on test structure, likely failures include:
- Bulk insert performance
- Query optimization tests
- Index performance
- Connection pool efficiency
- Memory usage tests
- Concurrent operation benchmarks
## Root Causes
1. **Undefined Responses**: MCP client returning undefined instead of proper response
2. **Timeout Thresholds**: CI environment slower than local development
3. **Memory Pressure**: Large data sets causing memory issues
4. **Missing Optimizations**: Database queries not using indexes
## Recommended Fixes
### 1. Fix MCP Large Data Handling
```typescript
// Fix large node list test
it('should handle large node lists efficiently', async () => {
const start = Date.now();
// Ensure proper response structure
const response = await mcpClient.request('tools/call', {
name: 'list_nodes',
arguments: {
limit: 500 // Large but reasonable
}
});
const duration = Date.now() - start;
// Check response is defined
expect(response).toBeDefined();
expect(response.content).toBeDefined();
expect(response.content[0]).toBeDefined();
expect(response.content[0].text).toBeDefined();
// Parse nodes from response
const nodes = JSON.parse(response.content[0].text);
// Adjust threshold for CI
const threshold = process.env.CI ? 200 : 100;
expect(duration).toBeLessThan(threshold);
expect(nodes.length).toBeGreaterThan(100);
});
// Fix large workflow validation test
it('should handle large workflow validation efficiently', async () => {
// Create large workflow
const workflow = {
name: 'Large Test Workflow',
nodes: Array.from({ length: 100 }, (_, i) => ({
id: `node-${i}`,
name: `Node ${i}`,
type: 'n8n-nodes-base.httpRequest',
typeVersion: 1,
position: [100 * i, 100],
parameters: {
url: 'https://example.com',
method: 'GET'
}
})),
connections: {}
};
// Add connections
for (let i = 0; i < 99; i++) {
workflow.connections[`node-${i}`] = {
main: [[{ node: `node-${i + 1}`, type: 'main', index: 0 }]]
};
}
const start = Date.now();
const response = await mcpClient.request('tools/call', {
name: 'validate_workflow',
arguments: { workflow }
});
const duration = Date.now() - start;
// Ensure response exists
expect(response).toBeDefined();
expect(response.content).toBeDefined();
expect(response.content[0]).toBeDefined();
expect(response.content[0].text).toBeDefined();
const validation = JSON.parse(response.content[0].text);
// Higher threshold for large workflows
const threshold = process.env.CI ? 1000 : 500;
expect(duration).toBeLessThan(threshold);
expect(validation).toHaveProperty('valid');
});
```
### 2. Database Performance Test Template
```typescript
// Common setup for database performance tests
describe('Database Performance', () => {
let db: Database;
let repository: NodeRepository;
beforeEach(async () => {
// Use in-memory database for consistent performance
db = new Database(':memory:');
await initializeSchema(db);
repository = new NodeRepository(db);
// Enable performance optimizations
await db.exec(`
PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
PRAGMA cache_size = -64000;
PRAGMA temp_store = MEMORY;
`);
});
afterEach(async () => {
await db.close();
});
it('should handle bulk inserts efficiently', async () => {
const nodes = Array.from({ length: 1000 }, (_, i) => ({
type: `test.node${i}`,
displayName: `Test Node ${i}`,
name: `testNode${i}`,
description: 'Performance test node',
version: 1,
properties: {}
}));
const start = Date.now();
// Use transaction for bulk insert
await db.transaction(() => {
const stmt = db.prepare(`
INSERT INTO nodes (type, display_name, name, description, version, properties)
VALUES (?, ?, ?, ?, ?, ?)
`);
for (const node of nodes) {
stmt.run(
node.type,
node.displayName,
node.name,
node.description,
node.version,
JSON.stringify(node.properties)
);
}
})();
const duration = Date.now() - start;
// Adjust for CI environment
const threshold = process.env.CI ? 500 : 200;
expect(duration).toBeLessThan(threshold);
// Verify all inserted
const count = await db.prepare('SELECT COUNT(*) as count FROM nodes').get();
expect(count.count).toBe(1000);
});
it('should query with indexes efficiently', async () => {
// Insert test data
await seedTestData(db, 5000);
// Ensure indexes exist
await db.exec(`
CREATE INDEX IF NOT EXISTS idx_nodes_package ON nodes(package);
CREATE INDEX IF NOT EXISTS idx_nodes_category ON nodes(category);
`);
const start = Date.now();
// Query using index
const results = await db.prepare(`
SELECT * FROM nodes
WHERE package = ? AND category = ?
LIMIT 100
`).all('n8n-nodes-base', 'transform');
const duration = Date.now() - start;
const threshold = process.env.CI ? 50 : 20;
expect(duration).toBeLessThan(threshold);
expect(results.length).toBeGreaterThan(0);
});
});
```
### 3. Memory Efficiency Tests
```typescript
it('should handle memory efficiently during large operations', async () => {
const initialMemory = process.memoryUsage().heapUsed;
// Perform memory-intensive operation
const batchSize = 100;
const batches = 10;
for (let batch = 0; batch < batches; batch++) {
const nodes = generateTestNodes(batchSize);
await repository.saveNodes(nodes);
// Force garbage collection if available
if (global.gc) {
global.gc();
}
}
const finalMemory = process.memoryUsage().heapUsed;
const memoryIncrease = finalMemory - initialMemory;
// Memory increase should be reasonable
const maxIncreaseMB = 50;
expect(memoryIncrease / 1024 / 1024).toBeLessThan(maxIncreaseMB);
});
```
### 4. Connection Pool Performance
```typescript
it('should handle concurrent connections efficiently', async () => {
const operations = 100;
const concurrency = 10;
const start = Date.now();
// Run operations in batches
const batches = Math.ceil(operations / concurrency);
for (let i = 0; i < batches; i++) {
const promises = [];
for (let j = 0; j < concurrency && i * concurrency + j < operations; j++) {
promises.push(
repository.getNode(`n8n-nodes-base.httpRequest`)
);
}
await Promise.all(promises);
}
const duration = Date.now() - start;
// Should handle concurrent operations efficiently
const threshold = process.env.CI ? 1000 : 500;
expect(duration).toBeLessThan(threshold);
// Average time per operation should be low
const avgTime = duration / operations;
expect(avgTime).toBeLessThan(10);
});
```
### 5. Performance Monitoring Helper
```typescript
// Helper to track performance metrics
class PerformanceMonitor {
private metrics: Map<string, number[]> = new Map();
measure<T>(name: string, fn: () => T): T {
const start = performance.now();
try {
return fn();
} finally {
const duration = performance.now() - start;
if (!this.metrics.has(name)) {
this.metrics.set(name, []);
}
this.metrics.get(name)!.push(duration);
}
}
async measureAsync<T>(name: string, fn: () => Promise<T>): Promise<T> {
const start = performance.now();
try {
return await fn();
} finally {
const duration = performance.now() - start;
if (!this.metrics.has(name)) {
this.metrics.set(name, []);
}
this.metrics.get(name)!.push(duration);
}
}
getStats(name: string) {
const times = this.metrics.get(name) || [];
if (times.length === 0) return null;
return {
count: times.length,
min: Math.min(...times),
max: Math.max(...times),
avg: times.reduce((a, b) => a + b, 0) / times.length,
p95: this.percentile(times, 0.95),
p99: this.percentile(times, 0.99)
};
}
private percentile(arr: number[], p: number): number {
const sorted = [...arr].sort((a, b) => a - b);
const index = Math.ceil(sorted.length * p) - 1;
return sorted[index];
}
}
```
## Testing Strategy
1. Use environment-aware thresholds
2. Isolate performance tests from external factors
3. Use in-memory databases for consistency
4. Monitor memory usage in addition to time
5. Test both average and worst-case scenarios
## Dependencies
- All other agents should complete fixes first
- Performance baselines depend on optimized code
## Success Metrics
- [ ] All 15 performance tests pass
- [ ] CI and local thresholds properly configured
- [ ] No memory leaks detected
- [ ] Consistent performance across runs
- [ ] P95 latency within acceptable range
## Progress Tracking
Create `/tests/integration/fixes/agent-5-progress.md` and update after each fix:
```markdown
# Agent 5 Progress
## Fixed Tests - MCP Performance
- [ ] should handle large node lists efficiently
- [ ] should handle large workflow validation efficiently
## Fixed Tests - Database Performance
- [ ] Bulk insert performance
- [ ] Query optimization with indexes
- [ ] Connection pool efficiency
- [ ] Memory usage during large operations
- [ ] Concurrent read performance
- [ ] Transaction performance
- [ ] Full-text search performance
- [ ] Join query performance
- [ ] Aggregation performance
- [ ] Update performance
- [ ] Delete performance
- [ ] Vacuum performance
- [ ] Cache effectiveness
## Blockers
- None yet
## Performance Improvements
- [List optimizations made]
- [Document new thresholds]
```

View File

@@ -1,46 +0,0 @@
# Agent 5 Progress - Performance Test Fixes
## Summary
**ALL 15 PERFORMANCE TESTS FIXED AND PASSING**
### MCP Performance Tests (1 failure) - ✅ FIXED
- **should handle large node lists efficiently** - ✅ FIXED
- Fixed response parsing to handle object with nodes property
- Changed to use production database for realistic performance testing
- All MCP performance tests now passing
### Database Performance Tests (2 failures) - ✅ FIXED
1. **should perform FTS5 searches efficiently** - ✅ FIXED
- Changed search terms to lowercase (FTS5 with quotes is case-sensitive)
- All FTS5 searches now passing
2. **should benefit from proper indexing** - ✅ FIXED
- Added environment-aware thresholds (CI: 50ms, local: 20ms)
- All index performance tests now passing
## Fixed Tests - MCP Performance
- [x] should handle large node lists efficiently
- [x] should handle large workflow validation efficiently
## Fixed Tests - Database Performance
- [x] should perform FTS5 searches efficiently
- [x] should benefit from proper indexing
## Performance Improvements
- ✅ Implemented environment-aware thresholds throughout all tests
- CI thresholds are 2x higher than local to account for slower environments
- ✅ Fixed FTS5 search case sensitivity
- ✅ Added proper response structure handling for MCP tests
- ✅ Fixed list_nodes response parsing (returns object with nodes array)
- ✅ Use production database for realistic performance benchmarks
## Test Results
All 27 performance tests passing:
- 10 Database Performance Tests ✅
- 17 MCP Performance Tests ✅
## Key Fixes Applied
1. **Environment-aware thresholds**: `const threshold = process.env.CI ? 200 : 100;`
2. **FTS5 case sensitivity**: Changed search terms to lowercase
3. **Response parsing**: Handle MCP response format correctly
4. **Database selection**: Use production DB for realistic benchmarks

View File

@@ -1,64 +0,0 @@
# Agent 6 Progress
## Fixed Issues
- [x] Fixed N8NDocumentationMCPServer to respect NODE_DB_PATH environment variable
- [x] Added proper async cleanup with delays in afterEach hooks
- [x] Reduced timeout values to reasonable levels (10-15 seconds)
- [x] Fixed test hanging by suppressing logger output in test mode
- [x] Fixed in-memory database schema initialization for tests
- [x] Fixed missing properties in TestableN8NMCPServer (transports and connections)
- [x] Added missing sharedMcpServer variable definition
## Final Status
All requested fixes have been implemented. However, there appears to be a broader issue with integration tests timing out in the test environment, not specific to the session management tests.
## Root Cause Analysis
1. **Database Initialization**: In-memory database wasn't getting schema - FIXED
2. **Logger Interference**: Logger output was interfering with tests - FIXED
3. **Resource Cleanup**: Missing proper cleanup between tests - FIXED
4. **Test Environment Issue**: All integration tests are timing out, suggesting a vitest or environment configuration issue
## Implemented Fixes
### 1. Database Path Support
```typescript
// Added support for NODE_DB_PATH environment variable
const envDbPath = process.env.NODE_DB_PATH;
if (envDbPath && (envDbPath === ':memory:' || existsSync(envDbPath))) {
dbPath = envDbPath;
}
```
### 2. In-Memory Schema Initialization
```typescript
// Added schema initialization for in-memory databases
if (dbPath === ':memory:') {
await this.initializeInMemorySchema();
}
```
### 3. Logger Suppression in Tests
```typescript
// Suppress logging in test mode unless DEBUG=true
if (this.isStdio || this.isDisabled || (this.isTest && process.env.DEBUG !== 'true')) {
return;
}
```
### 4. Proper Cleanup with Delays
```typescript
// Added delays after client.close() to ensure proper cleanup
await client.close();
await new Promise(resolve => setTimeout(resolve, 50));
await mcpServer.close();
```
## Test Results
- Unit tests: PASS
- Single integration test: PASS (when run with -t flag)
- Full integration suite: TIMEOUT (appears to be environment issue)
## Notes
- The session management test fixes are complete and working
- The timeout issue affects all integration tests, not just session management
- This suggests a broader test environment or vitest configuration issue that's outside the scope of the session management fixes

View File

@@ -1,327 +0,0 @@
# Agent 6: Session Management Fix Brief
## Assignment
Fix 5 failing tests related to MCP session management and state persistence.
## Files to Fix
- `tests/integration/mcp-protocol/session-management.test.ts` (5 tests)
## Specific Failures to Address
Based on the timeout issue observed, the session management tests are likely failing due to:
1. **Session Creation Timeout**
- Session initialization taking too long
- Missing or slow handshake process
2. **Session State Persistence**
- State not properly saved between requests
- Session data corruption or loss
3. **Concurrent Session Handling**
- Race conditions with multiple sessions
- Session ID conflicts
4. **Session Cleanup**
- Sessions not properly terminated
- Resource leaks causing subsequent timeouts
5. **Session Recovery**
- Failed session recovery after disconnect
- Invalid session state after errors
## Root Causes
1. **Timeout Configuration**: Default timeout too short for session operations
2. **State Management**: Session state not properly isolated
3. **Resource Cleanup**: Sessions leaving connections open
4. **Synchronization**: Async operations not properly awaited
## Recommended Fixes
### 1. Fix Session Creation and Timeout
```typescript
describe('Session Management', () => {
let mcpClient: MCPClient;
let sessionManager: SessionManager;
// Increase timeout for session tests
jest.setTimeout(30000);
beforeEach(async () => {
sessionManager = new SessionManager();
mcpClient = new MCPClient({
sessionManager,
timeout: 10000 // Explicit timeout
});
// Ensure clean session state
await sessionManager.clearAllSessions();
});
afterEach(async () => {
// Proper cleanup
await mcpClient.close();
await sessionManager.clearAllSessions();
});
it('should create new session successfully', async () => {
const sessionId = await mcpClient.createSession({
clientId: 'test-client',
capabilities: ['tools', 'resources']
});
expect(sessionId).toBeDefined();
expect(typeof sessionId).toBe('string');
// Verify session is active
const session = await sessionManager.getSession(sessionId);
expect(session).toBeDefined();
expect(session.status).toBe('active');
});
});
```
### 2. Implement Proper Session State Management
```typescript
class SessionManager {
private sessions: Map<string, Session> = new Map();
private locks: Map<string, Promise<void>> = new Map();
async createSession(config: SessionConfig): Promise<string> {
const sessionId = `session-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
const session: Session = {
id: sessionId,
clientId: config.clientId,
capabilities: config.capabilities,
state: {},
status: 'active',
createdAt: new Date(),
lastActivity: new Date()
};
this.sessions.set(sessionId, session);
// Initialize session state
await this.initializeSessionState(sessionId);
return sessionId;
}
async getSession(sessionId: string): Promise<Session | null> {
const session = this.sessions.get(sessionId);
if (session) {
session.lastActivity = new Date();
}
return session || null;
}
async updateSessionState(sessionId: string, updates: Partial<SessionState>): Promise<void> {
// Use lock to prevent concurrent updates
const lockKey = `update-${sessionId}`;
while (this.locks.has(lockKey)) {
await this.locks.get(lockKey);
}
const lockPromise = this._updateSessionState(sessionId, updates);
this.locks.set(lockKey, lockPromise);
try {
await lockPromise;
} finally {
this.locks.delete(lockKey);
}
}
private async _updateSessionState(sessionId: string, updates: Partial<SessionState>): Promise<void> {
const session = this.sessions.get(sessionId);
if (!session) {
throw new Error(`Session ${sessionId} not found`);
}
session.state = { ...session.state, ...updates };
session.lastActivity = new Date();
}
async clearAllSessions(): Promise<void> {
// Wait for all locks to clear
await Promise.all(Array.from(this.locks.values()));
// Close all sessions
for (const session of this.sessions.values()) {
await this.closeSession(session.id);
}
this.sessions.clear();
}
private async closeSession(sessionId: string): Promise<void> {
const session = this.sessions.get(sessionId);
if (session) {
session.status = 'closed';
// Cleanup any resources
if (session.resources) {
await this.cleanupSessionResources(session);
}
}
}
}
```
### 3. Fix Concurrent Session Tests
```typescript
it('should handle concurrent sessions', async () => {
const numSessions = 5;
const sessionPromises = [];
// Create multiple sessions concurrently
for (let i = 0; i < numSessions; i++) {
sessionPromises.push(
mcpClient.createSession({
clientId: `client-${i}`,
capabilities: ['tools']
})
);
}
const sessionIds = await Promise.all(sessionPromises);
// All sessions should be unique
const uniqueIds = new Set(sessionIds);
expect(uniqueIds.size).toBe(numSessions);
// Each session should be independently accessible
const verifyPromises = sessionIds.map(async (id) => {
const session = await sessionManager.getSession(id);
expect(session).toBeDefined();
expect(session.status).toBe('active');
});
await Promise.all(verifyPromises);
});
```
### 4. Implement Session Recovery
```typescript
it('should recover session after disconnect', async () => {
// Create session
const sessionId = await mcpClient.createSession({
clientId: 'test-client',
capabilities: ['tools']
});
// Store some state
await mcpClient.request('session/update', {
sessionId,
state: { counter: 5, lastTool: 'list_nodes' }
});
// Simulate disconnect
await mcpClient.disconnect();
// Reconnect with same session ID
const newClient = new MCPClient({ sessionManager });
await newClient.resumeSession(sessionId);
// Verify state is preserved
const session = await sessionManager.getSession(sessionId);
expect(session.state.counter).toBe(5);
expect(session.state.lastTool).toBe('list_nodes');
});
```
### 5. Add Session Timeout Handling
```typescript
it('should handle session timeouts gracefully', async () => {
// Create session with short timeout
const sessionId = await mcpClient.createSession({
clientId: 'test-client',
capabilities: ['tools'],
timeout: 1000 // 1 second
});
// Wait for timeout
await new Promise(resolve => setTimeout(resolve, 1500));
// Session should be expired
const session = await sessionManager.getSession(sessionId);
expect(session.status).toBe('expired');
// Attempting to use expired session should create new one
const response = await mcpClient.request('tools/list', { sessionId });
expect(response.newSessionId).toBeDefined();
expect(response.newSessionId).not.toBe(sessionId);
});
```
### 6. Session Cleanup Helper
```typescript
class SessionCleanupService {
private cleanupInterval: NodeJS.Timeout | null = null;
start(sessionManager: SessionManager, intervalMs: number = 60000): void {
this.cleanupInterval = setInterval(async () => {
await this.cleanupExpiredSessions(sessionManager);
}, intervalMs);
}
stop(): void {
if (this.cleanupInterval) {
clearInterval(this.cleanupInterval);
this.cleanupInterval = null;
}
}
async cleanupExpiredSessions(sessionManager: SessionManager): Promise<void> {
const now = new Date();
const sessions = await sessionManager.getAllSessions();
for (const session of sessions) {
const inactiveTime = now.getTime() - session.lastActivity.getTime();
// Expire after 30 minutes of inactivity
if (inactiveTime > 30 * 60 * 1000) {
await sessionManager.expireSession(session.id);
}
}
}
}
```
## Testing Strategy
1. Increase timeouts for session tests
2. Ensure proper cleanup between tests
3. Test both success and failure scenarios
4. Verify resource cleanup
5. Test concurrent session scenarios
## Dependencies
- Depends on Agent 3 (MCP Error) for proper error handling
- May need MSW handlers from Agent 2 for session API mocking
## Success Metrics
- [ ] All 5 session management tests pass
- [ ] No timeout errors
- [ ] Sessions properly isolated
- [ ] Resources cleaned up after tests
- [ ] Concurrent sessions handled correctly
## Progress Tracking
Create `/tests/integration/fixes/agent-6-progress.md` and update after each fix:
```markdown
# Agent 6 Progress
## Fixed Tests
- [ ] should create new session successfully
- [ ] should persist session state
- [ ] should handle concurrent sessions
- [ ] should recover session after disconnect
- [ ] should handle session timeouts gracefully
## Blockers
- None yet
## Notes
- [Document session management improvements]
- [Note any timeout adjustments made]
```

View File

@@ -1,39 +0,0 @@
# Performance Index Test Fix
## Issue
The test "should benefit from proper indexing" was failing because it expected significant performance improvements from indexes, but the test setup didn't properly validate index usage or set realistic expectations.
## Root Cause
1. Small dataset (5000 rows) might not show significant index benefits
2. No verification that indexes actually exist
3. No verification that queries use indexes
4. Unrealistic expectation of >50% performance improvement
5. No comparison with non-indexed queries
## Solution
1. **Increased dataset size**: Changed from 5000 to 10000 rows to make index benefits more apparent
2. **Added index verification**: Verify that expected indexes exist in the database
3. **Added query plan analysis**: Check if queries actually use indexes (with understanding that SQLite optimizer might choose full table scan for small datasets)
4. **Adjusted expectations**: Removed the arbitrary 50% improvement requirement
5. **Added comparison query**: Added a non-indexed query on description column for comparison
6. **Better documentation**: Added comments explaining SQLite optimizer behavior
## Key Changes
```typescript
// Before: Just ran queries and expected them to be fast
indexedQueries.forEach((query, i) => {
const stop = monitor.start(`indexed_query_${i}`);
const results = query();
stop();
});
// After: Verify indexes exist and check query plans
const indexes = db.prepare("SELECT name FROM sqlite_master WHERE type='index' AND tbl_name='nodes'").all();
expect(indexNames).toContain('idx_package');
const plan = db.prepare(`EXPLAIN QUERY PLAN SELECT * FROM nodes WHERE ${column} = ?`).all('test');
const usesIndex = plan.some(row => row.detail?.includes('USING INDEX'));
```
## Result
All performance tests now pass reliably, with proper validation of index existence and usage.

File diff suppressed because one or more lines are too long

View File

@@ -1,760 +0,0 @@
{
"totalTests": 6,
"passed": 6,
"failed": 0,
"startTime": "2025-06-08T10:57:55.233Z",
"endTime": "2025-06-08T10:57:59.249Z",
"tests": [
{
"name": "Basic Node Extraction",
"status": "passed",
"startTime": "2025-06-08T10:57:55.236Z",
"endTime": "2025-06-08T10:57:55.342Z",
"error": null,
"details": {
"results": [
{
"nodeType": "@n8n/n8n-nodes-langchain.Agent",
"extracted": false,
"error": "Node source code not found for: @n8n/n8n-nodes-langchain.Agent"
},
{
"nodeType": "n8n-nodes-base.Function",
"extracted": true,
"codeLength": 7449,
"hasCredentials": false,
"hasPackageInfo": true,
"location": "node_modules/n8n-nodes-base/dist/nodes/Function/Function.node.js"
},
{
"nodeType": "n8n-nodes-base.Webhook",
"extracted": true,
"codeLength": 10667,
"hasCredentials": false,
"hasPackageInfo": true,
"location": "node_modules/n8n-nodes-base/dist/nodes/Webhook/Webhook.node.js"
}
],
"successCount": 2,
"totalTested": 3
}
},
{
"name": "List Available Nodes",
"status": "passed",
"startTime": "2025-06-08T10:57:55.342Z",
"endTime": "2025-06-08T10:57:55.689Z",
"error": null,
"details": {
"totalNodes": 439,
"packages": [
"unknown"
],
"nodesByPackage": {
"unknown": [
"ActionNetwork",
"ActiveCampaign",
"ActiveCampaignTrigger",
"AcuitySchedulingTrigger",
"Adalo",
"Affinity",
"AffinityTrigger",
"AgileCrm",
"Airtable",
"AirtableTrigger",
"AirtableV1",
"Amqp",
"AmqpTrigger",
"ApiTemplateIo",
"Asana",
"AsanaTrigger",
"Automizy",
"Autopilot",
"AutopilotTrigger",
"AwsLambda",
"AwsSns",
"AwsSnsTrigger",
"AwsCertificateManager",
"AwsComprehend",
"AwsDynamoDB",
"AwsElb",
"AwsRekognition",
"AwsS3",
"AwsS3V1",
"AwsS3V2",
"AwsSes",
"AwsSqs",
"AwsTextract",
"AwsTranscribe",
"Bannerbear",
"Baserow",
"Beeminder",
"BitbucketTrigger",
"Bitly",
"Bitwarden",
"Box",
"BoxTrigger",
"Brandfetch",
"Brevo",
"BrevoTrigger",
"Bubble",
"CalTrigger",
"CalendlyTrigger",
"Chargebee",
"ChargebeeTrigger",
"CircleCi",
"CiscoWebex",
"CiscoWebexTrigger",
"CitrixAdc",
"Clearbit",
"ClickUp",
"ClickUpTrigger",
"Clockify",
"ClockifyTrigger",
"Cloudflare",
"Cockpit",
"Coda",
"Code",
"CoinGecko",
"CompareDatasets",
"Compression",
"Contentful",
"ConvertKit",
"ConvertKitTrigger",
"Copper",
"CopperTrigger",
"Cortex",
"CrateDb",
"Cron",
"CrowdDev",
"CrowdDevTrigger",
"Crypto",
"CustomerIo",
"CustomerIoTrigger",
"DateTime",
"DateTimeV1",
"DateTimeV2",
"DebugHelper",
"DeepL",
"Demio",
"Dhl",
"Discord",
"Discourse",
"Disqus",
"Drift",
"Dropbox",
"Dropcontact",
"E2eTest",
"ERPNext",
"EditImage",
"Egoi",
"ElasticSecurity",
"Elasticsearch",
"EmailReadImap",
"EmailReadImapV1",
"EmailReadImapV2",
"EmailSend",
"EmailSendV1",
"EmailSendV2",
"Emelia",
"EmeliaTrigger",
"ErrorTrigger",
"EventbriteTrigger",
"ExecuteCommand",
"ExecuteWorkflow",
"ExecuteWorkflowTrigger",
"ExecutionData",
"FacebookGraphApi",
"FacebookTrigger",
"FacebookLeadAdsTrigger",
"FigmaTrigger",
"FileMaker",
"Filter",
"Flow",
"FlowTrigger",
"FormTrigger",
"FormIoTrigger",
"FormstackTrigger",
"Freshdesk",
"Freshservice",
"FreshworksCrm",
"Ftp",
"Function",
"FunctionItem",
"GetResponse",
"GetResponseTrigger",
"Ghost",
"Git",
"Github",
"GithubTrigger",
"Gitlab",
"GitlabTrigger",
"GoToWebinar",
"GoogleAds",
"GoogleAnalytics",
"GoogleAnalyticsV1",
"GoogleBigQuery",
"GoogleBigQueryV1",
"GoogleBooks",
"GoogleCalendar",
"GoogleCalendarTrigger",
"GoogleChat",
"GoogleCloudNaturalLanguage",
"GoogleCloudStorage",
"GoogleContacts",
"GoogleDocs",
"GoogleDrive",
"GoogleDriveTrigger",
"GoogleDriveV1",
"GoogleFirebaseCloudFirestore",
"GoogleFirebaseRealtimeDatabase",
"GSuiteAdmin",
"Gmail",
"GmailTrigger",
"GmailV1",
"GmailV2",
"GooglePerspective",
"GoogleSheets",
"GoogleSheetsTrigger",
"GoogleSlides",
"GoogleTasks",
"GoogleTranslate",
"YouTube",
"Gotify",
"Grafana",
"GraphQL",
"Grist",
"GumroadTrigger",
"HackerNews",
"HaloPSA",
"Harvest",
"HelpScout",
"HelpScoutTrigger",
"HighLevel",
"HomeAssistant",
"Html",
"HtmlExtract",
"HttpRequest",
"HttpRequestV1",
"HttpRequestV2",
"HttpRequestV3",
"Hubspot",
"HubspotTrigger",
"HubspotV1",
"HubspotV2",
"HumanticAi",
"Hunter",
"ICalendar",
"If",
"Intercom",
"Interval",
"InvoiceNinja",
"InvoiceNinjaTrigger",
"ItemLists",
"ItemListsV1",
"ItemListsV2",
"Iterable",
"Jenkins",
"Jira",
"JiraTrigger",
"JotFormTrigger",
"Kafka",
"KafkaTrigger",
"Keap",
"KeapTrigger",
"Kitemaker",
"KoBoToolbox",
"KoBoToolboxTrigger",
"Ldap",
"Lemlist",
"LemlistTrigger",
"Line",
"Linear",
"LinearTrigger",
"LingvaNex",
"LinkedIn",
"LocalFileTrigger",
"LoneScale",
"LoneScaleTrigger",
"Mqtt",
"MqttTrigger",
"Magento2",
"Mailcheck",
"Mailchimp",
"MailchimpTrigger",
"MailerLite",
"MailerLiteTrigger",
"Mailgun",
"Mailjet",
"MailjetTrigger",
"Mandrill",
"ManualTrigger",
"Markdown",
"Marketstack",
"Matrix",
"Mattermost",
"Mautic",
"MauticTrigger",
"Medium",
"Merge",
"MergeV1",
"MergeV2",
"MessageBird",
"Metabase",
"MicrosoftDynamicsCrm",
"MicrosoftExcel",
"MicrosoftExcelV1",
"MicrosoftGraphSecurity",
"MicrosoftOneDrive",
"MicrosoftOutlook",
"MicrosoftOutlookV1",
"MicrosoftSql",
"MicrosoftTeams",
"MicrosoftToDo",
"Mindee",
"Misp",
"Mocean",
"MondayCom",
"MongoDb",
"MonicaCrm",
"MoveBinaryData",
"Msg91",
"MySql",
"MySqlV1",
"N8n",
"N8nTrainingCustomerDatastore",
"N8nTrainingCustomerMessenger",
"N8nTrigger",
"Nasa",
"Netlify",
"NetlifyTrigger",
"NextCloud",
"NoOp",
"NocoDB",
"Notion",
"NotionTrigger",
"Npm",
"Odoo",
"OneSimpleApi",
"Onfleet",
"OnfleetTrigger",
"OpenAi",
"OpenThesaurus",
"OpenWeatherMap",
"Orbit",
"Oura",
"Paddle",
"PagerDuty",
"PayPal",
"PayPalTrigger",
"Peekalink",
"Phantombuster",
"PhilipsHue",
"Pipedrive",
"PipedriveTrigger",
"Plivo",
"PostBin",
"PostHog",
"Postgres",
"PostgresTrigger",
"PostgresV1",
"PostmarkTrigger",
"ProfitWell",
"Pushbullet",
"Pushcut",
"PushcutTrigger",
"Pushover",
"QuestDb",
"QuickBase",
"QuickBooks",
"QuickChart",
"RabbitMQ",
"RabbitMQTrigger",
"Raindrop",
"ReadBinaryFile",
"ReadBinaryFiles",
"ReadPDF",
"Reddit",
"Redis",
"RedisTrigger",
"RenameKeys",
"RespondToWebhook",
"Rocketchat",
"RssFeedRead",
"RssFeedReadTrigger",
"Rundeck",
"S3",
"Salesforce",
"Salesmate",
"ScheduleTrigger",
"SeaTable",
"SeaTableTrigger",
"SecurityScorecard",
"Segment",
"SendGrid",
"Sendy",
"SentryIo",
"ServiceNow",
"Set",
"SetV1",
"SetV2",
"Shopify",
"ShopifyTrigger",
"Signl4",
"Slack",
"SlackV1",
"SlackV2",
"Sms77",
"Snowflake",
"SplitInBatches",
"SplitInBatchesV1",
"SplitInBatchesV2",
"SplitInBatchesV3",
"Splunk",
"Spontit",
"Spotify",
"SpreadsheetFile",
"SseTrigger",
"Ssh",
"Stackby",
"Start",
"StickyNote",
"StopAndError",
"Storyblok",
"Strapi",
"Strava",
"StravaTrigger",
"Stripe",
"StripeTrigger",
"Supabase",
"SurveyMonkeyTrigger",
"Switch",
"SwitchV1",
"SwitchV2",
"SyncroMsp",
"Taiga",
"TaigaTrigger",
"Tapfiliate",
"Telegram",
"TelegramTrigger",
"TheHive",
"TheHiveTrigger",
"TheHiveProjectTrigger",
"TimescaleDb",
"Todoist",
"TodoistV1",
"TodoistV2",
"TogglTrigger",
"Totp",
"TravisCi",
"Trello",
"TrelloTrigger",
"Twake",
"Twilio",
"Twist",
"Twitter",
"TwitterV1",
"TwitterV2",
"TypeformTrigger",
"UProc",
"UnleashedSoftware",
"Uplead",
"UptimeRobot",
"UrlScanIo",
"VenafiTlsProtectDatacenter",
"VenafiTlsProtectDatacenterTrigger",
"VenafiTlsProtectCloud",
"VenafiTlsProtectCloudTrigger",
"Vero",
"Vonage",
"Wait",
"Webflow",
"WebflowTrigger",
"Webhook",
"Wekan",
"WhatsApp",
"Wise",
"WiseTrigger",
"WooCommerce",
"WooCommerceTrigger",
"Wordpress",
"WorkableTrigger",
"WorkflowTrigger",
"WriteBinaryFile",
"WufooTrigger",
"Xero",
"Xml",
"Yourls",
"Zammad",
"Zendesk",
"ZendeskTrigger",
"ZohoCrm",
"Zoom",
"Zulip"
]
},
"sampleNodes": [
{
"name": "ActionNetwork",
"displayName": "Action Network",
"description": "Consume the Action Network API",
"location": "node_modules/n8n-nodes-base/dist/nodes/ActionNetwork/ActionNetwork.node.js"
},
{
"name": "ActiveCampaign",
"displayName": "ActiveCampaign",
"description": "Create and edit data in ActiveCampaign",
"location": "node_modules/n8n-nodes-base/dist/nodes/ActiveCampaign/ActiveCampaign.node.js"
},
{
"name": "ActiveCampaignTrigger",
"displayName": "ActiveCampaign Trigger",
"description": "Handle ActiveCampaign events via webhooks",
"location": "node_modules/n8n-nodes-base/dist/nodes/ActiveCampaign/ActiveCampaignTrigger.node.js"
},
{
"name": "AcuitySchedulingTrigger",
"displayName": "Acuity Scheduling Trigger",
"description": "Handle Acuity Scheduling events via webhooks",
"location": "node_modules/n8n-nodes-base/dist/nodes/AcuityScheduling/AcuitySchedulingTrigger.node.js"
},
{
"name": "Adalo",
"displayName": "Adalo",
"description": "Consume Adalo API",
"location": "node_modules/n8n-nodes-base/dist/nodes/Adalo/Adalo.node.js"
}
]
}
},
{
"name": "Bulk Node Extraction",
"status": "passed",
"startTime": "2025-06-08T10:57:55.689Z",
"endTime": "2025-06-08T10:57:58.574Z",
"error": null,
"details": {
"totalAttempted": 10,
"successCount": 6,
"failureCount": 4,
"timeElapsed": 2581,
"results": [
{
"success": true,
"data": {
"nodeType": "ActionNetwork",
"name": "ActionNetwork",
"codeLength": 15810,
"codeHash": "c0a880f5754b6b532ff787bdb253dc49ffd7f470f28aeddda5be0c73f9f9935f",
"hasCredentials": true,
"hasPackageInfo": true,
"location": "node_modules/n8n-nodes-base/dist/nodes/ActionNetwork/ActionNetwork.node.js",
"extractedAt": "2025-06-08T10:57:56.009Z"
}
},
{
"success": true,
"data": {
"nodeType": "ActiveCampaign",
"name": "ActiveCampaign",
"codeLength": 38399,
"codeHash": "5ea90671718d20eecb6cddae2e21c91470fdb778e8be97106ee2539303422ad2",
"hasCredentials": true,
"hasPackageInfo": true,
"location": "node_modules/n8n-nodes-base/dist/nodes/ActiveCampaign/ActiveCampaign.node.js",
"extractedAt": "2025-06-08T10:57:56.032Z"
}
},
{
"success": false,
"nodeType": "ActiveCampaignTrigger",
"error": "Node source code not found for: ActiveCampaignTrigger"
},
{
"success": false,
"nodeType": "AcuitySchedulingTrigger",
"error": "Node source code not found for: AcuitySchedulingTrigger"
},
{
"success": true,
"data": {
"nodeType": "Adalo",
"name": "Adalo",
"codeLength": 8234,
"codeHash": "0fbcb0b60141307fdc3394154af1b2c3133fa6181aac336249c6c211fd24846f",
"hasCredentials": true,
"hasPackageInfo": true,
"location": "node_modules/n8n-nodes-base/dist/nodes/Adalo/Adalo.node.js",
"extractedAt": "2025-06-08T10:57:57.330Z"
}
},
{
"success": true,
"data": {
"nodeType": "Affinity",
"name": "Affinity",
"codeLength": 16217,
"codeHash": "e605ea187767403dfa55cd374690f7df563a0baa7ca6991d86d522dc101a2846",
"hasCredentials": true,
"hasPackageInfo": true,
"location": "node_modules/n8n-nodes-base/dist/nodes/Affinity/Affinity.node.js",
"extractedAt": "2025-06-08T10:57:57.343Z"
}
},
{
"success": false,
"nodeType": "AffinityTrigger",
"error": "Node source code not found for: AffinityTrigger"
},
{
"success": true,
"data": {
"nodeType": "AgileCrm",
"name": "AgileCrm",
"codeLength": 28115,
"codeHash": "ce71c3b5dec23a48d24c5775e9bb79006ce395bed62b306c56340b5c772379c2",
"hasCredentials": true,
"hasPackageInfo": true,
"location": "node_modules/n8n-nodes-base/dist/nodes/AgileCrm/AgileCrm.node.js",
"extractedAt": "2025-06-08T10:57:57.925Z"
}
},
{
"success": true,
"data": {
"nodeType": "Airtable",
"name": "Airtable",
"codeLength": 936,
"codeHash": "2d67e72931697178946f5127b43e954649c4c5e7ad9e29764796404ae96e7db5",
"hasCredentials": true,
"hasPackageInfo": true,
"location": "node_modules/n8n-nodes-base/dist/nodes/Airtable/Airtable.node.js",
"extractedAt": "2025-06-08T10:57:57.941Z"
}
},
{
"success": false,
"nodeType": "AirtableTrigger",
"error": "Node source code not found for: AirtableTrigger"
}
]
}
},
{
"name": "Database Schema Validation",
"status": "passed",
"startTime": "2025-06-08T10:57:58.574Z",
"endTime": "2025-06-08T10:57:58.575Z",
"error": null,
"details": {
"schemaValid": true,
"tablesCount": 4,
"estimatedStoragePerNode": 16834
}
},
{
"name": "Error Handling",
"status": "passed",
"startTime": "2025-06-08T10:57:58.575Z",
"endTime": "2025-06-08T10:57:59.244Z",
"error": null,
"details": {
"totalTests": 3,
"passed": 2,
"results": [
{
"name": "Non-existent node",
"nodeType": "non-existent-package.FakeNode",
"expectedError": "not found",
"passed": true,
"actualError": "Node source code not found for: non-existent-package.FakeNode"
},
{
"name": "Invalid node type format",
"nodeType": "",
"expectedError": "invalid",
"passed": false,
"actualError": "Node source code not found for: "
},
{
"name": "Malformed package name",
"nodeType": "@invalid@package.Node",
"expectedError": "not found",
"passed": true,
"actualError": "Node source code not found for: @invalid@package.Node"
}
]
}
},
{
"name": "MCP Server Integration",
"status": "passed",
"startTime": "2025-06-08T10:57:59.244Z",
"endTime": "2025-06-08T10:57:59.249Z",
"error": null,
"details": {
"serverCreated": true,
"config": {
"port": 3000,
"host": "0.0.0.0",
"authToken": "test-token"
}
}
}
],
"extractedNodes": 6,
"databaseSchema": {
"tables": {
"nodes": {
"columns": {
"id": "UUID PRIMARY KEY",
"node_type": "VARCHAR(255) UNIQUE NOT NULL",
"name": "VARCHAR(255) NOT NULL",
"package_name": "VARCHAR(255)",
"display_name": "VARCHAR(255)",
"description": "TEXT",
"version": "VARCHAR(50)",
"code_hash": "VARCHAR(64) NOT NULL",
"code_length": "INTEGER NOT NULL",
"source_location": "TEXT",
"extracted_at": "TIMESTAMP NOT NULL",
"updated_at": "TIMESTAMP"
},
"indexes": [
"node_type",
"package_name",
"code_hash"
]
},
"node_source_code": {
"columns": {
"id": "UUID PRIMARY KEY",
"node_id": "UUID REFERENCES nodes(id)",
"source_code": "TEXT NOT NULL",
"compiled_code": "TEXT",
"source_map": "TEXT"
}
},
"node_credentials": {
"columns": {
"id": "UUID PRIMARY KEY",
"node_id": "UUID REFERENCES nodes(id)",
"credential_type": "VARCHAR(255) NOT NULL",
"credential_code": "TEXT NOT NULL",
"required_fields": "JSONB"
}
},
"node_metadata": {
"columns": {
"id": "UUID PRIMARY KEY",
"node_id": "UUID REFERENCES nodes(id)",
"package_info": "JSONB",
"dependencies": "JSONB",
"icon": "TEXT",
"categories": "TEXT[]",
"documentation_url": "TEXT"
}
}
}
}
}

View File

@@ -1,102 +0,0 @@
# Parser Test Coverage Summary
## Overview
Created comprehensive unit tests for the parser components with the following results:
### Test Results
- **Total Tests**: 99
- **Passing Tests**: 89 (89.9%)
- **Failing Tests**: 10 (10.1%)
### Coverage by File
#### node-parser.ts
- **Lines**: 93.10% (81/87)
- **Branches**: 84.31% (43/51)
- **Functions**: 100% (8/8)
- **Statements**: 93.10% (81/87)
#### property-extractor.ts
- **Lines**: 95.18% (79/83)
- **Branches**: 85.96% (49/57)
- **Functions**: 100% (8/8)
- **Statements**: 95.18% (79/83)
#### simple-parser.ts
- **Lines**: 91.26% (94/103)
- **Branches**: 78.75% (63/80)
- **Functions**: 100% (7/7)
- **Statements**: 91.26% (94/103)
### Overall Parser Coverage
- **Lines**: 92.67% (254/274)
- **Branches**: 82.19% (155/189)
- **Functions**: 100% (23/23)
- **Statements**: 92.67% (254/274)
## Test Structure
### 1. Node Parser Tests (tests/unit/parsers/node-parser.test.ts)
- Basic programmatic and declarative node parsing
- Node type detection (trigger, webhook, AI tool)
- Version extraction and versioned node detection
- Package name handling
- Category extraction
- Edge cases and error handling
### 2. Property Extractor Tests (tests/unit/parsers/property-extractor.test.ts)
- Property extraction from various node structures
- Operation extraction (declarative and programmatic)
- Credential extraction
- AI tool capability detection
- Nested property handling
- Versioned node property extraction
- Edge cases including circular references
### 3. Simple Parser Tests (tests/unit/parsers/simple-parser.test.ts)
- Basic node parsing
- Trigger detection methods
- Operation extraction patterns
- Version extraction logic
- Versioned node detection
- Category field precedence
- Error handling
## Test Infrastructure
### Factory Pattern
Created comprehensive test factories in `tests/fixtures/factories/parser-node.factory.ts`:
- `programmaticNodeFactory` - Creates programmatic node definitions
- `declarativeNodeFactory` - Creates declarative node definitions with routing
- `triggerNodeFactory` - Creates trigger nodes
- `webhookNodeFactory` - Creates webhook nodes
- `aiToolNodeFactory` - Creates AI tool nodes
- `versionedNodeClassFactory` - Creates versioned node structures
- `propertyFactory` and variants - Creates various property types
- `malformedNodeFactory` - Creates invalid nodes for error testing
### Test Patterns
- Used Vitest with proper mocking of dependencies
- Followed AAA (Arrange-Act-Assert) pattern
- Created focused test cases for each functionality
- Included edge cases and error scenarios
- Used factory pattern for consistent test data
## Remaining Issues
### Failing Tests (10)
1. **Version extraction from baseDescription** - Parser looks for baseDescription at different levels
2. **Category extraction precedence** - Simple parser handles category fields differently
3. **Property extractor instantiation** - Static properties are being extracted when instantiation fails
4. **Operation extraction from routing.operations** - Need to handle the operations object structure
5. **VersionedNodeType parsing** - Constructor name detection not working as expected
### Recommendations for Fixes
1. Align version extraction logic between parsers
2. Standardize category field precedence
3. Fix property extraction for failed instantiation
4. Complete operation extraction from all routing patterns
5. Improve versioned node detection logic
## Conclusion
Achieved over 90% line coverage on all parser files, with 100% function coverage. The test suite provides a solid foundation for maintaining and refactoring the parser components. The remaining failing tests are mostly related to edge cases and implementation details that can be addressed in future iterations.

View File

@@ -1,81 +0,0 @@
# ConfigValidator Test Summary
## Task Completed: 3.1 - Unit Tests for ConfigValidator
### Overview
Created comprehensive unit tests for the ConfigValidator service with 44 test cases covering all major functionality.
### Test Coverage
- **Statement Coverage**: 95.21%
- **Branch Coverage**: 92.94%
- **Function Coverage**: 100%
- **Line Coverage**: 95.21%
### Test Categories
#### 1. Basic Validation (Original 26 tests)
- Required fields validation
- Property type validation
- Option value validation
- Property visibility based on displayOptions
- Node-specific validation (HTTP Request, Webhook, Database, Code)
- Security checks
- Syntax validation for JavaScript and Python
- n8n-specific patterns
#### 2. Edge Cases and Additional Coverage (18 new tests)
- Null and undefined value handling
- Nested displayOptions conditions
- Hide conditions in displayOptions
- $helpers usage validation
- External library warnings
- Crypto module usage
- API authentication warnings
- SQL performance suggestions
- Empty code handling
- Complex return patterns
- Console.log/print() warnings
- $json usage warnings
- Internal property handling
- Async/await validation
### Key Features Tested
1. **Required Field Validation**
- Missing required properties
- Conditional required fields based on displayOptions
2. **Type Validation**
- String, number, boolean type checking
- Null/undefined handling
3. **Security Validation**
- Hardcoded credentials detection
- SQL injection warnings
- eval/exec usage
- Infinite loop detection
4. **Code Node Validation**
- JavaScript syntax checking
- Python syntax checking
- n8n return format validation
- Missing return statements
- External library usage
5. **Performance Suggestions**
- SELECT * warnings
- Unused property warnings
- Common property suggestions
6. **Node-Specific Validation**
- HTTP Request: URL validation, body requirements
- Webhook: Response mode validation
- Database: Query security
- Code: Syntax and patterns
### Test Infrastructure
- Uses Vitest testing framework
- Mocks better-sqlite3 database
- Uses node factory from fixtures
- Follows established test patterns
- Comprehensive assertions for errors, warnings, and suggestions

View File

@@ -1,128 +0,0 @@
# Database Testing Utilities Summary
## Overview
We've created comprehensive database testing utilities for the n8n-mcp project that provide a complete toolkit for database-related testing scenarios.
## Created Files
### 1. `/tests/utils/database-utils.ts`
The main utilities file containing:
- **createTestDatabase()** - Creates test databases (in-memory or file-based)
- **seedTestNodes()** - Seeds test node data
- **seedTestTemplates()** - Seeds test template data
- **createTestNode()** - Factory for creating test nodes
- **createTestTemplate()** - Factory for creating test templates
- **resetDatabase()** - Clears and reinitializes database
- **createDatabaseSnapshot()** - Creates database state snapshots
- **restoreDatabaseSnapshot()** - Restores from snapshots
- **loadFixtures()** - Loads data from JSON fixtures
- **dbHelpers** - Collection of common database operations
- **createMockDatabaseAdapter()** - Creates mock adapter for unit tests
- **withTransaction()** - Transaction testing helper
- **measureDatabaseOperation()** - Performance measurement helper
### 2. `/tests/unit/utils/database-utils.test.ts`
Comprehensive unit tests covering all utility functions with 22 test cases.
### 3. `/tests/fixtures/database/test-nodes.json`
Example fixture file showing the correct format for nodes and templates.
### 4. `/tests/examples/using-database-utils.test.ts`
Practical examples showing how to use the utilities in real test scenarios.
### 5. `/tests/integration/database-integration.test.ts`
Integration test examples demonstrating complex database operations.
### 6. `/tests/utils/README.md`
Documentation explaining how to use the database utilities.
## Key Features
### 1. Flexible Database Creation
```typescript
// In-memory for unit tests (fast, isolated)
const testDb = await createTestDatabase();
// File-based for integration tests
const testDb = await createTestDatabase({
inMemory: false,
dbPath: './test.db'
});
```
### 2. Easy Data Seeding
```typescript
// Seed with defaults
await seedTestNodes(testDb.nodeRepository);
// Seed with custom data
await seedTestNodes(testDb.nodeRepository, [
{ nodeType: 'custom.node', displayName: 'Custom' }
]);
```
### 3. State Management
```typescript
// Create snapshot
const snapshot = await createDatabaseSnapshot(testDb.adapter);
// Do risky operations...
// Restore if needed
await restoreDatabaseSnapshot(testDb.adapter, snapshot);
```
### 4. Fixture Support
```typescript
// Load complex scenarios from JSON
await loadFixtures(testDb.adapter, './fixtures/scenario.json');
```
### 5. Helper Functions
```typescript
// Common operations
dbHelpers.countRows(adapter, 'nodes');
dbHelpers.nodeExists(adapter, 'node-type');
dbHelpers.getAllNodeTypes(adapter);
dbHelpers.clearTable(adapter, 'templates');
```
## TypeScript Support
All utilities are fully typed with proper interfaces:
- `TestDatabase`
- `TestDatabaseOptions`
- `DatabaseSnapshot`
## Performance Considerations
- In-memory databases for unit tests (milliseconds)
- File-based databases for integration tests
- Transaction support for atomic operations
- Performance measurement utilities included
## Best Practices
1. Always cleanup databases after tests
2. Use in-memory for unit tests
3. Use snapshots for complex state management
4. Keep fixtures versioned with your tests
5. Test both empty and populated database states
## Integration with Existing Code
The utilities work seamlessly with:
- `DatabaseAdapter` from the main codebase
- `NodeRepository` for node operations
- `TemplateRepository` for template operations
- All existing database schemas
## Testing Coverage
- ✅ All utilities have comprehensive unit tests
- ✅ Integration test examples provided
- ✅ Performance testing included
- ✅ Transaction testing supported
- ✅ Mock adapter for isolated unit tests
## Usage in CI/CD
The utilities support:
- Parallel test execution (isolated databases)
- Consistent test data across runs
- Fast execution with in-memory databases
- No external dependencies required