fix: AI node connection validation in partial workflow updates (#357) (#358)

* fix: AI node connection validation in partial workflow updates (#357)

Fix critical validation issue where n8n_update_partial_workflow incorrectly
required 'main' connections for AI nodes that exclusively use AI-specific
connection types (ai_languageModel, ai_memory, ai_embedding, ai_vectorStore, ai_tool).

Problem:
- Workflows containing AI nodes could not be updated via n8n_update_partial_workflow
- Validation incorrectly expected ALL nodes to have 'main' connections
- AI nodes only have AI-specific connection types, never 'main'

Root Cause:
- Zod schema in src/services/n8n-validation.ts defined 'main' as required field
- Schema didn't support AI-specific connection types

Fixed:
- Made 'main' connection optional in Zod schema
- Added support for all AI connection types: ai_tool, ai_languageModel, ai_memory,
  ai_embedding, ai_vectorStore
- Created comprehensive test suite (13 tests) covering all AI connection scenarios
- Updated documentation to clarify AI nodes don't require 'main' connections

Testing:
- All 13 new integration tests passing
- Tested with actual workflow 019Vrw56aROeEzVj from issue #357
- Zero breaking changes (making a required field optional is backward compatible: all previously valid workflows still validate)

Files Changed:
- src/services/n8n-validation.ts - Fixed Zod schema
- tests/integration/workflow-diff/ai-node-connection-validation.test.ts - New test suite
- src/mcp/tool-docs/workflow_management/n8n-update-partial-workflow.ts - Updated docs
- package.json - Version bump to 2.21.1
- CHANGELOG.md - Comprehensive release notes

Closes #357

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

Conceived by Romuald Członkowski - www.aiadvisors.pl/en

* fix: Add missing id parameter in test file and JSDoc comment

Address code review feedback from PR #358:
- Add 'id' field to all applyDiff calls in test file (fixes TypeScript errors)
- Add JSDoc comment explaining why 'main' is optional in schema
- Ensures TypeScript compilation succeeds

Changes:
- tests/integration/workflow-diff/ai-node-connection-validation.test.ts:
  Added id parameter to all 13 test cases
- src/services/n8n-validation.ts:
  Added JSDoc explaining optional main connections

Testing:
- npm run typecheck: PASS 
- npm run build: PASS 
- All 13 tests: PASS 

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Romuald Członkowski authored 2025-10-24 00:11:35 +02:00, committed by GitHub
commit 5702a64a01, parent 551fea841b
6 changed files with 883 additions and 10 deletions

CHANGELOG.md

@@ -7,6 +7,139 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [2.21.1] - 2025-10-23
### 🐛 Bug Fixes
**Issue #357: Fix AI Node Connection Validation in Partial Workflow Updates**
Fixed critical validation issue where `n8n_update_partial_workflow` incorrectly required `main` connections for AI nodes that exclusively use AI-specific connection types (`ai_languageModel`, `ai_memory`, `ai_embedding`, `ai_vectorStore`, `ai_tool`).
#### Problem
Workflows containing AI nodes (OpenAI Chat Model, Postgres Chat Memory, Embeddings OpenAI, Supabase Vector Store) could not be updated via `n8n_update_partial_workflow`, even for trivial changes to unrelated nodes. The validation logic incorrectly expected ALL nodes to have `main` connections, causing false positive errors:
```
Invalid connections: [
{
"code": "invalid_type",
"expected": "array",
"received": "undefined",
"path": ["OpenAI Chat Model", "main"],
"message": "Required"
}
]
```
**Impact**: Users could not update any workflows containing AI Agent nodes via MCP tools, forcing manual updates through the n8n UI.
#### Root Cause
The Zod schema in `src/services/n8n-validation.ts` (lines 27-39) defined `main` connections as a **required field** for all nodes, without support for AI-specific connection types:
```typescript
// BEFORE (Broken):
export const workflowConnectionSchema = z.record(
z.object({
main: z.array(...), // Required - WRONG for AI nodes!
})
);
```
AI nodes use specialized connection types exclusively:
- **ai_languageModel** - Language models (OpenAI, Anthropic, etc.)
- **ai_memory** - Memory systems (Postgres Chat Memory, etc.)
- **ai_embedding** - Embedding models (Embeddings OpenAI, etc.)
- **ai_vectorStore** - Vector stores (Supabase Vector Store, etc.)
- **ai_tool** - Tools for AI agents
These nodes **never have `main` connections** - they only have their AI-specific connection types.
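As an illustration (a simplified sketch of the kind of workflow involved, not the exact JSON from issue #357), the `connections` object for such nodes carries only AI-specific keys:
```typescript
// Simplified sketch of a workflow 'connections' object for AI nodes.
// Neither entry has a 'main' key; each node exposes only its
// AI-specific port (ai_languageModel, ai_memory).
const connections = {
  'OpenAI Chat Model': {
    ai_languageModel: [
      [{ node: 'AI Agent', type: 'ai_languageModel', index: 0 }],
    ],
  },
  'Postgres Chat Memory': {
    ai_memory: [
      [{ node: 'AI Agent', type: 'ai_memory', index: 0 }],
    ],
  },
};
```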
#### Fixed
**1. Updated Zod Schema** (`src/services/n8n-validation.ts` lines 27-49):
```typescript
// AFTER (Fixed):
const connectionArraySchema = z.array(
z.array(
z.object({
node: z.string(),
type: z.string(),
index: z.number(),
})
)
);
export const workflowConnectionSchema = z.record(
z.object({
main: connectionArraySchema.optional(), // Now optional
error: connectionArraySchema.optional(), // Error connections
ai_tool: connectionArraySchema.optional(), // AI tool connections
ai_languageModel: connectionArraySchema.optional(), // Language model connections
ai_memory: connectionArraySchema.optional(), // Memory connections
ai_embedding: connectionArraySchema.optional(), // Embedding connections
ai_vectorStore: connectionArraySchema.optional(), // Vector store connections
})
);
```
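For reference, a minimal sketch of how the relaxed schema behaves on an AI-only connections object; the import path is an assumption (adjust it to wherever `src/services/n8n-validation.ts` resolves from your file), while `workflowConnectionSchema` is the export shown in the diff below:
```typescript
// Minimal sketch (not part of the commit): validating an AI-only
// connections object against the relaxed schema.
// Assumption: this runs from a script inside the repo; the relative
// import path below is illustrative.
import { workflowConnectionSchema } from './src/services/n8n-validation';

const aiOnlyConnections = {
  'OpenAI Chat Model': {
    ai_languageModel: [
      [{ node: 'AI Agent', type: 'ai_languageModel', index: 0 }],
    ],
  },
};

// Before the fix, parsing failed with "Required" at ["OpenAI Chat Model", "main"];
// with 'main' optional, the same input now parses cleanly.
const result = workflowConnectionSchema.safeParse(aiOnlyConnections);
console.log(result.success); // true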
**2. Comprehensive Test Suite** (New file: `tests/integration/workflow-diff/ai-node-connection-validation.test.ts`):
- 13 test scenarios covering all AI connection types
- Tests for AI nodes with ONLY AI-specific connections (no `main`)
- Tests for mixed workflows (regular nodes + AI nodes)
- Tests for the exact scenario from issue #357
- All tests passing ✅
**3. Updated Documentation** (`src/mcp/tool-docs/workflow_management/n8n-update-partial-workflow.ts`):
- Added clarification that AI nodes do NOT require `main` connections
- Documented fix for issue #357
- Updated best practices for AI workflows
#### Testing
**Before Fix**:
- ✅ `n8n_validate_workflow`: Returns `valid: true` (correct)
- ❌ `n8n_update_partial_workflow`: FAILS with "main connections required" errors
- ❌ Cannot update workflows containing AI nodes at all
**After Fix**:
- ✅ `n8n_validate_workflow`: Returns `valid: true` (still correct)
- ✅ `n8n_update_partial_workflow`: SUCCEEDS without validation errors
- ✅ AI nodes correctly recognized with AI-specific connection types only
- ✅ All 13 new integration tests passing
- ✅ Tested with actual workflow `019Vrw56aROeEzVj` from issue #357
#### Impact
**Zero Breaking Changes**:
- Making a required field optional is backward compatible (all previously valid inputs remain valid)
- All existing workflows continue working
- Validation now correctly matches n8n's actual connection model
**Fixes**:
- Users can now update AI workflows via `n8n_update_partial_workflow`
- AI nodes no longer generate false positive validation errors
- Consistent validation between `n8n_validate_workflow` and `n8n_update_partial_workflow`
#### Files Changed
**Modified (3 files)**:
- `src/services/n8n-validation.ts` - Fixed Zod schema to support all connection types
- `src/mcp/tool-docs/workflow_management/n8n-update-partial-workflow.ts` - Updated documentation
- `package.json` - Version bump to 2.21.1
**Added (1 file)**:
- `tests/integration/workflow-diff/ai-node-connection-validation.test.ts` - Comprehensive test suite (13 tests)
#### References
- **Issue**: #357 - n8n_update_partial_workflow incorrectly validates AI nodes requiring 'main' connections
- **Workflow**: `019Vrw56aROeEzVj` (WOO_Workflow_21_POST_Chat_Send_AI_Agent)
- **Investigation**: Deep code analysis by Explore agent identified exact root cause in Zod schema
- **Confirmation**: n8n-mcp-tester agent verified fix with real workflow
Conceived by Romuald Członkowski - [www.aiadvisors.pl/en](https://www.aiadvisors.pl/en)
## [2.21.0] - 2025-10-23
### ✨ Features

Binary file not shown.

package.json

```diff
@@ -1,6 +1,6 @@
 {
   "name": "n8n-mcp",
-  "version": "2.21.0",
+  "version": "2.21.1",
   "description": "Integration between n8n workflow automation and Model Context Protocol (MCP)",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
```

src/mcp/tool-docs/workflow_management/n8n-update-partial-workflow.ts

@@ -81,6 +81,10 @@ Full support for all 8 AI connection types used in n8n AI workflows:
- Multiple tools: Batch multiple \`sourceOutput: "ai_tool"\` connections to one AI Agent
- Vector retrieval: Chain ai_embedding → ai_vectorStore → ai_tool → AI Agent
**Important Notes**:
- **AI nodes do NOT require main connections**: Nodes like OpenAI Chat Model, Postgres Chat Memory, Embeddings OpenAI, and Supabase Vector Store use AI-specific connection types exclusively. They should ONLY have connections like \`ai_languageModel\`, \`ai_memory\`, \`ai_embedding\`, or \`ai_tool\` - NOT \`main\` connections.
- **Fixed in v2.21.1**: Validation now correctly recognizes AI nodes that only have AI-specific connections without requiring \`main\` connections (resolves issue #357).
**Best Practices**:
- Always specify \`sourceOutput\` for AI connections (defaults to "main" if omitted)
- Connect language model BEFORE creating/enabling AI Agent (validation requirement)

src/services/n8n-validation.ts

```diff
@@ -24,9 +24,8 @@ export const workflowNodeSchema = z.object({
   executeOnce: z.boolean().optional(),
 });
-export const workflowConnectionSchema = z.record(
-  z.object({
-    main: z.array(
+// Connection array schema used by all connection types
+const connectionArraySchema = z.array(
   z.array(
     z.object({
       node: z.string(),
@@ -34,7 +33,22 @@ export const workflowConnectionSchema = z.record(
       index: z.number(),
     })
   )
-    ),
+);
+
+/**
+ * Workflow connection schema supporting all connection types.
+ * Note: 'main' is optional because AI nodes exclusively use AI-specific
+ * connection types (ai_languageModel, ai_memory, etc.) without main connections.
+ */
+export const workflowConnectionSchema = z.record(
+  z.object({
+    main: connectionArraySchema.optional(),
+    error: connectionArraySchema.optional(),
+    ai_tool: connectionArraySchema.optional(),
+    ai_languageModel: connectionArraySchema.optional(),
+    ai_memory: connectionArraySchema.optional(),
+    ai_embedding: connectionArraySchema.optional(),
+    ai_vectorStore: connectionArraySchema.optional(),
   })
 );
```

tests/integration/workflow-diff/ai-node-connection-validation.test.ts

@@ -0,0 +1,722 @@
/**
* Integration tests for AI node connection validation in workflow diff operations
* Tests that AI nodes with AI-specific connection types (ai_languageModel, ai_memory, etc.)
* are properly validated without requiring main connections
*
* Related to issue #357
*/
import { describe, test, expect } from 'vitest';
import { WorkflowDiffEngine } from '../../../src/services/workflow-diff-engine';
describe('AI Node Connection Validation', () => {
describe('AI-specific connection types', () => {
test('should accept workflow with ai_languageModel connections', async () => {
const workflow = {
id: 'test-workflow',
name: 'AI Language Model Test',
nodes: [
{
id: 'agent-node',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
typeVersion: 1,
position: [0, 0],
parameters: {}
},
{
id: 'llm-node',
name: 'OpenAI Chat Model',
type: '@n8n/n8n-nodes-langchain.lmChatOpenAi',
typeVersion: 1,
position: [200, 0],
parameters: {}
}
],
connections: {
'OpenAI Chat Model': {
ai_languageModel: [
[{ node: 'AI Agent', type: 'ai_languageModel', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
});
test('should accept workflow with ai_memory connections', async () => {
const workflow = {
id: 'test-workflow',
name: 'AI Memory Test',
nodes: [
{
id: 'agent-node',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
typeVersion: 1,
position: [0, 0],
parameters: {}
},
{
id: 'memory-node',
name: 'Postgres Chat Memory',
type: '@n8n/n8n-nodes-langchain.memoryPostgresChat',
typeVersion: 1,
position: [200, 0],
parameters: {}
}
],
connections: {
'Postgres Chat Memory': {
ai_memory: [
[{ node: 'AI Agent', type: 'ai_memory', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
});
test('should accept workflow with ai_embedding connections', async () => {
const workflow = {
id: 'test-workflow',
name: 'AI Embedding Test',
nodes: [
{
id: 'vectorstore-node',
name: 'Vector Store',
type: '@n8n/n8n-nodes-langchain.vectorStoreSupabase',
typeVersion: 1,
position: [0, 0],
parameters: {}
},
{
id: 'embedding-node',
name: 'Embeddings OpenAI',
type: '@n8n/n8n-nodes-langchain.embeddingsOpenAi',
typeVersion: 1,
position: [200, 0],
parameters: {}
}
],
connections: {
'Embeddings OpenAI': {
ai_embedding: [
[{ node: 'Vector Store', type: 'ai_embedding', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
});
test('should accept workflow with ai_tool connections', async () => {
const workflow = {
id: 'test-workflow',
name: 'AI Tool Test',
nodes: [
{
id: 'agent-node',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
typeVersion: 1,
position: [0, 0],
parameters: {}
},
{
id: 'vectorstore-node',
name: 'Vector Store Tool',
type: '@n8n/n8n-nodes-langchain.vectorStoreSupabase',
typeVersion: 1,
position: [200, 0],
parameters: {}
}
],
connections: {
'Vector Store Tool': {
ai_tool: [
[{ node: 'AI Agent', type: 'ai_tool', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
});
test('should accept workflow with ai_vectorStore connections', async () => {
const workflow = {
id: 'test-workflow',
name: 'AI Vector Store Test',
nodes: [
{
id: 'agent-node',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
typeVersion: 1,
position: [0, 0],
parameters: {}
},
{
id: 'vectorstore-node',
name: 'Supabase Vector Store',
type: '@n8n/n8n-nodes-langchain.vectorStoreSupabase',
typeVersion: 1,
position: [200, 0],
parameters: {}
}
],
connections: {
'Supabase Vector Store': {
ai_vectorStore: [
[{ node: 'AI Agent', type: 'ai_vectorStore', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
});
});
describe('Mixed connection types', () => {
test('should accept workflow mixing main and AI connections', async () => {
const workflow = {
id: 'test-workflow',
name: 'Mixed Connections Test',
nodes: [
{
id: 'webhook-node',
name: 'Webhook',
type: 'n8n-nodes-base.webhook',
typeVersion: 1,
position: [0, 0],
parameters: {}
},
{
id: 'agent-node',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
typeVersion: 1,
position: [200, 0],
parameters: {}
},
{
id: 'llm-node',
name: 'OpenAI Chat Model',
type: '@n8n/n8n-nodes-langchain.lmChatOpenAi',
typeVersion: 1,
position: [200, 200],
parameters: {}
},
{
id: 'respond-node',
name: 'Respond to Webhook',
type: 'n8n-nodes-base.respondToWebhook',
typeVersion: 1,
position: [400, 0],
parameters: {}
}
],
connections: {
'Webhook': {
main: [
[{ node: 'AI Agent', type: 'main', index: 0 }]
]
},
'AI Agent': {
main: [
[{ node: 'Respond to Webhook', type: 'main', index: 0 }]
]
},
'OpenAI Chat Model': {
ai_languageModel: [
[{ node: 'AI Agent', type: 'ai_languageModel', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
});
test('should accept workflow with error connections alongside AI connections', async () => {
const workflow = {
id: 'test-workflow',
name: 'Error + AI Connections Test',
nodes: [
{
id: 'webhook-node',
name: 'Webhook',
type: 'n8n-nodes-base.webhook',
typeVersion: 1,
position: [0, 0],
parameters: {}
},
{
id: 'agent-node',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
typeVersion: 1,
position: [200, 0],
parameters: {}
},
{
id: 'llm-node',
name: 'OpenAI Chat Model',
type: '@n8n/n8n-nodes-langchain.lmChatOpenAi',
typeVersion: 1,
position: [200, 200],
parameters: {}
},
{
id: 'error-handler',
name: 'Error Handler',
type: 'n8n-nodes-base.set',
typeVersion: 1,
position: [200, -200],
parameters: {}
}
],
connections: {
'Webhook': {
main: [
[{ node: 'AI Agent', type: 'main', index: 0 }]
]
},
'AI Agent': {
error: [
[{ node: 'Error Handler', type: 'main', index: 0 }]
]
},
'OpenAI Chat Model': {
ai_languageModel: [
[{ node: 'AI Agent', type: 'ai_languageModel', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
});
});
describe('Complex AI workflow (Issue #357 scenario)', () => {
test('should accept full AI agent workflow with RAG components', async () => {
// Simplified version of the workflow from issue #357
const workflow = {
id: 'test-workflow',
name: 'AI Agent with RAG',
nodes: [
{
id: 'webhook-node',
name: 'Webhook',
type: 'n8n-nodes-base.webhook',
typeVersion: 2,
position: [0, 0],
parameters: {}
},
{
id: 'code-node',
name: 'Prepare Inputs',
type: 'n8n-nodes-base.code',
typeVersion: 2,
position: [200, 0],
parameters: {}
},
{
id: 'agent-node',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
typeVersion: 1.7,
position: [400, 0],
parameters: {}
},
{
id: 'llm-node',
name: 'OpenAI Chat Model',
type: '@n8n/n8n-nodes-langchain.lmChatOpenAi',
typeVersion: 1,
position: [400, 200],
parameters: {}
},
{
id: 'memory-node',
name: 'Postgres Chat Memory',
type: '@n8n/n8n-nodes-langchain.memoryPostgresChat',
typeVersion: 1.1,
position: [500, 200],
parameters: {}
},
{
id: 'embedding-node',
name: 'Embeddings OpenAI',
type: '@n8n/n8n-nodes-langchain.embeddingsOpenAi',
typeVersion: 1,
position: [600, 400],
parameters: {}
},
{
id: 'vectorstore-node',
name: 'Supabase Vector Store',
type: '@n8n/n8n-nodes-langchain.vectorStoreSupabase',
typeVersion: 1.3,
position: [600, 200],
parameters: {}
},
{
id: 'respond-node',
name: 'Respond to Webhook',
type: 'n8n-nodes-base.respondToWebhook',
typeVersion: 1.1,
position: [600, 0],
parameters: {}
}
],
connections: {
'Webhook': {
main: [
[{ node: 'Prepare Inputs', type: 'main', index: 0 }]
]
},
'Prepare Inputs': {
main: [
[{ node: 'AI Agent', type: 'main', index: 0 }]
]
},
'AI Agent': {
main: [
[{ node: 'Respond to Webhook', type: 'main', index: 0 }]
]
},
'OpenAI Chat Model': {
ai_languageModel: [
[{ node: 'AI Agent', type: 'ai_languageModel', index: 0 }]
]
},
'Postgres Chat Memory': {
ai_memory: [
[{ node: 'AI Agent', type: 'ai_memory', index: 0 }]
]
},
'Embeddings OpenAI': {
ai_embedding: [
[{ node: 'Supabase Vector Store', type: 'ai_embedding', index: 0 }]
]
},
'Supabase Vector Store': {
ai_tool: [
[{ node: 'AI Agent', type: 'ai_tool', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
expect(result.errors || []).toHaveLength(0);
});
test('should successfully update AI workflow nodes without connection errors', async () => {
// Test that we can update nodes in an AI workflow without triggering validation errors
const workflow = {
id: 'test-workflow',
name: 'AI Workflow Update Test',
nodes: [
{
id: 'webhook-node',
name: 'Webhook',
type: 'n8n-nodes-base.webhook',
typeVersion: 2,
position: [0, 0],
parameters: { path: 'test' }
},
{
id: 'agent-node',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
typeVersion: 1,
position: [200, 0],
parameters: {}
},
{
id: 'llm-node',
name: 'OpenAI Chat Model',
type: '@n8n/n8n-nodes-langchain.lmChatOpenAi',
typeVersion: 1,
position: [200, 200],
parameters: {}
}
],
connections: {
'Webhook': {
main: [
[{ node: 'AI Agent', type: 'main', index: 0 }]
]
},
'OpenAI Chat Model': {
ai_languageModel: [
[{ node: 'AI Agent', type: 'ai_languageModel', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
// Update the webhook node (unrelated to AI nodes)
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: [
{
type: 'updateNode',
nodeId: 'webhook-node',
updates: {
notes: 'Updated webhook configuration'
}
}
]
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
expect(result.errors || []).toHaveLength(0);
// Verify the update was applied
const updatedNode = result.workflow.nodes.find((n: any) => n.id === 'webhook-node');
expect(updatedNode?.notes).toBe('Updated webhook configuration');
});
});
describe('Node-only AI nodes (no main connections)', () => {
test('should accept AI nodes with ONLY ai_languageModel connections', async () => {
const workflow = {
id: 'test-workflow',
name: 'AI Node Without Main',
nodes: [
{
id: 'agent-node',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
typeVersion: 1,
position: [0, 0],
parameters: {}
},
{
id: 'llm-node',
name: 'OpenAI Chat Model',
type: '@n8n/n8n-nodes-langchain.lmChatOpenAi',
typeVersion: 1,
position: [200, 0],
parameters: {}
}
],
connections: {
// OpenAI Chat Model has NO main connections, ONLY ai_languageModel
'OpenAI Chat Model': {
ai_languageModel: [
[{ node: 'AI Agent', type: 'ai_languageModel', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
expect(result.errors || []).toHaveLength(0);
});
test('should accept AI nodes with ONLY ai_memory connections', async () => {
const workflow = {
id: 'test-workflow',
name: 'Memory Node Without Main',
nodes: [
{
id: 'agent-node',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
typeVersion: 1,
position: [0, 0],
parameters: {}
},
{
id: 'memory-node',
name: 'Postgres Chat Memory',
type: '@n8n/n8n-nodes-langchain.memoryPostgresChat',
typeVersion: 1,
position: [200, 0],
parameters: {}
}
],
connections: {
// Memory node has NO main connections, ONLY ai_memory
'Postgres Chat Memory': {
ai_memory: [
[{ node: 'AI Agent', type: 'ai_memory', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
expect(result.errors || []).toHaveLength(0);
});
test('should accept embedding nodes with ONLY ai_embedding connections', async () => {
const workflow = {
id: 'test-workflow',
name: 'Embedding Node Without Main',
nodes: [
{
id: 'vectorstore-node',
name: 'Vector Store',
type: '@n8n/n8n-nodes-langchain.vectorStoreSupabase',
typeVersion: 1,
position: [0, 0],
parameters: {}
},
{
id: 'embedding-node',
name: 'Embeddings OpenAI',
type: '@n8n/n8n-nodes-langchain.embeddingsOpenAi',
typeVersion: 1,
position: [200, 0],
parameters: {}
}
],
connections: {
// Embedding node has NO main connections, ONLY ai_embedding
'Embeddings OpenAI': {
ai_embedding: [
[{ node: 'Vector Store', type: 'ai_embedding', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
expect(result.errors || []).toHaveLength(0);
});
test('should accept vector store nodes with ONLY ai_tool connections', async () => {
const workflow = {
id: 'test-workflow',
name: 'Vector Store Node Without Main',
nodes: [
{
id: 'agent-node',
name: 'AI Agent',
type: '@n8n/n8n-nodes-langchain.agent',
typeVersion: 1,
position: [0, 0],
parameters: {}
},
{
id: 'vectorstore-node',
name: 'Supabase Vector Store',
type: '@n8n/n8n-nodes-langchain.vectorStoreSupabase',
typeVersion: 1,
position: [200, 0],
parameters: {}
}
],
connections: {
// Vector store has NO main connections, ONLY ai_tool
'Supabase Vector Store': {
ai_tool: [
[{ node: 'AI Agent', type: 'ai_tool', index: 0 }]
]
}
}
};
const engine = new WorkflowDiffEngine();
const result = await engine.applyDiff(workflow as any, {
id: workflow.id,
operations: []
});
expect(result.success).toBe(true);
expect(result.workflow).toBeDefined();
expect(result.errors || []).toHaveLength(0);
});
});
});