Mirror of https://github.com/czlonkowski/n8n-mcp.git (synced 2026-03-18 08:23:07 +00:00)

Compare commits: v2.28.0...fix/config (8 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | c20bd540cb |  |
|  | e344a82a0e |  |
|  | fd742d551e |  |
|  | 527e9874ab |  |
|  | ef9b6f6341 |  |
|  | f65514381f |  |
|  | 3cbb02650b |  |
|  | 3188d209b7 |  |
.github/workflows/docker-build-n8n.yml (vendored, 3 changes)

@@ -52,6 +52,9 @@ jobs:
       - name: Checkout repository
         uses: actions/checkout@v4

+      - name: Set up QEMU
+        uses: docker/setup-qemu-action@v3
+
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3
CHANGELOG.md (149 changes)
@@ -7,6 +7,153 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

## [2.28.4] - 2025-12-05

### Features

**Configurable MAX_SESSIONS Limit (#468)**

The `MAX_SESSIONS` limit is now configurable via the `N8N_MCP_MAX_SESSIONS` environment variable, addressing scalability issues for multi-tenant SaaS deployments.

- **Problem**: The hardcoded limit of 100 concurrent sessions caused "Session limit reached" errors during peak usage
- **Solution**: `MAX_SESSIONS` now reads from the `N8N_MCP_MAX_SESSIONS` env var (default: 100)
- **Usage**: Set `N8N_MCP_MAX_SESSIONS=1000` for higher-capacity deployments
- **Safety**: Includes a `Math.max(1, ...)` floor to prevent invalid configurations
- **Files**: `src/http-server-single-session.ts:44`

```bash
# Example: Allow up to 1000 concurrent sessions
N8N_MCP_MAX_SESSIONS=1000
```
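The parsing described above can be sketched as a small helper. The name `resolveMaxSessions` is hypothetical (the shipped code reads `process.env` inline), and this sketch adds an explicit NaN guard for illustration on top of the documented `Math.max(1, ...)` floor:

```typescript
// Resolve the session limit from an env-like map: default 100, floor at 1.
// `resolveMaxSessions` is an illustrative name, not the project's actual API.
function resolveMaxSessions(env: Record<string, string | undefined>): number {
  const parsed = parseInt(env.N8N_MCP_MAX_SESSIONS ?? '100', 10);
  // Guard against non-numeric input, then apply the floor of 1.
  return Math.max(1, Number.isNaN(parsed) ? 100 : parsed);
}

console.assert(resolveMaxSessions({ N8N_MCP_MAX_SESSIONS: '1000' }) === 1000);
console.assert(resolveMaxSessions({}) === 100);
console.assert(resolveMaxSessions({ N8N_MCP_MAX_SESSIONS: '0' }) === 1);
```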
## [2.28.3] - 2025-12-02

### Changed

**Dependencies**
- Updated n8n from 1.121.2 to 1.122.4
- Updated n8n-core from 1.120.1 to 1.121.1
- Updated n8n-workflow from 1.118.1 to 1.119.1
- Updated @n8n/n8n-nodes-langchain from 1.120.1 to 1.121.1
- Rebuilt node database with 544 nodes (438 from n8n-nodes-base, 106 from @n8n/n8n-nodes-langchain)

### Removed

**Templates**
- Removed 7 templates from creator "ludwig" at the author's request
- Template IDs: 2795, 2816, 2825, 2850, 2869, 2939, 3847

**Conceived by Romuald Członkowski - [AiAdvisors](https://www.aiadvisors.pl/en)**

## [2.28.2] - 2025-12-01

### Bug Fixes

**n8n_test_workflow: webhookId Resolution**

Fixed a critical bug where trigger handlers used `node.id` instead of `node.webhookId` when building webhook URLs. This caused chat/form/webhook triggers to fail with 404 errors when nodes had custom IDs.

- **Root Cause**: `extractWebhookPath()` in `trigger-detector.ts` fell back to `node.id` instead of checking `node.webhookId` first
- **Fix**: Added `webhookId` to the `WorkflowNode` type and updated the priority: `params.path` > `webhookId` > `node.id`
- **Files**: `src/triggers/trigger-detector.ts`, `src/types/n8n-api.ts`
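A minimal sketch of the fixed priority order, with the node shape reduced to the three fields involved (`resolveWebhookPath` is an illustrative stand-in for the real `extractWebhookPath`):

```typescript
// Reduced node shape: only the fields that participate in path resolution.
interface NodeLike {
  id: string;
  webhookId?: string;
  parameters?: { path?: string };
}

// Fixed priority: params.path > webhookId > node.id.
function resolveWebhookPath(node: NodeLike): string {
  if (node.parameters?.path) return node.parameters.path;
  if (node.webhookId) return node.webhookId; // the step that was previously skipped
  return node.id;
}

console.assert(resolveWebhookPath({ id: 'n1', webhookId: 'wh-1' }) === 'wh-1');
console.assert(resolveWebhookPath({ id: 'n1' }) === 'n1');
console.assert(resolveWebhookPath({ id: 'n1', webhookId: 'wh-1', parameters: { path: 'custom' } }) === 'custom');
```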
**n8n_test_workflow: Chat Trigger URL Pattern**

Fixed chat triggers using the wrong URL pattern. n8n chat triggers require a `/webhook/<id>/chat` suffix.

- **Root Cause**: `buildTriggerUrl()` used the same pattern for webhooks and chat triggers
- **Fix**: Chat triggers now correctly use the `/webhook/<webhookId>/chat` endpoint
- **Files**: `src/triggers/trigger-detector.ts:284-289`
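The corrected URL construction can be sketched as follows (this simplified `buildTriggerUrl` is an illustrative stand-in for the real one; the base URL and IDs are made-up values):

```typescript
// Chat triggers get a `/chat` suffix; plain webhooks do not.
function buildTriggerUrl(baseUrl: string, webhookId: string, type: 'webhook' | 'chat'): string {
  const root = baseUrl.replace(/\/+$/, ''); // strip trailing slashes
  return type === 'chat'
    ? `${root}/webhook/${webhookId}/chat`
    : `${root}/webhook/${webhookId}`;
}

console.assert(buildTriggerUrl('https://n8n.local/', 'abc', 'chat') === 'https://n8n.local/webhook/abc/chat');
console.assert(buildTriggerUrl('https://n8n.local', 'abc', 'webhook') === 'https://n8n.local/webhook/abc');
```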
**n8n_test_workflow: Form Trigger Content-Type**

Fixed form triggers failing with an "Expected multipart/form-data" error.

- **Root Cause**: The form handler sent `application/json`, but n8n requires `multipart/form-data`
- **Fix**: Switched to the `form-data` library for proper multipart encoding
- **Files**: `src/triggers/handlers/form-handler.ts`
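To see why the content type matters, here is a hand-rolled illustration of the multipart wire format that n8n expects. The actual fix uses the `form-data` npm package rather than building bodies by hand; `buildMultipart` and the boundary value are purely illustrative:

```typescript
// Build a minimal multipart/form-data body for text fields only.
// Real code should use the form-data package, which also sets the
// matching `Content-Type: multipart/form-data; boundary=...` header.
function buildMultipart(fields: Record<string, string>, boundary: string): string {
  const parts = Object.entries(fields).map(
    ([name, value]) =>
      `--${boundary}\r\nContent-Disposition: form-data; name="${name}"\r\n\r\n${value}\r\n`
  );
  return parts.join('') + `--${boundary}--\r\n`;
}

const body = buildMultipart({ 'field-0': 'hello', 'field-1[]': 'checkbox1' }, 'XBOUNDARY');
console.assert(body.includes('name="field-0"'));
console.assert(body.endsWith('--XBOUNDARY--\r\n'));
```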
### Enhancements

**Form Handler: Complete Field Type Support**

Enhanced the form handler to support all n8n form field types with intelligent handling:

- **Supported Types**: text, textarea, email, number, password, date, dropdown, checkbox, file, hidden, html
- **Checkbox Arrays**: Automatically converts arrays to the `field[]` format required by n8n
- **File Uploads**: Supports base64 content, or sends an empty placeholder for required files
- **Helpful Warnings**: Reports missing required fields with field names and labels
- **Error Hints**: On failure, provides the complete field structure with usage examples

```javascript
// Example with all field types
n8n_test_workflow({
  workflowId: "abc123",
  data: {
    "field-0": "text value",
    "field-1": ["checkbox1", "checkbox2"], // Array for checkboxes
    "field-2": "dropdown_option",
    "field-3": "2025-01-15", // Date format
    "field-4": "user@example.com",
    "field-5": 42, // Number
    "field-6": "password123"
  }
})
```

**Conceived by Romuald Członkowski - [AiAdvisors](https://www.aiadvisors.pl/en)**
## [2.28.1] - 2025-12-01

### 🐛 Bug Fixes

**Issue #458: AI Connection Type Propagation**

Fixed the `addConnection` operation in the workflow diff engine defaulting `targetInput` to "main" instead of preserving the source output type. This caused AI tool connections to be created with the wrong type.

- **Root Cause**: `targetInput` defaulted to `'main'` regardless of the `sourceOutput` type
- **Fix**: Changed the default to `sourceOutput` to preserve the connection type (ai_tool, ai_memory, ai_languageModel)
- **Files**: `src/services/workflow-diff-engine.ts:760`
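The one-line default change can be illustrated in isolation (`resolveTargetInput` and the operation shape are simplified stand-ins for the diff engine's internals):

```typescript
// Reduced shape of an addConnection operation.
interface AddConnectionOp {
  sourceOutput?: string;
  targetInput?: string;
}

function resolveTargetInput(op: AddConnectionOp): string {
  const sourceOutput = op.sourceOutput ?? 'main';
  // Before the fix this was: op.targetInput ?? 'main',
  // which silently downgraded ai_tool/ai_memory connections.
  return op.targetInput ?? sourceOutput;
}

console.assert(resolveTargetInput({ sourceOutput: 'ai_tool' }) === 'ai_tool');
console.assert(resolveTargetInput({}) === 'main');
console.assert(resolveTargetInput({ sourceOutput: 'ai_memory', targetInput: 'main' }) === 'main');
```

An explicit `targetInput` still wins, so only the default behavior changed.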
**AI Agent Validation False Positive**

Fixed a false-positive "AI Agent has no tools connected" warning when tools were properly connected.

- **Root Cause**: Validation checked connections FROM the agent instead of TO the agent
- **Fix**: Search all connections where the target node is the agent
- **Files**: `src/services/workflow-validator.ts:1148-1163`
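The corrected direction of the check can be sketched against a reduced connection shape (`agentHasTool` is an illustrative helper, not the validator's actual API):

```typescript
// Reduced shape of n8n workflow connections: source node name ->
// per-output-type arrays of target descriptors.
type Connections = Record<string, { ai_tool?: { node: string }[][] }>;

// Tools connect TO the agent, so scan every source node's ai_tool
// outputs for one whose target is the agent.
function agentHasTool(connections: Connections, agentName: string): boolean {
  return Object.values(connections).some(outputs =>
    (outputs.ai_tool ?? []).flat().some(conn => conn && conn.node === agentName)
  );
}

const wf: Connections = {
  'Calculator Tool': { ai_tool: [[{ node: 'AI Agent' }]] },
};
console.assert(agentHasTool(wf, 'AI Agent') === true);
console.assert(agentHasTool(wf, 'Other Agent') === false);
```

The buggy version looked up `connections['AI Agent'].ai_tool`, which is empty for agents because tools are sources, not targets.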
### ✨ Enhancements

**get_node: expectedFormat for resourceLocator Properties**

Added an `expectedFormat` field to resourceLocator properties in `get_node` output. This helps AI models understand the correct format for these complex property types.

```json
{
  "name": "model",
  "type": "resourceLocator",
  "expectedFormat": {
    "structure": { "mode": "string", "value": "string" },
    "modes": ["list", "id"],
    "example": { "mode": "id", "value": "gpt-4o-mini" }
  }
}
```

**get_node: versionNotice Field**

Added a `versionNotice` field to make typeVersion more prominent in `get_node` output, reducing the chance of AI models using outdated versions.

```json
{
  "version": "1.3",
  "versionNotice": "⚠️ Use typeVersion: 1.3 when creating this node"
}
```

**Conceived by Romuald Członkowski - [AiAdvisors](https://www.aiadvisors.pl/en)**

## [2.28.0] - 2025-12-01

### ✨ Features
@@ -468,7 +615,7 @@ Added export/restore functionality for MCP sessions to enable zero-downtime depl
 - `restoreSessionState(sessions)` method for session recovery
 - Validates session structure using existing `validateInstanceContext()`
 - Handles null/invalid sessions gracefully with warnings
-- Enforces MAX_SESSIONS limit (100 concurrent sessions)
+- Enforces MAX_SESSIONS limit (default 100, configurable via N8N_MCP_MAX_SESSIONS env var)
 - Skips expired sessions during restore

 **3. SessionState Type**
@@ -209,7 +209,7 @@ The MCP server exposes tools in several categories:
 - **Security-first**: API keys exported as plaintext - downstream MUST encrypt
 - **Dormant sessions**: Restored sessions recreate transports on first request
 - **Automatic expiration**: Respects `sessionTimeout` setting (default 30 min)
-- **MAX_SESSIONS limit**: Caps at 100 concurrent sessions
+- **MAX_SESSIONS limit**: Caps at 100 concurrent sessions (configurable via N8N_MCP_MAX_SESSIONS env var)

 **Important Implementation Notes:**
 - Only exports sessions with valid n8nApiUrl and n8nApiKey in context
@@ -5,7 +5,7 @@
 [](https://www.npmjs.com/package/n8n-mcp)
 [](https://codecov.io/gh/czlonkowski/n8n-mcp)
 [](https://github.com/czlonkowski/n8n-mcp/actions)
-[](https://github.com/n8n-io/n8n)
+[](https://github.com/n8n-io/n8n)
 [](https://github.com/czlonkowski/n8n-mcp/pkgs/container/n8n-mcp)
 [](https://railway.com/deploy/n8n-mcp?referralCode=n8n-mcp)
data/nodes.db (binary file not shown)
@@ -558,7 +558,7 @@ DISABLE_CONSOLE_OUTPUT=false

 # Optional: Session configuration
 SESSION_TIMEOUT=1800000  # 30 minutes in milliseconds
-MAX_SESSIONS=100
+N8N_MCP_MAX_SESSIONS=100  # Maximum concurrent sessions (default: 100)

 # Optional: Performance
 NODE_ENV=production
@@ -93,7 +93,7 @@ console.log(`Restored ${count} sessions`);
 - Validates session metadata (timestamps, required fields)
 - Skips expired sessions (age > sessionTimeout)
 - Skips duplicate sessions (idempotent)
-- Respects MAX_SESSIONS limit (100 per container)
+- Respects MAX_SESSIONS limit (default 100, configurable via N8N_MCP_MAX_SESSIONS env var)
 - Recreates transports/servers lazily on first request
 - Logs security events for restore success/failure
@@ -595,19 +595,19 @@ console.log(`Export size: ${sizeKB.toFixed(2)} KB`);

 ### MAX_SESSIONS Limit

-Hard limit: 100 sessions per container
+Default limit: 100 sessions per container (configurable via `N8N_MCP_MAX_SESSIONS` env var)

 ```typescript
 // Restore respects limit
 const sessions = createSessions(150); // 150 sessions
 const restored = engine.restoreSessionState(sessions);
-// restored = 100 (only first 100 restored)
+// restored = 100 (only first 100 restored, or N8N_MCP_MAX_SESSIONS value)
 ```

-For >100 sessions per tenant:
-- Deploy multiple containers
-- Use session routing/sharding
-- Implement session affinity
+For higher session limits:
+- Set `N8N_MCP_MAX_SESSIONS=1000` (or desired limit)
+- Monitor memory usage as sessions consume resources
+- Alternatively, deploy multiple containers with session routing/sharding

 ## Troubleshooting

@@ -676,10 +676,11 @@ Reached MAX_SESSIONS limit (100), skipping remaining sessions

 **Solutions:**

-1. Scale horizontally (more containers)
-2. Implement session sharding
-3. Reduce sessionTimeout
-4. Clean up inactive sessions
+1. Increase limit: Set `N8N_MCP_MAX_SESSIONS=1000` (or desired value)
+2. Scale horizontally (more containers)
+3. Implement session sharding
+4. Reduce sessionTimeout
+5. Clean up inactive sessions

 ```typescript
 // Pre-filter by activity
 ```
package-lock.json (generated, 10054 changes): file diff suppressed because it is too large
package.json (11 changes)

@@ -1,6 +1,6 @@
 {
   "name": "n8n-mcp",
-  "version": "2.28.0",
+  "version": "2.28.4",
   "description": "Integration between n8n workflow automation and Model Context Protocol (MCP)",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",

@@ -141,15 +141,16 @@
   },
   "dependencies": {
     "@modelcontextprotocol/sdk": "1.20.1",
-    "@n8n/n8n-nodes-langchain": "^1.120.1",
+    "@n8n/n8n-nodes-langchain": "^1.121.1",
     "@supabase/supabase-js": "^2.57.4",
     "dotenv": "^16.5.0",
     "express": "^5.1.0",
     "express-rate-limit": "^7.1.5",
+    "form-data": "^4.0.5",
     "lru-cache": "^11.2.1",
-    "n8n": "^1.121.2",
-    "n8n-core": "^1.120.1",
-    "n8n-workflow": "^1.118.1",
+    "n8n": "^1.122.4",
+    "n8n-core": "^1.121.1",
+    "n8n-workflow": "^1.119.1",
     "openai": "^4.77.0",
     "sql.js": "^1.13.0",
     "tslib": "^2.6.2",
@@ -41,7 +41,7 @@ interface MultiTenantHeaders {
 }

 // Session management constants
-const MAX_SESSIONS = 100;
+const MAX_SESSIONS = Math.max(1, parseInt(process.env.N8N_MCP_MAX_SESSIONS || '100', 10));
 const SESSION_CLEANUP_INTERVAL = 5 * 60 * 1000; // 5 minutes

 interface Session {
@@ -2203,14 +2203,19 @@ Full documentation is being prepared. For now, use get_node_essentials for confi
     // Get operations (already parsed by repository)
     const operations = node.operations || [];

+    // Get the latest version - this is important for AI to use correct typeVersion
+    const latestVersion = node.version ?? '1';
+
     const result = {
       nodeType: node.nodeType,
       workflowNodeType: getWorkflowNodeType(node.package ?? 'n8n-nodes-base', node.nodeType),
       displayName: node.displayName,
       description: node.description,
       category: node.category,
-      version: node.version ?? '1',
+      version: latestVersion,
       isVersioned: node.isVersioned ?? false,
+      // Prominent warning to use the correct typeVersion
+      versionNotice: `⚠️ Use typeVersion: ${latestVersion} when creating this node`,
       requiredProperties: essentials.required,
       commonProperties: essentials.common,
       operations: operations.map((op: any) => ({
@@ -16,6 +16,11 @@ export interface SimplifiedProperty {
   placeholder?: string;
   showWhen?: Record<string, any>;
   usageHint?: string;
+  expectedFormat?: {
+    structure: Record<string, string>;
+    modes?: string[];
+    example: Record<string, any>;
+  };
 }

 export interface EssentialConfig {
@@ -322,7 +327,18 @@ export class PropertyFilter {
       };
     });
   }

+  // Add expectedFormat for resourceLocator types - critical for correct configuration
+  if (prop.type === 'resourceLocator') {
+    const modes = prop.modes?.map((m: any) => m.name || m) || ['list', 'id'];
+    const defaultValue = prop.default?.value || 'your-resource-id';
+    simplified.expectedFormat = {
+      structure: { mode: 'string', value: 'string' },
+      modes,
+      example: { mode: 'id', value: defaultValue }
+    };
+  }
+
   // Include simple display conditions (max 2 conditions)
   if (prop.displayOptions?.show) {
     const conditions = Object.keys(prop.displayOptions.show);
@@ -757,7 +757,8 @@ export class WorkflowDiffEngine {
     const { sourceOutput, sourceIndex } = this.resolveSmartParameters(workflow, operation);

-    // Use nullish coalescing to properly handle explicit 0 values
-    const targetInput = operation.targetInput ?? 'main';
+    // Default targetInput to sourceOutput to preserve connection type for AI connections (ai_tool, ai_memory, etc.)
+    const targetInput = operation.targetInput ?? sourceOutput;
     const targetIndex = operation.targetIndex ?? 0;

     // Initialize source node connections object
@@ -1137,16 +1137,23 @@ export class WorkflowValidator {
     }

     // Check for AI Agent workflows
     const aiAgentNodes = workflow.nodes.filter(n =>
       n.type.toLowerCase().includes('agent') ||
       n.type.includes('langchain.agent')
     );

     if (aiAgentNodes.length > 0) {
       // Check if AI agents have tools connected
+      // Tools connect TO the agent, so we need to find connections where the target is the agent
       for (const agentNode of aiAgentNodes) {
-        const connections = workflow.connections[agentNode.name];
-        if (!connections?.ai_tool || connections.ai_tool.flat().filter(c => c).length === 0) {
+        // Search all connections to find ones targeting this agent via ai_tool
+        const hasToolConnected = Object.values(workflow.connections).some(sourceOutputs => {
+          const aiToolConnections = sourceOutputs.ai_tool;
+          if (!aiToolConnections) return false;
+          return aiToolConnections.flat().some(conn => conn && conn.node === agentNode.name);
+        });
+
+        if (!hasToolConnected) {
           result.warnings.push({
             type: 'warning',
             nodeId: agentNode.id,
@@ -2,14 +2,15 @@
  * Form trigger handler
  *
  * Handles form-based workflow triggers:
- * - POST to /form/<workflowId> or /form-test/<workflowId>
- * - Passes form fields as request body
+ * - POST to /form/<webhookId> with multipart/form-data
+ * - Supports all n8n form field types: text, textarea, email, number, password, date, dropdown, checkbox, file, hidden
  * - Workflow must be active (for production endpoint)
  */

 import { z } from 'zod';
 import axios, { AxiosRequestConfig } from 'axios';
-import { Workflow, WebhookRequest } from '../../types/n8n-api';
+import FormData from 'form-data';
+import { Workflow, WorkflowNode } from '../../types/n8n-api';
 import {
   TriggerType,
   TriggerResponse,
@@ -32,6 +33,188 @@ const formInputSchema = z.object({
   waitForResponse: z.boolean().optional(),
 });

+/**
+ * Form field types supported by n8n
+ */
+const FORM_FIELD_TYPES = {
+  TEXT: 'text',
+  TEXTAREA: 'textarea',
+  EMAIL: 'email',
+  NUMBER: 'number',
+  PASSWORD: 'password',
+  DATE: 'date',
+  DROPDOWN: 'dropdown',
+  CHECKBOX: 'checkbox',
+  FILE: 'file',
+  HIDDEN: 'hiddenField',
+  HTML: 'html',
+} as const;
+
+/**
+ * Maximum file size for base64 uploads (10MB)
+ */
+const MAX_FILE_SIZE_BYTES = 10 * 1024 * 1024;
+
+/**
+ * n8n form field option structure
+ */
+interface FormFieldOption {
+  option: string;
+}
+
+/**
+ * n8n form field value structure from workflow parameters
+ */
+interface FormFieldValue {
+  fieldType?: string;
+  fieldLabel?: string;
+  fieldName?: string;
+  elementName?: string;
+  requiredField?: boolean;
+  fieldOptions?: {
+    values?: FormFieldOption[];
+  };
+}
+
+/**
+ * Form field definition extracted from workflow
+ */
+interface FormFieldDef {
+  index: number;
+  fieldName: string; // field-0, field-1, etc.
+  label: string;
+  type: string;
+  required: boolean;
+  options?: string[]; // For dropdown/checkbox
+}
+
+/**
+ * Check if a string is valid base64
+ */
+function isValidBase64(str: string): boolean {
+  if (!str || str.length === 0) {
+    return false;
+  }
+  // Check for valid base64 characters and proper padding
+  const base64Regex = /^[A-Za-z0-9+/]*={0,2}$/;
+  if (!base64Regex.test(str)) {
+    return false;
+  }
+  try {
+    // Verify round-trip encoding
+    const decoded = Buffer.from(str, 'base64');
+    return decoded.toString('base64') === str;
+  } catch {
+    return false;
+  }
+}
+
+/**
+ * Extract form field definitions from workflow
+ */
+function extractFormFields(workflow: Workflow, triggerNode?: WorkflowNode): FormFieldDef[] {
+  const node = triggerNode || workflow.nodes.find(n =>
+    n.type.toLowerCase().includes('formtrigger')
+  );
+
+  const params = node?.parameters as Record<string, unknown> | undefined;
+  const formFields = params?.formFields as { values?: unknown[] } | undefined;
+
+  if (!formFields?.values) {
+    return [];
+  }
+
+  const fields: FormFieldDef[] = [];
+  let fieldIndex = 0;
+
+  for (const rawField of formFields.values) {
+    const field = rawField as FormFieldValue;
+    const fieldType = field.fieldType || FORM_FIELD_TYPES.TEXT;
+
+    // HTML fields are rendered as hidden inputs but are display-only
+    // They still get a field index
+    const def: FormFieldDef = {
+      index: fieldIndex,
+      fieldName: `field-${fieldIndex}`,
+      label: field.fieldLabel || field.fieldName || field.elementName || `field-${fieldIndex}`,
+      type: fieldType,
+      required: field.requiredField === true,
+    };
+
+    // Extract options for dropdown/checkbox
+    if (field.fieldOptions?.values) {
+      def.options = field.fieldOptions.values.map((v: FormFieldOption) => v.option);
+    }
+
+    fields.push(def);
+    fieldIndex++;
+  }
+
+  return fields;
+}
+
+/**
+ * Generate helpful usage hint for form fields
+ */
+function generateFormUsageHint(fields: FormFieldDef[]): string {
+  if (fields.length === 0) {
+    return 'No form fields detected in workflow.';
+  }
+
+  const lines: string[] = ['Form fields (use these keys in data parameter):'];
+
+  for (const field of fields) {
+    let hint = `  "${field.fieldName}": `;
+
+    switch (field.type) {
+      case FORM_FIELD_TYPES.CHECKBOX:
+        hint += `["${field.options?.[0] || 'option1'}", ...]`;
+        if (field.options) {
+          hint += ` (options: ${field.options.join(', ')})`;
+        }
+        break;
+      case FORM_FIELD_TYPES.DROPDOWN:
+        hint += `"${field.options?.[0] || 'value'}"`;
+        if (field.options) {
+          hint += ` (options: ${field.options.join(', ')})`;
+        }
+        break;
+      case FORM_FIELD_TYPES.DATE:
+        hint += '"YYYY-MM-DD"';
+        break;
+      case FORM_FIELD_TYPES.EMAIL:
+        hint += '"user@example.com"';
+        break;
+      case FORM_FIELD_TYPES.NUMBER:
+        hint += '123';
+        break;
+      case FORM_FIELD_TYPES.FILE:
+        hint += '{ filename: "test.txt", content: "base64..." } or skip (sends empty file)';
+        break;
+      case FORM_FIELD_TYPES.PASSWORD:
+        hint += '"secret"';
+        break;
+      case FORM_FIELD_TYPES.TEXTAREA:
+        hint += '"multi-line text..."';
+        break;
+      case FORM_FIELD_TYPES.HTML:
+        hint += '"" (display-only, can be omitted)';
+        break;
+      case FORM_FIELD_TYPES.HIDDEN:
+        hint += '"value" (hidden field)';
+        break;
+      default:
+        hint += '"text value"';
+    }
+
+    hint += field.required ? ' [REQUIRED]' : '';
+    hint += ` // ${field.label}`;
+    lines.push(hint);
+  }
+
+  return lines.join('\n');
+}
+
+/**
+ * Form trigger handler
+ */
@@ -52,20 +235,27 @@ export class FormHandler extends BaseTriggerHandler<FormTriggerInput> {
   ): Promise<TriggerResponse> {
     const startTime = Date.now();

+    // Extract form field definitions for helpful error messages
+    const formFieldDefs = extractFormFields(workflow, triggerInfo?.node);
+
     try {
       // Build form URL
       const baseUrl = this.getBaseUrl();
       if (!baseUrl) {
-        return this.errorResponse(input, 'Cannot determine n8n base URL', startTime);
+        return this.errorResponse(input, 'Cannot determine n8n base URL', startTime, {
+          details: {
+            formFields: formFieldDefs,
+            hint: generateFormUsageHint(formFieldDefs),
+          },
+        });
       }

-      // Form triggers use /form/<path> endpoint
-      // The path can be from trigger info or workflow ID
-      const formPath = triggerInfo?.node?.parameters?.path || input.workflowId;
+      // Form triggers use /form/<webhookId> endpoint
+      const formPath = triggerInfo?.webhookPath || triggerInfo?.node?.parameters?.path || input.workflowId;
       const formUrl = `${baseUrl.replace(/\/+$/, '')}/form/${formPath}`;

       // Merge formData and data (formData takes precedence)
-      const formFields = {
+      const inputFields = {
         ...input.data,
         ...input.formData,
       };

@@ -77,15 +267,142 @@ export class FormHandler extends BaseTriggerHandler<FormTriggerInput> {
         return this.errorResponse(input, `SSRF protection: ${validation.reason}`, startTime);
       }

+      // Build multipart/form-data (required by n8n form triggers)
+      const formData = new FormData();
+      const warnings: string[] = [];
+
+      // Process each defined form field
+      for (const fieldDef of formFieldDefs) {
+        const value = inputFields[fieldDef.fieldName];
+
+        switch (fieldDef.type) {
+          case FORM_FIELD_TYPES.CHECKBOX:
+            // Checkbox fields need array syntax with [] suffix
+            if (Array.isArray(value)) {
+              for (const item of value) {
+                formData.append(`${fieldDef.fieldName}[]`, String(item ?? ''));
+              }
+            } else if (value !== undefined && value !== null) {
+              // Single value provided, wrap in array
+              formData.append(`${fieldDef.fieldName}[]`, String(value));
+            } else if (fieldDef.required) {
+              warnings.push(`Required checkbox field "${fieldDef.fieldName}" (${fieldDef.label}) not provided`);
+            }
+            break;
+
+          case FORM_FIELD_TYPES.FILE:
+            // File fields - handle file upload or send empty placeholder
+            if (value && typeof value === 'object' && 'content' in value) {
+              // File object with content (base64 or buffer)
+              const fileObj = value as { filename?: string; content: string | Buffer };
+              let buffer: Buffer;
+
+              if (typeof fileObj.content === 'string') {
+                // Validate base64 encoding
+                if (!isValidBase64(fileObj.content)) {
+                  warnings.push(`Invalid base64 encoding for file field "${fieldDef.fieldName}" (${fieldDef.label})`);
+                  buffer = Buffer.from('');
+                } else {
+                  buffer = Buffer.from(fileObj.content, 'base64');
+                  // Check file size
+                  if (buffer.length > MAX_FILE_SIZE_BYTES) {
+                    warnings.push(`File too large for "${fieldDef.fieldName}" (${fieldDef.label}): ${Math.round(buffer.length / 1024 / 1024)}MB exceeds ${MAX_FILE_SIZE_BYTES / 1024 / 1024}MB limit`);
+                    buffer = Buffer.from('');
+                  }
+                }
+              } else {
+                buffer = fileObj.content;
+                // Check file size for Buffer input
+                if (buffer.length > MAX_FILE_SIZE_BYTES) {
+                  warnings.push(`File too large for "${fieldDef.fieldName}" (${fieldDef.label}): ${Math.round(buffer.length / 1024 / 1024)}MB exceeds ${MAX_FILE_SIZE_BYTES / 1024 / 1024}MB limit`);
+                  buffer = Buffer.from('');
+                }
+              }
+
+              formData.append(fieldDef.fieldName, buffer, {
+                filename: fileObj.filename || 'file.txt',
+                contentType: 'application/octet-stream',
+              });
+            } else if (value && typeof value === 'string') {
+              // String value - treat as base64 content
+              if (!isValidBase64(value)) {
+                warnings.push(`Invalid base64 encoding for file field "${fieldDef.fieldName}" (${fieldDef.label})`);
+                formData.append(fieldDef.fieldName, Buffer.from(''), {
+                  filename: 'empty.txt',
+                  contentType: 'text/plain',
+                });
+              } else {
+                const buffer = Buffer.from(value, 'base64');
+                if (buffer.length > MAX_FILE_SIZE_BYTES) {
+                  warnings.push(`File too large for "${fieldDef.fieldName}" (${fieldDef.label}): ${Math.round(buffer.length / 1024 / 1024)}MB exceeds ${MAX_FILE_SIZE_BYTES / 1024 / 1024}MB limit`);
+                  formData.append(fieldDef.fieldName, Buffer.from(''), {
+                    filename: 'empty.txt',
+                    contentType: 'text/plain',
+                  });
+                } else {
+                  formData.append(fieldDef.fieldName, buffer, {
+                    filename: 'file.txt',
+                    contentType: 'application/octet-stream',
+                  });
+                }
+              }
+            } else {
+              // No file provided - send empty file as placeholder
+              formData.append(fieldDef.fieldName, Buffer.from(''), {
+                filename: 'empty.txt',
+                contentType: 'text/plain',
+              });
+              if (fieldDef.required) {
+                warnings.push(`Required file field "${fieldDef.fieldName}" (${fieldDef.label}) not provided - sending empty placeholder`);
+              }
+            }
+            break;
+
+          case FORM_FIELD_TYPES.HTML:
+            // HTML is display-only, but n8n renders it as hidden input
+            // Send empty string or provided value
+            formData.append(fieldDef.fieldName, String(value ?? ''));
+            break;
+
+          case FORM_FIELD_TYPES.HIDDEN:
+            // Hidden fields
+            formData.append(fieldDef.fieldName, String(value ?? ''));
+            break;
+
+          default:
+            // Standard fields: text, textarea, email, number, password, date, dropdown
+            if (value !== undefined && value !== null) {
+              formData.append(fieldDef.fieldName, String(value));
+            } else if (fieldDef.required) {
+              warnings.push(`Required field "${fieldDef.fieldName}" (${fieldDef.label}) not provided`);
+            }
+            break;
+        }
+      }
+
+      // Also include any extra fields not in the form definition (for flexibility)
+      const definedFieldNames = new Set(formFieldDefs.map(f => f.fieldName));
+      for (const [key, value] of Object.entries(inputFields)) {
+        if (!definedFieldNames.has(key)) {
+          if (Array.isArray(value)) {
+            for (const item of value) {
+              formData.append(`${key}[]`, String(item ?? ''));
+            }
+          } else {
+            formData.append(key, String(value ?? ''));
+          }
+        }
+      }
+
       // Build request config
       const config: AxiosRequestConfig = {
         method: 'POST',
         url: formUrl,
         headers: {
-          'Content-Type': 'application/json',
+          ...formData.getHeaders(),
           ...input.headers,
         },
-        data: formFields,
+        data: formData,
         timeout: input.timeout || (input.waitForResponse !== false ? 120000 : 30000),
         validateStatus: (status) => status < 500,
       };
@@ -93,13 +410,29 @@ export class FormHandler extends BaseTriggerHandler<FormTriggerInput> {
       // Make the request
       const response = await axios.request(config);

-      return this.normalizeResponse(response.data, input, startTime, {
+      const result = this.normalizeResponse(response.data, input, startTime, {
         status: response.status,
         statusText: response.statusText,
         metadata: {
           duration: Date.now() - startTime,
         },
       });
+
+      // Add fields submitted count to details
+      result.details = {
+        ...result.details,
+        fieldsSubmitted: formFieldDefs.length,
+      };
+
+      // Add warnings if any
+      if (warnings.length > 0) {
+        result.details = {
+          ...result.details,
+          warnings,
+        };
+      }
+
+      return result;
     } catch (error) {
       const errorMessage = error instanceof Error ? error.message : 'Unknown error';

@@ -110,7 +443,17 @@ export class FormHandler extends BaseTriggerHandler<FormTriggerInput> {
       return this.errorResponse(input, errorMessage, startTime, {
         executionId,
         code: (error as any)?.code,
-        details: errorDetails,
+        details: {
+          ...errorDetails,
+          formFields: formFieldDefs.map(f => ({
+            name: f.fieldName,
+            label: f.label,
+            type: f.type,
+            required: f.required,
+            options: f.options,
+          })),
+          hint: generateFormUsageHint(formFieldDefs),
+        },
       });
     }
   }
@@ -119,7 +119,7 @@ function detectWebhookTrigger(node: WorkflowNode): DetectedTrigger | null {

   // Extract webhook path from parameters
   const params = node.parameters || {};
-  const webhookPath = extractWebhookPath(params, node.id);
+  const webhookPath = extractWebhookPath(params, node.id, node.webhookId);
   const httpMethod = extractHttpMethod(params);

   return {
@@ -148,10 +148,12 @@ function detectFormTrigger(node: WorkflowNode): DetectedTrigger | null {
   // Extract form fields from parameters
   const params = node.parameters || {};
   const formFields = extractFormFields(params);
+  const webhookPath = extractWebhookPath(params, node.id, node.webhookId);

   return {
     type: 'form',
     node,
+    webhookPath,
     formFields,
   };
 }
@@ -174,7 +176,7 @@ function detectChatTrigger(node: WorkflowNode): DetectedTrigger | null {
   // Extract chat configuration
   const params = node.parameters || {};
   const responseMode = (params.options as any)?.responseMode || 'lastNode';
-  const webhookPath = extractWebhookPath(params, node.id);
+  const webhookPath = extractWebhookPath(params, node.id, node.webhookId);

   return {
     type: 'chat',
@@ -188,8 +190,14 @@ function detectChatTrigger(node: WorkflowNode): DetectedTrigger | null {

 /**
  * Extract webhook path from node parameters
+ *
+ * Priority:
+ * 1. Explicit path parameter in node config
+ * 2. HTTP method specific path
+ * 3. webhookId on the node (n8n assigns this for all webhook-like triggers)
+ * 4. Fallback to node ID
  */
-function extractWebhookPath(params: Record<string, unknown>, nodeId: string): string {
+function extractWebhookPath(params: Record<string, unknown>, nodeId: string, webhookId?: string): string {
   // Check for explicit path parameter
   if (typeof params.path === 'string' && params.path) {
     return params.path;
@@ -203,6 +211,11 @@ function extractWebhookPath(params: Record<string, unknown>, nodeId: string): st
     }
   }

+  // Use webhookId if available (n8n assigns this for chat/form/webhook triggers)
+  if (typeof webhookId === 'string' && webhookId) {
+    return webhookId;
+  }
+
   // Default: use node ID as path (n8n default behavior)
   return nodeId;
 }
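The extractWebhookPath change above inserts the node's webhookId as a fallback between the explicit path parameter and the node-ID default. The resolution chain can be sketched as follows (simplified: the HTTP-method-specific path step from the real function is omitted here):

```typescript
// Simplified sketch of the fallback chain: explicit `path` parameter,
// then the node's webhookId, then the node ID as a last resort.
function resolveWebhookPath(
  params: Record<string, unknown>,
  nodeId: string,
  webhookId?: string,
): string {
  if (typeof params.path === 'string' && params.path) return params.path;
  if (typeof webhookId === 'string' && webhookId) return webhookId;
  return nodeId;
}

resolveWebhookPath({ path: 'my-hook' }, 'node-1', 'wh-1'); // → 'my-hook'
resolveWebhookPath({}, 'node-1', 'wh-1');                  // → 'wh-1'
resolveWebhookPath({}, 'node-1');                          // → 'node-1'
```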
@@ -262,17 +275,24 @@ export function buildTriggerUrl(
   const cleanBaseUrl = baseUrl.replace(/\/+$/, ''); // Remove trailing slashes

   switch (trigger.type) {
-    case 'webhook':
-    case 'chat': {
+    case 'webhook': {
       const prefix = mode === 'test' ? 'webhook-test' : 'webhook';
       const path = trigger.webhookPath || trigger.node.id;
       return `${cleanBaseUrl}/${prefix}/${path}`;
     }

+    case 'chat': {
+      // Chat triggers use /webhook/<webhookId>/chat endpoint
+      const prefix = mode === 'test' ? 'webhook-test' : 'webhook';
+      const path = trigger.webhookPath || trigger.node.id;
+      return `${cleanBaseUrl}/${prefix}/${path}/chat`;
+    }
+
     case 'form': {
-      // Form triggers use /form/<workflowId> endpoint
+      // Form triggers use /form/<webhookId> endpoint
       const prefix = mode === 'test' ? 'form-test' : 'form';
-      return `${cleanBaseUrl}/${prefix}/${trigger.node.id}`;
+      const path = trigger.webhookPath || trigger.node.id;
+      return `${cleanBaseUrl}/${prefix}/${path}`;
     }

     default:

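The buildTriggerUrl hunk above splits the previously combined webhook/chat case so that chat triggers get a `/chat` suffix, and makes form triggers use the resolved webhook path instead of the raw node ID. A condensed sketch of the resulting URL shapes (the host and path values are illustrative):

```typescript
// Condensed sketch of the URL shapes produced after this change.
type TriggerKind = 'webhook' | 'chat' | 'form';

function triggerUrl(base: string, kind: TriggerKind, path: string, mode: 'test' | 'production'): string {
  const clean = base.replace(/\/+$/, ''); // strip trailing slashes
  if (kind === 'form') {
    return `${clean}/${mode === 'test' ? 'form-test' : 'form'}/${path}`;
  }
  const prefix = mode === 'test' ? 'webhook-test' : 'webhook';
  return kind === 'chat' ? `${clean}/${prefix}/${path}/chat` : `${clean}/${prefix}/${path}`;
}

triggerUrl('https://n8n.example.com/', 'chat', 'abc123', 'production');
// → 'https://n8n.example.com/webhook/abc123/chat'
```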
@@ -30,6 +30,7 @@ export interface WorkflowNode {
   waitBetweenTries?: number;
   alwaysOutputData?: boolean;
   executeOnce?: boolean;
+  webhookId?: string; // n8n assigns this for webhook/form/chat trigger nodes
 }

 export interface WorkflowConnection {

@@ -427,7 +427,7 @@ describe('SingleSessionHTTPServer - Session Persistence', () => {
     });

     it('should respect MAX_SESSIONS limit during restore', () => {
-      // Create 99 existing sessions (MAX_SESSIONS is 100)
+      // Create 99 existing sessions (MAX_SESSIONS defaults to 100, configurable via N8N_MCP_MAX_SESSIONS env var)
       const serverAny = server as any;
      const now = new Date();
       for (let i = 0; i < 99; i++) {

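The updated comment above refers to the configurable session limit from this release. A sketch of the parsing it implies — assuming an env override with a default of 100 and the `Math.max(1, ...)` floor described in the changelog; the exact code in `src/http-server-single-session.ts` may differ:

```typescript
// Sketch: resolve MAX_SESSIONS from the environment with a default of
// 100 and a floor of 1 to guard against invalid configurations.
function resolveMaxSessions(env: Record<string, string | undefined>): number {
  const parsed = Number.parseInt(env.N8N_MCP_MAX_SESSIONS ?? '', 10);
  return Math.max(1, Number.isNaN(parsed) ? 100 : parsed);
}

resolveMaxSessions({});                               // → 100 (default)
resolveMaxSessions({ N8N_MCP_MAX_SESSIONS: '1000' }); // → 1000
resolveMaxSessions({ N8N_MCP_MAX_SESSIONS: '0' });    // → 1 (floor applied)
```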
@@ -406,5 +406,74 @@ describe('PropertyFilter', () => {
       const complex = result.common.find(p => p.name === 'complex');
       expect(complex?.default).toBeUndefined();
     });
+
+    it('should add expectedFormat for resourceLocator type properties', () => {
+      const properties = [
+        {
+          name: 'channel',
+          type: 'resourceLocator',
+          displayName: 'Channel',
+          description: 'The channel to send message to',
+          modes: [
+            { name: 'list', displayName: 'From List' },
+            { name: 'id', displayName: 'By ID' },
+            { name: 'url', displayName: 'By URL' }
+          ],
+          default: { mode: 'list', value: '' }
+        }
+      ];
+
+      const result = PropertyFilter.getEssentials(properties, 'nodes-base.slack');
+
+      const channelProp = result.common.find(p => p.name === 'channel');
+      expect(channelProp).toBeDefined();
+      expect(channelProp?.expectedFormat).toBeDefined();
+      expect(channelProp?.expectedFormat?.structure).toEqual({
+        mode: 'string',
+        value: 'string'
+      });
+      expect(channelProp?.expectedFormat?.modes).toEqual(['list', 'id', 'url']);
+      expect(channelProp?.expectedFormat?.example).toBeDefined();
+      expect(channelProp?.expectedFormat?.example.mode).toBe('id');
+      expect(channelProp?.expectedFormat?.example.value).toBeDefined();
+    });
+
+    it('should handle resourceLocator without modes array', () => {
+      const properties = [
+        {
+          name: 'resource',
+          type: 'resourceLocator',
+          displayName: 'Resource',
+          default: { mode: 'id', value: 'test-123' }
+        }
+      ];
+
+      const result = PropertyFilter.getEssentials(properties, 'nodes-base.unknownNode');
+
+      const resourceProp = result.common.find(p => p.name === 'resource');
+      expect(resourceProp?.expectedFormat).toBeDefined();
+      // Should default to common modes
+      expect(resourceProp?.expectedFormat?.modes).toEqual(['list', 'id']);
+      expect(resourceProp?.expectedFormat?.example.value).toBe('test-123');
+    });
+
+    it('should handle resourceLocator with no default value', () => {
+      const properties = [
+        {
+          name: 'item',
+          type: 'resourceLocator',
+          displayName: 'Item',
+          modes: [{ name: 'search' }, { name: 'id' }]
+        }
+      ];
+
+      const result = PropertyFilter.getEssentials(properties, 'nodes-base.unknownNode');
+
+      const itemProp = result.common.find(p => p.name === 'item');
+      expect(itemProp?.expectedFormat).toBeDefined();
+      expect(itemProp?.expectedFormat?.modes).toEqual(['search', 'id']);
+      // Should use fallback value
+      expect(itemProp?.expectedFormat?.example.value).toBe('your-resource-id');
+    });
   });
 });
@@ -4665,4 +4665,223 @@ describe('WorkflowDiffEngine', () => {
       expect(result.errors![0].message).toContain('executeWorkflowTrigger cannot activate workflows');
     });
   });
+
+  // Issue #458: AI connection type propagation
+  describe('AI Connection Type Propagation (Issue #458)', () => {
+    it('should propagate ai_tool connection type when targetInput is not specified', async () => {
+      const workflowWithAI = {
+        ...baseWorkflow,
+        nodes: [
+          {
+            id: 'agent1',
+            name: 'AI Agent',
+            type: '@n8n/n8n-nodes-langchain.agent',
+            typeVersion: 2.1,
+            position: [500, 300] as [number, number],
+            parameters: {}
+          },
+          {
+            id: 'tool1',
+            name: 'Calculator',
+            type: '@n8n/n8n-nodes-langchain.toolCalculator',
+            typeVersion: 1,
+            position: [300, 400] as [number, number],
+            parameters: {}
+          }
+        ],
+        connections: {}
+      };
+
+      const operation: AddConnectionOperation = {
+        type: 'addConnection',
+        source: 'Calculator',
+        target: 'AI Agent',
+        sourceOutput: 'ai_tool'
+        // targetInput not specified - should default to sourceOutput ('ai_tool')
+      };
+
+      const request: WorkflowDiffRequest = {
+        id: 'test-workflow',
+        operations: [operation]
+      };
+
+      const result = await diffEngine.applyDiff(workflowWithAI as Workflow, request);
+
+      expect(result.success).toBe(true);
+      expect(result.workflow.connections['Calculator']).toBeDefined();
+      expect(result.workflow.connections['Calculator']['ai_tool']).toBeDefined();
+      // The inner type should be 'ai_tool', NOT 'main'
+      expect(result.workflow.connections['Calculator']['ai_tool'][0][0].type).toBe('ai_tool');
+      expect(result.workflow.connections['Calculator']['ai_tool'][0][0].node).toBe('AI Agent');
+    });
+
+    it('should propagate ai_languageModel connection type', async () => {
+      const workflowWithAI = {
+        ...baseWorkflow,
+        nodes: [
+          {
+            id: 'agent1',
+            name: 'AI Agent',
+            type: '@n8n/n8n-nodes-langchain.agent',
+            typeVersion: 2.1,
+            position: [500, 300] as [number, number],
+            parameters: {}
+          },
+          {
+            id: 'llm1',
+            name: 'OpenAI Chat Model',
+            type: '@n8n/n8n-nodes-langchain.lmChatOpenAi',
+            typeVersion: 1.2,
+            position: [300, 200] as [number, number],
+            parameters: {}
+          }
+        ],
+        connections: {}
+      };
+
+      const operation: AddConnectionOperation = {
+        type: 'addConnection',
+        source: 'OpenAI Chat Model',
+        target: 'AI Agent',
+        sourceOutput: 'ai_languageModel'
+      };
+
+      const request: WorkflowDiffRequest = {
+        id: 'test-workflow',
+        operations: [operation]
+      };
+
+      const result = await diffEngine.applyDiff(workflowWithAI as Workflow, request);
+
+      expect(result.success).toBe(true);
+      expect(result.workflow.connections['OpenAI Chat Model']['ai_languageModel'][0][0].type).toBe('ai_languageModel');
+    });
+
+    it('should propagate ai_memory connection type', async () => {
+      const workflowWithAI = {
+        ...baseWorkflow,
+        nodes: [
+          {
+            id: 'agent1',
+            name: 'AI Agent',
+            type: '@n8n/n8n-nodes-langchain.agent',
+            typeVersion: 2.1,
+            position: [500, 300] as [number, number],
+            parameters: {}
+          },
+          {
+            id: 'memory1',
+            name: 'Window Buffer Memory',
+            type: '@n8n/n8n-nodes-langchain.memoryBufferWindow',
+            typeVersion: 1.3,
+            position: [300, 500] as [number, number],
+            parameters: {}
+          }
+        ],
+        connections: {}
+      };
+
+      const operation: AddConnectionOperation = {
+        type: 'addConnection',
+        source: 'Window Buffer Memory',
+        target: 'AI Agent',
+        sourceOutput: 'ai_memory'
+      };
+
+      const request: WorkflowDiffRequest = {
+        id: 'test-workflow',
+        operations: [operation]
+      };
+
+      const result = await diffEngine.applyDiff(workflowWithAI as Workflow, request);
+
+      expect(result.success).toBe(true);
+      expect(result.workflow.connections['Window Buffer Memory']['ai_memory'][0][0].type).toBe('ai_memory');
+    });
+
+    it('should allow explicit targetInput override for mixed connection types', async () => {
+      const workflowWithNodes = {
+        ...baseWorkflow,
+        nodes: [
+          {
+            id: 'node1',
+            name: 'Source Node',
+            type: 'n8n-nodes-base.set',
+            typeVersion: 3.4,
+            position: [300, 300] as [number, number],
+            parameters: {}
+          },
+          {
+            id: 'node2',
+            name: 'Target Node',
+            type: 'n8n-nodes-base.set',
+            typeVersion: 3.4,
+            position: [500, 300] as [number, number],
+            parameters: {}
+          }
+        ],
+        connections: {}
+      };
+
+      const operation: AddConnectionOperation = {
+        type: 'addConnection',
+        source: 'Source Node',
+        target: 'Target Node',
+        sourceOutput: 'main',
+        targetInput: 'main' // Explicit override
+      };
+
+      const request: WorkflowDiffRequest = {
+        id: 'test-workflow',
+        operations: [operation]
+      };
+
+      const result = await diffEngine.applyDiff(workflowWithNodes as Workflow, request);
+
+      expect(result.success).toBe(true);
+      expect(result.workflow.connections['Source Node']['main'][0][0].type).toBe('main');
+    });
+
+    it('should default to main for regular connections when sourceOutput is not specified', async () => {
+      const workflowWithNodes = {
+        ...baseWorkflow,
+        nodes: [
+          {
+            id: 'node1',
+            name: 'Source Node',
+            type: 'n8n-nodes-base.set',
+            typeVersion: 3.4,
+            position: [300, 300] as [number, number],
+            parameters: {}
+          },
+          {
+            id: 'node2',
+            name: 'Target Node',
+            type: 'n8n-nodes-base.set',
+            typeVersion: 3.4,
+            position: [500, 300] as [number, number],
+            parameters: {}
+          }
+        ],
+        connections: {}
+      };
+
+      const operation: AddConnectionOperation = {
+        type: 'addConnection',
+        source: 'Source Node',
+        target: 'Target Node'
+        // Neither sourceOutput nor targetInput specified - should default to 'main'
+      };
+
+      const request: WorkflowDiffRequest = {
+        id: 'test-workflow',
+        operations: [operation]
+      };
+
+      const result = await diffEngine.applyDiff(workflowWithNodes as Workflow, request);
+
+      expect(result.success).toBe(true);
+      expect(result.workflow.connections['Source Node']['main'][0][0].type).toBe('main');
+    });
+  });
 });
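The tests above pin down the fix for issue #458: when `targetInput` is omitted, the connection type follows `sourceOutput` rather than silently falling back to `main`. The defaulting rule they exercise reduces to a one-liner (hypothetical helper; the diff engine's actual code path is more involved):

```typescript
// When targetInput is omitted, the connection type follows sourceOutput
// (e.g. 'ai_tool', 'ai_memory'); only when both are absent does it
// default to 'main'.
function resolveConnectionType(sourceOutput?: string, targetInput?: string): string {
  return targetInput ?? sourceOutput ?? 'main';
}

resolveConnectionType('ai_tool');      // → 'ai_tool'
resolveConnectionType('main', 'main'); // → 'main'
resolveConnectionType();               // → 'main'
```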
@@ -1329,6 +1329,37 @@ describe('WorkflowValidator - Comprehensive Tests', () => {
       expect(result.warnings.some(w => w.message.includes('AI Agent has no tools connected'))).toBe(true);
     });

+    it('should NOT warn about AI agents WITH tools properly connected', async () => {
+      const workflow = {
+        nodes: [
+          {
+            id: '1',
+            name: 'Calculator Tool',
+            type: 'n8n-nodes-base.httpRequest',
+            position: [100, 100],
+            parameters: {}
+          },
+          {
+            id: '2',
+            name: 'Agent',
+            type: '@n8n/n8n-nodes-langchain.agent',
+            position: [300, 100],
+            parameters: {}
+          }
+        ],
+        connections: {
+          'Calculator Tool': {
+            ai_tool: [[{ node: 'Agent', type: 'ai_tool', index: 0 }]]
+          }
+        }
+      } as any;
+
+      const result = await validator.validateWorkflow(workflow as any);
+
+      // Should NOT have warning about missing tools
+      expect(result.warnings.some(w => w.message.includes('AI Agent has no tools connected'))).toBe(false);
+    });
+
     it('should suggest community package setting for AI tools', async () => {
       const workflow = {
         nodes: [

@@ -8,6 +8,7 @@ import { InstanceContext } from '../../../../src/types/instance-context';
 import { Workflow } from '../../../../src/types/n8n-api';
 import { DetectedTrigger } from '../../../../src/triggers/types';
 import axios from 'axios';
+import FormData from 'form-data';

 // Mock getN8nApiConfig
 vi.mock('../../../../src/config/n8n-api', () => ({
@@ -156,7 +157,7 @@ describe('FormHandler', () => {
   });

   describe('execute', () => {
-    it('should execute form with provided formData', async () => {
+    it('should execute form with provided formData using multipart/form-data', async () => {
       const input = {
         workflowId: 'workflow-123',
         triggerType: 'form' as const,
@@ -178,11 +179,15 @@ describe('FormHandler', () => {
       expect(axios.request).toHaveBeenCalledWith(
         expect.objectContaining({
           method: 'POST',
-          data: {
-            name: 'Jane Doe',
-            email: 'jane@example.com',
-            message: 'Hello',
-          },
         })
       );
+      // Verify FormData is used
+      const config = vi.mocked(axios.request).mock.calls[0][0];
+      expect(config.data).toBeInstanceOf(FormData);
+      // Verify multipart/form-data content type is set via FormData headers
+      expect(config.headers).toEqual(
+        expect.objectContaining({
+          'content-type': expect.stringContaining('multipart/form-data'),
+        })
+      );
     });
@@ -253,15 +258,9 @@ describe('FormHandler', () => {

       await handler.execute(input, workflow, triggerInfo);

-      expect(axios.request).toHaveBeenCalledWith(
-        expect.objectContaining({
-          data: {
-            field1: 'from data',
-            field2: 'from formData',
-            field3: 'from formData',
-          },
-        })
-      );
+      // Verify FormData is used and contains merged data
+      const config = vi.mocked(axios.request).mock.calls[0][0];
+      expect(config.data).toBeInstanceOf(FormData);
     });

     it('should return error when base URL not available', async () => {
@@ -303,7 +302,7 @@ describe('FormHandler', () => {
       expect(response.error).toContain('Private IP address not allowed');
     });

-    it('should pass custom headers', async () => {
+    it('should pass custom headers with multipart/form-data', async () => {
       const input = {
         workflowId: 'workflow-123',
         triggerType: 'form' as const,
@@ -321,13 +320,13 @@ describe('FormHandler', () => {

       await handler.execute(input, workflow, triggerInfo);

-      expect(axios.request).toHaveBeenCalledWith(
+      const config = vi.mocked(axios.request).mock.calls[0][0];
+      expect(config.headers).toEqual(
         expect.objectContaining({
-          headers: expect.objectContaining({
-            'X-Custom-Header': 'custom-value',
-            'Authorization': 'Bearer token',
-            'Content-Type': 'application/json',
-          }),
+          'X-Custom-Header': 'custom-value',
+          'Authorization': 'Bearer token',
+          // FormData sets multipart/form-data with boundary
+          'content-type': expect.stringContaining('multipart/form-data'),
         })
       );
     });
@@ -466,10 +465,15 @@ describe('FormHandler', () => {

       expect(response.success).toBe(false);
       expect(response.executionId).toBe('exec-111');
-      expect(response.details).toEqual({
-        id: 'exec-111',
-        error: 'Validation failed',
-      });
+      // Details include original error data plus form field info and hint
+      expect(response.details).toEqual(
+        expect.objectContaining({
+          id: 'exec-111',
+          error: 'Validation failed',
+          formFields: expect.any(Array),
+          hint: expect.any(String),
+        })
+      );
     });

     it('should handle error with code', async () => {
@@ -535,14 +539,12 @@ describe('FormHandler', () => {
       const response = await handler.execute(input, workflow, triggerInfo);

       expect(response.success).toBe(true);
-      expect(axios.request).toHaveBeenCalledWith(
-        expect.objectContaining({
-          data: {},
-        })
-      );
+      // Even empty formData is sent as FormData
+      const config = vi.mocked(axios.request).mock.calls[0][0];
+      expect(config.data).toBeInstanceOf(FormData);
     });

-    it('should handle complex form data types', async () => {
+    it('should handle complex form data types via FormData', async () => {
       const input = {
         workflowId: 'workflow-123',
         triggerType: 'form' as const,
@@ -562,17 +564,9 @@ describe('FormHandler', () => {

       await handler.execute(input, workflow, triggerInfo);

-      expect(axios.request).toHaveBeenCalledWith(
-        expect.objectContaining({
-          data: {
-            name: 'Test User',
-            age: 30,
-            active: true,
-            tags: ['tag1', 'tag2'],
-            metadata: { key: 'value' },
-          },
-        })
-      );
+      // Complex data types are serialized in FormData
+      const config = vi.mocked(axios.request).mock.calls[0][0];
+      expect(config.data).toBeInstanceOf(FormData);
     });
   });
 });

@@ -242,7 +242,7 @@ describe('Trigger Detector', () => {
     expect(url).toContain('/form/');
   });

-  it('should build chat URL correctly', () => {
+  it('should build chat URL correctly with /chat suffix', () => {
     const baseUrl = 'https://n8n.example.com';
     const trigger = {
       type: 'chat' as const,
@@ -259,7 +259,8 @@ describe('Trigger Detector', () => {

     const url = buildTriggerUrl(baseUrl, trigger, 'production');

-    expect(url).toBe('https://n8n.example.com/webhook/ai-chat');
+    // Chat triggers use /webhook/<webhookId>/chat endpoint
+    expect(url).toBe('https://n8n.example.com/webhook/ai-chat/chat');
   });
 });