Compare commits


12 Commits

Author SHA1 Message Date
Romuald Członkowski
3417c6701c Merge pull request #685 from czlonkowski/fix/patterns-docs-and-token-trim-683
fix: trim patterns response tokens and complete documentation (v2.42.3)
2026-03-31 08:11:50 +02:00
czlonkowski
5b68d7fa53 fix: trim patterns response tokens and add missing documentation (#683)
- Trim task-specific patterns ~64%: drop displayName, shorten field names
  (frequency→freq), cap chains at 5, use short node type names in chains
- Add patterns mode to tools_documentation essentials, full docs, and
  tools-documentation.ts overview (was completely missing)
- Document includeOperations omission behavior for trigger/freeform nodes
- Add patterns return shape to search_templates returns documentation
- Trim search_nodes examples from 11 to 6

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 00:03:35 +02:00
Romuald Członkowski
54e445bc6a Merge pull request #682 from czlonkowski/fix/include-patterns-in-npm-681
fix: include workflow-patterns.json in npm package (#681)
2026-03-30 22:27:13 +02:00
czlonkowski
49638b98cf fix: include workflow-patterns.json in npm package (#681)
Added data/workflow-patterns.json to the files array in package.json so
the patterns file ships with the npm package. Without this, the
search_templates patterns mode returns an error asking users to manually
run the mine script.

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 21:59:52 +02:00
Romuald Członkowski
fa766eff3c Merge pull request #680 from czlonkowski/fix/community-node-resource-grouping
2026-03-30 20:56:48 +02:00
czlonkowski
b890bd9601 fix: restore community nodes with resource-grouped operations (v2.42.1)
- Restore 584 community nodes from n8n 2.13.3 DB snapshot
- Re-extract operations from properties_schema with resource grouping
  (366 community nodes now have named resources, up from 10)
- Fix community-node-service.ts extractOperations() to extract resource
  from displayOptions.show.resource
- Bump version to 2.42.1

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 18:18:19 +02:00
czlonkowski
7f7150b2f3 fix: add resource grouping to community node operations and restore 584 community nodes
Community nodes copied from a pre-fix DB had flat operations without resource
grouping. Fixed by:

1. Updating community-node-service.ts extractOperations() to extract resource
   from displayOptions.show.resource (same fix as property-extractor.ts)
2. Re-extracting operations from properties_schema for all 1,396 nodes
3. Restoring 584 community nodes from the n8n 2.13.3 DB snapshot
4. Rebuilding FTS5 index

Result: 366 community nodes now have resource-grouped operations (up from 10).
64 community nodes remain flat (they have no resource/operation pattern).
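The resource-grouping fix above can be sketched as follows. This is a simplified illustration, not the exact `extractOperations()` from `community-node-service.ts` — the property shapes and sample data are hypothetical:

```typescript
// Sketch: extract operations from n8n-style node properties, attaching the
// resource each `operation` property is scoped to via displayOptions.
// Simplified types; illustrative only.
type NodeProperty = {
  name: string;
  options?: Array<{ name: string; value: string }>;
  displayOptions?: { show?: { resource?: string[] } };
};

function extractOperations(properties: NodeProperty[]) {
  const operations: Array<{ name: string; value: string; resource?: string }> = [];
  for (const prop of properties) {
    // A node may declare several `operation` properties, each mapped to one
    // resource — filtering for all of them (not just the first) is the fix.
    if (prop.name === 'operation' && prop.options) {
      const resource = prop.displayOptions?.show?.resource?.[0];
      for (const op of prop.options) {
        operations.push({ ...op, ...(resource ? { resource } : {}) });
      }
    }
  }
  return operations;
}

// Hypothetical node with message- and channel-scoped operations:
const props: NodeProperty[] = [
  { name: 'operation', options: [{ name: 'Send', value: 'send' }],
    displayOptions: { show: { resource: ['message'] } } },
  { name: 'operation', options: [{ name: 'Create', value: 'create' }],
    displayOptions: { show: { resource: ['channel'] } } },
];
const ops = extractOperations(props);
```

With the old `find()`-style logic only the first property's operations survived and none carried a resource; filtering over every `operation` property yields the resource-grouped list.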

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 17:46:22 +02:00
Romuald Członkowski
89a9e5ed1c Merge pull request #679 from czlonkowski/feat/search-operations-and-workflow-patterns
2026-03-30 16:54:55 +02:00
czlonkowski
bda2009b77 feat: add includeOperations to search_nodes and patterns mode to search_templates
- Add `includeOperations` flag to search_nodes that returns resource/operation
  trees per result (e.g., Slack: 7 resources, 44 operations). Saves a get_node
  round-trip when building workflows.
- Add `searchMode: "patterns"` to search_templates — lightweight workflow pattern
  summaries mined from 2,700+ templates (node frequency, co-occurrence, connection
  chains across 10 task categories).
- Fix operations extraction: use filter() instead of find() to capture ALL
  operation properties, each mapped to its resource via displayOptions.
- Fix FTS-to-LIKE fallback silently dropping search options.
- Add mine-workflow-patterns.ts script (npm run mine:patterns).
- Restore 584 community nodes in database after rebuild.

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 15:45:03 +02:00
Romuald Członkowski
20ebfbb0fc fix: validation bugs — If/Switch version check, Set false positive, bare expressions (#675, #676, #677) (#678)
- #675: Wire `validateConditionNodeStructure` into `WorkflowValidator` with
  version-conditional checks (If v2.2+ requires options, v2.0-2.1 validates
  operators, v1.x skipped; Switch v3.2+ requires options)
- #676: Fix `validateSet` to check `assignments.assignments` (v3+) alongside
  `config.values` (v1/v2), eliminating false positive warnings
- #677: Add anchored heuristic pre-pass in `ExpressionValidator` detecting bare
  `$json`, `$node`, `$input`, `$execution`, `$workflow`, `$prevNode`, `$env`,
  `$now`, `$today`, `$itemIndex`, `$runIndex` references missing `={{ }}`

25 new tests across 3 test files.
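The #677 heuristic can be approximated like this. A rough sketch only — the anchored patterns in the real `ExpressionValidator` are more elaborate, and the wrapper check here is deliberately simplified:

```typescript
// Simplified sketch of the bare-expression heuristic: flag parameter values
// that reference n8n variables ($json, $node, ...) but lack the `={{ }}`
// expression wrapper. Emits warnings, not errors, in the real validator.
const N8N_VARS =
  /\$(?:json|node|input|execution|workflow|prevNode|env|now|today|itemIndex|runIndex)\b/;

function looksLikeBareExpression(value: string): boolean {
  if (!N8N_VARS.test(value)) return false;               // no n8n variable at all
  if (value.trimStart().startsWith('={{')) return false; // already wrapped
  return true;                                           // bare reference -> warn
}
```

Anchoring on the `$`-prefixed variable names keeps false positives low: ordinary strings containing words like "json" never match, only genuine `$json`-style references outside an expression wrapper do.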

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 12:29:04 +02:00
Romuald Członkowski
6e4a9d520d fix: raise session timeout default, fix VS Code MCP compatibility (#674)
* fix: raise session timeout default and fix VS Code MCP compatibility (#626, #600, #611)

- #626: Raise SESSION_TIMEOUT_MINUTES default from 5 to 30 minutes.
  Complex editing sessions easily exceed 5 min between LLM calls.

- #600: Add z.preprocess(tryParseJson, ...) to operations parameter
  in n8n_update_partial_workflow. VS Code extension sends arrays as
  JSON strings.

- #611: Strip undefined values from tool args via JSON round-trip
  before Zod validation. VS Code sends explicit undefined which
  Zod's .optional() rejects.
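The #611 fix leans on a JSON serialization rule: `JSON.stringify` omits object properties whose value is `undefined`, so a parse/stringify round-trip turns "present but undefined" into "absent" — the state `.optional()` accepts. A minimal demonstration:

```typescript
// JSON.stringify drops object properties whose value is undefined, so a
// round-trip converts explicit undefined (which Zod's .optional() rejects)
// into a missing key (which it allows).
const args: Record<string, unknown> = { id: 'wf_123', name: undefined };
const cleaned = JSON.parse(JSON.stringify(args));

console.log('name' in args);    // true  — explicit undefined is present
console.log('name' in cleaned); // false — stripped by the round-trip
```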

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: deduplicate tryParseJson — export from handlers-n8n-manager

tryParseJson was duplicated in handlers-workflow-diff.ts. Now imported
from handlers-n8n-manager.ts where it was already defined. Updated
test mock to use importOriginal so the real function is available.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 18:56:30 +01:00
Romuald Członkowski
fb2d306dc3 fix: intercept stdout writes to prevent JSON-RPC corruption in stdio mode (#673)
* fix: intercept process.stdout.write to prevent JSON-RPC corruption in stdio mode (#628, #627, #567)

Console method suppression alone was insufficient — native modules, n8n packages,
and third-party code can call process.stdout.write() directly, leaking debug output
(refCount, dbPath, clientVersion, protocolVersion, etc.) into the MCP JSON-RPC stream.

Added stdout write interceptor that only allows JSON-RPC messages through (objects
containing "jsonrpc" field). All other writes are redirected to stderr. This fixes
the flood of "Unexpected token is not valid JSON" warnings on every new Claude
Desktop chat.
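The allow-list check reduces to a small predicate, assuming (as the MCP SDK does) one JSON object per write call:

```typescript
// Simplified restatement of the stdout interceptor's filter: only chunks
// that look like JSON-RPC messages pass through to stdout; everything else
// is redirected to stderr so it cannot corrupt the protocol stream.
function allowOnStdout(chunk: string): boolean {
  const trimmed = chunk.trimStart();
  return trimmed.startsWith('{') && trimmed.includes('"jsonrpc"');
}
```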

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* ci: add Docker Hub login to fix buildx bootstrap rate limiting

GitHub-hosted runners hit Docker Hub anonymous pull limits when
setup-buildx-action pulls moby/buildkit. Add docker/login-action
for Docker Hub before setup-buildx-action in all 4 workflows:
docker-build.yml, docker-build-fast.yml, docker-build-n8n.yml, release.yml.

Uses DOCKERHUB_USERNAME and DOCKERHUB_TOKEN repository secrets.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 17:26:43 +01:00
31 changed files with 4029 additions and 174 deletions


@@ -29,9 +29,15 @@ jobs:
with:
lfs: true
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v3


@@ -55,6 +55,12 @@ jobs:
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3


@@ -71,13 +71,19 @@ jobs:
"
echo "✅ Synced package.runtime.json to version $VERSION"
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
@@ -85,7 +91,7 @@ jobs:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
@@ -173,13 +179,19 @@ jobs:
"
echo "✅ Synced package.runtime.json to version $VERSION"
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v3


@@ -441,12 +441,18 @@ jobs:
"
echo "✅ Synced package.runtime.json to version $VERSION"
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:


@@ -7,6 +7,88 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [2.42.3] - 2026-03-30
### Improved
- **Patterns response trimmed for token efficiency** (Issue #683): Task-specific patterns response reduced ~64% — dropped redundant `displayName`, shortened field names (`frequency` → `freq`), capped chains at 5, and shortened chain node types to last segment.
- **`patterns` mode added to `tools_documentation`**: Was missing from both essentials and full docs. AI agents can now discover patterns mode through the standard documentation flow.
- **`includeOperations` omission behavior documented**: Added note that trigger nodes and freeform nodes (Code, HTTP Request) omit the `operationsTree` field.
- **`search_nodes` examples trimmed**: Reduced from 11 to 6 examples in full docs, removing near-duplicates.
Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
## [2.42.2] - 2026-03-30
### Fixed
- **`workflow-patterns.json` missing from npm package** (Issue #681): Added `data/workflow-patterns.json` to the `files` array in `package.json` so the patterns file is included in the published npm package and works out of the box without manual generation.
Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
## [2.42.1] - 2026-03-30
### Fixed
- **Community nodes missing from database after rebuild**: Restored 584 community nodes from the n8n 2.13.3 snapshot and re-extracted operations with resource grouping from `properties_schema`. 366 community nodes now have proper resource-grouped operations.
- **Community node service missing resource extraction**: `extractOperations()` in `community-node-service.ts` was not extracting `resource` from `displayOptions.show.resource`, same issue that was fixed in `property-extractor.ts` in v2.42.0.
Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
## [2.42.0] - 2026-03-30
### Added
- **`includeOperations` flag for search_nodes**: Opt-in parameter that returns a resource/operation tree per search result, grouped by resource (e.g., Slack returns 7 resources with 44 operations). Saves a mandatory `get_node` round-trip when building workflows. Adds ~100-300 tokens per result.
- **`searchMode: "patterns"` for search_templates**: New lightweight mode that serves workflow pattern summaries mined from 2,700+ templates. Returns common node combinations, connection chains, and frequency data per task category (10 categories: ai_automation, webhook_processing, scheduling, etc.). Use `task` parameter for category-specific patterns or omit for overview.
- **Workflow pattern mining script** (`npm run mine:patterns`): Extracts node frequency, co-occurrence, and connection topology from the template database. Two-pass pipeline: Pass 1 analyzes `nodes_used` metadata (no decompression), Pass 2 decompresses workflows for connection analysis. Produces `data/workflow-patterns.json` with 554 node types, 3,201 edges, and 5,246 chains.
### Fixed
- **Operations extraction now includes resource grouping**: The property extractor was using `find()` to get only the first `operation` property, but n8n nodes have multiple operation properties each mapped to a different resource via `displayOptions.show.resource`. Changed to `filter()` to capture all operation properties. Slack went from 17 flat operations to 44 operations across 7 named resources.
- **FTS-to-LIKE fallback dropped search options**: When the FTS5 search fell back to LIKE-based search (e.g., for "http request"), the `options` object (including `includeOperations`, `includeExamples`, `source`) was silently lost. Now correctly passed through.
Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
## [2.41.4] - 2026-03-30
### Fixed
- **`validate_workflow` misses `conditions.options` check for If/Switch nodes** (Issue #675): Added version-conditional validation — If v2.2+ and Switch v3.2+ now require `conditions.options` metadata, If v2.0-2.1 validates operator structures, and v1.x is left unchecked. Previously only caught by `n8n_create_workflow` pre-flight but not by offline `validate_workflow`.
- **False positive "Set node has no fields configured" for Set v3+** (Issue #676): The `validateSet` checker now recognizes `config.assignments.assignments` (v3+ schema) in addition to `config.values` (v1/v2 schema). Updated suggestion text to match current UI terminology.
- **Expression validator does not detect unwrapped n8n expressions** (Issue #677): Added heuristic pre-pass that detects bare `$json`, `$node`, `$input`, `$execution`, `$workflow`, `$prevNode`, `$env`, `$now`, `$today`, `$itemIndex`, and `$runIndex` references missing `={{ }}` wrappers. Uses anchored patterns to avoid false positives. Emits warnings, not errors.
Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
## [2.41.3] - 2026-03-27
### Fixed
- **Session timeout default too low** (Issue #626): Raised `SESSION_TIMEOUT_MINUTES` default from 5 to 30 minutes. The 5-minute default caused sessions to expire mid-operation during complex multi-step workflows (validate → get structure → patch → validate), forcing users to retry. Configurable via environment variable.
- **Operations array received as string from VS Code** (Issue #600): Added `z.preprocess` JSON string parsing to the `operations` parameter in `n8n_update_partial_workflow`. The VS Code MCP extension serializes arrays as JSON strings — the Zod schema now transparently parses them before validation.
- **`undefined` values rejected in MCP tool calls from VS Code** (Issue #611): Strip explicit `undefined` values from tool arguments before Zod validation. VS Code sends `undefined` as a value which Zod's `.optional()` rejects (it expects the field to be missing, not present-but-undefined).
Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
## [2.41.2] - 2026-03-27
### Fixed
- **MCP initialization floods Claude Desktop with JSON parse errors** (Issues #628, #627, #567): Intercept `process.stdout.write` in stdio mode to redirect non-JSON-RPC output to stderr. Console method suppression alone was insufficient — native modules (better-sqlite3), n8n packages, and third-party code can call `process.stdout.write()` directly, corrupting the JSON-RPC stream. Only writes containing valid JSON-RPC messages (`{"jsonrpc":...}`) are now allowed through stdout; everything else is redirected to stderr. This fixes the flood of "Unexpected token is not valid JSON" warnings on every new chat in Claude Desktop, including leaked `refCount`, `dbPath`, `clientVersion`, `protocolVersion`, and other debug strings.
Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
## [2.41.1] - 2026-03-27
### Fixed

Binary file not shown.

data/workflow-patterns.json Normal file

File diff suppressed because it is too large.

package-lock.json generated

@@ -1,12 +1,12 @@
{
"name": "n8n-mcp",
"version": "2.41.1",
"version": "2.41.3",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "n8n-mcp",
"version": "2.41.1",
"version": "2.41.3",
"license": "MIT",
"dependencies": {
"@modelcontextprotocol/sdk": "1.28.0",


@@ -1,6 +1,6 @@
{
"name": "n8n-mcp",
"version": "2.41.1",
"version": "2.42.3",
"description": "Integration between n8n workflow automation and Model Context Protocol (MCP)",
"main": "dist/index.js",
"types": "dist/index.d.ts",
@@ -90,6 +90,7 @@
"test:docker:unit": "./scripts/test-docker-config.sh unit",
"test:docker:integration": "./scripts/test-docker-config.sh integration",
"test:docker:security": "./scripts/test-docker-config.sh security",
"mine:patterns": "tsx src/scripts/mine-workflow-patterns.ts",
"sanitize:templates": "node dist/scripts/sanitize-templates.js",
"db:rebuild": "node dist/scripts/rebuild-database.js",
"db:init": "node -e \"new (require('./dist/services/sqlite-storage-service').SQLiteStorageService)(); console.log('Database initialized')\"",
@@ -123,6 +124,7 @@
"dist/**/*",
"ui-apps/dist/**/*",
"data/nodes.db",
"data/workflow-patterns.json",
".env.example",
"README.md",
"LICENSE",


@@ -343,10 +343,17 @@ export class CommunityNodeService {
const operations: any[] = [];
// Check properties for resource/operation pattern
// Nodes can have multiple operation properties, each mapped to a resource via displayOptions
if (nodeDesc.properties) {
for (const prop of nodeDesc.properties) {
if (prop.name === 'operation' && prop.options) {
operations.push(...prop.options);
if ((prop.name === 'operation' || prop.name === 'action') && prop.options) {
const resource = prop.displayOptions?.show?.resource?.[0];
for (const op of prop.options) {
operations.push({
...op,
...(resource ? { resource } : {})
});
}
}
}
}


@@ -107,11 +107,10 @@ export class SingleSessionHTTPServer {
private session: Session | null = null; // Keep for SSE compatibility
private consoleManager = new ConsoleManager();
private expressServer: any;
// Session timeout reduced from 30 minutes to 5 minutes for faster cleanup
// Configurable via SESSION_TIMEOUT_MINUTES environment variable
// This prevents memory buildup from stale sessions
// Session timeout — configurable via SESSION_TIMEOUT_MINUTES environment variable
// Default 30 minutes: balances memory cleanup with real editing sessions (#626)
private sessionTimeout = parseInt(
process.env.SESSION_TIMEOUT_MINUTES || '5', 10
process.env.SESSION_TIMEOUT_MINUTES || '30', 10
) * 60 * 1000;
private authToken: string | null = null;
private cleanupTimer: NodeJS.Timeout | null = null;


@@ -2731,7 +2731,7 @@ const updateTableSchema = tableIdSchema.extend({
// MCP transports may serialize JSON objects/arrays as strings.
// Parse them back, but return the original value on failure so Zod reports a proper type error.
function tryParseJson(val: unknown): unknown {
export function tryParseJson(val: unknown): unknown {
if (typeof val !== 'string') return val;
try { return JSON.parse(val); } catch { return val; }
}


@@ -7,7 +7,7 @@ import { z } from 'zod';
import { McpToolResponse } from '../types/n8n-api';
import { WorkflowDiffRequest, WorkflowDiffOperation, WorkflowDiffValidationError } from '../types/workflow-diff';
import { WorkflowDiffEngine } from '../services/workflow-diff-engine';
import { getN8nApiClient } from './handlers-n8n-manager';
import { getN8nApiClient, tryParseJson } from './handlers-n8n-manager';
import { N8nApiError, getUserFriendlyErrorMessage } from '../utils/n8n-errors';
import { logger } from '../utils/logger';
import { InstanceContext } from '../types/instance-context';
@@ -39,7 +39,7 @@ const NODE_TARGETING_OPERATIONS = new Set([
// Zod schema for the diff request
const workflowDiffSchema = z.object({
id: z.string(),
operations: z.array(z.object({
operations: z.preprocess(tryParseJson, z.array(z.object({
type: z.string(),
description: z.string().optional(),
// Node operations
@@ -87,7 +87,7 @@ const workflowDiffSchema = z.object({
}
}
return op;
})),
}))),
validateOnly: z.boolean().optional(),
continueOnError: z.boolean().optional(),
createBackup: z.boolean().optional(),


@@ -7,7 +7,7 @@ import {
ListResourcesRequestSchema,
ReadResourceRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';
import { existsSync, promises as fs } from 'fs';
import { existsSync, readFileSync, promises as fs } from 'fs';
import path from 'path';
import { n8nDocumentationToolsFinal } from './tools';
import { UIAppRegistry } from './ui';
@@ -748,6 +748,13 @@ export class N8NDocumentationMCPServer {
// tool's inputSchema as the source of truth.
processedArgs = this.coerceStringifiedJsonParams(name, processedArgs);
// Strip undefined values from args (#611) — VS Code extension sends
// explicit undefined values which Zod's .optional() rejects.
// Removing them makes Zod treat them as missing (which .optional() allows).
if (processedArgs) {
processedArgs = JSON.parse(JSON.stringify(processedArgs));
}
try {
logger.debug(`Executing tool: ${name}`, { args: processedArgs });
const startTime = Date.now();
@@ -1303,6 +1310,7 @@ export class N8NDocumentationMCPServer {
return this.searchNodes(args.query, limit, {
mode: args.mode,
includeExamples: args.includeExamples,
includeOperations: args.includeOperations,
source: args.source
});
case 'get_node':
@@ -1407,6 +1415,8 @@ export class N8NDocumentationMCPServer {
requiredService: args.requiredService,
targetAudience: args.targetAudience
}, searchLimit, searchOffset);
case 'patterns':
return this.getWorkflowPatterns(args.task as string | undefined, searchLimit);
case 'keyword':
default:
if (!args.query) {
@@ -1674,6 +1684,7 @@ export class N8NDocumentationMCPServer {
mode?: 'OR' | 'AND' | 'FUZZY';
includeSource?: boolean;
includeExamples?: boolean;
includeOperations?: boolean;
source?: 'all' | 'core' | 'community' | 'verified';
}
): Promise<any> {
@@ -1716,6 +1727,7 @@ export class N8NDocumentationMCPServer {
options?: {
includeSource?: boolean;
includeExamples?: boolean;
includeOperations?: boolean;
source?: 'all' | 'core' | 'community' | 'verified';
}
): Promise<any> {
@@ -1729,7 +1741,7 @@ export class N8NDocumentationMCPServer {
// For FUZZY mode, use LIKE search with typo patterns
if (mode === 'FUZZY') {
return this.searchNodesFuzzy(cleanedQuery, limit);
return this.searchNodesFuzzy(cleanedQuery, limit, { includeOperations: options?.includeOperations });
}
let ftsQuery: string;
@@ -1820,7 +1832,7 @@ export class N8NDocumentationMCPServer {
if (cleanedQuery.toLowerCase().includes('http') && !hasHttpRequest) {
// FTS missed HTTP Request, fall back to LIKE search
logger.debug('FTS missed HTTP Request node, augmenting with LIKE search');
return this.searchNodesLIKE(query, limit);
return this.searchNodesLIKE(query, limit, options);
}
const result: any = {
@@ -1848,6 +1860,14 @@ export class N8NDocumentationMCPServer {
}
}
// Add operations tree if requested
if (options?.includeOperations) {
const opsTree = this.buildOperationsTree(node.operations);
if (opsTree) {
nodeResult.operationsTree = opsTree;
}
}
return nodeResult;
}),
totalCount: scoredNodes.length
@@ -1915,7 +1935,13 @@ export class N8NDocumentationMCPServer {
}
}
private async searchNodesFuzzy(query: string, limit: number): Promise<any> {
private async searchNodesFuzzy(
query: string,
limit: number,
options?: {
includeOperations?: boolean;
}
): Promise<any> {
if (!this.db) throw new Error('Database not initialized');
// Split into words for fuzzy matching
@@ -1956,14 +1982,26 @@ export class N8NDocumentationMCPServer {
return {
query,
mode: 'FUZZY',
results: matchingNodes.map(node => ({
nodeType: node.node_type,
workflowNodeType: getWorkflowNodeType(node.package_name, node.node_type),
displayName: node.display_name,
description: node.description,
category: node.category,
package: node.package_name
})),
results: matchingNodes.map(node => {
const nodeResult: any = {
nodeType: node.node_type,
workflowNodeType: getWorkflowNodeType(node.package_name, node.node_type),
displayName: node.display_name,
description: node.description,
category: node.category,
package: node.package_name
};
// Add operations tree if requested
if (options?.includeOperations) {
const opsTree = this.buildOperationsTree(node.operations);
if (opsTree) {
nodeResult.operationsTree = opsTree;
}
}
return nodeResult;
}),
totalCount: matchingNodes.length
};
}
@@ -2068,6 +2106,7 @@ export class N8NDocumentationMCPServer {
options?: {
includeSource?: boolean;
includeExamples?: boolean;
includeOperations?: boolean;
source?: 'all' | 'core' | 'community' | 'verified';
}
): Promise<any> {
@@ -2127,6 +2166,14 @@ export class N8NDocumentationMCPServer {
}
}
// Add operations tree if requested
if (options?.includeOperations) {
const opsTree = this.buildOperationsTree(node.operations);
if (opsTree) {
nodeResult.operationsTree = opsTree;
}
}
return nodeResult;
}),
totalCount: rankedNodes.length
@@ -2213,6 +2260,14 @@ export class N8NDocumentationMCPServer {
}
}
// Add operations tree if requested
if (options?.includeOperations) {
const opsTree = this.buildOperationsTree(node.operations);
if (opsTree) {
nodeResult.operationsTree = opsTree;
}
}
return nodeResult;
}),
totalCount: rankedNodes.length
@@ -2583,6 +2638,51 @@ Full documentation is being prepared. For now, use get_node_essentials for confi
};
}
/**
* Parse raw operations data and group by resource into a compact tree.
* Returns undefined when there are no operations (e.g. trigger nodes, Code node).
*/
private buildOperationsTree(operationsRaw: string | any[] | null | undefined): Array<{resource: string, operations: string[]}> | undefined {
if (!operationsRaw) return undefined;
let ops: any[];
if (typeof operationsRaw === 'string') {
try {
ops = JSON.parse(operationsRaw);
} catch {
return undefined;
}
} else if (Array.isArray(operationsRaw)) {
ops = operationsRaw;
} else {
return undefined;
}
if (!Array.isArray(ops) || ops.length === 0) return undefined;
// Group by resource
const byResource = new Map<string, string[]>();
for (const op of ops) {
const resource = op.resource || 'default';
const opName = op.name || op.operation;
if (!opName) continue;
if (!byResource.has(resource)) {
byResource.set(resource, []);
}
const list = byResource.get(resource)!;
if (!list.includes(opName)) {
list.push(opName);
}
}
if (byResource.size === 0) return undefined;
return Array.from(byResource.entries()).map(([resource, operations]) => ({
resource,
operations
}));
}
private async getNodeEssentials(nodeType: string, includeExamples?: boolean): Promise<any> {
await this.ensureInitialized();
if (!this.repository) throw new Error('Repository not initialized');
@@ -3780,6 +3880,71 @@ Full documentation is being prepared. For now, use get_node_essentials for confi
};
}
private workflowPatternsCache: {
generatedAt: string;
templateCount: number;
categories: Record<string, {
templateCount: number;
pattern: string;
nodes?: Array<{ type: string; frequency: number; role: string; displayName: string }>;
commonChains?: Array<{ chain: string[]; count: number; frequency: number }>;
}>;
} | null = null;
private getWorkflowPatterns(category?: string, limit: number = 10): any {
// Load patterns file (cached after first load)
if (!this.workflowPatternsCache) {
try {
const patternsPath = path.join(__dirname, '..', '..', 'data', 'workflow-patterns.json');
if (existsSync(patternsPath)) {
this.workflowPatternsCache = JSON.parse(readFileSync(patternsPath, 'utf-8'));
} else {
return { error: 'Workflow patterns not generated yet. Run: npm run mine:patterns' };
}
} catch (e) {
return { error: `Failed to load workflow patterns: ${e instanceof Error ? e.message : String(e)}` };
}
}
const patterns = this.workflowPatternsCache!;
if (category) {
// Return specific category pattern data (trimmed for token efficiency)
const categoryData = patterns.categories[category];
if (!categoryData) {
const available = Object.keys(patterns.categories);
return { error: `Unknown category "${category}". Available: ${available.join(', ')}` };
}
const MAX_CHAINS = 5;
return {
category,
templateCount: categoryData.templateCount,
pattern: categoryData.pattern,
nodes: categoryData.nodes?.slice(0, limit).map(n => ({
type: n.type, freq: n.frequency, role: n.role
})),
chains: categoryData.commonChains?.slice(0, MAX_CHAINS).map(c => ({
path: c.chain.map(t => t.split('.').pop() ?? t), count: c.count, freq: c.frequency
})),
};
}
// Return overview of all categories
const overview = Object.entries(patterns.categories).map(([name, data]) => ({
category: name,
templateCount: data.templateCount,
pattern: data.pattern,
topNodes: data.nodes?.slice(0, 5).map(n => n.displayName || n.type),
}));
return {
templateCount: patterns.templateCount,
generatedAt: patterns.generatedAt,
categories: overview,
tip: 'Use search_templates({searchMode: "patterns", task: "category_name"}) for full pattern data with nodes, chains, and tips.',
};
}
private async getTemplatesForTask(task: string, limit: number = 10, offset: number = 0): Promise<any> {
await this.ensureInitialized();
if (!this.templateService) throw new Error('Template service not initialized');


@@ -39,6 +39,25 @@ console.clear = () => {};
console.count = () => {};
console.countReset = () => {};
// CRITICAL: Intercept process.stdout.write to prevent non-JSON-RPC output (#628, #627, #567)
// Console suppression alone is insufficient — native modules (better-sqlite3), n8n packages,
// and third-party code can call process.stdout.write() directly, corrupting the JSON-RPC stream.
// Only allow writes that look like JSON-RPC messages; redirect everything else to stderr.
const originalStdoutWrite = process.stdout.write.bind(process.stdout);
const stderrWrite = process.stderr.write.bind(process.stderr);
process.stdout.write = function (chunk: any, encodingOrCallback?: any, callback?: any): boolean {
const str = typeof chunk === 'string' ? chunk : chunk.toString();
// JSON-RPC messages are JSON objects with "jsonrpc" field — let those through
// The MCP SDK sends one JSON object per write call
const trimmed = str.trimStart();
if (trimmed.startsWith('{') && trimmed.includes('"jsonrpc"')) {
return originalStdoutWrite(chunk, encodingOrCallback, callback);
}
// Redirect everything else to stderr so it doesn't corrupt the protocol
return stderrWrite(chunk, encodingOrCallback, callback);
} as typeof process.stdout.write;
// Import and run the server AFTER suppressing output
import { N8NDocumentationMCPServer } from './server';


@@ -5,7 +5,7 @@ export const searchNodesDoc: ToolDocumentation = {
category: 'discovery',
essentials: {
description: 'Text search across node names and descriptions. Returns most relevant nodes first, with frequently-used nodes (HTTP Request, Webhook, Set, Code, Slack) prioritized in results. Searches all 800+ nodes including 300+ verified community nodes.',
keyParameters: ['query', 'mode', 'limit', 'source', 'includeExamples'],
keyParameters: ['query', 'mode', 'limit', 'source', 'includeExamples', 'includeOperations'],
example: 'search_nodes({query: "webhook"})',
performance: '<20ms even for complex queries',
tips: [
@@ -14,7 +14,8 @@ export const searchNodesDoc: ToolDocumentation = {
'FUZZY mode: Handles typos and spelling errors',
'Use quotes for exact phrases: "google sheets"',
'Use source="community" to search only community nodes',
'Use source="verified" for verified community nodes only'
'Use source="verified" for verified community nodes only',
'Use includeOperations=true to get resource/operation trees without a separate get_node call'
]
},
full: {
@@ -24,20 +25,17 @@ export const searchNodesDoc: ToolDocumentation = {
limit: { type: 'number', description: 'Maximum results to return. Default: 20, Max: 100', required: false },
mode: { type: 'string', description: 'Search mode: "OR" (any word matches, default), "AND" (all words required), "FUZZY" (typo-tolerant)', required: false },
source: { type: 'string', description: 'Filter by node source: "all" (default, everything), "core" (n8n base nodes only), "community" (community nodes only), "verified" (verified community nodes only)', required: false },
includeExamples: { type: 'boolean', description: 'Include top 2 real-world configuration examples from popular templates for each node. Default: false. Adds ~200-400 tokens per node.', required: false }
includeExamples: { type: 'boolean', description: 'Include top 2 real-world configuration examples from popular templates for each node. Default: false. Adds ~200-400 tokens per node.', required: false },
includeOperations: { type: 'boolean', description: 'Include resource/operation tree per node. Default: false. Adds ~100-300 tokens per result but saves a get_node round-trip. Only returned for nodes with resource/operation patterns — trigger nodes and freeform nodes (Code, HTTP Request) omit this field.', required: false }
},
returns: 'Array of node objects sorted by relevance score. Each object contains: nodeType, displayName, description, category, relevance score. For community nodes, also includes: isCommunity (boolean), isVerified (boolean), authorName (string), npmDownloads (number). Common nodes appear first when relevance is similar.',
examples: [
'search_nodes({query: "webhook"}) - Returns Webhook node as top result',
'search_nodes({query: "database"}) - Returns MySQL, Postgres, MongoDB, Redis, etc.',
'search_nodes({query: "google sheets", mode: "AND"}) - Requires both words',
'search_nodes({query: "slak", mode: "FUZZY"}) - Finds Slack despite typo',
'search_nodes({query: "http api"}) - Finds HTTP Request, GraphQL, REST nodes',
'search_nodes({query: "transform data"}) - Finds Set, Code, Function, Item Lists nodes',
'search_nodes({query: "scraping", source: "community"}) - Find community scraping nodes',
'search_nodes({query: "pdf", source: "verified"}) - Find verified community PDF nodes',
'search_nodes({query: "brightdata"}) - Find BrightData community node',
'search_nodes({query: "slack", includeExamples: true}) - Get Slack with template examples'
'search_nodes({query: "slack", includeExamples: true}) - Get Slack with template examples',
'search_nodes({query: "slack", includeOperations: true}) - Get Slack with resource/operation tree (7 resources, 44 ops)'
],
useCases: [
'Finding nodes when you know partial names',

View File

@@ -4,7 +4,7 @@ export const searchTemplatesDoc: ToolDocumentation = {
name: 'search_templates',
category: 'templates',
essentials: {
description: 'Unified template search with multiple modes: keyword search, by node types, by task type, or by metadata. 2,700+ templates available.',
description: 'Unified template search with multiple modes: keyword search, by node types, by task type, by metadata, or patterns. 2,700+ templates available.',
keyParameters: ['searchMode', 'query', 'nodeTypes', 'task', 'limit'],
example: 'search_templates({searchMode: "by_task", task: "webhook_processing"})',
performance: 'Fast (<100ms) - FTS5 full-text search',
@@ -12,7 +12,9 @@ export const searchTemplatesDoc: ToolDocumentation = {
'searchMode="keyword" (default): Search by name/description',
'searchMode="by_nodes": Find templates using specific nodes',
'searchMode="by_task": Get curated templates for common tasks',
'searchMode="by_metadata": Filter by complexity, services, audience'
'searchMode="by_metadata": Filter by complexity, services, audience',
'searchMode="patterns": Workflow pattern summaries across 10 task categories',
'patterns without task: overview of all categories. patterns with task: node frequencies + connection chains'
]
},
full: {
@@ -21,14 +23,15 @@ export const searchTemplatesDoc: ToolDocumentation = {
- by_nodes: Find templates that use specific node types
- by_task: Get curated templates for predefined task categories
- by_metadata: Filter by complexity, setup time, required services, or target audience
- patterns: Lightweight workflow pattern summaries mined from 2,700+ templates
**Available Task Types (for searchMode="by_task"):**
**Available Task Types (for searchMode="by_task" and "patterns"):**
ai_automation, data_sync, webhook_processing, email_automation, slack_integration, data_transformation, file_processing, scheduling, api_integration, database_operations`,
parameters: {
searchMode: {
type: 'string',
required: false,
description: 'Search mode: "keyword" (default), "by_nodes", "by_task", "by_metadata"'
description: 'Search mode: "keyword" (default), "by_nodes", "by_task", "by_metadata", "patterns"'
},
query: {
type: 'string',
@@ -43,7 +46,7 @@ ai_automation, data_sync, webhook_processing, email_automation, slack_integratio
task: {
type: 'string',
required: false,
description: 'For searchMode=by_task: Task type (ai_automation, data_sync, webhook_processing, email_automation, slack_integration, data_transformation, file_processing, scheduling, api_integration, database_operations)'
description: 'For searchMode=by_task: Task type. For searchMode=patterns: optional category filter (omit for overview of all categories). Values: ai_automation, data_sync, webhook_processing, email_automation, slack_integration, data_transformation, file_processing, scheduling, api_integration, database_operations'
},
complexity: {
type: 'string',
@@ -91,38 +94,45 @@ ai_automation, data_sync, webhook_processing, email_automation, slack_integratio
description: 'Pagination offset (default 0)'
}
},
returns: `Returns an object containing:
returns: `For keyword/by_nodes/by_task/by_metadata modes:
- templates: Array of matching templates
- id: Template ID for get_template()
- name: Template name
- description: What the workflow does
- author: Creator information
- nodes: Array of node types used
- views: Popularity metric
- created: Creation date
- url: Link to template
- metadata: AI-generated metadata (complexity, services, etc.)
- name, description, author, nodes, views, created, url, metadata
- totalFound: Total matching templates
- searchMode: The mode used`,
- searchMode: The mode used
For patterns mode (no task):
- templateCount, generatedAt
- categories: Array of {category, templateCount, pattern, topNodes}
- tip: How to drill into a specific category
For patterns mode (with task):
- category, templateCount, pattern
- nodes: Array of {type, freq, role} (top nodes by frequency, limited by 'limit')
- chains: Array of {path, count, freq} (top 5 connection chains, short node names)`,
examples: [
'// Keyword search (default)\nsearch_templates({query: "chatbot"})',
'// Find templates using specific nodes\nsearch_templates({searchMode: "by_nodes", nodeTypes: ["n8n-nodes-base.httpRequest", "n8n-nodes-base.slack"]})',
'// Get templates for a task type\nsearch_templates({searchMode: "by_task", task: "webhook_processing"})',
'// Filter by metadata\nsearch_templates({searchMode: "by_metadata", complexity: "simple", requiredService: "openai"})',
'// Combine metadata filters\nsearch_templates({searchMode: "by_metadata", maxSetupMinutes: 30, targetAudience: "developers"})'
'// Combine metadata filters\nsearch_templates({searchMode: "by_metadata", maxSetupMinutes: 30, targetAudience: "developers"})',
'// Pattern overview — all categories with top nodes (~550 tokens)\nsearch_templates({searchMode: "patterns"})',
'// Detailed patterns for a category — node frequencies + connection chains\nsearch_templates({searchMode: "patterns", task: "ai_automation"})'
],
useCases: [
'Find workflows by business purpose (keyword search)',
'Find templates using specific integrations (by_nodes)',
'Get pre-built solutions for common tasks (by_task)',
'Filter by complexity for team skill level (by_metadata)',
'Find templates requiring specific services (by_metadata)'
'Understand common workflow shapes before building (patterns)',
'Architecture planning — which nodes go together (patterns)'
],
performance: `Fast performance across all modes:
- keyword: <50ms with FTS5 indexing
- by_nodes: <100ms with indexed lookups
- by_task: <50ms from curated cache
- by_metadata: <100ms with filtered queries`,
- by_metadata: <100ms with filtered queries
- patterns: <10ms (pre-mined, cached in memory)`,
bestPractices: [
'Use searchMode="by_task" for common automation patterns',
'Use searchMode="by_nodes" when you know which integrations you need',

View File

@@ -125,6 +125,7 @@ When working with Code nodes, always start by calling the relevant guide:
- searchMode='by_nodes': Find templates using specific nodes
- searchMode='by_task': Curated task-based templates
- searchMode='by_metadata': Filter by complexity/services
- searchMode='patterns': Workflow pattern summaries from 2,700+ templates
**n8n API Tools** (13 tools, requires N8N_API_URL configuration)
- n8n_create_workflow - Create new workflows

View File

@@ -57,6 +57,11 @@ export const n8nDocumentationToolsFinal: ToolDefinition[] = [
description: 'Include top 2 real-world configuration examples from popular templates (default: false)',
default: false,
},
includeOperations: {
type: 'boolean',
default: false,
description: 'Include resource/operation tree per node. Adds ~100-300 tokens per result but saves a get_node round-trip.',
},
source: {
type: 'string',
enum: ['all', 'core', 'community', 'verified'],
@@ -242,14 +247,14 @@ export const n8nDocumentationToolsFinal: ToolDefinition[] = [
},
{
name: 'search_templates',
description: `Search templates with multiple modes. Use searchMode='keyword' for text search, 'by_nodes' to find templates using specific nodes, 'by_task' for curated task-based templates, 'by_metadata' for filtering by complexity/setup time/services.`,
description: `Search templates with multiple modes. Use searchMode='keyword' for text search, 'by_nodes' to find templates using specific nodes, 'by_task' for curated task-based templates, 'by_metadata' for filtering by complexity/setup time/services, 'patterns' for lightweight workflow pattern summaries mined from 2,700+ templates.`,
inputSchema: {
type: 'object',
properties: {
searchMode: {
type: 'string',
enum: ['keyword', 'by_nodes', 'by_task', 'by_metadata'],
description: 'Search mode. keyword=text search (default), by_nodes=find by node types, by_task=curated task templates, by_metadata=filter by complexity/services',
enum: ['keyword', 'by_nodes', 'by_task', 'by_metadata', 'patterns'],
description: 'Search mode. keyword=text search (default), by_nodes=find by node types, by_task=curated task templates, by_metadata=filter by complexity/services, patterns=lightweight workflow pattern summaries',
default: 'keyword',
},
// For searchMode='keyword'
@@ -271,7 +276,7 @@ export const n8nDocumentationToolsFinal: ToolDefinition[] = [
items: { type: 'string' },
description: 'For searchMode=by_nodes: array of node types (e.g., ["n8n-nodes-base.httpRequest", "n8n-nodes-base.slack"])',
},
// For searchMode='by_task'
// For searchMode='by_task' or 'patterns'
task: {
type: 'string',
enum: [
@@ -286,7 +291,7 @@ export const n8nDocumentationToolsFinal: ToolDefinition[] = [
'api_integration',
'database_operations'
],
description: 'For searchMode=by_task: the type of task',
description: 'For searchMode=by_task: the type of task. For searchMode=patterns: optional category filter (omit for overview of all categories).',
},
// For searchMode='by_metadata'
category: {

View File

@@ -131,18 +131,23 @@ export class PropertyExtractor {
}
}
// Programmatic nodes - look for operation property in properties
// Programmatic nodes - look for operation properties in the properties array
// Note: nodes can have MULTIPLE operation properties, each with displayOptions.show.resource
// mapping to a different resource (e.g., Slack has 7 operation props for channel, message, etc.)
if (description.properties && Array.isArray(description.properties)) {
const operationProp = description.properties.find(
const operationProps = description.properties.filter(
(p: any) => p.name === 'operation' || p.name === 'action'
);
if (operationProp?.options) {
for (const operationProp of operationProps) {
if (!operationProp?.options) continue;
const resource = operationProp.displayOptions?.show?.resource?.[0];
operationProp.options.forEach((op: any) => {
operations.push({
operation: op.value,
name: op.name,
description: op.description
description: op.description,
...(resource ? { resource } : {})
});
});
}
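The reworked extractor above walks every `operation`/`action` property and tags each option with its owning resource. A minimal sketch of that loop, run against a hypothetical two-resource schema (not Slack's real one; the helper and interface names are ours):

```typescript
// Sketch of the multi-resource extraction above, on a made-up schema.
interface OpOption { value: string; name: string; description?: string }
interface Prop {
  name: string;
  options?: OpOption[];
  displayOptions?: { show?: { resource?: string[] } };
}

function extractOperations(properties: Prop[]) {
  const operations: Array<{ operation: string; name: string; resource?: string }> = [];
  // A node can have MULTIPLE operation props, one per resource.
  const operationProps = properties.filter(p => p.name === 'operation' || p.name === 'action');
  for (const prop of operationProps) {
    if (!prop.options) continue;
    const resource = prop.displayOptions?.show?.resource?.[0];
    for (const op of prop.options) {
      operations.push({ operation: op.value, name: op.name, ...(resource ? { resource } : {}) });
    }
  }
  return operations;
}

const ops = extractOperations([
  { name: 'operation', displayOptions: { show: { resource: ['channel'] } }, options: [{ value: 'create', name: 'Create' }] },
  { name: 'operation', displayOptions: { show: { resource: ['message'] } }, options: [{ value: 'post', name: 'Post' }] },
]);
// Each extracted operation now carries its owning resource.
```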

View File

@@ -0,0 +1,532 @@
#!/usr/bin/env node
import { createDatabaseAdapter } from '../database/database-adapter';
import * as zlib from 'zlib';
import * as fs from 'fs';
import * as path from 'path';
// Node types to exclude from analysis
const EXCLUDED_TYPES = new Set([
'n8n-nodes-base.stickyNote',
'n8n-nodes-base.noOp',
'n8n-nodes-base.manualTrigger',
]);
// Category-to-node mapping for classification
const TASK_NODE_MAPPING: Record<string, string[]> = {
ai_automation: [
'nodes-langchain.agent', 'nodes-langchain.openAi', 'nodes-langchain.chainLlm',
'nodes-langchain.lmChatOpenAi', 'nodes-langchain.lmChatAnthropic',
'nodes-langchain.chainSummarization', 'nodes-langchain.toolWorkflow',
'nodes-langchain.memoryBufferWindow', 'nodes-langchain.outputParserStructured',
],
webhook_processing: [
'nodes-base.webhook', 'nodes-base.respondToWebhook',
],
email_automation: [
'nodes-base.gmail', 'nodes-base.emailSend', 'nodes-base.microsoftOutlook',
'nodes-base.emailReadImap',
],
slack_integration: [
'nodes-base.slack', 'nodes-base.slackTrigger',
],
data_sync: [
'nodes-base.googleSheets', 'nodes-base.airtable', 'nodes-base.postgres',
'nodes-base.mysql', 'nodes-base.mongoDb',
],
data_transformation: [
'nodes-base.set', 'nodes-base.code', 'nodes-base.splitInBatches',
'nodes-base.merge', 'nodes-base.itemLists', 'nodes-base.filter',
'nodes-base.if', 'nodes-base.switch',
],
scheduling: [
'nodes-base.scheduleTrigger', 'nodes-base.cron',
],
api_integration: [
'nodes-base.httpRequest', 'nodes-base.webhook', 'nodes-base.graphql',
],
database_operations: [
'nodes-base.postgres', 'nodes-base.mongoDb', 'nodes-base.redis',
'nodes-base.mysql', 'nodes-base.mySql',
],
file_processing: [
'nodes-base.readBinaryFiles', 'nodes-base.writeBinaryFile',
'nodes-base.spreadsheetFile', 'nodes-base.googleDrive',
],
};
// Display name mapping for common node types (used for pattern strings)
const DISPLAY_NAMES: Record<string, string> = {
'n8n-nodes-base.webhook': 'Webhook',
'n8n-nodes-base.httpRequest': 'HTTP Request',
'n8n-nodes-base.code': 'Code',
'n8n-nodes-base.set': 'Set',
'n8n-nodes-base.if': 'If',
'n8n-nodes-base.switch': 'Switch',
'n8n-nodes-base.merge': 'Merge',
'n8n-nodes-base.filter': 'Filter',
'n8n-nodes-base.splitInBatches': 'Split In Batches',
'n8n-nodes-base.itemLists': 'Item Lists',
'n8n-nodes-base.respondToWebhook': 'Respond to Webhook',
'n8n-nodes-base.gmail': 'Gmail',
'n8n-nodes-base.emailSend': 'Send Email',
'n8n-nodes-base.slack': 'Slack',
'n8n-nodes-base.slackTrigger': 'Slack Trigger',
'n8n-nodes-base.googleSheets': 'Google Sheets',
'n8n-nodes-base.airtable': 'Airtable',
'n8n-nodes-base.postgres': 'Postgres',
'n8n-nodes-base.mysql': 'MySQL',
'n8n-nodes-base.mongoDb': 'MongoDB',
'n8n-nodes-base.redis': 'Redis',
'n8n-nodes-base.scheduleTrigger': 'Schedule Trigger',
'n8n-nodes-base.cron': 'Cron',
'n8n-nodes-base.googleDrive': 'Google Drive',
'n8n-nodes-base.spreadsheetFile': 'Spreadsheet File',
'n8n-nodes-base.readBinaryFiles': 'Read Binary Files',
'n8n-nodes-base.writeBinaryFile': 'Write Binary File',
'n8n-nodes-base.graphql': 'GraphQL',
'n8n-nodes-base.microsoftOutlook': 'Microsoft Outlook',
'n8n-nodes-base.emailReadImap': 'Email (IMAP)',
'n8n-nodes-base.noOp': 'No Op',
'@n8n/n8n-nodes-langchain.agent': 'AI Agent',
'@n8n/n8n-nodes-langchain.openAi': 'OpenAI',
'@n8n/n8n-nodes-langchain.chainLlm': 'LLM Chain',
'@n8n/n8n-nodes-langchain.lmChatOpenAi': 'OpenAI Chat Model',
'@n8n/n8n-nodes-langchain.lmChatAnthropic': 'Anthropic Chat Model',
'@n8n/n8n-nodes-langchain.chainSummarization': 'Summarization Chain',
'@n8n/n8n-nodes-langchain.toolWorkflow': 'Workflow Tool',
'@n8n/n8n-nodes-langchain.memoryBufferWindow': 'Window Buffer Memory',
'@n8n/n8n-nodes-langchain.outputParserStructured': 'Structured Output Parser',
'n8n-nodes-base.manualTrigger': 'Manual Trigger',
};
/**
* Check if a node type matches a category mapping entry.
* Category mappings use short forms like 'nodes-base.webhook'
* while actual types may be 'n8n-nodes-base.webhook' or '@n8n/n8n-nodes-langchain.agent'.
*/
function matchesCategory(nodeType: string, categoryPattern: string): boolean {
// includes() alone covers both cases, since endsWith(x) implies includes(x)
return nodeType.includes(categoryPattern);
}
/**
* Get display name for a node type.
*/
function getDisplayName(nodeType: string): string {
if (DISPLAY_NAMES[nodeType]) {
return DISPLAY_NAMES[nodeType];
}
// Extract the last part after the last dot
const parts = nodeType.split('.');
const name = parts[parts.length - 1];
// Convert camelCase to Title Case
return name
.replace(/([A-Z])/g, ' $1')
.replace(/^./, s => s.toUpperCase())
.trim();
}
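For types missing from DISPLAY_NAMES, getDisplayName falls back to a camelCase-to-Title-Case conversion. The fallback in isolation (the helper name is ours):

```typescript
// Fallback display-name conversion, as in getDisplayName above, for
// node types not present in the DISPLAY_NAMES map.
function fallbackDisplayName(nodeType: string): string {
  const parts = nodeType.split('.');
  const name = parts[parts.length - 1];
  return name
    .replace(/([A-Z])/g, ' $1')  // insert space before each capital
    .replace(/^./, s => s.toUpperCase())
    .trim();
}

console.log(fallbackDisplayName('n8n-nodes-base.splitInBatches')); // "Split In Batches"
```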
/**
* Determine if a node type is a trigger node.
*/
function isTriggerType(nodeType: string): boolean {
const lower = nodeType.toLowerCase();
// Broad heuristic: 'webhook' also matches Respond to Webhook, which is acceptable for role labeling
return lower.includes('trigger') || lower.includes('webhook');
}
/**
* Classify a set of node types into categories.
*/
function classifyTemplate(nodeTypes: string[], metadataCategories?: string[]): string[] {
const categories = new Set<string>();
for (const nodeType of nodeTypes) {
for (const [category, patterns] of Object.entries(TASK_NODE_MAPPING)) {
for (const pattern of patterns) {
if (matchesCategory(nodeType, pattern)) {
categories.add(category);
}
}
}
}
// Also include categories from metadata_json if available
if (metadataCategories && Array.isArray(metadataCategories)) {
for (const cat of metadataCategories) {
const normalized = cat.toLowerCase().replace(/[\s-]+/g, '_');
// Map metadata categories to our category keys
for (const key of Object.keys(TASK_NODE_MAPPING)) {
if (normalized.includes(key) || key.includes(normalized)) {
categories.add(key);
}
}
}
}
return Array.from(categories);
}
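classifyTemplate above can be exercised with a trimmed mapping. A sketch using two of the ten categories (the `MAPPING` constant and `classify` helper are ours, standing in for TASK_NODE_MAPPING and classifyTemplate):

```typescript
// Trimmed two-category version of the classification logic above.
const MAPPING: Record<string, string[]> = {
  webhook_processing: ['nodes-base.webhook', 'nodes-base.respondToWebhook'],
  slack_integration: ['nodes-base.slack', 'nodes-base.slackTrigger'],
};

function classify(nodeTypes: string[]): string[] {
  const categories = new Set<string>();
  for (const nodeType of nodeTypes) {
    for (const [category, patterns] of Object.entries(MAPPING)) {
      // Substring match handles the 'n8n-nodes-base.' prefix
      if (patterns.some(p => nodeType.includes(p))) categories.add(category);
    }
  }
  return [...categories];
}

console.log(classify(['n8n-nodes-base.webhook', 'n8n-nodes-base.slack']));
// → ['webhook_processing', 'slack_integration']
```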
interface NodeFrequency {
type: string;
count: number;
frequency: number;
displayName: string;
}
interface EdgeFrequency {
from: string;
to: string;
count: number;
}
interface ChainFrequency {
chain: string[];
count: number;
frequency: number;
}
interface CategoryData {
templateCount: number;
pattern: string;
nodes: Array<{ type: string; frequency: number; role: string; displayName: string }>;
commonChains: ChainFrequency[];
}
interface PatternOutput {
generatedAt: string;
templateCount: number;
categories: Record<string, CategoryData>;
global: {
topNodes: NodeFrequency[];
topEdges: EdgeFrequency[];
};
}
async function main(): Promise<void> {
const dbPath = path.resolve(__dirname, '../../data/nodes.db');
console.log(`Opening database: ${dbPath}`);
const db = await createDatabaseAdapter(dbPath);
// ---- Pass 1: Frequency & co-occurrence ----
console.log('\n=== Pass 1: Node frequency & co-occurrence ===');
const pass1Start = Date.now();
const lightRows = db.prepare(
'SELECT id, nodes_used, metadata_json, views FROM templates ORDER BY views DESC'
).all() as Array<{ id: number; nodes_used: string | null; metadata_json: string | null; views: number }>;
const templateCount = lightRows.length;
console.log(`Found ${templateCount} templates`);
// Global counters
const nodeFrequency = new Map<string, number>();
const pairCooccurrence = new Map<string, number>();
// Per-category tracking
const categoryTemplates = new Map<string, Set<number>>();
const categoryNodes = new Map<string, Map<string, number>>();
for (let i = 0; i < lightRows.length; i++) {
const row = lightRows[i];
if (i > 0 && i % 500 === 0) {
console.log(` Processing template ${i}/${templateCount}...`);
}
if (!row.nodes_used) continue;
let nodeTypes: string[];
try {
nodeTypes = JSON.parse(row.nodes_used);
if (!Array.isArray(nodeTypes)) continue;
} catch {
continue;
}
// Deduplicate and filter
const uniqueTypes = [...new Set(nodeTypes)].filter(t => !EXCLUDED_TYPES.has(t));
if (uniqueTypes.length === 0) continue;
// Count global frequency
for (const nodeType of uniqueTypes) {
nodeFrequency.set(nodeType, (nodeFrequency.get(nodeType) || 0) + 1);
}
// Count pairwise co-occurrence
for (let a = 0; a < uniqueTypes.length; a++) {
for (let b = a + 1; b < uniqueTypes.length; b++) {
const pair = [uniqueTypes[a], uniqueTypes[b]].sort().join('|||');
pairCooccurrence.set(pair, (pairCooccurrence.get(pair) || 0) + 1);
}
}
// Classify into categories
let metadataCategories: string[] | undefined;
if (row.metadata_json) {
try {
const meta = JSON.parse(row.metadata_json);
metadataCategories = meta.categories;
} catch {
// skip
}
}
const categories = classifyTemplate(uniqueTypes, metadataCategories);
for (const cat of categories) {
if (!categoryTemplates.has(cat)) {
categoryTemplates.set(cat, new Set());
categoryNodes.set(cat, new Map());
}
categoryTemplates.get(cat)!.add(row.id);
const catNodeMap = categoryNodes.get(cat)!;
for (const nodeType of uniqueTypes) {
catNodeMap.set(nodeType, (catNodeMap.get(nodeType) || 0) + 1);
}
}
}
const pass1Time = ((Date.now() - pass1Start) / 1000).toFixed(1);
console.log(`Pass 1 complete: ${pass1Time}s`);
console.log(` Unique node types: ${nodeFrequency.size}`);
console.log(` Categories found: ${categoryTemplates.size}`);
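Pass 1's pairwise co-occurrence counting keys each unordered pair canonically by sorting the two types before joining, so (A, B) and (B, A) hit the same counter. The counting step in isolation (the helper name is ours):

```typescript
// Pairwise co-occurrence counting as in Pass 1 above: every unordered
// pair of node types in one template increments a canonically keyed counter.
function countPairs(uniqueTypes: string[], counter: Map<string, number>): void {
  for (let a = 0; a < uniqueTypes.length; a++) {
    for (let b = a + 1; b < uniqueTypes.length; b++) {
      const pair = [uniqueTypes[a], uniqueTypes[b]].sort().join('|||');
      counter.set(pair, (counter.get(pair) || 0) + 1);
    }
  }
}

const pairs = new Map<string, number>();
countPairs(['webhook', 'set', 'slack'], pairs);
// 3 node types yield C(3,2) = 3 pair counters
```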
// ---- Pass 2: Connection topology ----
console.log('\n=== Pass 2: Connection topology ===');
const pass2Start = Date.now();
const compressedRows = db.prepare(
'SELECT id, nodes_used, workflow_json_compressed, views FROM templates ORDER BY views DESC'
).all() as Array<{ id: number; nodes_used: string | null; workflow_json_compressed: string | null; views: number }>;
// Edge frequency: sourceType -> targetType
const edgeFrequency = new Map<string, number>();
// Chain frequency (by category)
const categoryChains = new Map<string, Map<string, number>>();
// Global chains
const globalChains = new Map<string, number>();
let decompressedCount = 0;
let decompressFailCount = 0;
for (let i = 0; i < compressedRows.length; i++) {
const row = compressedRows[i];
if (i > 0 && i % 500 === 0) {
console.log(` Processing template ${i}/${templateCount}...`);
}
if (!row.workflow_json_compressed) continue;
let workflow: any;
try {
const decompressed = zlib.gunzipSync(Buffer.from(row.workflow_json_compressed, 'base64'));
workflow = JSON.parse(decompressed.toString());
decompressedCount++;
} catch {
decompressFailCount++;
continue;
}
const nodes: any[] = workflow.nodes || [];
const connections: Record<string, any> = workflow.connections || {};
// Build name -> type map
const nameToType = new Map<string, string>();
for (const node of nodes) {
if (node.name && node.type && !EXCLUDED_TYPES.has(node.type)) {
nameToType.set(node.name, node.type);
}
}
// Build adjacency list for BFS
const adjacency = new Map<string, string[]>();
// Parse connections and record edges
for (const sourceName of Object.keys(connections)) {
const sourceType = nameToType.get(sourceName);
if (!sourceType) continue;
const mainOutputs = connections[sourceName]?.main;
if (!Array.isArray(mainOutputs)) continue;
for (const outputGroup of mainOutputs) {
if (!Array.isArray(outputGroup)) continue;
for (const conn of outputGroup) {
if (!conn || !conn.node) continue;
const targetName = conn.node;
const targetType = nameToType.get(targetName);
if (!targetType) continue;
// Record edge
const edgeKey = `${sourceType}|||${targetType}`;
edgeFrequency.set(edgeKey, (edgeFrequency.get(edgeKey) || 0) + 1);
// Build adjacency for chain extraction
if (!adjacency.has(sourceName)) {
adjacency.set(sourceName, []);
}
adjacency.get(sourceName)!.push(targetName);
}
}
}
// Find trigger nodes (nodes with no incoming connections, or type contains 'Trigger')
const hasIncoming = new Set<string>();
for (const targets of adjacency.values()) {
for (const target of targets) {
hasIncoming.add(target);
}
}
const triggerNodes = nodes.filter(n => {
if (EXCLUDED_TYPES.has(n.type)) return false;
return !hasIncoming.has(n.name) || isTriggerType(n.type);
});
// Pre-compute categories for this template (avoids re-parsing nodes_used per chain)
let templateCategories: string[] = [];
try {
if (row.nodes_used) {
const parsed = JSON.parse(row.nodes_used);
if (Array.isArray(parsed)) {
templateCategories = classifyTemplate(parsed.filter((t: string) => !EXCLUDED_TYPES.has(t)));
}
}
} catch {
// skip
}
// BFS from each trigger, extract chains of length 2-4
for (const trigger of triggerNodes) {
const queue: Array<{ nodeName: string; path: string[] }> = [
{ nodeName: trigger.name, path: [nameToType.get(trigger.name)!] },
];
const visited = new Set<string>([trigger.name]);
while (queue.length > 0) {
const { nodeName, path: currentPath } = queue.shift()!;
// Record chains of length 2, 3, 4
if (currentPath.length >= 2 && currentPath.length <= 4) {
const chainKey = currentPath.join('|||');
globalChains.set(chainKey, (globalChains.get(chainKey) || 0) + 1);
for (const cat of templateCategories) {
if (!categoryChains.has(cat)) {
categoryChains.set(cat, new Map());
}
const catChainMap = categoryChains.get(cat)!;
catChainMap.set(chainKey, (catChainMap.get(chainKey) || 0) + 1);
}
}
// Stop extending at depth 4
if (currentPath.length >= 4) continue;
const neighbors = adjacency.get(nodeName) || [];
for (const neighbor of neighbors) {
if (visited.has(neighbor)) continue;
const neighborType = nameToType.get(neighbor);
if (!neighborType) continue;
visited.add(neighbor);
queue.push({ nodeName: neighbor, path: [...currentPath, neighborType] });
}
}
}
}
const pass2Time = ((Date.now() - pass2Start) / 1000).toFixed(1);
console.log(`Pass 2 complete: ${pass2Time}s`);
console.log(` Decompressed: ${decompressedCount}, Failed: ${decompressFailCount}`);
console.log(` Unique edges: ${edgeFrequency.size}`);
console.log(` Unique chains: ${globalChains.size}`);
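Pass 2's chain extraction is a BFS from each trigger that records every type path of length 2 to 4. A self-contained sketch on a hypothetical three-node graph (the helper name is ours; per-category bookkeeping is omitted):

```typescript
// BFS chain extraction as in Pass 2 above, stripped to the core walk.
function extractChains(
  adjacency: Map<string, string[]>,
  nameToType: Map<string, string>,
  triggerName: string
): string[][] {
  const chains: string[][] = [];
  const queue = [{ nodeName: triggerName, path: [nameToType.get(triggerName)!] }];
  const visited = new Set([triggerName]);
  while (queue.length > 0) {
    const { nodeName, path } = queue.shift()!;
    // Record paths of length 2-4; stop extending at depth 4
    if (path.length >= 2 && path.length <= 4) chains.push(path);
    if (path.length >= 4) continue;
    for (const next of adjacency.get(nodeName) || []) {
      if (visited.has(next)) continue;
      const t = nameToType.get(next);
      if (!t) continue;
      visited.add(next);
      queue.push({ nodeName: next, path: [...path, t] });
    }
  }
  return chains;
}

const adj = new Map<string, string[]>([['Webhook1', ['Set1']], ['Set1', ['Slack1']]]);
const types = new Map<string, string>([['Webhook1', 'webhook'], ['Set1', 'set'], ['Slack1', 'slack']]);
const chains = extractChains(adj, types, 'Webhook1');
// chains holds the 2-node and 3-node paths from the trigger
```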
// ---- Build output ----
console.log('\n=== Building output ===');
// Global top nodes
const topNodes: NodeFrequency[] = [...nodeFrequency.entries()]
.sort(([, a], [, b]) => b - a)
.slice(0, 50)
.map(([type, count]) => ({
type,
count,
frequency: Math.round((count / templateCount) * 100) / 100,
displayName: getDisplayName(type),
}));
// Global top edges
const topEdges: EdgeFrequency[] = [...edgeFrequency.entries()]
.sort(([, a], [, b]) => b - a)
.slice(0, 50)
.map(([key, count]) => {
const [from, to] = key.split('|||');
return { from, to, count };
});
// Build category data
const categories: Record<string, CategoryData> = {};
for (const [cat, templateIds] of categoryTemplates.entries()) {
const catNodeMap = categoryNodes.get(cat)!;
const catTemplateCount = templateIds.size;
// Top nodes for category, sorted by frequency
const catTopNodes = [...catNodeMap.entries()]
.sort(([, a], [, b]) => b - a)
.slice(0, 20)
.map(([type, count]) => ({
type,
frequency: Math.round((count / catTemplateCount) * 100) / 100,
role: isTriggerType(type) ? 'trigger' : 'action',
displayName: getDisplayName(type),
}));
// Top chains for category
const catChainMap = categoryChains.get(cat) || new Map<string, number>();
const catTopChains: ChainFrequency[] = [...catChainMap.entries()]
.sort(([, a], [, b]) => b - a)
.slice(0, 10)
.map(([chainKey, count]) => ({
chain: chainKey.split('|||'),
count,
frequency: Math.round((count / catTemplateCount) * 100) / 100,
}));
// Generate pattern string from top nodes
// Order: triggers first, then transforms/logic, then actions
const triggerNodes = catTopNodes.filter(n => n.role === 'trigger').slice(0, 1);
const actionNodes = catTopNodes.filter(n => n.role !== 'trigger').slice(0, 3);
const patternParts = [...triggerNodes, ...actionNodes].map(n => n.displayName);
const pattern = patternParts.join(' \u2192 ') || 'Mixed workflow';
categories[cat] = {
templateCount: catTemplateCount,
pattern,
nodes: catTopNodes,
commonChains: catTopChains,
};
}
const output: PatternOutput = {
generatedAt: new Date().toISOString(),
templateCount,
categories,
global: {
topNodes,
topEdges,
},
};
// Write output
const outputPath = path.resolve(__dirname, '../../data/workflow-patterns.json');
fs.writeFileSync(outputPath, JSON.stringify(output, null, 2));
console.log(`\nWrote ${Object.keys(categories).length} categories, ${templateCount} templates analyzed`);
console.log(`Output: ${outputPath}`);
console.log(`Pass 1: ${pass1Time}s, Pass 2: ${pass2Time}s`);
console.log(`Total: ${((Date.now() - pass1Start) / 1000).toFixed(1)}s`);
db.close();
}
main().catch((err) => {
console.error('Error:', err);
process.exit(1);
});

View File

@@ -19,6 +19,18 @@ interface ExpressionContext {
}
export class ExpressionValidator {
// Bare n8n variable references missing {{ }} wrappers
private static readonly BARE_EXPRESSION_PATTERNS: Array<{ pattern: RegExp; name: string }> = [
{ pattern: /^\$json[.\[]/, name: '$json' },
{ pattern: /^\$node\[/, name: '$node' },
{ pattern: /^\$input\./, name: '$input' },
{ pattern: /^\$execution\./, name: '$execution' },
{ pattern: /^\$workflow\./, name: '$workflow' },
{ pattern: /^\$prevNode\./, name: '$prevNode' },
{ pattern: /^\$env\./, name: '$env' },
{ pattern: /^\$(now|today|itemIndex|runIndex)$/, name: 'built-in variable' },
];
// Common n8n expression patterns
private static readonly EXPRESSION_PATTERN = /\{\{([\s\S]+?)\}\}/g;
private static readonly VARIABLE_PATTERNS = {
@@ -288,6 +300,32 @@ export class ExpressionValidator {
return combinedResult;
}
/**
* Detect bare n8n variable references missing {{ }} wrappers.
* Emits warnings since the value is technically valid as a literal string.
*/
private static checkBareExpression(
value: string,
path: string,
result: ExpressionValidationResult
): void {
if (value.includes('{{') || value.startsWith('=')) {
return;
}
const trimmed = value.trim();
for (const { pattern, name } of this.BARE_EXPRESSION_PATTERNS) {
if (pattern.test(trimmed)) {
result.warnings.push(
(path ? `${path}: ` : '') +
`Possible unwrapped expression: "${trimmed}" looks like an n8n ${name} reference. ` +
`Use "={{ ${trimmed} }}" to evaluate it as an expression.`
);
return;
}
}
}
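The new checkBareExpression warns on values like `$json.name` that were probably meant as expressions but lack the `{{ }}` wrappers. The heuristic in isolation (a trimmed pattern list; the helper name is ours):

```typescript
// Standalone sketch of the bare-expression heuristic above: a value like
// "$json.name" with no {{ }} is flagged, while "={{ $json.name }}" is not.
const BARE_PATTERNS: RegExp[] = [
  /^\$json[.\[]/, /^\$node\[/, /^\$input\./, /^\$env\./,
  /^\$(now|today|itemIndex|runIndex)$/,
];

function looksLikeBareExpression(value: string): boolean {
  // Already wrapped or already an expression string: nothing to flag
  if (value.includes('{{') || value.startsWith('=')) return false;
  const trimmed = value.trim();
  return BARE_PATTERNS.some(p => p.test(trimmed));
}

console.log(looksLikeBareExpression('$json.name'));        // true
console.log(looksLikeBareExpression('={{ $json.name }}')); // false
console.log(looksLikeBareExpression('plain text'));        // false
```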
/**
* Recursively validate expressions in parameters
*/
@@ -307,6 +345,9 @@ export class ExpressionValidator {
}
if (typeof obj === 'string') {
// Detect bare expressions missing {{ }} wrappers
this.checkBareExpression(obj, path, result);
if (obj.includes('{{')) {
const validation = this.validateExpression(obj, context);

View File

@@ -344,10 +344,10 @@ export function validateWorkflowStructure(workflow: Partial<Workflow>): string[]
});
}
// Validate filter-based nodes (IF v2.2+, Switch v3.2+) have complete metadata
// Validate If/Switch condition structures (version-conditional)
if (workflow.nodes) {
workflow.nodes.forEach((node, index) => {
const filterErrors = validateFilterBasedNodeMetadata(node);
const filterErrors = validateConditionNodeStructure(node);
if (filterErrors.length > 0) {
errors.push(...filterErrors.map(err => `Node "${node.name}" (index ${index}): ${err}`));
}
@@ -488,106 +488,81 @@ export function hasWebhookTrigger(workflow: Workflow): boolean {
}
/**
* Validate filter-based node metadata (IF v2.2+, Switch v3.2+)
* Returns array of error messages
* Validate If/Switch node conditions structure for ANY version.
* Version-conditional: validates the correct structure per version.
*/
export function validateFilterBasedNodeMetadata(node: WorkflowNode): string[] {
export function validateConditionNodeStructure(node: WorkflowNode): string[] {
const errors: string[] = [];
const typeVersion = node.typeVersion || 1;
// Check if node is filter-based
const isIFNode = node.type === 'n8n-nodes-base.if' && node.typeVersion >= 2.2;
const isSwitchNode = node.type === 'n8n-nodes-base.switch' && node.typeVersion >= 3.2;
if (!isIFNode && !isSwitchNode) {
return errors; // Not a filter-based node
}
// Validate IF node
if (isIFNode) {
const conditions = (node.parameters.conditions as any);
// Check conditions.options exists
if (!conditions?.options) {
errors.push(
'Missing required "conditions.options". ' +
'IF v2.2+ requires: {version: 2, leftValue: "", caseSensitive: true, typeValidation: "strict"}'
);
} else {
// Validate required fields
const requiredFields = {
version: 2,
leftValue: '',
caseSensitive: 'boolean',
typeValidation: 'strict'
};
for (const [field, expectedValue] of Object.entries(requiredFields)) {
if (!(field in conditions.options)) {
errors.push(
`Missing required field "conditions.options.${field}". ` +
`Expected value: ${typeof expectedValue === 'string' ? `"${expectedValue}"` : expectedValue}`
);
}
if (node.type === 'n8n-nodes-base.if') {
if (typeVersion >= 2.2) {
errors.push(...validateFilterOptionsRequired(node.parameters?.conditions, 'conditions'));
errors.push(...validateFilterConditionOperators(node.parameters?.conditions, 'conditions'));
} else if (typeVersion >= 2) {
// v2 has conditions but no options requirement; just validate operators
errors.push(...validateFilterConditionOperators(node.parameters?.conditions as any, 'conditions'));
}
} else if (node.type === 'n8n-nodes-base.switch') {
if (typeVersion >= 3.2) {
const rules = node.parameters?.rules as any;
if (rules?.rules && Array.isArray(rules.rules)) {
rules.rules.forEach((rule: any, i: number) => {
errors.push(...validateFilterOptionsRequired(rule.conditions, `rules.rules[${i}].conditions`));
errors.push(...validateFilterConditionOperators(rule.conditions, `rules.rules[${i}].conditions`));
});
}
}
// Validate operators in conditions
if (conditions?.conditions && Array.isArray(conditions.conditions)) {
conditions.conditions.forEach((condition: any, i: number) => {
const operatorErrors = validateOperatorStructure(condition.operator, `conditions.conditions[${i}].operator`);
errors.push(...operatorErrors);
});
}
}
// Validate Switch node
if (isSwitchNode) {
const rules = (node.parameters.rules as any);
if (rules?.rules && Array.isArray(rules.rules)) {
rules.rules.forEach((rule: any, ruleIndex: number) => {
// Check rule.conditions.options
if (!rule.conditions?.options) {
errors.push(
`Missing required "rules.rules[${ruleIndex}].conditions.options". ` +
'Switch v3.2+ requires: {version: 2, leftValue: "", caseSensitive: true, typeValidation: "strict"}'
);
} else {
// Validate required fields
const requiredFields = {
version: 2,
leftValue: '',
caseSensitive: 'boolean',
typeValidation: 'strict'
};
for (const [field, expectedValue] of Object.entries(requiredFields)) {
if (!(field in rule.conditions.options)) {
errors.push(
`Missing required field "rules.rules[${ruleIndex}].conditions.options.${field}". ` +
`Expected value: ${typeof expectedValue === 'string' ? `"${expectedValue}"` : expectedValue}`
);
}
}
}
// Validate operators in rule conditions
if (rule.conditions?.conditions && Array.isArray(rule.conditions.conditions)) {
rule.conditions.conditions.forEach((condition: any, condIndex: number) => {
const operatorErrors = validateOperatorStructure(
condition.operator,
`rules.rules[${ruleIndex}].conditions.conditions[${condIndex}].operator`
);
errors.push(...operatorErrors);
});
}
});
}
}
return errors;
}
function validateFilterOptionsRequired(conditions: any, path: string): string[] {
const errors: string[] = [];
if (!conditions || typeof conditions !== 'object') return errors;
if (!conditions.options) {
errors.push(
`Missing required "${path}.options". ` +
'Filter-based nodes require: {version: 2, leftValue: "", caseSensitive: true, typeValidation: "strict"}'
);
} else {
const requiredFields: [string, string][] = [
['version', '2'],
['leftValue', '""'],
['caseSensitive', 'true'],
['typeValidation', '"strict"'],
];
for (const [field, display] of requiredFields) {
if (!(field in conditions.options)) {
errors.push(
`Missing required field "${path}.options.${field}". Expected value: ${display}`
);
}
}
}
return errors;
}
function validateFilterConditionOperators(conditions: any, path: string): string[] {
const errors: string[] = [];
if (!conditions?.conditions || !Array.isArray(conditions.conditions)) return errors;
conditions.conditions.forEach((condition: any, i: number) => {
errors.push(...validateOperatorStructure(
condition.operator,
`${path}.conditions[${i}].operator`
));
});
return errors;
}
/** @deprecated Use validateConditionNodeStructure instead */
export function validateFilterBasedNodeMetadata(node: WorkflowNode): string[] {
return validateConditionNodeStructure(node);
}
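The helper pair above centralizes what the old inline code duplicated between IF and Switch. As a hedged sketch of the core check (field list taken from the error messages above; `missingOptionFields` is a hypothetical condensed variant, not the repo's function):

```typescript
// The four option fields a v2.2+ IF / v3.2+ Switch conditions object must
// carry, per the error messages above.
const REQUIRED_OPTION_FIELDS = ['version', 'leftValue', 'caseSensitive', 'typeValidation'] as const;

// Hypothetical condensed check: returns the names of missing fields,
// or ['options'] when the options object is absent entirely.
function missingOptionFields(conditions: unknown): string[] {
  if (!conditions || typeof conditions !== 'object') return [];
  const options = (conditions as { options?: Record<string, unknown> }).options;
  if (!options) return ['options'];
  return REQUIRED_OPTION_FIELDS.filter(f => !(f in options));
}

// A conditions object that passes the v2.2+ structure check:
const validConditions = {
  options: { version: 2, leftValue: '', caseSensitive: true, typeValidation: 'strict' },
  conditions: [],
  combinator: 'and',
};
```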
/**
* Validate operator structure
* Ensures operator has correct format: {type, operation, singleValue?}

View File

@@ -1713,12 +1713,16 @@ export class NodeSpecificValidators {
// Validate mode-specific requirements
if (config.mode === 'manual') {
// In manual mode, at least one field should be defined
const hasFields = config.values && Object.keys(config.values).length > 0;
const hasFieldsViaValues = config.values && Object.keys(config.values).length > 0;
const hasFieldsViaAssignments = config.assignments?.assignments
&& Array.isArray(config.assignments.assignments)
&& config.assignments.assignments.length > 0;
const hasFields = hasFieldsViaValues || hasFieldsViaAssignments;
if (!hasFields && !config.jsonOutput) {
warnings.push({
type: 'missing_common',
message: 'Set node has no fields configured - will output empty items',
suggestion: 'Add fields in the Values section or use JSON mode'
suggestion: 'Add field assignments or use JSON mode'
});
}
}
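The v2/v3 duality above can be distilled into a tiny standalone predicate (a sketch with a hypothetical helper name, not the repo's code): Set v2 stores fields under `values`, Set v3 under `assignments.assignments`, and either one counts.

```typescript
// Standalone sketch of the combined "has fields" check: true when the Set
// node config defines at least one field via either the v2 `values` shape
// or the v3 `assignments.assignments` shape.
function setNodeHasFields(config: any): boolean {
  const viaValues = !!config.values && Object.keys(config.values).length > 0;
  const viaAssignments = Array.isArray(config.assignments?.assignments)
    && config.assignments.assignments.length > 0;
  return viaValues || viaAssignments;
}
```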

View File

@@ -15,6 +15,7 @@ import { validateAISpecificNodes, hasAINodes, AI_CONNECTION_TYPES } from './ai-n
import { isAIToolSubNode } from './ai-tool-validators';
import { isTriggerNode } from '../utils/node-type-utils';
import { isNonExecutableNode } from '../utils/node-classification';
import { validateConditionNodeStructure } from './n8n-validation';
import { ToolVariantGenerator } from './tool-variant-generator';
const logger = new Logger({ prefix: '[WorkflowValidator]' });
@@ -579,6 +580,19 @@ export class WorkflowValidator {
});
});
// Validate If/Switch conditions structure (version-conditional)
if (node.type === 'n8n-nodes-base.if' || node.type === 'n8n-nodes-base.switch') {
const conditionErrors = validateConditionNodeStructure(node as any);
for (const err of conditionErrors) {
result.errors.push({
type: 'error',
nodeId: node.id,
nodeName: node.name,
message: err
});
}
}
} catch (error) {
result.errors.push({
type: 'error',

View File

@@ -334,14 +334,14 @@ describe('HTTP Server Session Management', () => {
server = new SingleSessionHTTPServer();
// Mock expired sessions
// Note: Default session timeout is 5 minutes (configurable via SESSION_TIMEOUT_MINUTES)
// Note: Default session timeout is 30 minutes (configurable via SESSION_TIMEOUT_MINUTES)
const mockSessionMetadata = {
'session-1': {
lastAccess: new Date(Date.now() - 10 * 60 * 1000), // 10 minutes ago (expired with 5 min timeout)
lastAccess: new Date(Date.now() - 45 * 60 * 1000), // 45 minutes ago (expired with 30 min timeout)
createdAt: new Date(Date.now() - 60 * 60 * 1000)
},
'session-2': {
lastAccess: new Date(Date.now() - 2 * 60 * 1000), // 2 minutes ago (not expired with 5 min timeout)
lastAccess: new Date(Date.now() - 10 * 60 * 1000), // 10 minutes ago (not expired with 30 min timeout)
createdAt: new Date(Date.now() - 20 * 60 * 1000)
}
};
@@ -517,15 +517,15 @@ describe('HTTP Server Session Management', () => {
it('should get session metrics correctly', async () => {
server = new SingleSessionHTTPServer();
// Note: Default session timeout is 5 minutes (configurable via SESSION_TIMEOUT_MINUTES)
// Note: Default session timeout is 30 minutes (configurable via SESSION_TIMEOUT_MINUTES)
const now = Date.now();
(server as any).sessionMetadata = {
'active-session': {
lastAccess: new Date(now - 2 * 60 * 1000), // 2 minutes ago (not expired with 5 min timeout)
lastAccess: new Date(now - 10 * 60 * 1000), // 10 minutes ago (not expired with 30 min timeout)
createdAt: new Date(now - 20 * 60 * 1000)
},
'expired-session': {
lastAccess: new Date(now - 10 * 60 * 1000), // 10 minutes ago (expired with 5 min timeout)
lastAccess: new Date(now - 45 * 60 * 1000), // 45 minutes ago (expired with 30 min timeout)
createdAt: new Date(now - 60 * 60 * 1000)
}
};
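The updated timeout arithmetic in these comments can be sketched as follows (hypothetical helper; the real server reads `SESSION_TIMEOUT_MINUTES` from the environment, which is elided here for self-containment):

```typescript
// Default session timeout per the updated comments above; the server is
// assumed to allow overriding this via the SESSION_TIMEOUT_MINUTES env var.
const SESSION_TIMEOUT_MINUTES = 30;

// A session is expired once its last access is older than the timeout.
function isSessionExpired(lastAccess: Date, now: Date = new Date()): boolean {
  return now.getTime() - lastAccess.getTime() > SESSION_TIMEOUT_MINUTES * 60 * 1000;
}
```

With a 30-minute timeout, a session touched 45 minutes ago is expired while one touched 10 minutes ago is not, which is exactly why the test fixtures above moved from 10/2 minutes to 45/10 minutes.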

View File

@@ -17,9 +17,13 @@ vi.mock('@/services/workflow-diff-engine');
vi.mock('@/services/n8n-api-client');
vi.mock('@/config/n8n-api');
vi.mock('@/utils/logger');
vi.mock('@/mcp/handlers-n8n-manager', () => ({
getN8nApiClient: vi.fn(),
}));
vi.mock('@/mcp/handlers-n8n-manager', async (importOriginal) => {
const actual = await importOriginal<typeof import('@/mcp/handlers-n8n-manager')>();
return {
...actual,
getN8nApiClient: vi.fn(),
};
});
// Import mocked modules
import { getN8nApiClient } from '@/mcp/handlers-n8n-manager';
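The fix above matters because the original factory returned only `getN8nApiClient`, silently dropping every other export of the module. A dependency-free sketch of what the `importOriginal` pattern buys (`actualModule` and `otherHelper` are illustrative, not the real module shape):

```typescript
// Partial mock = spread the real module, override only what's under test.
function partialMock<T extends object>(actual: T, overrides: Partial<T>): T {
  return { ...actual, ...overrides };
}

// Stand-in for the real handlers-n8n-manager module (illustrative shape).
const actualModule = {
  getN8nApiClient: () => 'real client',
  otherHelper: (x: number) => x + 1,
};

// Only getN8nApiClient is replaced; otherHelper keeps its real behavior.
const mocked = partialMock(actualModule, { getN8nApiClient: () => 'mock client' });
```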

View File

@@ -151,7 +151,7 @@ describe('n8nDocumentationToolsFinal', () => {
it('should have searchMode parameter with correct enum values', () => {
const searchModeParam = tool?.inputSchema.properties?.searchMode;
expect(searchModeParam).toBeDefined();
expect(searchModeParam.enum).toEqual(['keyword', 'by_nodes', 'by_task', 'by_metadata']);
expect(searchModeParam.enum).toEqual(['keyword', 'by_nodes', 'by_task', 'by_metadata', 'patterns']);
expect(searchModeParam.default).toBe('keyword');
});

View File

@@ -105,6 +105,78 @@ describe('ExpressionValidator', () => {
});
});
describe('bare expression detection', () => {
it('should warn on bare $json.name', () => {
const params = { value: '$json.name' };
const result = ExpressionValidator.validateNodeExpressions(params, defaultContext);
expect(result.warnings.some(w => w.includes('unwrapped expression'))).toBe(true);
});
it('should warn on bare $node["Webhook"].json', () => {
const params = { value: '$node["Webhook"].json' };
const result = ExpressionValidator.validateNodeExpressions(params, defaultContext);
expect(result.warnings.some(w => w.includes('unwrapped expression'))).toBe(true);
});
it('should warn on bare $now', () => {
const params = { value: '$now' };
const result = ExpressionValidator.validateNodeExpressions(params, defaultContext);
expect(result.warnings.some(w => w.includes('unwrapped expression'))).toBe(true);
});
it('should warn on bare $execution.id', () => {
const params = { value: '$execution.id' };
const result = ExpressionValidator.validateNodeExpressions(params, defaultContext);
expect(result.warnings.some(w => w.includes('unwrapped expression'))).toBe(true);
});
it('should warn on bare $env.API_KEY', () => {
const params = { value: '$env.API_KEY' };
const result = ExpressionValidator.validateNodeExpressions(params, defaultContext);
expect(result.warnings.some(w => w.includes('unwrapped expression'))).toBe(true);
});
it('should warn on bare $input.item.json.field', () => {
const params = { value: '$input.item.json.field' };
const result = ExpressionValidator.validateNodeExpressions(params, defaultContext);
expect(result.warnings.some(w => w.includes('unwrapped expression'))).toBe(true);
});
it('should NOT warn on properly wrapped ={{ $json.name }}', () => {
const params = { value: '={{ $json.name }}' };
const result = ExpressionValidator.validateNodeExpressions(params, defaultContext);
expect(result.warnings.some(w => w.includes('unwrapped expression'))).toBe(false);
});
it('should NOT warn on properly wrapped {{ $json.name }}', () => {
const params = { value: '{{ $json.name }}' };
const result = ExpressionValidator.validateNodeExpressions(params, defaultContext);
expect(result.warnings.some(w => w.includes('unwrapped expression'))).toBe(false);
});
it('should NOT warn when $json appears mid-string', () => {
const params = { value: 'The $json data is ready' };
const result = ExpressionValidator.validateNodeExpressions(params, defaultContext);
expect(result.warnings.some(w => w.includes('unwrapped expression'))).toBe(false);
});
it('should NOT warn on plain text', () => {
const params = { value: 'Hello World' };
const result = ExpressionValidator.validateNodeExpressions(params, defaultContext);
expect(result.warnings.some(w => w.includes('unwrapped expression'))).toBe(false);
});
it('should detect bare expression in nested structure', () => {
const params = {
assignments: {
assignments: [{ value: '$json.name' }]
}
};
const result = ExpressionValidator.validateNodeExpressions(params, defaultContext);
expect(result.warnings.some(w => w.includes('unwrapped expression'))).toBe(true);
});
});
describe('edge cases', () => {
it('should handle empty expressions', () => {
const result = ExpressionValidator.validateExpression('{{ }}', defaultContext);

View File

@@ -2384,6 +2384,73 @@ return [{"json": {"result": result}}]
});
});
describe('validateSet', () => {
it('should not warn when Set v3 has populated assignments', () => {
context.config = {
mode: 'manual',
assignments: {
assignments: [
{ id: '1', name: 'status', value: 'active', type: 'string' }
]
}
};
NodeSpecificValidators.validateSet(context);
const fieldWarnings = context.warnings.filter(w => w.message.includes('no fields configured'));
expect(fieldWarnings).toHaveLength(0);
});
it('should not warn when Set v2 has populated values', () => {
context.config = {
mode: 'manual',
values: {
string: [{ name: 'field', value: 'val' }]
}
};
NodeSpecificValidators.validateSet(context);
const fieldWarnings = context.warnings.filter(w => w.message.includes('no fields configured'));
expect(fieldWarnings).toHaveLength(0);
});
it('should warn when Set v3 has empty assignments array', () => {
context.config = {
mode: 'manual',
assignments: { assignments: [] }
};
NodeSpecificValidators.validateSet(context);
const fieldWarnings = context.warnings.filter(w => w.message.includes('no fields configured'));
expect(fieldWarnings).toHaveLength(1);
});
it('should warn when Set manual mode has no values or assignments', () => {
context.config = {
mode: 'manual'
};
NodeSpecificValidators.validateSet(context);
const fieldWarnings = context.warnings.filter(w => w.message.includes('no fields configured'));
expect(fieldWarnings).toHaveLength(1);
});
it('should not warn when Set manual mode has jsonOutput', () => {
context.config = {
mode: 'manual',
jsonOutput: '{"key":"value"}'
};
NodeSpecificValidators.validateSet(context);
const fieldWarnings = context.warnings.filter(w => w.message.includes('no fields configured'));
expect(fieldWarnings).toHaveLength(0);
});
});
describe('validateAIAgent', () => {
let context: NodeValidationContext;

View File

@@ -4,6 +4,7 @@ import { NodeRepository } from '@/database/node-repository';
import { EnhancedConfigValidator } from '@/services/enhanced-config-validator';
import { ExpressionValidator } from '@/services/expression-validator';
import { createWorkflow } from '@tests/utils/builders/workflow.builder';
import { validateConditionNodeStructure } from '@/services/n8n-validation';
// Mock dependencies
vi.mock('@/database/node-repository');
@@ -743,4 +744,156 @@ describe('WorkflowValidator', () => {
expect(result.statistics.validConnections).toBe(3);
});
});
// ─── If/Switch conditions validation ──────────────────────────────
describe('If/Switch conditions validation (validateConditionNodeStructure)', () => {
it('If v2.3 missing conditions.options → error', () => {
const node = {
id: '1', name: 'IF', type: 'n8n-nodes-base.if', typeVersion: 2.3,
position: [0, 0] as [number, number],
parameters: {
conditions: {
conditions: [{ leftValue: '={{ $json.x }}', rightValue: 'a', operator: { type: 'string', operation: 'equals' } }],
combinator: 'and'
}
}
};
const errors = validateConditionNodeStructure(node);
expect(errors.length).toBeGreaterThan(0);
expect(errors.some(e => e.includes('options'))).toBe(true);
});
it('If v2.3 with complete options → no error', () => {
const node = {
id: '1', name: 'IF', type: 'n8n-nodes-base.if', typeVersion: 2.3,
position: [0, 0] as [number, number],
parameters: {
conditions: {
options: { version: 2, leftValue: '', caseSensitive: true, typeValidation: 'strict' },
conditions: [{ leftValue: '={{ $json.x }}', rightValue: 'a', operator: { type: 'string', operation: 'equals' } }],
combinator: 'and'
}
}
};
const errors = validateConditionNodeStructure(node);
expect(errors).toHaveLength(0);
});
it('If v2.0 without options → no error', () => {
const node = {
id: '1', name: 'IF', type: 'n8n-nodes-base.if', typeVersion: 2.0,
position: [0, 0] as [number, number],
parameters: {
conditions: {
conditions: [{ leftValue: '={{ $json.x }}', rightValue: 'a', operator: { type: 'string', operation: 'equals' } }],
combinator: 'and'
}
}
};
const errors = validateConditionNodeStructure(node);
expect(errors).toHaveLength(0);
});
it('If v2.0 with bad operator (missing type) → operator error', () => {
const node = {
id: '1', name: 'IF', type: 'n8n-nodes-base.if', typeVersion: 2.0,
position: [0, 0] as [number, number],
parameters: {
conditions: {
conditions: [{ leftValue: '={{ $json.x }}', rightValue: 'a', operator: { operation: 'equals' } }],
combinator: 'and'
}
}
};
const errors = validateConditionNodeStructure(node);
expect(errors.length).toBeGreaterThan(0);
expect(errors.some(e => e.includes('type'))).toBe(true);
});
it('If v1 with old format → no errors', () => {
const node = {
id: '1', name: 'IF', type: 'n8n-nodes-base.if', typeVersion: 1,
position: [0, 0] as [number, number],
parameters: {
conditions: { string: [{ value1: '={{ $json.x }}', value2: 'a', operation: 'equals' }] }
}
};
const errors = validateConditionNodeStructure(node);
expect(errors).toHaveLength(0);
});
it('Switch v3.2 missing rule options → error', () => {
const node = {
id: '1', name: 'Switch', type: 'n8n-nodes-base.switch', typeVersion: 3.2,
position: [0, 0] as [number, number],
parameters: {
rules: {
rules: [{
conditions: {
conditions: [{ leftValue: '={{ $json.x }}', rightValue: 'a', operator: { type: 'string', operation: 'equals' } }],
combinator: 'and'
},
outputKey: 'Branch 1'
}]
}
}
};
const errors = validateConditionNodeStructure(node);
expect(errors.length).toBeGreaterThan(0);
expect(errors.some(e => e.includes('rules.rules[0].conditions.options'))).toBe(true);
});
it('Switch v3.2 with complete options → no error', () => {
const node = {
id: '1', name: 'Switch', type: 'n8n-nodes-base.switch', typeVersion: 3.2,
position: [0, 0] as [number, number],
parameters: {
rules: {
rules: [{
conditions: {
options: { version: 2, leftValue: '', caseSensitive: true, typeValidation: 'strict' },
conditions: [{ leftValue: '={{ $json.x }}', rightValue: 'a', operator: { type: 'string', operation: 'equals' } }],
combinator: 'and'
},
outputKey: 'Branch 1'
}]
}
}
};
const errors = validateConditionNodeStructure(node);
expect(errors).toHaveLength(0);
});
it('If v2.2 with empty parameters (missing conditions) → no error (graceful)', () => {
const node = {
id: '1', name: 'IF', type: 'n8n-nodes-base.if', typeVersion: 2.2,
position: [0, 0] as [number, number],
parameters: {}
};
const errors = validateConditionNodeStructure(node);
// Empty parameters are allowed — draft/incomplete nodes are valid at this level
expect(errors).toHaveLength(0);
});
it('Switch v3.0 without options → no error', () => {
const node = {
id: '1', name: 'Switch', type: 'n8n-nodes-base.switch', typeVersion: 3.0,
position: [0, 0] as [number, number],
parameters: {
rules: {
rules: [{
conditions: {
conditions: [{ leftValue: '={{ $json.x }}', rightValue: 'a', operator: { type: 'string', operation: 'equals' } }],
combinator: 'and'
},
outputKey: 'Branch 1'
}]
}
}
};
const errors = validateConditionNodeStructure(node);
expect(errors).toHaveLength(0);
});
});
});