Mirror of https://github.com/czlonkowski/n8n-mcp.git (synced 2026-04-05 17:13:08 +00:00)

Comparing 4 commits:

- 1750fb4acf
- 796c427317
- 12d7d5bdb6
- 2d4115530c
CHANGELOG.md (79 changed lines)
@@ -7,6 +7,85 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

## [2.47.1] - 2026-04-04

### Fixed

- **Credential get fallback** — `n8n_manage_credentials({action: "get"})` now falls back to list + filter when `GET /credentials/:id` returns 403 Forbidden or 405 Method Not Allowed, since this endpoint is not in the n8n public API
- **Credential update accepts `type` field** — `n8n_manage_credentials({action: "update"})` now forwards the optional `type` field to the n8n API, which some n8n versions require in the PATCH payload
- **Credential response stripping** — `create` and `update` handlers now strip the `data` field from responses (defense-in-depth, matching the `get` handler pattern)
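The list + filter fallback in the first entry can be sketched as below. This is an illustrative reconstruction, not the project's actual handler: `fetchCredential` and the `client` shape are hypothetical stand-ins for the real API client.

```javascript
// Hypothetical sketch of the list + filter fallback described above.
// `client.get` / `client.listCredentials` are illustrative stand-ins.
async function fetchCredential(client, id) {
  try {
    return await client.get(`/credentials/${id}`);
  } catch (err) {
    // GET /credentials/:id is not part of the n8n public API, so some
    // instances answer 403 Forbidden or 405 Method Not Allowed.
    if (err.status === 403 || err.status === 405) {
      const all = await client.listCredentials();
      const match = all.find((c) => c.id === id);
      if (!match) throw new Error(`Credential ${id} not found`);
      return match;
    }
    throw err; // anything else is a real failure
  }
}

// Demo client whose direct GET is forbidden, forcing the fallback path.
const demoClient = {
  get: async () => {
    const e = new Error('Forbidden');
    e.status = 403;
    throw e;
  },
  listCredentials: async () => [{ id: 'cred-1', name: 'My API key' }],
};
```

With `demoClient`, `fetchCredential(demoClient, 'cred-1')` resolves through the list + filter path instead of failing on the 403.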

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
## [2.47.0] - 2026-04-04

### Added

- **`n8n_audit_instance` tool** — Security audit combining n8n's built-in `POST /audit` API (5 risk categories: credentials, database, nodes, instance, filesystem) with deep workflow scanning. Custom checks include 50+ regex patterns for hardcoded secrets (OpenAI, AWS, Stripe, GitHub, Slack, SendGrid, and more), unauthenticated webhook detection, error handling gap analysis, data retention risk assessment, and PII detection. Returns a compact markdown report grouped by workflow with a Remediation Playbook showing auto-fixable items, items requiring review, and items requiring user action. Inspired by [Audit n8n Workflows Security](https://wotai.co/blog/audit-n8n-workflows-security)
- **`n8n_manage_credentials` tool** — Full credential CRUD with schema discovery. Actions: list, get, create, update, delete, getSchema. Enables AI agents to create credentials and assign them to workflow nodes as part of security remediation. Credential secret values are never logged or returned in responses (defense-in-depth)
- **Credential scanner service** (`src/services/credential-scanner.ts`) — 50+ regex patterns ported from the production cache ingestion pipeline, covering AI/ML keys, cloud/DevOps tokens, GitHub PATs, payment keys, email/marketing APIs, and more. Per-node scanning with masked output
- **Workflow security scanner** (`src/services/workflow-security-scanner.ts`) — 4 configurable checks: hardcoded secrets, unauthenticated webhooks (excludes respondToWebhook), error handling gaps (3+ node threshold), data retention settings
- **Audit report builder** (`src/services/audit-report-builder.ts`) — Generates compact grouped-by-workflow markdown with tables, built-in audit rendering, and a Remediation Playbook with tool chains for auto-fixing
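Pattern-based scanning with scan-time masking can be sketched as follows. The single pattern and the `maskSecret` shape here are assumptions for illustration only; the real service ships 50+ patterns.

```javascript
// Illustrative sketch of per-node secret scanning with masked output.
// The pattern below is an assumption, not one of the project's actual
// 50+ patterns.
const SECRET_PATTERNS = [
  { name: 'generic-api-key', regex: /\b(?:api[_-]?key)["':\s=]+([A-Za-z0-9_\-]{16,})/gi },
];

function maskSecret(value) {
  // Keep only the first and last two characters so the finding is
  // identifiable without exposing the raw value.
  if (value.length <= 4) return '****';
  return `${value.slice(0, 2)}${'*'.repeat(value.length - 4)}${value.slice(-2)}`;
}

function scanNodeParameters(json) {
  const findings = [];
  for (const { name, regex } of SECRET_PATTERNS) {
    for (const match of json.matchAll(regex)) {
      // Mask at scan time: the raw value is never stored in results.
      findings.push({ pattern: name, masked: maskSecret(match[1]) });
    }
  }
  return findings;
}
```

Masking inside the scan loop, rather than when rendering the report, is what guarantees raw values never sit in intermediate results.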

### Changed

- **CLAUDE.md** — Removed the Session Persistence section (no longer needed) and added an OSS sensitivity notice to prevent secrets from landing in committed files
- **API client request interceptor** — Now redacts request body for `/credentials` endpoints to prevent secret leakage in debug logs
- **Credential handler responses** — All credential handlers (get, create, update) strip the `data` field from responses as defense-in-depth against future n8n versions returning decrypted values

### Security

- **Secret masking at scan time** — `maskSecret()` is called immediately during scanning; raw values are never stored in detection results
- **Credential body redaction** — API client interceptor suppresses body logging for credential endpoints
- **Cursor dedup guard** — `listAllWorkflows()` tracks seen cursors to prevent infinite pagination loops
- **PII findings classified as review** — PII detections (email, phone, credit card) are marked as `review_recommended` instead of `auto_fixable`, preventing nonsensical auto-remediation

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
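The cursor dedup guard noted under Security can be sketched like this; `fetchPage` is a hypothetical page-fetching callback standing in for the real API client method.

```javascript
// Hypothetical sketch of the cursor dedup guard: stop paginating if the
// API ever returns a cursor we have already seen, instead of looping forever.
async function listAllWorkflows(fetchPage) {
  const items = [];
  const seenCursors = new Set();
  let cursor;
  do {
    const page = await fetchPage(cursor);
    items.push(...page.data);
    cursor = page.nextCursor;
    if (cursor && seenCursors.has(cursor)) break; // infinite-loop guard
    if (cursor) seenCursors.add(cursor);
  } while (cursor);
  return items;
}
```

Without the `seenCursors` check, a buggy or adversarial API that keeps returning the same cursor would make the loop spin forever.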

## [2.46.1] - 2026-04-03

### Fixed

- **Fix SSE reconnection loop** — SSE clients entered rapid reconnection loops because `POST /mcp` never routed messages to `SSEServerTransport.handlePostMessage()` (Fixes #617). Root cause: SSE sessions were stored in a separate `this.session` property invisible to the StreamableHTTP POST handler
- **Add authentication to SSE endpoints** — `GET /sse` and `POST /messages` now require Bearer token authentication, closing an auth gap where SSE connections were unauthenticated
- **Fix rate limiter exhaustion during reconnection** — Added `skipSuccessfulRequests: true` to `authLimiter` so legitimate requests don't count toward the rate limit, preventing 429 storms during SSE reconnection loops

### Changed

- **Separate SSE endpoints (SDK pattern)** — SSE transport now uses dedicated `GET /sse` + `POST /messages` endpoints instead of sharing `/mcp` with StreamableHTTP, following the official MCP SDK backward-compatible server pattern
- **Unified auth into `authenticateRequest()` method** — Consolidated duplicated Bearer token validation logic from three endpoints into a single method with consistent JSON-RPC error responses
- **SSE sessions use shared transports map** — Removed the legacy `this.session` singleton; SSE sessions are now stored in the same `this.transports` map as StreamableHTTP sessions, with `instanceof` guards for type discrimination

### Deprecated

- **SSE transport (`GET /sse`, `POST /messages`)** — SSE is deprecated in MCP SDK v1.x and removed in v2.x. Clients should migrate to StreamableHTTP (`POST /mcp`). These endpoints will be removed in a future major release

### Security

- **Rate limiting on all authenticated endpoints** — `authLimiter` is now applied to `GET /sse` and `POST /messages` in addition to `POST /mcp`
- **Transport type guards** — `instanceof` checks prevent cross-protocol access (SSE session IDs are rejected on the StreamableHTTP endpoint and vice versa)

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
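The `instanceof` type-guard pattern can be illustrated with stub classes. `resolveTransport` is a hypothetical helper; the real server uses the MCP SDK's transport classes directly.

```javascript
// Stub transport classes standing in for the MCP SDK's real ones.
class SSEServerTransport {}
class StreamableHTTPServerTransport {}

// Look up a session's transport, but only hand it back if it was created
// by the expected protocol. A session ID minted by one protocol cannot
// be driven through the other protocol's endpoint.
function resolveTransport(transports, sessionId, expected) {
  const t = transports[sessionId];
  if (!t) return null;
  return t instanceof expected ? t : null;
}
```

The caller then returns a 400-style error when `resolveTransport` yields `null`, which is the cross-protocol rejection described above.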
## [2.46.0] - 2026-04-03

### Added

- **`patchNodeField` operation for `n8n_update_partial_workflow`** — A dedicated, strict find/replace operation for surgical string edits in node fields (Fixes #696). Key features:
  - **Strict error handling**: errors if the find string is not found (unlike `__patch_find_replace`, which only warns)
  - **Ambiguity detection**: errors if the find string matches multiple times unless `replaceAll: true` is set
  - **`replaceAll` flag**: replace all occurrences of a string in a single patch
  - **`regex` flag**: use regex patterns for advanced find/replace
  - Top-level operation type for better discoverability
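The strict semantics above can be sketched as a plain-string version (the `regex` flag omitted). `patchField` is an illustrative name, not the project's actual implementation.

```javascript
// Sketch of strict find/replace with ambiguity detection, per the
// feature list above. Plain strings only; the real operation also
// supports a `regex` flag.
function patchField(value, find, replace, { replaceAll = false } = {}) {
  const first = value.indexOf(find);
  if (first === -1) {
    // Strict: a miss is an error, not a warning.
    throw new Error(`Find string not found: ${JSON.stringify(find)}`);
  }
  const second = value.indexOf(find, first + find.length);
  if (second !== -1 && !replaceAll) {
    // Ambiguous: refuse to guess which occurrence the caller meant.
    throw new Error('Find string matches multiple times; set replaceAll: true');
  }
  return replaceAll ? value.split(find).join(replace) : value.replace(find, replace);
}
```

Failing loudly on misses and ambiguity is what makes the operation safe for AI-driven surgical edits: a silent no-op or a wrong-occurrence replacement would corrupt the workflow unnoticed.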

### Security

- **Prototype pollution protection** — `setNestedProperty` and `getNestedProperty` now reject paths containing `__proto__`, `constructor`, or `prototype`. Protects both `patchNodeField` and `updateNode` operations
- **ReDoS protection** — Regex patterns with nested quantifiers or overlapping alternations are rejected to prevent catastrophic backtracking
- **Resource limits** — Max 50 patches per operation, max 500-char regex patterns, max 512KB field size for regex operations
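The path guard can be sketched as below, assuming dot-separated paths; the name mirrors the changelog but the body is illustrative, not the project's actual code.

```javascript
// Minimal sketch of the prototype-pollution guard: reject any path
// segment that could reach Object.prototype before walking the object.
const FORBIDDEN_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function setNestedProperty(obj, path, value) {
  const segments = path.split('.');
  if (segments.some((s) => FORBIDDEN_KEYS.has(s))) {
    throw new Error(`Unsafe property path: ${path}`);
  }
  let cursor = obj;
  for (const key of segments.slice(0, -1)) {
    // Create intermediate objects as needed.
    if (typeof cursor[key] !== 'object' || cursor[key] === null) cursor[key] = {};
    cursor = cursor[key];
  }
  cursor[segments[segments.length - 1]] = value;
  return obj;
}
```

Checking the whole path up front (rather than per-step) means the object is never partially mutated before the guard fires.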

Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
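A crude heuristic for the nested-quantifier rejection mentioned in the 2.46.0 Security notes can look like this. Real ReDoS detectors analyze patterns far more carefully; treat this as a sketch only.

```javascript
// Heuristic sketch: flag the classic ReDoS shape where a quantified
// group is itself quantified, e.g. (a+)+ or (a*)*. This is deliberately
// conservative and only an illustration of the idea.
function isRedosProne(pattern) {
  return /\([^)]*[+*][^)]*\)[+*{]/.test(pattern);
}
```

Rejecting such patterns before compiling them prevents a crafted `regex` flag value from pinning the server in catastrophic backtracking.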

## [2.45.1] - 2026-04-02

### Fixed
CLAUDE.md (31 changed lines)
@@ -2,6 +2,8 @@

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

> **Note:** This file is committed to a public OSS repository. Never add sensitive information (API keys, internal URLs, credentials, private infrastructure details) here.

## Project Overview

n8n-mcp is a comprehensive documentation and knowledge server that provides AI assistants with complete access to n8n node information through the Model Context Protocol (MCP). It serves as a bridge between n8n's workflow automation platform and AI models, enabling them to understand and work with n8n nodes effectively.
@@ -195,35 +197,6 @@ The MCP server exposes tools in several categories:

### Development Best Practices
- Run typecheck and lint after every code change

### Session Persistence Feature (v2.24.1)

**Location:**
- Types: `src/types/session-state.ts`
- Implementation: `src/http-server-single-session.ts` (lines 698-702, 1444-1584)
- Wrapper: `src/mcp-engine.ts` (lines 123-169)
- Tests: `tests/unit/http-server/session-persistence.test.ts`, `tests/unit/mcp-engine/session-persistence.test.ts`

**Key Features:**
- **Export/Restore API**: `exportSessionState()` and `restoreSessionState()` methods
- **Multi-tenant support**: Enables zero-downtime deployments for SaaS platforms
- **Security-first**: API keys exported as plaintext - downstream MUST encrypt
- **Dormant sessions**: Restored sessions recreate transports on first request
- **Automatic expiration**: Respects `sessionTimeout` setting (default 30 min)
- **MAX_SESSIONS limit**: Caps at 100 concurrent sessions (configurable via N8N_MCP_MAX_SESSIONS env var)

**Important Implementation Notes:**
- Only exports sessions with valid n8nApiUrl and n8nApiKey in context
- Skips expired sessions during both export and restore
- Uses `validateInstanceContext()` for data integrity checks
- Handles null/invalid session gracefully with warnings
- Session metadata (timestamps) and context (credentials) are persisted
- Transport and server objects are NOT persisted (recreated on-demand)

**Testing:**
- 22 unit tests covering export, restore, edge cases, and round-trip cycles
- Tests use current timestamps to avoid expiration issues
- Integration with multi-tenant backends documented in README.md

# important-instruction-reminders
Do what has been asked; nothing more, nothing less.
NEVER create files unless they're absolutely necessary for achieving your goal.
@@ -987,6 +987,12 @@ These tools require `N8N_API_URL` and `N8N_API_KEY` in your configuration.

- `action: 'get'` - Get execution details by ID
- `action: 'delete'` - Delete execution records

#### Credential Management
- **`n8n_manage_credentials`** - Manage n8n credentials (list, get, create, update, delete, getSchema)

#### Security & Audit
- **`n8n_audit_instance`** - Security audit combining n8n's built-in audit API with deep workflow scanning (50+ secret patterns, webhook auth, error handling, data retention). Returns actionable remediation playbook.

#### System Tools
- **`n8n_health_check`** - Check n8n API connectivity and features
dist/http-server-single-session.d.ts (vendored, 5 changed lines)
@@ -12,7 +12,6 @@ export declare class SingleSessionHTTPServer {
    private sessionMetadata;
    private sessionContexts;
    private contextSwitchLocks;
-   private session;
    private consoleManager;
    private expressServer;
    private sessionTimeout;
@@ -29,14 +28,14 @@ export declare class SingleSessionHTTPServer {
    private isJsonRpcNotification;
    private sanitizeErrorForClient;
    private updateSessionAccess;
+   private authenticateRequest;
    private switchSessionContext;
    private performContextSwitch;
    private getSessionMetrics;
    private loadAuthToken;
    private validateEnvironment;
    handleRequest(req: express.Request, res: express.Response, instanceContext?: InstanceContext): Promise<void>;
-   private resetSessionSSE;
-   private isExpired;
+   private createSSESession;
+   private isSessionExpired;
    start(): Promise<void>;
    shutdown(): Promise<void>;
dist/http-server-single-session.d.ts.map (vendored, 2 changed lines)
@@ -1 +1 @@
{"version":3,"file":"http-server-single-session.d.ts","sourceRoot":"","sources":["../src/http-server-single-session.ts"],"names":[],"mappings":";AAMA,OAAO,OAAO,MAAM,SAAS,CAAC;AAoB9B,OAAO,EAAE,eAAe,EAA2B,MAAM,0BAA0B,CAAC;AACpF,OAAO,EAAE,YAAY,EAAE,MAAM,uBAAuB,CAAC;AACrD,OAAO,EAAE,uBAAuB,EAAE,MAAM,2BAA2B,CAAC;AAwEpE,MAAM,WAAW,8BAA8B;IAC7C,uBAAuB,CAAC,EAAE,uBAAuB,CAAC;CACnD;AAED,qBAAa,uBAAuB;IAElC,OAAO,CAAC,UAAU,CAA8D;IAChF,OAAO,CAAC,OAAO,CAA0D;IACzE,OAAO,CAAC,eAAe,CAAsE;IAC7F,OAAO,CAAC,eAAe,CAA4D;IACnF,OAAO,CAAC,kBAAkB,CAAyC;IACnE,OAAO,CAAC,OAAO,CAAwB;IACvC,OAAO,CAAC,cAAc,CAAwB;IAC9C,OAAO,CAAC,aAAa,CAAM;IAG3B,OAAO,CAAC,cAAc,CAER;IACd,OAAO,CAAC,SAAS,CAAuB;IACxC,OAAO,CAAC,YAAY,CAA+B;IACnD,OAAO,CAAC,uBAAuB,CAAC,CAA0B;gBAE9C,OAAO,CAAC,EAAE,8BAA8B;IAapD,OAAO,CAAC,mBAAmB;IAmB3B,OAAO,CAAC,sBAAsB;YAqChB,aAAa;IAuC3B,OAAO,CAAC,qBAAqB;IAO7B,OAAO,CAAC,gBAAgB;IAkBxB,OAAO,CAAC,gBAAgB;IAYxB,OAAO,CAAC,qBAAqB;IAa7B,OAAO,CAAC,sBAAsB;IAkC9B,OAAO,CAAC,mBAAmB;YASb,oBAAoB;YAwBpB,oBAAoB;IAwBlC,OAAO,CAAC,iBAAiB;IAsBzB,OAAO,CAAC,aAAa;IA2BrB,OAAO,CAAC,mBAAmB;IAoDrB,aAAa,CACjB,GAAG,EAAE,OAAO,CAAC,OAAO,EACpB,GAAG,EAAE,OAAO,CAAC,QAAQ,EACrB,eAAe,CAAC,EAAE,eAAe,GAChC,OAAO,CAAC,IAAI,CAAC;YAsRF,eAAe;IA8D7B,OAAO,CAAC,SAAS;IAYjB,OAAO,CAAC,gBAAgB;IASlB,KAAK,IAAI,OAAO,CAAC,IAAI,CAAC;IAgnBtB,QAAQ,IAAI,OAAO,CAAC,IAAI,CAAC;IA2D/B,cAAc,IAAI;QAChB,MAAM,EAAE,OAAO,CAAC;QAChB,SAAS,CAAC,EAAE,MAAM,CAAC;QACnB,GAAG,CAAC,EAAE,MAAM,CAAC;QACb,QAAQ,CAAC,EAAE;YACT,KAAK,EAAE,MAAM,CAAC;YACd,MAAM,EAAE,MAAM,CAAC;YACf,OAAO,EAAE,MAAM,CAAC;YAChB,GAAG,EAAE,MAAM,CAAC;YACZ,UAAU,EAAE,MAAM,EAAE,CAAC;SACtB,CAAC;KACH;IAmDM,kBAAkB,IAAI,YAAY,EAAE;IAoEpC,mBAAmB,CAAC,QAAQ,EAAE,YAAY,EAAE,GAAG,MAAM;CAsG7D"}
{"version":3,"file":"http-server-single-session.d.ts","sourceRoot":"","sources":["../src/http-server-single-session.ts"],"names":[],"mappings":";AAMA,OAAO,OAAO,MAAM,SAAS,CAAC;AAoB9B,OAAO,EAAE,eAAe,EAA2B,MAAM,0BAA0B,CAAC;AACpF,OAAO,EAAE,YAAY,EAAE,MAAM,uBAAuB,CAAC;AACrD,OAAO,EAAE,uBAAuB,EAAE,MAAM,2BAA2B,CAAC;AA+DpE,MAAM,WAAW,8BAA8B;IAC7C,uBAAuB,CAAC,EAAE,uBAAuB,CAAC;CACnD;AAED,qBAAa,uBAAuB;IAGlC,OAAO,CAAC,UAAU,CAAmF;IACrG,OAAO,CAAC,OAAO,CAA0D;IACzE,OAAO,CAAC,eAAe,CAAsE;IAC7F,OAAO,CAAC,eAAe,CAA4D;IACnF,OAAO,CAAC,kBAAkB,CAAyC;IACnE,OAAO,CAAC,cAAc,CAAwB;IAC9C,OAAO,CAAC,aAAa,CAAM;IAG3B,OAAO,CAAC,cAAc,CAER;IACd,OAAO,CAAC,SAAS,CAAuB;IACxC,OAAO,CAAC,YAAY,CAA+B;IACnD,OAAO,CAAC,uBAAuB,CAAC,CAA0B;gBAE9C,OAAO,CAAC,EAAE,8BAA8B;IAapD,OAAO,CAAC,mBAAmB;IAmB3B,OAAO,CAAC,sBAAsB;YAqChB,aAAa;IAuC3B,OAAO,CAAC,qBAAqB;IAO7B,OAAO,CAAC,gBAAgB;IAkBxB,OAAO,CAAC,gBAAgB;IAYxB,OAAO,CAAC,qBAAqB;IAa7B,OAAO,CAAC,sBAAsB;IAkC9B,OAAO,CAAC,mBAAmB;IAW3B,OAAO,CAAC,mBAAmB;YAyCb,oBAAoB;YAwBpB,oBAAoB;IAwBlC,OAAO,CAAC,iBAAiB;IAsBzB,OAAO,CAAC,aAAa;IA2BrB,OAAO,CAAC,mBAAmB;IAoDrB,aAAa,CACjB,GAAG,EAAE,OAAO,CAAC,OAAO,EACpB,GAAG,EAAE,OAAO,CAAC,QAAQ,EACrB,eAAe,CAAC,EAAE,eAAe,GAChC,OAAO,CAAC,IAAI,CAAC;YAsSF,gBAAgB;IA+C9B,OAAO,CAAC,gBAAgB;IASlB,KAAK,IAAI,OAAO,CAAC,IAAI,CAAC;IAynBtB,QAAQ,IAAI,OAAO,CAAC,IAAI,CAAC;IAgD/B,cAAc,IAAI;QAChB,MAAM,EAAE,OAAO,CAAC;QAChB,SAAS,CAAC,EAAE,MAAM,CAAC;QACnB,GAAG,CAAC,EAAE,MAAM,CAAC;QACb,QAAQ,CAAC,EAAE;YACT,KAAK,EAAE,MAAM,CAAC;YACd,MAAM,EAAE,MAAM,CAAC;YACf,OAAO,EAAE,MAAM,CAAC;YAChB,GAAG,EAAE,MAAM,CAAC;YACZ,UAAU,EAAE,MAAM,EAAE,CAAC;SACtB,CAAC;KACH;IAmCM,kBAAkB,IAAI,YAAY,EAAE;IAoEpC,mBAAmB,CAAC,QAAQ,EAAE,YAAY,EAAE,GAAG,MAAM;CAsG7D"}
dist/http-server-single-session.js (vendored, 363 changed lines)
@@ -51,7 +51,6 @@ class SingleSessionHTTPServer {
        this.sessionMetadata = {};
        this.sessionContexts = {};
        this.contextSwitchLocks = new Map();
-       this.session = null;
        this.consoleManager = new console_manager_1.ConsoleManager();
        this.sessionTimeout = parseInt(process.env.SESSION_TIMEOUT_MINUTES || '30', 10) * 60 * 1000;
        this.authToken = null;
@@ -170,6 +169,39 @@ class SingleSessionHTTPServer {
        this.sessionMetadata[sessionId].lastAccess = new Date();
    }
}
authenticateRequest(req, res) {
    const authHeader = req.headers.authorization;
    if (!authHeader || !authHeader.startsWith('Bearer ')) {
        const reason = !authHeader ? 'no_auth_header' : 'invalid_auth_format';
        logger_1.logger.warn('Authentication failed', {
            ip: req.ip,
            userAgent: req.get('user-agent'),
            reason
        });
        res.status(401).json({
            jsonrpc: '2.0',
            error: { code: -32001, message: 'Unauthorized' },
            id: null
        });
        return false;
    }
    const token = authHeader.slice(7).trim();
    const isValid = this.authToken && auth_1.AuthManager.timingSafeCompare(token, this.authToken);
    if (!isValid) {
        logger_1.logger.warn('Authentication failed: Invalid token', {
            ip: req.ip,
            userAgent: req.get('user-agent'),
            reason: 'invalid_token'
        });
        res.status(401).json({
            jsonrpc: '2.0',
            error: { code: -32001, message: 'Unauthorized' },
            id: null
        });
        return false;
    }
    return true;
}
async switchSessionContext(sessionId, newContext) {
    const existingLock = this.contextSwitchLocks.get(sessionId);
    if (existingLock) {
@@ -392,6 +424,18 @@
        return;
    }
    logger_1.logger.info('handleRequest: Reusing existing transport for session', { sessionId });
+   if (this.transports[sessionId] instanceof sse_js_1.SSEServerTransport) {
+       logger_1.logger.warn('handleRequest: SSE session used on StreamableHTTP endpoint', { sessionId });
+       res.status(400).json({
+           jsonrpc: '2.0',
+           error: {
+               code: -32000,
+               message: 'Session uses SSE transport. Send messages to POST /messages?sessionId=<id> instead.'
+           },
+           id: req.body?.id || null
+       });
+       return;
+   }
    transport = this.transports[sessionId];
}
if (!transport) {
    if (this.isJsonRpcNotification(req.body)) {
@@ -486,54 +530,33 @@
    }
});
}
-async resetSessionSSE(res) {
-    if (this.session) {
-        const sessionId = this.session.sessionId;
-        logger_1.logger.info('Closing previous session for SSE', { sessionId });
-        if (this.session.server && typeof this.session.server.close === 'function') {
-            try {
-                await this.session.server.close();
-            }
-            catch (serverError) {
-                logger_1.logger.warn('Error closing server for SSE session', { sessionId, error: serverError });
-            }
-        }
-        try {
-            await this.session.transport.close();
-        }
-        catch (transportError) {
-            logger_1.logger.warn('Error closing transport for SSE session', { sessionId, error: transportError });
-        }
-    }
-    try {
-        logger_1.logger.info('Creating new N8NDocumentationMCPServer for SSE...');
-        const server = new server_1.N8NDocumentationMCPServer(undefined, undefined, {
-            generateWorkflowHandler: this.generateWorkflowHandler,
+async createSSESession(res) {
+    if (!this.canCreateSession()) {
+        logger_1.logger.warn('SSE session creation rejected: session limit reached', {
+            currentSessions: this.getActiveSessionCount(),
+            maxSessions: MAX_SESSIONS
+        });
-        const sessionId = (0, uuid_1.v4)();
-        logger_1.logger.info('Creating SSEServerTransport...');
-        const transport = new sse_js_1.SSEServerTransport('/mcp', res);
-        logger_1.logger.info('Connecting server to SSE transport...');
-        await server.connect(transport);
-        this.session = {
-            server,
-            transport,
-            lastAccess: new Date(),
-            sessionId,
-            initialized: false,
-            isSSE: true
-        };
-        logger_1.logger.info('Created new SSE session successfully', { sessionId: this.session.sessionId });
+        throw new Error(`Session limit reached (${MAX_SESSIONS})`);
+    }
-    catch (error) {
-        logger_1.logger.error('Failed to create SSE session:', error);
-        throw error;
-    }
-}
-isExpired() {
-    if (!this.session)
-        return true;
-    return Date.now() - this.session.lastAccess.getTime() > this.sessionTimeout;
+    const server = new server_1.N8NDocumentationMCPServer(undefined, undefined, {
+        generateWorkflowHandler: this.generateWorkflowHandler,
+    });
+    const transport = new sse_js_1.SSEServerTransport('/messages', res);
+    const sessionId = transport.sessionId;
+    this.transports[sessionId] = transport;
+    this.servers[sessionId] = server;
+    this.sessionMetadata[sessionId] = {
+        lastAccess: new Date(),
+        createdAt: new Date()
+    };
+    res.on('close', () => {
+        logger_1.logger.info('SSE connection closed by client', { sessionId });
+        this.removeSession(sessionId, 'sse_disconnect').catch(err => {
+            logger_1.logger.warn('Error cleaning up SSE session on disconnect', { sessionId, error: err });
+        });
+    });
+    await server.connect(transport);
+    logger_1.logger.info('SSE session created', { sessionId, transport: 'SSEServerTransport' });
+}
+isSessionExpired(sessionId) {
+    const metadata = this.sessionMetadata[sessionId];
@@ -601,7 +624,7 @@
authentication: {
    type: 'Bearer Token',
    header: 'Authorization: Bearer <token>',
-   required_for: ['POST /mcp']
+   required_for: ['POST /mcp', 'GET /sse', 'POST /messages']
},
documentation: 'https://github.com/czlonkowski/n8n-mcp'
});
@@ -633,7 +656,7 @@
},
activeTransports: activeTransports.length,
activeServers: activeServers.length,
-legacySessionActive: !!this.session,
+legacySessionActive: false,
memory: {
    used: Math.round(process.memoryUsage().heapUsed / 1024 / 1024),
    total: Math.round(process.memoryUsage().heapTotal / 1024 / 1024),
@@ -673,9 +696,10 @@
});
app.get('/mcp', async (req, res) => {
    const sessionId = req.headers['mcp-session-id'];
-   if (sessionId && this.transports[sessionId]) {
+   const existingTransport = sessionId ? this.transports[sessionId] : undefined;
+   if (existingTransport && existingTransport instanceof streamableHttp_js_1.StreamableHTTPServerTransport) {
        try {
-           await this.transports[sessionId].handleRequest(req, res, undefined);
+           await existingTransport.handleRequest(req, res, undefined);
            return;
        }
        catch (error) {
@@ -684,22 +708,12 @@
}
const accept = req.headers.accept;
if (accept && accept.includes('text/event-stream')) {
-   logger_1.logger.info('SSE stream request received - establishing SSE connection');
-   try {
-       await this.resetSessionSSE(res);
-       logger_1.logger.info('SSE connection established successfully');
-   }
-   catch (error) {
-       logger_1.logger.error('Failed to establish SSE connection:', error);
-       res.status(500).json({
-           jsonrpc: '2.0',
-           error: {
-               code: -32603,
-               message: 'Failed to establish SSE connection'
-           },
-           id: null
-       });
-   }
+   logger_1.logger.info('SSE request on /mcp redirected to /sse', { ip: req.ip });
+   res.status(400).json({
+       error: 'SSE transport uses /sse endpoint',
+       message: 'Connect via GET /sse for SSE streaming. POST messages to /messages?sessionId=<id>.',
+       documentation: 'https://github.com/czlonkowski/n8n-mcp'
+   });
    return;
}
if (process.env.N8N_MODE === 'true') {
@@ -724,9 +738,23 @@
mcp: {
    method: 'POST',
    path: '/mcp',
-   description: 'Main MCP JSON-RPC endpoint',
+   description: 'Main MCP JSON-RPC endpoint (StreamableHTTP)',
    authentication: 'Bearer token required'
},
+sse: {
+   method: 'GET',
+   path: '/sse',
+   description: 'DEPRECATED: SSE stream for legacy clients. Migrate to StreamableHTTP (POST /mcp).',
+   authentication: 'Bearer token required',
+   deprecated: true
+},
+messages: {
+   method: 'POST',
+   path: '/messages',
+   description: 'DEPRECATED: Message delivery for SSE sessions. Migrate to StreamableHTTP (POST /mcp).',
+   authentication: 'Bearer token required',
+   deprecated: true
+},
health: {
    method: 'GET',
    path: '/health',
@@ -743,6 +771,92 @@
        documentation: 'https://github.com/czlonkowski/n8n-mcp'
    });
});
const authLimiter = (0, express_rate_limit_1.default)({
    windowMs: parseInt(process.env.AUTH_RATE_LIMIT_WINDOW || '900000'),
    max: parseInt(process.env.AUTH_RATE_LIMIT_MAX || '20'),
    message: {
        jsonrpc: '2.0',
        error: {
            code: -32000,
            message: 'Too many authentication attempts. Please try again later.'
        },
        id: null
    },
    standardHeaders: true,
    legacyHeaders: false,
    skipSuccessfulRequests: true,
    handler: (req, res) => {
        logger_1.logger.warn('Rate limit exceeded', {
            ip: req.ip,
            userAgent: req.get('user-agent'),
            event: 'rate_limit'
        });
        res.status(429).json({
            jsonrpc: '2.0',
            error: {
                code: -32000,
                message: 'Too many authentication attempts'
            },
            id: null
        });
    }
});
app.get('/sse', authLimiter, async (req, res) => {
    if (!this.authenticateRequest(req, res))
        return;
    logger_1.logger.warn('SSE transport is deprecated and will be removed in a future release. Migrate to StreamableHTTP (POST /mcp).', {
        ip: req.ip,
        userAgent: req.get('user-agent')
    });
    try {
        await this.createSSESession(res);
    }
    catch (error) {
        logger_1.logger.error('Failed to create SSE session:', error);
        if (!res.headersSent) {
            res.status(error instanceof Error && error.message.includes('Session limit')
                ? 429 : 500).json({
                error: error instanceof Error ? error.message : 'Failed to establish SSE connection'
            });
        }
    }
});
app.post('/messages', authLimiter, jsonParser, async (req, res) => {
    if (!this.authenticateRequest(req, res))
        return;
    const sessionId = req.query.sessionId;
    if (!sessionId) {
        res.status(400).json({
            jsonrpc: '2.0',
            error: { code: -32602, message: 'Missing sessionId query parameter' },
            id: req.body?.id || null
        });
        return;
    }
    const transport = this.transports[sessionId];
    if (!transport || !(transport instanceof sse_js_1.SSEServerTransport)) {
        res.status(400).json({
            jsonrpc: '2.0',
            error: { code: -32000, message: 'SSE session not found or expired' },
            id: req.body?.id || null
        });
        return;
    }
    this.updateSessionAccess(sessionId);
    try {
        await transport.handlePostMessage(req, res, req.body);
    }
    catch (error) {
        logger_1.logger.error('SSE message handling error', { sessionId, error });
        if (!res.headersSent) {
            res.status(500).json({
                jsonrpc: '2.0',
                error: { code: -32603, message: 'Internal error processing SSE message' },
                id: req.body?.id || null
            });
        }
    }
});
app.delete('/mcp', async (req, res) => {
    const mcpSessionId = req.headers['mcp-session-id'];
    if (!mcpSessionId) {
@@ -796,35 +910,6 @@
    });
}
});
-const authLimiter = (0, express_rate_limit_1.default)({
-    windowMs: parseInt(process.env.AUTH_RATE_LIMIT_WINDOW || '900000'),
-    max: parseInt(process.env.AUTH_RATE_LIMIT_MAX || '20'),
-    message: {
-        jsonrpc: '2.0',
-        error: {
-            code: -32000,
-            message: 'Too many authentication attempts. Please try again later.'
-        },
-        id: null
-    },
-    standardHeaders: true,
-    legacyHeaders: false,
-    handler: (req, res) => {
-        logger_1.logger.warn('Rate limit exceeded', {
-            ip: req.ip,
-            userAgent: req.get('user-agent'),
-            event: 'rate_limit'
-        });
-        res.status(429).json({
-            jsonrpc: '2.0',
-            error: {
-                code: -32000,
-                message: 'Too many authentication attempts'
-            },
-            id: null
-        });
-    }
-});
app.post('/mcp', authLimiter, jsonParser, async (req, res) => {
    logger_1.logger.info('POST /mcp request received - DETAILED DEBUG', {
        headers: req.headers,
@@ -864,63 +949,10 @@
    req.removeListener('close', closeHandler);
});
}
-const authHeader = req.headers.authorization;
-if (!authHeader) {
-    logger_1.logger.warn('Authentication failed: Missing Authorization header', {
-        ip: req.ip,
-        userAgent: req.get('user-agent'),
-        reason: 'no_auth_header'
-    });
-    res.status(401).json({
-        jsonrpc: '2.0',
-        error: {
-            code: -32001,
-            message: 'Unauthorized'
-        },
-        id: null
-    });
+if (!this.authenticateRequest(req, res))
+    return;
-}
-if (!authHeader.startsWith('Bearer ')) {
-    logger_1.logger.warn('Authentication failed: Invalid Authorization header format (expected Bearer token)', {
-        ip: req.ip,
-        userAgent: req.get('user-agent'),
-        reason: 'invalid_auth_format',
-        headerPrefix: authHeader.substring(0, Math.min(authHeader.length, 10)) + '...'
-    });
-    res.status(401).json({
-        jsonrpc: '2.0',
-        error: {
-            code: -32001,
-            message: 'Unauthorized'
-        },
-        id: null
-    });
-    return;
-}
-const token = authHeader.slice(7).trim();
-const isValidToken = this.authToken &&
-    auth_1.AuthManager.timingSafeCompare(token, this.authToken);
-if (!isValidToken) {
-    logger_1.logger.warn('Authentication failed: Invalid token', {
-        ip: req.ip,
-        userAgent: req.get('user-agent'),
-        reason: 'invalid_token'
-    });
-    res.status(401).json({
-        jsonrpc: '2.0',
-        error: {
-            code: -32001,
-            message: 'Unauthorized'
-        },
-        id: null
-    });
-    return;
-}
logger_1.logger.info('Authentication successful - proceeding to handleRequest', {
-   hasSession: !!this.session,
-   sessionType: this.session?.isSSE ? 'SSE' : 'StreamableHTTP',
-   sessionInitialized: this.session?.initialized
+   activeSessions: this.getActiveSessionCount()
});
const instanceContext = (() => {
    const headers = extractMultiTenantHeaders(req);
@@ -1008,6 +1040,7 @@
console.log(`Session Limits: ${MAX_SESSIONS} max sessions, ${this.sessionTimeout / 1000 / 60}min timeout`);
console.log(`Health check: ${endpoints.health}`);
console.log(`MCP endpoint: ${endpoints.mcp}`);
+console.log(`SSE endpoint: ${baseUrl}/sse (legacy clients)`);
if (isProduction) {
    console.log('🔒 Running in PRODUCTION mode - enhanced security enabled');
}
@@ -1061,16 +1094,6 @@
        logger_1.logger.warn(`Error closing transport for session ${sessionId}:`, error);
    }
}
-if (this.session) {
-    try {
-        await this.session.transport.close();
-        logger_1.logger.info('Legacy session closed');
-    }
-    catch (error) {
-        logger_1.logger.warn('Error closing legacy session:', error);
-    }
-    this.session = null;
-}
if (this.expressServer) {
    await new Promise((resolve) => {
        this.expressServer.close(() => {
@@ -1090,22 +1113,8 @@ class SingleSessionHTTPServer {
|
||||
}
|
||||
getSessionInfo() {
|
||||
const metrics = this.getSessionMetrics();
|
||||
if (!this.session) {
|
||||
return {
|
||||
active: false,
|
||||
sessions: {
|
||||
total: metrics.totalSessions,
|
||||
active: metrics.activeSessions,
|
||||
expired: metrics.expiredSessions,
|
||||
max: MAX_SESSIONS,
|
||||
sessionIds: Object.keys(this.transports)
|
||||
}
|
||||
};
|
||||
}
|
||||
return {
|
||||
active: true,
|
||||
sessionId: this.session.sessionId,
|
||||
age: Date.now() - this.session.lastAccess.getTime(),
|
||||
active: metrics.activeSessions > 0,
|
||||
sessions: {
|
||||
total: metrics.totalSessions,
|
||||
active: metrics.activeSessions,
|
||||
|
||||
2 dist/http-server-single-session.js.map vendored
File diff suppressed because one or more lines are too long
7 dist/mcp/handlers-n8n-manager.d.ts vendored
@@ -37,4 +37,11 @@ export declare function handleInsertRows(args: unknown, context?: InstanceContex
export declare function handleUpdateRows(args: unknown, context?: InstanceContext): Promise<McpToolResponse>;
export declare function handleUpsertRows(args: unknown, context?: InstanceContext): Promise<McpToolResponse>;
export declare function handleDeleteRows(args: unknown, context?: InstanceContext): Promise<McpToolResponse>;
export declare function handleListCredentials(args: unknown, context?: InstanceContext): Promise<McpToolResponse>;
export declare function handleGetCredential(args: unknown, context?: InstanceContext): Promise<McpToolResponse>;
export declare function handleCreateCredential(args: unknown, context?: InstanceContext): Promise<McpToolResponse>;
export declare function handleUpdateCredential(args: unknown, context?: InstanceContext): Promise<McpToolResponse>;
export declare function handleDeleteCredential(args: unknown, context?: InstanceContext): Promise<McpToolResponse>;
export declare function handleGetCredentialSchema(args: unknown, context?: InstanceContext): Promise<McpToolResponse>;
export declare function handleAuditInstance(args: unknown, context?: InstanceContext): Promise<McpToolResponse>;
//# sourceMappingURL=handlers-n8n-manager.d.ts.map
2 dist/mcp/handlers-n8n-manager.d.ts.map vendored
File diff suppressed because one or more lines are too long
262 dist/mcp/handlers-n8n-manager.js vendored
@@ -67,7 +67,16 @@ exports.handleInsertRows = handleInsertRows;
exports.handleUpdateRows = handleUpdateRows;
exports.handleUpsertRows = handleUpsertRows;
exports.handleDeleteRows = handleDeleteRows;
exports.handleListCredentials = handleListCredentials;
exports.handleGetCredential = handleGetCredential;
exports.handleCreateCredential = handleCreateCredential;
exports.handleUpdateCredential = handleUpdateCredential;
exports.handleDeleteCredential = handleDeleteCredential;
exports.handleGetCredentialSchema = handleGetCredentialSchema;
exports.handleAuditInstance = handleAuditInstance;
const n8n_api_client_1 = require("../services/n8n-api-client");
const workflow_security_scanner_1 = require("../services/workflow-security-scanner");
const audit_report_builder_1 = require("../services/audit-report-builder");
const n8n_api_1 = require("../config/n8n-api");
const n8n_api_2 = require("../types/n8n-api");
const n8n_validation_1 = require("../services/n8n-validation");
@@ -519,6 +528,24 @@ async function handleUpdateWorkflow(args, repository, context) {
if (updateData.nodes || updateData.connections) {
const current = await client.getWorkflow(id);
workflowBefore = JSON.parse(JSON.stringify(current));
if (updateData.nodes && current.nodes) {
const currentById = new Map();
const currentByName = new Map();
for (const node of current.nodes) {
if (node.id)
currentById.set(node.id, node);
currentByName.set(node.name, node);
}
for (const node of updateData.nodes) {
const hasCredentials = node.credentials && typeof node.credentials === 'object' && Object.keys(node.credentials).length > 0;
if (!hasCredentials) {
const match = (node.id && currentById.get(node.id)) || currentByName.get(node.name);
if (match?.credentials) {
node.credentials = match.credentials;
}
}
}
}
if (createBackup !== false) {
try {
const versioningService = new workflow_versioning_service_1.WorkflowVersioningService(repository, client);
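The `handleUpdateWorkflow` hunk above adds credential preservation: when an update replaces a workflow's nodes without restating their credentials, the handler copies credentials from the matching node (by id first, then by name) in the current workflow. Extracted as a standalone sketch, the merge step looks roughly like:

```javascript
// Sketch of the credential-preservation step from the diff above, pulled out
// into a pure function for illustration. Nodes in `updatedNodes` that omit
// credentials inherit them from the matching node in `currentNodes`.
function preserveCredentials(currentNodes, updatedNodes) {
  const byId = new Map();
  const byName = new Map();
  for (const node of currentNodes) {
    if (node.id) byId.set(node.id, node);
    byName.set(node.name, node);
  }
  for (const node of updatedNodes) {
    const hasCredentials = node.credentials &&
      typeof node.credentials === 'object' &&
      Object.keys(node.credentials).length > 0;
    if (!hasCredentials) {
      // Prefer the stable node id; fall back to the display name.
      const match = (node.id && byId.get(node.id)) || byName.get(node.name);
      if (match?.credentials) node.credentials = match.credentials;
    }
  }
  return updatedNodes;
}
```

Matching by id before name matters because node names can be edited freely in the n8n UI, while ids are stable across updates.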
@@ -2111,7 +2138,7 @@ const deleteRowsSchema = tableIdSchema.extend({
returnData: zod_1.z.boolean().optional(),
dryRun: zod_1.z.boolean().optional(),
});
function handleDataTableError(error) {
function handleCrudError(error) {
if (error instanceof zod_1.z.ZodError) {
return { success: false, error: 'Invalid input', details: { errors: error.errors } };
}
@@ -2140,7 +2167,7 @@ async function handleCreateTable(args, context) {
};
}
catch (error) {
return handleDataTableError(error);
return handleCrudError(error);
}
}
async function handleListTables(args, context) {
@@ -2158,7 +2185,7 @@ async function handleListTables(args, context) {
};
}
catch (error) {
return handleDataTableError(error);
return handleCrudError(error);
}
}
async function handleGetTable(args, context) {
@@ -2169,7 +2196,7 @@ async function handleGetTable(args, context) {
return { success: true, data: dataTable };
}
catch (error) {
return handleDataTableError(error);
return handleCrudError(error);
}
}
async function handleUpdateTable(args, context) {
@@ -2187,7 +2214,7 @@ async function handleUpdateTable(args, context) {
};
}
catch (error) {
return handleDataTableError(error);
return handleCrudError(error);
}
}
async function handleDeleteTable(args, context) {
@@ -2198,7 +2225,7 @@ async function handleDeleteTable(args, context) {
return { success: true, message: `Data table ${tableId} deleted successfully` };
}
catch (error) {
return handleDataTableError(error);
return handleCrudError(error);
}
}
async function handleGetRows(args, context) {
@@ -2223,7 +2250,7 @@ async function handleGetRows(args, context) {
};
}
catch (error) {
return handleDataTableError(error);
return handleCrudError(error);
}
}
async function handleInsertRows(args, context) {
@@ -2238,7 +2265,7 @@ async function handleInsertRows(args, context) {
};
}
catch (error) {
return handleDataTableError(error);
return handleCrudError(error);
}
}
async function handleUpdateRows(args, context) {
@@ -2253,7 +2280,7 @@ async function handleUpdateRows(args, context) {
};
}
catch (error) {
return handleDataTableError(error);
return handleCrudError(error);
}
}
async function handleUpsertRows(args, context) {
@@ -2268,7 +2295,7 @@ async function handleUpsertRows(args, context) {
};
}
catch (error) {
return handleDataTableError(error);
return handleCrudError(error);
}
}
async function handleDeleteRows(args, context) {
@@ -2287,7 +2314,220 @@ async function handleDeleteRows(args, context) {
};
}
catch (error) {
return handleDataTableError(error);
return handleCrudError(error);
}
}
const listCredentialsSchema = zod_1.z.object({}).passthrough();
const getCredentialSchema = zod_1.z.object({
id: zod_1.z.string({ required_error: 'Credential ID is required' }),
});
const createCredentialSchema = zod_1.z.object({
name: zod_1.z.string({ required_error: 'Credential name is required' }),
type: zod_1.z.string({ required_error: 'Credential type is required' }),
data: zod_1.z.record(zod_1.z.any(), { required_error: 'Credential data is required' }),
});
const updateCredentialSchema = zod_1.z.object({
id: zod_1.z.string({ required_error: 'Credential ID is required' }),
name: zod_1.z.string().optional(),
data: zod_1.z.record(zod_1.z.any()).optional(),
});
const deleteCredentialSchema = zod_1.z.object({
id: zod_1.z.string({ required_error: 'Credential ID is required' }),
});
const getCredentialSchemaTypeSchema = zod_1.z.object({
type: zod_1.z.string({ required_error: 'Credential type is required' }),
});
async function handleListCredentials(args, context) {
try {
const client = ensureApiConfigured(context);
listCredentialsSchema.parse(args);
const result = await client.listCredentials();
return {
success: true,
data: {
credentials: result.data,
count: result.data.length,
nextCursor: result.nextCursor || undefined,
},
};
}
catch (error) {
return handleCrudError(error);
}
}
async function handleGetCredential(args, context) {
try {
const client = ensureApiConfigured(context);
const { id } = getCredentialSchema.parse(args);
const credential = await client.getCredential(id);
const { data: _sensitiveData, ...safeCred } = credential;
return {
success: true,
data: safeCred,
};
}
catch (error) {
return handleCrudError(error);
}
}
async function handleCreateCredential(args, context) {
try {
const client = ensureApiConfigured(context);
const { name, type, data } = createCredentialSchema.parse(args);
logger_1.logger.info(`Creating credential: name="${name}", type="${type}"`);
const credential = await client.createCredential({ name, type, data });
const { data: _sensitiveData, ...safeCred } = credential;
return {
success: true,
data: safeCred,
message: `Credential "${name}" (type: ${type}) created with ID ${credential.id}`,
};
}
catch (error) {
return handleCrudError(error);
}
}
async function handleUpdateCredential(args, context) {
try {
const client = ensureApiConfigured(context);
const { id, name, data } = updateCredentialSchema.parse(args);
logger_1.logger.info(`Updating credential: id="${id}"${name ? `, name="${name}"` : ''}`);
const updatePayload = {};
if (name !== undefined)
updatePayload.name = name;
if (data !== undefined)
updatePayload.data = data;
const credential = await client.updateCredential(id, updatePayload);
const { data: _sensitiveData, ...safeCred } = credential;
return {
success: true,
data: safeCred,
message: `Credential ${id} updated successfully`,
};
}
catch (error) {
return handleCrudError(error);
}
}
async function handleDeleteCredential(args, context) {
try {
const client = ensureApiConfigured(context);
const { id } = deleteCredentialSchema.parse(args);
logger_1.logger.info(`Deleting credential: id="${id}"`);
await client.deleteCredential(id);
return {
success: true,
message: `Credential ${id} deleted successfully`,
};
}
catch (error) {
return handleCrudError(error);
}
}
async function handleGetCredentialSchema(args, context) {
try {
const client = ensureApiConfigured(context);
const { type } = getCredentialSchemaTypeSchema.parse(args);
const schema = await client.getCredentialSchema(type);
return {
success: true,
data: schema,
message: `Schema for credential type "${type}"`,
};
}
catch (error) {
return handleCrudError(error);
}
}
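The credential handlers above all share one defense-in-depth pattern: before a credential is returned, rest destructuring drops the secret `data` field so secret values never appear in tool responses or logs. Isolated as a sketch:

```javascript
// Sketch of the `{ data: _sensitiveData, ...safeCred }` pattern used by the
// credential handlers above: rest destructuring copies every property except
// `data`, so the secret payload is stripped before the object leaves the handler.
function stripSensitiveData(credential) {
  const { data: _sensitiveData, ...safeCred } = credential;
  return safeCred;
}
```

Because this runs on the response object itself, it protects even if a future n8n version starts echoing `data` back from create or update calls.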
const auditInstanceSchema = zod_1.z.object({
categories: zod_1.z.array(zod_1.z.enum([
'credentials', 'database', 'nodes', 'instance', 'filesystem',
])).optional(),
includeCustomScan: zod_1.z.boolean().optional().default(true),
daysAbandonedWorkflow: zod_1.z.number().optional(),
customChecks: zod_1.z.array(zod_1.z.enum([
'hardcoded_secrets', 'unauthenticated_webhooks', 'error_handling', 'data_retention',
])).optional(),
});
async function handleAuditInstance(args, context) {
try {
const client = ensureApiConfigured(context);
const input = auditInstanceSchema.parse(args);
const totalStart = Date.now();
const warnings = [];
let builtinAudit = null;
let builtinAuditMs = 0;
try {
const auditStart = Date.now();
builtinAudit = await client.generateAudit({
categories: input.categories,
daysAbandonedWorkflow: input.daysAbandonedWorkflow,
});
builtinAuditMs = Date.now() - auditStart;
}
catch (auditError) {
builtinAuditMs = Date.now() - totalStart;
const msg = auditError?.statusCode === 404
? 'Built-in audit endpoint not available on this n8n version.'
: `Built-in audit failed: ${auditError?.message || 'unknown error'}`;
warnings.push(msg);
logger_1.logger.warn(`Audit: ${msg}`);
}
let customReport = null;
let workflowFetchMs = 0;
let customScanMs = 0;
if (input.includeCustomScan) {
try {
const fetchStart = Date.now();
const allWorkflows = await client.listAllWorkflows();
workflowFetchMs = Date.now() - fetchStart;
logger_1.logger.info(`Audit: fetched ${allWorkflows.length} workflows for scanning`);
const scanStart = Date.now();
customReport = (0, workflow_security_scanner_1.scanWorkflows)(allWorkflows, input.customChecks);
customScanMs = Date.now() - scanStart;
logger_1.logger.info(`Audit: custom scan found ${customReport.summary.total} findings across ${customReport.workflowsScanned} workflows`);
}
catch (scanError) {
warnings.push(`Custom scan failed: ${scanError?.message || 'unknown error'}`);
logger_1.logger.warn(`Audit: custom scan failed: ${scanError?.message}`);
}
}
const totalMs = Date.now() - totalStart;
const apiConfig = context?.n8nApiUrl
? { baseUrl: context.n8nApiUrl }
: (0, n8n_api_1.getN8nApiConfig)();
const instanceUrl = apiConfig?.baseUrl || 'unknown';
const report = (0, audit_report_builder_1.buildAuditReport)({
builtinAudit,
customReport,
performance: { builtinAuditMs, workflowFetchMs, customScanMs, totalMs },
instanceUrl,
warnings: warnings.length > 0 ? warnings : undefined,
});
return {
success: true,
data: {
report: report.markdown,
summary: report.summary,
},
};
}
catch (error) {
if (error instanceof zod_1.z.ZodError) {
return {
success: false,
error: 'Invalid audit parameters',
details: { issues: error.errors },
};
}
if (error instanceof n8n_errors_1.N8nApiError) {
return {
success: false,
error: (0, n8n_errors_1.getUserFriendlyErrorMessage)(error),
};
}
const message = error instanceof Error ? error.message : String(error);
return { success: false, error: message };
}
}
//# sourceMappingURL=handlers-n8n-manager.js.map
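`handleAuditInstance` above uses graceful degradation: each audit phase (built-in `POST /audit`, custom workflow scan) runs in its own try/catch, a failure becomes a warning in the report instead of aborting the whole audit, and the report is built from whatever phases succeeded. The control flow can be sketched generically as:

```javascript
// Illustrative sketch (not the actual handler): run each named async phase,
// collect its result on success and a warning on failure, never throw.
async function runPhases(phases) {
  const results = {};
  const warnings = [];
  for (const [name, fn] of Object.entries(phases)) {
    try {
      results[name] = await fn();
    } catch (err) {
      results[name] = null;
      warnings.push(`${name} failed: ${err.message || 'unknown error'}`);
    }
  }
  return { results, warnings };
}
```

This matters for the audit tool in particular because the built-in `/audit` endpoint returns 404 on older n8n versions; the custom scan should still produce findings in that case.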
2 dist/mcp/handlers-n8n-manager.js.map vendored
File diff suppressed because one or more lines are too long
2 dist/mcp/handlers-workflow-diff.d.ts.map vendored
File diff suppressed because one or more lines are too long
10 dist/mcp/handlers-workflow-diff.js vendored
@@ -51,7 +51,7 @@ function getValidator(repository) {
return cachedValidator;
}
const NODE_TARGETING_OPERATIONS = new Set([
'updateNode', 'removeNode', 'moveNode', 'enableNode', 'disableNode'
'updateNode', 'removeNode', 'moveNode', 'enableNode', 'disableNode', 'patchNodeField'
]);
const workflowDiffSchema = zod_1.z.object({
id: zod_1.z.string(),
@@ -62,6 +62,8 @@ const workflowDiffSchema = zod_1.z.object({
nodeId: zod_1.z.string().optional(),
nodeName: zod_1.z.string().optional(),
updates: zod_1.z.any().optional(),
fieldPath: zod_1.z.string().optional(),
patches: zod_1.z.any().optional(),
position: zod_1.z.tuple([zod_1.z.number(), zod_1.z.number()]).optional(),
source: zod_1.z.string().optional(),
target: zod_1.z.string().optional(),
@@ -506,6 +508,8 @@ function inferIntentFromOperations(operations) {
return `Remove node ${op.nodeName || op.nodeId || ''}`.trim();
case 'updateNode':
return `Update node ${op.nodeName || op.nodeId || ''}`.trim();
case 'patchNodeField':
return `Patch field on node ${op.nodeName || op.nodeId || ''}`.trim();
case 'addConnection':
return `Connect ${op.source || 'node'} to ${op.target || 'node'}`;
case 'removeConnection':
@@ -538,6 +542,10 @@ function inferIntentFromOperations(operations) {
const count = opTypes.filter((t) => t === 'updateNode').length;
summary.push(`update ${count} node${count > 1 ? 's' : ''}`);
}
if (typeSet.has('patchNodeField')) {
const count = opTypes.filter((t) => t === 'patchNodeField').length;
summary.push(`patch ${count} field${count > 1 ? 's' : ''}`);
}
if (typeSet.has('addConnection') || typeSet.has('rewireConnection')) {
summary.push('modify connections');
}
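The `inferIntentFromOperations` hunks above extend the intent summary with the new `patchNodeField` operation, counting occurrences per type and pluralizing the label. Reduced to a self-contained sketch of that counting step (function name and shape are illustrative, not the real export):

```javascript
// Illustrative sketch of the per-type counting in inferIntentFromOperations:
// count how many operations of each handled type appear, then build a short
// human-readable summary fragment with correct pluralization.
function summarizeOps(opTypes) {
  const summary = [];
  const count = (t) => opTypes.filter((x) => x === t).length;
  const patches = count('patchNodeField');
  if (patches > 0) summary.push(`patch ${patches} field${patches > 1 ? 's' : ''}`);
  const updates = count('updateNode');
  if (updates > 0) summary.push(`update ${updates} node${updates > 1 ? 's' : ''}`);
  return summary.join(', ');
}
```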
2 dist/mcp/handlers-workflow-diff.js.map vendored
File diff suppressed because one or more lines are too long
2 dist/mcp/server.d.ts.map vendored
File diff suppressed because one or more lines are too long
24 dist/mcp/server.js vendored
@@ -730,6 +730,14 @@ class N8NDocumentationMCPServer {
                     ? { valid: true, errors: [] }
                     : { valid: false, errors: [{ field: 'action', message: 'action is required' }] };
                 break;
+            case 'n8n_manage_credentials':
+                validationResult = args.action
+                    ? { valid: true, errors: [] }
+                    : { valid: false, errors: [{ field: 'action', message: 'action is required' }] };
+                break;
+            case 'n8n_audit_instance':
+                validationResult = { valid: true, errors: [] };
+                break;
             case 'n8n_deploy_template':
                 validationResult = args.templateId !== undefined
                     ? { valid: true, errors: [] }
@@ -1140,6 +1148,22 @@ class N8NDocumentationMCPServer {
                     throw new Error(`Unknown action: ${dtAction}. Valid actions: createTable, listTables, getTable, updateTable, deleteTable, getRows, insertRows, updateRows, upsertRows, deleteRows`);
                 }
             }
+            case 'n8n_manage_credentials': {
+                this.validateToolParams(name, args, ['action']);
+                const credAction = args.action;
+                switch (credAction) {
+                    case 'list': return n8nHandlers.handleListCredentials(args, this.instanceContext);
+                    case 'get': return n8nHandlers.handleGetCredential(args, this.instanceContext);
+                    case 'create': return n8nHandlers.handleCreateCredential(args, this.instanceContext);
+                    case 'update': return n8nHandlers.handleUpdateCredential(args, this.instanceContext);
+                    case 'delete': return n8nHandlers.handleDeleteCredential(args, this.instanceContext);
+                    case 'getSchema': return n8nHandlers.handleGetCredentialSchema(args, this.instanceContext);
+                    default:
+                        throw new Error(`Unknown action: ${credAction}. Valid actions: list, get, create, update, delete, getSchema`);
+                }
+            }
+            case 'n8n_audit_instance':
+                return n8nHandlers.handleAuditInstance(args, this.instanceContext);
             case 'n8n_generate_workflow': {
                 this.validateToolParams(name, args, ['description']);
                 if (this.generateWorkflowHandler && this.instanceContext) {
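The dispatch pattern in the hunk above pairs a lightweight `action` pre-check with a per-action switch that falls through to an "Unknown action" error. A minimal standalone sketch of that shape (hypothetical handler names, not the project's actual handlers):

```javascript
// Fail fast when the required `action` argument is missing,
// mirroring the validationResult shape used in server.js.
function validateActionArg(args) {
  return args.action
    ? { valid: true, errors: [] }
    : { valid: false, errors: [{ field: 'action', message: 'action is required' }] };
}

// Hypothetical dispatcher mirroring the switch over credential actions:
// unknown actions raise an error listing the valid ones.
function dispatchCredentialAction(args, handlers) {
  const result = validateActionArg(args);
  if (!result.valid) throw new Error(result.errors[0].message);
  const handler = handlers[args.action];
  if (!handler) {
    throw new Error(
      `Unknown action: ${args.action}. Valid actions: list, get, create, update, delete, getSchema`
    );
  }
  return handler(args);
}

// Stub handlers for illustration only.
const handlers = {
  list: () => ['cred1'],
  get: (a) => ({ id: a.id }),
};
```

Keeping validation separate from dispatch lets the same pre-check serve every action-based tool (`n8n_manage_datatable`, `n8n_manage_credentials`) without duplicating the switch logic.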
2 dist/mcp/server.js.map (vendored)
File diff suppressed because one or more lines are too long
2 dist/mcp/tool-docs/index.d.ts.map (vendored)
@@ -1 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../../src/mcp/tool-docs/index.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,iBAAiB,EAAE,MAAM,SAAS,CAAC;AA8B5C,eAAO,MAAM,kBAAkB,EAAE,MAAM,CAAC,MAAM,EAAE,iBAAiB,CAqChE,CAAC;AAGF,YAAY,EAAE,iBAAiB,EAAE,MAAM,SAAS,CAAC"}
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../../src/mcp/tool-docs/index.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,iBAAiB,EAAE,MAAM,SAAS,CAAC;AAgC5C,eAAO,MAAM,kBAAkB,EAAE,MAAM,CAAC,MAAM,EAAE,iBAAiB,CAuChE,CAAC;AAGF,YAAY,EAAE,iBAAiB,EAAE,MAAM,SAAS,CAAC"}
4 dist/mcp/tool-docs/index.js (vendored)
@@ -11,6 +11,7 @@ const workflow_management_1 = require("./workflow_management");
 exports.toolsDocumentation = {
     tools_documentation: system_1.toolsDocumentationDoc,
     n8n_health_check: system_1.n8nHealthCheckDoc,
+    n8n_audit_instance: system_1.n8nAuditInstanceDoc,
     ai_agents_guide: guides_1.aiAgentsGuide,
     search_nodes: discovery_1.searchNodesDoc,
     get_node: configuration_1.getNodeDoc,
@@ -31,6 +32,7 @@ exports.toolsDocumentation = {
     n8n_workflow_versions: workflow_management_1.n8nWorkflowVersionsDoc,
     n8n_deploy_template: workflow_management_1.n8nDeployTemplateDoc,
     n8n_manage_datatable: workflow_management_1.n8nManageDatatableDoc,
-    n8n_generate_workflow: workflow_management_1.n8nGenerateWorkflowDoc
+    n8n_generate_workflow: workflow_management_1.n8nGenerateWorkflowDoc,
+    n8n_manage_credentials: workflow_management_1.n8nManageCredentialsDoc
 };
 //# sourceMappingURL=index.js.map
2 dist/mcp/tool-docs/index.js.map (vendored)
@@ -1 +1 @@
{"version":3,"file":"index.js","sourceRoot":"","sources":["../../../src/mcp/tool-docs/index.ts"],"names":[],"mappings":";;;AAGA,2CAA6C;AAC7C,mDAA6C;AAC7C,6CAAoE;AACpE,2CAAiE;AACjE,qCAGkB;AAClB,qCAAyC;AACzC,+DAe+B;AAGlB,QAAA,kBAAkB,GAAsC;IAEnE,mBAAmB,EAAE,8BAAqB;IAC1C,gBAAgB,EAAE,0BAAiB;IAGnC,eAAe,EAAE,sBAAa;IAG9B,YAAY,EAAE,0BAAc;IAG5B,QAAQ,EAAE,0BAAU;IAGpB,aAAa,EAAE,4BAAe;IAC9B,iBAAiB,EAAE,gCAAmB;IAGtC,YAAY,EAAE,0BAAc;IAC5B,gBAAgB,EAAE,8BAAkB;IAGpC,mBAAmB,EAAE,0CAAoB;IACzC,gBAAgB,EAAE,uCAAiB;IACnC,wBAAwB,EAAE,8CAAwB;IAClD,2BAA2B,EAAE,iDAA2B;IACxD,mBAAmB,EAAE,0CAAoB;IACzC,kBAAkB,EAAE,yCAAmB;IACvC,qBAAqB,EAAE,4CAAsB;IAC7C,oBAAoB,EAAE,2CAAqB;IAC3C,iBAAiB,EAAE,wCAAkB;IACrC,cAAc,EAAE,sCAAgB;IAChC,qBAAqB,EAAE,4CAAsB;IAC7C,mBAAmB,EAAE,0CAAoB;IACzC,oBAAoB,EAAE,2CAAqB;IAC3C,qBAAqB,EAAE,4CAAsB;CAC9C,CAAC"}
{"version":3,"file":"index.js","sourceRoot":"","sources":["../../../src/mcp/tool-docs/index.ts"],"names":[],"mappings":";;;AAGA,2CAA6C;AAC7C,mDAA6C;AAC7C,6CAAoE;AACpE,2CAAiE;AACjE,qCAIkB;AAClB,qCAAyC;AACzC,+DAgB+B;AAGlB,QAAA,kBAAkB,GAAsC;IAEnE,mBAAmB,EAAE,8BAAqB;IAC1C,gBAAgB,EAAE,0BAAiB;IACnC,kBAAkB,EAAE,4BAAmB;IAGvC,eAAe,EAAE,sBAAa;IAG9B,YAAY,EAAE,0BAAc;IAG5B,QAAQ,EAAE,0BAAU;IAGpB,aAAa,EAAE,4BAAe;IAC9B,iBAAiB,EAAE,gCAAmB;IAGtC,YAAY,EAAE,0BAAc;IAC5B,gBAAgB,EAAE,8BAAkB;IAGpC,mBAAmB,EAAE,0CAAoB;IACzC,gBAAgB,EAAE,uCAAiB;IACnC,wBAAwB,EAAE,8CAAwB;IAClD,2BAA2B,EAAE,iDAA2B;IACxD,mBAAmB,EAAE,0CAAoB;IACzC,kBAAkB,EAAE,yCAAmB;IACvC,qBAAqB,EAAE,4CAAsB;IAC7C,oBAAoB,EAAE,2CAAqB;IAC3C,iBAAiB,EAAE,wCAAkB;IACrC,cAAc,EAAE,sCAAgB;IAChC,qBAAqB,EAAE,4CAAsB;IAC7C,mBAAmB,EAAE,0CAAoB;IACzC,oBAAoB,EAAE,2CAAqB;IAC3C,qBAAqB,EAAE,4CAAsB;IAC7C,sBAAsB,EAAE,6CAAuB;CAChD,CAAC"}
1 dist/mcp/tool-docs/system/index.d.ts (vendored)
@@ -1,3 +1,4 @@
 export { toolsDocumentationDoc } from './tools-documentation';
 export { n8nHealthCheckDoc } from './n8n-health-check';
+export { n8nAuditInstanceDoc } from './n8n-audit-instance';
 //# sourceMappingURL=index.d.ts.map
2 dist/mcp/tool-docs/system/index.d.ts.map (vendored)
@@ -1 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/system/index.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,qBAAqB,EAAE,MAAM,uBAAuB,CAAC;AAC9D,OAAO,EAAE,iBAAiB,EAAE,MAAM,oBAAoB,CAAC"}
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/system/index.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,qBAAqB,EAAE,MAAM,uBAAuB,CAAC;AAC9D,OAAO,EAAE,iBAAiB,EAAE,MAAM,oBAAoB,CAAC;AACvD,OAAO,EAAE,mBAAmB,EAAE,MAAM,sBAAsB,CAAC"}
4 dist/mcp/tool-docs/system/index.js (vendored)
@@ -1,8 +1,10 @@
 "use strict";
 Object.defineProperty(exports, "__esModule", { value: true });
-exports.n8nHealthCheckDoc = exports.toolsDocumentationDoc = void 0;
+exports.n8nAuditInstanceDoc = exports.n8nHealthCheckDoc = exports.toolsDocumentationDoc = void 0;
 var tools_documentation_1 = require("./tools-documentation");
 Object.defineProperty(exports, "toolsDocumentationDoc", { enumerable: true, get: function () { return tools_documentation_1.toolsDocumentationDoc; } });
 var n8n_health_check_1 = require("./n8n-health-check");
 Object.defineProperty(exports, "n8nHealthCheckDoc", { enumerable: true, get: function () { return n8n_health_check_1.n8nHealthCheckDoc; } });
+var n8n_audit_instance_1 = require("./n8n-audit-instance");
+Object.defineProperty(exports, "n8nAuditInstanceDoc", { enumerable: true, get: function () { return n8n_audit_instance_1.n8nAuditInstanceDoc; } });
 //# sourceMappingURL=index.js.map
2 dist/mcp/tool-docs/system/index.js.map (vendored)
@@ -1 +1 @@
{"version":3,"file":"index.js","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/system/index.ts"],"names":[],"mappings":";;;AAAA,6DAA8D;AAArD,4HAAA,qBAAqB,OAAA;AAC9B,uDAAuD;AAA9C,qHAAA,iBAAiB,OAAA"}
{"version":3,"file":"index.js","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/system/index.ts"],"names":[],"mappings":";;;AAAA,6DAA8D;AAArD,4HAAA,qBAAqB,OAAA;AAC9B,uDAAuD;AAA9C,qHAAA,iBAAiB,OAAA;AAC1B,2DAA2D;AAAlD,yHAAA,mBAAmB,OAAA"}
@@ -12,4 +12,5 @@ export { n8nWorkflowVersionsDoc } from './n8n-workflow-versions';
 export { n8nDeployTemplateDoc } from './n8n-deploy-template';
 export { n8nManageDatatableDoc } from './n8n-manage-datatable';
 export { n8nGenerateWorkflowDoc } from './n8n-generate-workflow';
+export { n8nManageCredentialsDoc } from './n8n-manage-credentials';
 //# sourceMappingURL=index.d.ts.map
@@ -1 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/workflow_management/index.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,oBAAoB,EAAE,MAAM,uBAAuB,CAAC;AAC7D,OAAO,EAAE,iBAAiB,EAAE,MAAM,oBAAoB,CAAC;AACvD,OAAO,EAAE,wBAAwB,EAAE,MAAM,4BAA4B,CAAC;AACtE,OAAO,EAAE,2BAA2B,EAAE,MAAM,+BAA+B,CAAC;AAC5E,OAAO,EAAE,oBAAoB,EAAE,MAAM,uBAAuB,CAAC;AAC7D,OAAO,EAAE,mBAAmB,EAAE,MAAM,sBAAsB,CAAC;AAC3D,OAAO,EAAE,sBAAsB,EAAE,MAAM,yBAAyB,CAAC;AACjE,OAAO,EAAE,qBAAqB,EAAE,MAAM,wBAAwB,CAAC;AAC/D,OAAO,EAAE,kBAAkB,EAAE,MAAM,qBAAqB,CAAC;AACzD,OAAO,EAAE,gBAAgB,EAAE,MAAM,kBAAkB,CAAC;AACpD,OAAO,EAAE,sBAAsB,EAAE,MAAM,yBAAyB,CAAC;AACjE,OAAO,EAAE,oBAAoB,EAAE,MAAM,uBAAuB,CAAC;AAC7D,OAAO,EAAE,qBAAqB,EAAE,MAAM,wBAAwB,CAAC;AAC/D,OAAO,EAAE,sBAAsB,EAAE,MAAM,yBAAyB,CAAC"}
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/workflow_management/index.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,oBAAoB,EAAE,MAAM,uBAAuB,CAAC;AAC7D,OAAO,EAAE,iBAAiB,EAAE,MAAM,oBAAoB,CAAC;AACvD,OAAO,EAAE,wBAAwB,EAAE,MAAM,4BAA4B,CAAC;AACtE,OAAO,EAAE,2BAA2B,EAAE,MAAM,+BAA+B,CAAC;AAC5E,OAAO,EAAE,oBAAoB,EAAE,MAAM,uBAAuB,CAAC;AAC7D,OAAO,EAAE,mBAAmB,EAAE,MAAM,sBAAsB,CAAC;AAC3D,OAAO,EAAE,sBAAsB,EAAE,MAAM,yBAAyB,CAAC;AACjE,OAAO,EAAE,qBAAqB,EAAE,MAAM,wBAAwB,CAAC;AAC/D,OAAO,EAAE,kBAAkB,EAAE,MAAM,qBAAqB,CAAC;AACzD,OAAO,EAAE,gBAAgB,EAAE,MAAM,kBAAkB,CAAC;AACpD,OAAO,EAAE,sBAAsB,EAAE,MAAM,yBAAyB,CAAC;AACjE,OAAO,EAAE,oBAAoB,EAAE,MAAM,uBAAuB,CAAC;AAC7D,OAAO,EAAE,qBAAqB,EAAE,MAAM,wBAAwB,CAAC;AAC/D,OAAO,EAAE,sBAAsB,EAAE,MAAM,yBAAyB,CAAC;AACjE,OAAO,EAAE,uBAAuB,EAAE,MAAM,0BAA0B,CAAC"}
@@ -1,6 +1,6 @@
 "use strict";
 Object.defineProperty(exports, "__esModule", { value: true });
-exports.n8nGenerateWorkflowDoc = exports.n8nManageDatatableDoc = exports.n8nDeployTemplateDoc = exports.n8nWorkflowVersionsDoc = exports.n8nExecutionsDoc = exports.n8nTestWorkflowDoc = exports.n8nAutofixWorkflowDoc = exports.n8nValidateWorkflowDoc = exports.n8nListWorkflowsDoc = exports.n8nDeleteWorkflowDoc = exports.n8nUpdatePartialWorkflowDoc = exports.n8nUpdateFullWorkflowDoc = exports.n8nGetWorkflowDoc = exports.n8nCreateWorkflowDoc = void 0;
+exports.n8nManageCredentialsDoc = exports.n8nGenerateWorkflowDoc = exports.n8nManageDatatableDoc = exports.n8nDeployTemplateDoc = exports.n8nWorkflowVersionsDoc = exports.n8nExecutionsDoc = exports.n8nTestWorkflowDoc = exports.n8nAutofixWorkflowDoc = exports.n8nValidateWorkflowDoc = exports.n8nListWorkflowsDoc = exports.n8nDeleteWorkflowDoc = exports.n8nUpdatePartialWorkflowDoc = exports.n8nUpdateFullWorkflowDoc = exports.n8nGetWorkflowDoc = exports.n8nCreateWorkflowDoc = void 0;
 var n8n_create_workflow_1 = require("./n8n-create-workflow");
 Object.defineProperty(exports, "n8nCreateWorkflowDoc", { enumerable: true, get: function () { return n8n_create_workflow_1.n8nCreateWorkflowDoc; } });
 var n8n_get_workflow_1 = require("./n8n-get-workflow");
@@ -29,4 +29,6 @@ var n8n_manage_datatable_1 = require("./n8n-manage-datatable");
 Object.defineProperty(exports, "n8nManageDatatableDoc", { enumerable: true, get: function () { return n8n_manage_datatable_1.n8nManageDatatableDoc; } });
 var n8n_generate_workflow_1 = require("./n8n-generate-workflow");
 Object.defineProperty(exports, "n8nGenerateWorkflowDoc", { enumerable: true, get: function () { return n8n_generate_workflow_1.n8nGenerateWorkflowDoc; } });
+var n8n_manage_credentials_1 = require("./n8n-manage-credentials");
+Object.defineProperty(exports, "n8nManageCredentialsDoc", { enumerable: true, get: function () { return n8n_manage_credentials_1.n8nManageCredentialsDoc; } });
 //# sourceMappingURL=index.js.map
@@ -1 +1 @@
{"version":3,"file":"index.js","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/workflow_management/index.ts"],"names":[],"mappings":";;;AAAA,6DAA6D;AAApD,2HAAA,oBAAoB,OAAA;AAC7B,uDAAuD;AAA9C,qHAAA,iBAAiB,OAAA;AAC1B,uEAAsE;AAA7D,oIAAA,wBAAwB,OAAA;AACjC,6EAA4E;AAAnE,0IAAA,2BAA2B,OAAA;AACpC,6DAA6D;AAApD,2HAAA,oBAAoB,OAAA;AAC7B,2DAA2D;AAAlD,yHAAA,mBAAmB,OAAA;AAC5B,iEAAiE;AAAxD,+HAAA,sBAAsB,OAAA;AAC/B,+DAA+D;AAAtD,6HAAA,qBAAqB,OAAA;AAC9B,yDAAyD;AAAhD,uHAAA,kBAAkB,OAAA;AAC3B,mDAAoD;AAA3C,kHAAA,gBAAgB,OAAA;AACzB,iEAAiE;AAAxD,+HAAA,sBAAsB,OAAA;AAC/B,6DAA6D;AAApD,2HAAA,oBAAoB,OAAA;AAC7B,+DAA+D;AAAtD,6HAAA,qBAAqB,OAAA;AAC9B,iEAAiE;AAAxD,+HAAA,sBAAsB,OAAA"}
{"version":3,"file":"index.js","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/workflow_management/index.ts"],"names":[],"mappings":";;;AAAA,6DAA6D;AAApD,2HAAA,oBAAoB,OAAA;AAC7B,uDAAuD;AAA9C,qHAAA,iBAAiB,OAAA;AAC1B,uEAAsE;AAA7D,oIAAA,wBAAwB,OAAA;AACjC,6EAA4E;AAAnE,0IAAA,2BAA2B,OAAA;AACpC,6DAA6D;AAApD,2HAAA,oBAAoB,OAAA;AAC7B,2DAA2D;AAAlD,yHAAA,mBAAmB,OAAA;AAC5B,iEAAiE;AAAxD,+HAAA,sBAAsB,OAAA;AAC/B,+DAA+D;AAAtD,6HAAA,qBAAqB,OAAA;AAC9B,yDAAyD;AAAhD,uHAAA,kBAAkB,OAAA;AAC3B,mDAAoD;AAA3C,kHAAA,gBAAgB,OAAA;AACzB,iEAAiE;AAAxD,+HAAA,sBAAsB,OAAA;AAC/B,6DAA6D;AAApD,2HAAA,oBAAoB,OAAA;AAC7B,+DAA+D;AAAtD,6HAAA,qBAAqB,OAAA;AAC9B,iEAAiE;AAAxD,+HAAA,sBAAsB,OAAA;AAC/B,mEAAmE;AAA1D,iIAAA,uBAAuB,OAAA"}
@@ -1 +1 @@
{"version":3,"file":"n8n-update-partial-workflow.d.ts","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/workflow_management/n8n-update-partial-workflow.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,iBAAiB,EAAE,MAAM,UAAU,CAAC;AAE7C,eAAO,MAAM,2BAA2B,EAAE,iBA2azC,CAAC"}
{"version":3,"file":"n8n-update-partial-workflow.d.ts","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/workflow_management/n8n-update-partial-workflow.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,iBAAiB,EAAE,MAAM,UAAU,CAAC;AAE7C,eAAO,MAAM,2BAA2B,EAAE,iBA0bzC,CAAC"}
@@ -5,7 +5,7 @@ exports.n8nUpdatePartialWorkflowDoc = {
     name: 'n8n_update_partial_workflow',
     category: 'workflow_management',
     essentials: {
-        description: 'Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, moveNode, enable/disableNode, addConnection, removeConnection, rewireConnection, cleanStaleConnections, replaceConnections, updateSettings, updateName, add/removeTag, activateWorkflow, deactivateWorkflow, transferWorkflow. Supports smart parameters (branch, case) for multi-output nodes. Full support for AI connections (ai_languageModel, ai_tool, ai_memory, ai_embedding, ai_vectorStore, ai_document, ai_textSplitter, ai_outputParser).',
+        description: 'Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, patchNodeField, moveNode, enable/disableNode, addConnection, removeConnection, rewireConnection, cleanStaleConnections, replaceConnections, updateSettings, updateName, add/removeTag, activateWorkflow, deactivateWorkflow, transferWorkflow. Supports smart parameters (branch, case) for multi-output nodes. Full support for AI connections (ai_languageModel, ai_tool, ai_memory, ai_embedding, ai_vectorStore, ai_document, ai_textSplitter, ai_outputParser).',
         keyParameters: ['id', 'operations', 'continueOnError'],
         example: 'n8n_update_partial_workflow({id: "wf_123", operations: [{type: "rewireConnection", source: "IF", from: "Old", to: "New", branch: "true"}]})',
         performance: 'Fast (50-200ms)',
@@ -28,14 +28,15 @@ exports.n8nUpdatePartialWorkflowDoc = {
         ]
     },
     full: {
-        description: `Updates workflows using surgical diff operations instead of full replacement. Supports 17 operation types for precise modifications. Operations are validated and applied atomically by default - all succeed or none are applied.
+        description: `Updates workflows using surgical diff operations instead of full replacement. Supports 18 operation types for precise modifications. Operations are validated and applied atomically by default - all succeed or none are applied.
 
 ## Available Operations:
 
-### Node Operations (6 types):
+### Node Operations (7 types):
 - **addNode**: Add a new node with name, type, and position (required)
 - **removeNode**: Remove a node by ID or name
 - **updateNode**: Update node properties using dot notation (e.g., 'parameters.url')
+- **patchNodeField**: Surgically edit string fields using find/replace patches. Strict mode: errors if find string not found, errors if multiple matches (ambiguity) unless replaceAll is set. Supports replaceAll and regex flags.
 - **moveNode**: Change node position [x, y]
 - **enableNode**: Enable a disabled node
 - **disableNode**: Disable an active node
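The strict semantics described for `patchNodeField` (error when the find string is absent, error on ambiguous matches unless `replaceAll` is set, optional `regex` flag, sequential application) can be sketched as a small pure function. This is an illustrative model of the documented behavior, not the project's shipped implementation:

```javascript
// Apply one strict find/replace patch to a string field.
function applyPatch(text, { find, replace, replaceAll = false, regex = false }) {
  if (regex) {
    // Regex mode: the caller supplies a pattern; 'g' flag only with replaceAll.
    if (!new RegExp(find).test(text)) {
      throw new Error(`Patch failed: pattern not found: ${find}`);
    }
    return text.replace(new RegExp(find, replaceAll ? 'g' : ''), replace);
  }
  const count = text.split(find).length - 1;
  if (count === 0) {
    throw new Error(`Patch failed: string not found: ${find}`);
  }
  if (count > 1 && !replaceAll) {
    // Ambiguity detection: multiple matches are an error unless replaceAll is set.
    throw new Error(`Patch failed: ${count} matches for "${find}"; set replaceAll to replace all`);
  }
  return replaceAll ? text.split(find).join(replace) : text.replace(find, replace);
}

// Patches apply sequentially, so order matters.
function patchField(value, patches) {
  return patches.reduce(applyPatch, value);
}
```

Note how this differs from `__patch_find_replace`, which replaces the first occurrence and only warns on a missed find: here a missed or ambiguous find aborts the whole operation, which is what makes the atomic-apply guarantee meaningful.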
@@ -336,6 +337,11 @@ n8n_update_partial_workflow({
             '// Validate before applying\nn8n_update_partial_workflow({id: "bcd", operations: [{type: "removeNode", nodeName: "Old Process"}], validateOnly: true})',
             '// Surgically edit code using __patch_find_replace (avoids replacing entire code block)\nn8n_update_partial_workflow({id: "pfr1", operations: [{type: "updateNode", nodeName: "Code", updates: {"parameters.jsCode": {"__patch_find_replace": [{"find": "const limit = 10;", "replace": "const limit = 50;"}]}}}]})',
             '// Multiple sequential patches on the same property\nn8n_update_partial_workflow({id: "pfr2", operations: [{type: "updateNode", nodeName: "Code", updates: {"parameters.jsCode": {"__patch_find_replace": [{"find": "api.old-domain.com", "replace": "api.new-domain.com"}, {"find": "Authorization: Bearer old_token", "replace": "Authorization: Bearer new_token"}]}}}]})',
+            '\n// ============ PATCHNODEFIELD EXAMPLES (strict find/replace) ============',
+            '// Surgical code edit with patchNodeField (errors if not found)\nn8n_update_partial_workflow({id: "pnf1", operations: [{type: "patchNodeField", nodeName: "Code", fieldPath: "parameters.jsCode", patches: [{find: "const limit = 10;", replace: "const limit = 50;"}]}]})',
+            '// Replace all occurrences of a string\nn8n_update_partial_workflow({id: "pnf2", operations: [{type: "patchNodeField", nodeName: "Code", fieldPath: "parameters.jsCode", patches: [{find: "api.old.com", replace: "api.new.com", replaceAll: true}]}]})',
+            '// Multiple sequential patches\nn8n_update_partial_workflow({id: "pnf3", operations: [{type: "patchNodeField", nodeName: "Set Email", fieldPath: "parameters.assignments.assignments.6.value", patches: [{find: "© 2025 n8n-mcp", replace: "© 2026 n8n-mcp"}, {find: "<p>Unsubscribe</p>", replace: ""}]}]})',
+            '// Regex-based replacement\nn8n_update_partial_workflow({id: "pnf4", operations: [{type: "patchNodeField", nodeName: "Code", fieldPath: "parameters.jsCode", patches: [{find: "const\\\\s+limit\\\\s*=\\\\s*\\\\d+", replace: "const limit = 100", regex: true}]}]})',
             '\n// ============ AI CONNECTION EXAMPLES ============',
             '// Connect language model to AI Agent\nn8n_update_partial_workflow({id: "ai1", operations: [{type: "addConnection", source: "OpenAI Chat Model", target: "AI Agent", sourceOutput: "ai_languageModel"}]})',
             '// Connect tool to AI Agent\nn8n_update_partial_workflow({id: "ai2", operations: [{type: "addConnection", source: "HTTP Request Tool", target: "AI Agent", sourceOutput: "ai_tool"}]})',
@@ -374,7 +380,10 @@ n8n_update_partial_workflow({
             'Configure Vector Store retrieval systems',
             'Swap language models in existing AI workflows',
             'Batch-update AI tool connections',
-            'Transfer workflows between team projects (enterprise)'
+            'Transfer workflows between team projects (enterprise)',
+            'Surgical string edits in email templates, code, or JSON bodies (patchNodeField)',
+            'Fix typos or update URLs in large HTML content without re-transmitting the full string',
+            'Bulk find/replace across node field content (replaceAll flag)'
         ],
         performance: 'Very fast - typically 50-200ms. Much faster than full updates as only changes are processed.',
         bestPractices: [
@@ -397,7 +406,10 @@ n8n_update_partial_workflow({
             'To remove properties, set them to null in the updates object',
             'When migrating from deprecated properties, remove the old property and add the new one in the same operation',
             'Use null to resolve mutual exclusivity validation errors between properties',
-            'Batch multiple property removals in a single updateNode operation for efficiency'
+            'Batch multiple property removals in a single updateNode operation for efficiency',
+            'Prefer patchNodeField over __patch_find_replace for strict error handling — patchNodeField errors on not-found and detects ambiguous matches',
+            'Use replaceAll: true in patchNodeField when you want to replace all occurrences of a string',
+            'Use regex: true in patchNodeField for pattern-based replacements (e.g., whitespace-insensitive matching)'
         ],
         pitfalls: [
             '**REQUIRES N8N_API_URL and N8N_API_KEY environment variables** - will not work without n8n API access',
@@ -420,6 +432,9 @@ n8n_update_partial_workflow({
             '**Corrupted workflows beyond repair**: Workflows in paradoxical states (API returns corrupt, API rejects updates) cannot be fixed via API - must be recreated',
             '**__patch_find_replace for code edits**: Instead of replacing entire code blocks, use `{"parameters.jsCode": {"__patch_find_replace": [{"find": "old text", "replace": "new text"}]}}` to surgically edit string properties',
             '__patch_find_replace replaces the FIRST occurrence of each find string. Patches are applied sequentially — order matters',
+            '**patchNodeField is strict**: it ERRORS if the find string is not found (unlike __patch_find_replace which only warns)',
+            '**patchNodeField detects ambiguity**: if find matches multiple times, it ERRORS unless replaceAll: true is set',
+            'When using regex: true in patchNodeField, escape special regex characters (., *, +, etc.) if you want literal matching',
             'To remove a property, set it to null in the updates object',
             'When properties are mutually exclusive (e.g., continueOnFail and onError), setting only the new property will fail - you must remove the old one with null',
             'Removing a required property may cause validation errors - check node documentation first',
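The "set a property to null to remove it" convention that these pitfalls rely on can be modeled as a dot-notation updater. A minimal illustrative sketch (hypothetical helper, assuming null means deletion, as the docs state):

```javascript
// Illustrative dot-notation updater for updateNode-style updates objects:
// assigns nested values, and deletes the property when the value is null.
function applyUpdates(node, updates) {
  for (const [path, value] of Object.entries(updates)) {
    const keys = path.split('.');
    const last = keys.pop();
    let target = node;
    for (const key of keys) {
      // Create intermediate objects as needed.
      if (typeof target[key] !== 'object' || target[key] === null) target[key] = {};
      target = target[key];
    }
    if (value === null) delete target[last];
    else target[last] = value;
  }
  return node;
}

// Resolving a mutual-exclusivity conflict: remove the deprecated
// continueOnFail while setting the new onError in one operation.
const node = { parameters: { url: 'http://old' }, continueOnFail: true };
applyUpdates(node, {
  'parameters.url': 'https://new',
  'continueOnFail': null,
  'onError': 'continueRegularOutput',
});
```

This is why the docs recommend batching removals and additions in a single `updateNode`: both halves of a migration land atomically instead of leaving the node in a transient invalid state.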
@@ -1 +1 @@
{"version":3,"file":"n8n-update-partial-workflow.js","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/workflow_management/n8n-update-partial-workflow.ts"],"names":[],"mappings":";;;AAEa,QAAA,2BAA2B,GAAsB;IAC5D,IAAI,EAAE,6BAA6B;IACnC,QAAQ,EAAE,qBAAqB;IAC/B,UAAU,EAAE;QACV,WAAW,EAAE,khBAAkhB;QAC/hB,aAAa,EAAE,CAAC,IAAI,EAAE,YAAY,EAAE,iBAAiB,CAAC;QACtD,OAAO,EAAE,6IAA6I;QACtJ,WAAW,EAAE,iBAAiB;QAC9B,IAAI,EAAE;YACJ,gJAAgJ;YAChJ,oGAAoG;YACpG,mDAAmD;YACnD,wCAAwC;YACxC,6BAA6B;YAC7B,6DAA6D;YAC7D,uDAAuD;YACvD,0DAA0D;YAC1D,kCAAkC;YAClC,iFAAiF;YACjF,mDAAmD;YACnD,gGAAgG;YAChG,sGAAsG;YACtG,yIAAyI;YACzI,0GAA0G;SAC3G;KACF;IACD,IAAI,EAAE;QACJ,WAAW,EAAE;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;iCAqRgB;QAC7B,UAAU,EAAE;YACV,EAAE,EAAE,EAAE,IAAI,EAAE,QAAQ,EAAE,QAAQ,EAAE,IAAI,EAAE,WAAW,EAAE,uBAAuB,EAAE;YAC5E,UAAU,EAAE;gBACV,IAAI,EAAE,OAAO;gBACb,QAAQ,EAAE,IAAI;gBACd,WAAW,EAAE,iIAAiI;aAC/I;YACD,YAAY,EAAE,EAAE,IAAI,EAAE,SAAS,EAAE,WAAW,EAAE,yDAAyD,EAAE;YACzG,eAAe,EAAE,EAAE,IAAI,EAAE,SAAS,EAAE,WAAW,EAAE,6IAA6I,EAAE;YAChM,MAAM,EAAE,EAAE,IAAI,EAAE,QAAQ,EAAE,WAAW,EAAE,qIAAqI,EAAE;SAC/K;QACD,OAAO,EAAE,uNAAuN;QAChO,QAAQ,EAAE;YACR,mOAAmO;YACnO,wNAAwN;YACxN,kTAAkT;YAClT,0VAA0V;YAC1V,gMAAgM;YAChM,mLAAmL;YACnL,mLAAmL;YACnL,6UAA6U;YAC7U,oMAAoM;YACpM,oYAAoY;YACpY,qJAAqJ;YACrJ,+MAA+M;YAC/M,kSAAkS;YAClS,0LAA0L;YAC1L,wJAAwJ;YACxJ,qTAAqT;YACrT,8WAA8W;YAC9W,uDAAuD;YACvD,2MAA2M;YAC3M,wLAAwL;YACxL,+LAA+L;YAC/L,gNAAgN;YAChN,4hBAA4hB;YAC5hB,+WAA+W;YAC/W,qWAAqW;YACrW,uVAAuV;YACvV,qPAAqP;YACrP,0eAA0e;YAC1e,6DAA6D;YAC7D,+JAA+J;YAC/J,+NAA+N;YAC/N,gLAAgL;YAChL,oOAAoO;YACpO,gLAAgL;YAChL,0DAA0D;YAC1D,0KAA0K;YAC1K,+LAA+L;SAChM;QACD,QAAQ,EAAE;YACR,yCAAyC;YACzC,uDAAuD;YACvD,wDAAwD;YACxD,+CAA+C;YAC/C,+BAA+B;YAC/B,iCAAiC;YACjC,8CAA8C;YAC9C,sBAAsB;YACtB,2BAA2B;YAC3B,yBAAyB;YACzB,iEAAiE;YAC
jE,+CAA+C;YAC/C,2CAA2C;YAC3C,0CAA0C;YAC1C,+CAA+C;YAC/C,kCAAkC;YAClC,uDAAuD;SACxD;QACD,WAAW,EAAE,8FAA8F;QAC3G,aAAa,EAAE;YACb,kPAAkP;YAClP,iEAAiE;YACjE,+DAA+D;YAC/D,oDAAoD;YACpD,yDAAyD;YACzD,iDAAiD;YACjD,gEAAgE;YAChE,qDAAqD;YACrD,mCAAmC;YACnC,wCAAwC;YACxC,gDAAgD;YAChD,8FAA8F;YAC9F,2EAA2E;YAC3E,6DAA6D;YAC7D,oEAAoE;YACpE,8EAA8E;YAC9E,8DAA8D;YAC9D,8GAA8G;YAC9G,6EAA6E;YAC7E,kFAAkF;SACnF;QACD,QAAQ,EAAE;YACR,uGAAuG;YACvG,wEAAwE;YACxE,6DAA6D;YAC7D,sFAAsF;YACtF,4DAA4D;YAC5D,yEAAyE;YACzE,yFAAyF;YACzF,wFAAwF;YACxF,mGAAmG;YACnG,iFAAiF;YACjF,iNAAiN;YACjN,kKAAkK;YAClK,4EAA4E;YAC5E,yFAAyF;YACzF,wLAAwL;YACxL,oIAAoI;YACpI,wJAAwJ;YACxJ,+JAA+J;YAC/J,6NAA6N;YAC7N,0HAA0H;YAC1H,4DAA4D;YAC5D,4JAA4J;YAC5J,2FAA2F;YAC3F,gHAAgH;YAChH,kHAAkH;SACnH;QACD,YAAY,EAAE,CAAC,0BAA0B,EAAE,kBAAkB,EAAE,mBAAmB,EAAE,qBAAqB,CAAC;KAC3G;CACF,CAAC"}
{"version":3,"file":"n8n-update-partial-workflow.js","sourceRoot":"","sources":["../../../../src/mcp/tool-docs/workflow_management/n8n-update-partial-workflow.ts"],"names":[],"mappings":";;;AAEa,QAAA,2BAA2B,GAAsB;IAC5D,IAAI,EAAE,6BAA6B;IACnC,QAAQ,EAAE,qBAAqB;IAC/B,UAAU,EAAE;QACV,WAAW,EAAE,kiBAAkiB;QAC/iB,aAAa,EAAE,CAAC,IAAI,EAAE,YAAY,EAAE,iBAAiB,CAAC;QACtD,OAAO,EAAE,6IAA6I;QACtJ,WAAW,EAAE,iBAAiB;QAC9B,IAAI,EAAE;YACJ,gJAAgJ;YAChJ,oGAAoG;YACpG,mDAAmD;YACnD,wCAAwC;YACxC,6BAA6B;YAC7B,6DAA6D;YAC7D,uDAAuD;YACvD,0DAA0D;YAC1D,kCAAkC;YAClC,iFAAiF;YACjF,mDAAmD;YACnD,gGAAgG;YAChG,sGAAsG;YACtG,yIAAyI;YACzI,0GAA0G;SAC3G;KACF;IACD,IAAI,EAAE;QACJ,WAAW,EAAE;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;iCAsRgB;QAC7B,UAAU,EAAE;YACV,EAAE,EAAE,EAAE,IAAI,EAAE,QAAQ,EAAE,QAAQ,EAAE,IAAI,EAAE,WAAW,EAAE,uBAAuB,EAAE;YAC5E,UAAU,EAAE;gBACV,IAAI,EAAE,OAAO;gBACb,QAAQ,EAAE,IAAI;gBACd,WAAW,EAAE,iIAAiI;aAC/I;YACD,YAAY,EAAE,EAAE,IAAI,EAAE,SAAS,EAAE,WAAW,EAAE,yDAAyD,EAAE;YACzG,eAAe,EAAE,EAAE,IAAI,EAAE,SAAS,EAAE,WAAW,EAAE,6IAA6I,EAAE;YAChM,MAAM,EAAE,EAAE,IAAI,EAAE,QAAQ,EAAE,WAAW,EAAE,qIAAqI,EAAE;SAC/K;QACD,OAAO,EAAE,uNAAuN;QAChO,QAAQ,EAAE;YACR,mOAAmO;YACnO,wNAAwN;YACxN,kTAAkT;YAClT,0VAA0V;YAC1V,gMAAgM;YAChM,mLAAmL;YACnL,mLAAmL;YACnL,6UAA6U;YAC7U,oMAAoM;YACpM,oYAAoY;YACpY,qJAAqJ;YACrJ,+MAA+M;YAC/M,kSAAkS;YAClS,0LAA0L;YAC1L,wJAAwJ;YACxJ,qTAAqT;YACrT,8WAA8W;YAC9W,8EAA8E;YAC9E,4QAA4Q;YAC5Q,yPAAyP;YACzP,8SAA8S;YAC9S,sQAAsQ;YACtQ,uDAAuD;YACvD,2MAA2M;YAC3M,wLAAwL;YACxL,+LAA+L;YAC/L,gNAAgN;YAChN,4hBAA4hB;YAC5hB,+WAA+W;YAC/W,qWAAqW;YACrW,uVAAuV;YACvV,qPAAqP;YACrP,0eAA0e;YAC1e,6DAA6D;YAC7D,+JAA+J;YAC/J,+NAA+N;YAC/N,gLAAgL;YAChL,oOAAoO;YACpO,gLAAgL;YAChL,0DAA0D;YAC1D,0KAA0K;YAC1K,+LAA+L;SAChM;QACD,QAAQ,EAAE;YACR,yCAAyC;YACzC,uDAAuD;YACvD,wDAAwD;YACxD,+CAA+C;YAC/C,+BAA+B;YAC/B,iCAAiC;YA
CjC,8CAA8C;YAC9C,sBAAsB;YACtB,2BAA2B;YAC3B,yBAAyB;YACzB,iEAAiE;YACjE,+CAA+C;YAC/C,2CAA2C;YAC3C,0CAA0C;YAC1C,+CAA+C;YAC/C,kCAAkC;YAClC,uDAAuD;YACvD,iFAAiF;YACjF,wFAAwF;YACxF,+DAA+D;SAChE;QACD,WAAW,EAAE,8FAA8F;QAC3G,aAAa,EAAE;YACb,kPAAkP;YAClP,iEAAiE;YACjE,+DAA+D;YAC/D,oDAAoD;YACpD,yDAAyD;YACzD,iDAAiD;YACjD,gEAAgE;YAChE,qDAAqD;YACrD,mCAAmC;YACnC,wCAAwC;YACxC,gDAAgD;YAChD,8FAA8F;YAC9F,2EAA2E;YAC3E,6DAA6D;YAC7D,oEAAoE;YACpE,8EAA8E;YAC9E,8DAA8D;YAC9D,8GAA8G;YAC9G,6EAA6E;YAC7E,kFAAkF;YAClF,8IAA8I;YAC9I,6FAA6F;YAC7F,0GAA0G;SAC3G;QACD,QAAQ,EAAE;YACR,uGAAuG;YACvG,wEAAwE;YACxE,6DAA6D;YAC7D,sFAAsF;YACtF,4DAA4D;YAC5D,yEAAyE;YACzE,yFAAyF;YACzF,wFAAwF;YACxF,mGAAmG;YACnG,iFAAiF;YACjF,iNAAiN;YACjN,kKAAkK;YAClK,4EAA4E;YAC5E,yFAAyF;YACzF,wLAAwL;YACxL,oIAAoI;YACpI,wJAAwJ;YACxJ,+JAA+J;YAC/J,6NAA6N;YAC7N,0HAA0H;YAC1H,wHAAwH;YACxH,gHAAgH;YAChH,wHAAwH;YACxH,4DAA4D;YAC5D,4JAA4J;YAC5J,2FAA2F;YAC3F,gHAAgH;YAChH,kHAAkH;SACnH;QACD,YAAY,EAAE,CAAC,0BAA0B,EAAE,kBAAkB,EAAE,mBAAmB,EAAE,qBAAqB,CAAC;KAC3G;CACF,CAAC"}
2 dist/mcp/tools-n8n-manager.d.ts.map (vendored)
@@ -1 +1 @@
{"version":3,"file":"tools-n8n-manager.d.ts","sourceRoot":"","sources":["../../src/mcp/tools-n8n-manager.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,cAAc,EAAE,MAAM,UAAU,CAAC;AAQ1C,eAAO,MAAM,kBAAkB,EAAE,cAAc,EAgrB9C,CAAC"}
{"version":3,"file":"tools-n8n-manager.d.ts","sourceRoot":"","sources":["../../src/mcp/tools-n8n-manager.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,cAAc,EAAE,MAAM,UAAU,CAAC;AAQ1C,eAAO,MAAM,kBAAkB,EAAE,cAAc,EA6uB9C,CAAC"}
63 dist/mcp/tools-n8n-manager.js (vendored)
@@ -141,7 +141,7 @@ exports.n8nManagementTools = [
     },
     {
         name: 'n8n_update_partial_workflow',
-        description: `Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, moveNode, enable/disableNode, addConnection, removeConnection, updateSettings, updateName, add/removeTag, activate/deactivateWorkflow, transferWorkflow. See tools_documentation("n8n_update_partial_workflow", "full") for details.`,
+        description: `Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, patchNodeField, moveNode, enable/disableNode, addConnection, removeConnection, updateSettings, updateName, add/removeTag, activate/deactivateWorkflow, transferWorkflow. See tools_documentation("n8n_update_partial_workflow", "full") for details.`,
         inputSchema: {
             type: 'object',
             additionalProperties: true,
@@ -635,6 +635,27 @@ exports.n8nManagementTools = [
             openWorldHint: true,
         },
     },
+    {
+        name: 'n8n_manage_credentials',
+        description: 'Manage n8n credentials. Actions: list, get, create, update, delete, getSchema. Use getSchema to discover required fields before creating. SECURITY: credential data values are never logged.',
+        inputSchema: {
+            type: 'object',
+            properties: {
+                action: { type: 'string', enum: ['list', 'get', 'create', 'update', 'delete', 'getSchema'], description: 'Action to perform' },
+                id: { type: 'string', description: 'Credential ID (required for get, update, delete)' },
+                name: { type: 'string', description: 'Credential name (required for create)' },
+                type: { type: 'string', description: 'Credential type e.g. httpHeaderAuth, httpBasicAuth, oAuth2Api (required for create, getSchema)' },
+                data: { type: 'object', description: 'Credential data fields - use getSchema to discover required fields (required for create, optional for update)' },
+            },
+            required: ['action'],
+        },
+        annotations: {
+            title: 'Manage Credentials',
+            readOnlyHint: false,
+            destructiveHint: false,
+            openWorldHint: true,
+        },
+    },
     {
         name: 'n8n_generate_workflow',
         description: 'Generate an n8n workflow from a natural language description using AI. ' +
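The schema above only hard-requires `action`; the other per-action requirements (`id` for get/update/delete, `name`/`type`/`data` for create, `type` for getSchema) live in the field descriptions. A client-side pre-check for those conditional requirements might look like this (hypothetical helper, not part of the shipped tool):

```javascript
// Per-action required fields, as implied by the schema descriptions.
const REQUIRED = {
  list: [],
  get: ['id'],
  create: ['name', 'type', 'data'],
  update: ['id'],
  delete: ['id'],
  getSchema: ['type'],
};

// Return the list of missing fields for a proposed call, or ['action']
// when the action itself is missing or unrecognized.
function missingCredentialFields(args) {
  const required = REQUIRED[args.action];
  if (!required) return ['action'];
  return required.filter((field) => args[field] === undefined);
}
```

Checking these before calling the tool gives an agent a cheaper failure path than a round trip to the n8n API; for `create`, the recommended flow remains `getSchema` first to discover the credential type's own required `data` fields.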
@@ -675,5 +696,45 @@ exports.n8nManagementTools = [
|
||||
openWorldHint: true,
|
||||
},
|
||||
},
|
||||
{
|
||||
name: 'n8n_audit_instance',
|
||||
description: `Security audit of n8n instance. Combines n8n's built-in audit API (credentials, database, nodes, instance, filesystem risks) with deep workflow scanning (hardcoded secrets via 50+ regex patterns, unauthenticated webhooks, error handling gaps, data retention risks). Returns actionable markdown report with remediation steps using n8n_manage_credentials and n8n_update_partial_workflow.`,
|
||||
inputSchema: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
categories: {
|
||||
type: 'array',
|
||||
items: {
|
||||
type: 'string',
|
||||
enum: ['credentials', 'database', 'nodes', 'instance', 'filesystem'],
|
||||
},
|
||||
description: 'Built-in audit categories to check (default: all 5)',
|
||||
},
|
||||
includeCustomScan: {
|
||||
type: 'boolean',
|
||||
description: 'Run deep workflow scanning for secrets, webhooks, error handling (default: true)',
|
||||
},
|
||||
daysAbandonedWorkflow: {
|
||||
type: 'number',
|
||||
description: 'Days threshold for abandoned workflow detection (default: 90)',
|
||||
},
|
||||
customChecks: {
|
||||
type: 'array',
|
||||
items: {
|
||||
type: 'string',
|
||||
enum: ['hardcoded_secrets', 'unauthenticated_webhooks', 'error_handling', 'data_retention'],
|
||||
},
|
||||
description: 'Specific custom checks to run (default: all 4)',
|
||||
},
|
||||
},
|
||||
},
|
||||
annotations: {
|
||||
title: 'Audit Instance Security',
|
||||
readOnlyHint: true,
|
||||
destructiveHint: false,
|
||||
idempotentHint: true,
|
||||
openWorldHint: true,
|
||||
},
|
||||
},
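The `hardcoded_secrets` check described above matches workflow parameters against a large regex list. A reduced sketch with two illustrative patterns (the real scanner uses 50+; these two key shapes are assumptions for demonstration, not the project's actual pattern list):

```javascript
// Two example secret shapes; the real tool's regex list is much larger.
const SECRET_PATTERNS = [
    { name: 'openai_key', regex: /sk-[A-Za-z0-9]{20,}/ },  // OpenAI-style key
    { name: 'aws_access_key', regex: /AKIA[0-9A-Z]{16}/ }, // AWS access key ID
];

// Return the names of every pattern that matches the given text.
function findHardcodedSecrets(text) {
    return SECRET_PATTERNS
        .filter(p => p.regex.test(text))
        .map(p => p.name);
}

const params = 'Authorization: Bearer sk-abcdefghijklmnopqrstuvwx';
```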
];
//# sourceMappingURL=tools-n8n-manager.js.map
2 dist/mcp/tools-n8n-manager.js.map vendored
File diff suppressed because one or more lines are too long
6 dist/services/n8n-api-client.d.ts vendored
@@ -24,6 +24,11 @@ export declare class N8nApiClient {
activateWorkflow(id: string): Promise<Workflow>;
deactivateWorkflow(id: string): Promise<Workflow>;
listWorkflows(params?: WorkflowListParams): Promise<WorkflowListResponse>;
generateAudit(options?: {
categories?: string[];
daysAbandonedWorkflow?: number;
}): Promise<any>;
listAllWorkflows(): Promise<Workflow[]>;
getExecution(id: string, includeData?: boolean): Promise<Execution>;
listExecutions(params?: ExecutionListParams): Promise<ExecutionListResponse>;
deleteExecution(id: string): Promise<void>;
@@ -33,6 +38,7 @@ export declare class N8nApiClient {
createCredential(credential: Partial<Credential>): Promise<Credential>;
updateCredential(id: string, credential: Partial<Credential>): Promise<Credential>;
deleteCredential(id: string): Promise<void>;
getCredentialSchema(typeName: string): Promise<any>;
listTags(params?: TagListParams): Promise<TagListResponse>;
createTag(tag: Partial<Tag>): Promise<Tag>;
updateTag(id: string, tag: Partial<Tag>): Promise<Tag>;
2 dist/services/n8n-api-client.d.ts.map vendored
@@ -1 +1 @@
{"version":3,"file":"n8n-api-client.d.ts","sourceRoot":"","sources":["../../src/services/n8n-api-client.ts"],"names":[],"mappings":"AAEA,OAAO,EACL,QAAQ,EACR,kBAAkB,EAClB,oBAAoB,EACpB,SAAS,EACT,mBAAmB,EACnB,qBAAqB,EACrB,UAAU,EACV,oBAAoB,EACpB,sBAAsB,EACtB,GAAG,EACH,aAAa,EACb,eAAe,EACf,mBAAmB,EACnB,cAAc,EACd,QAAQ,EACR,cAAc,EAGd,mBAAmB,EACnB,uBAAuB,EACvB,uBAAuB,EACvB,SAAS,EACT,eAAe,EACf,mBAAmB,EACnB,YAAY,EACZ,sBAAsB,EACtB,yBAAyB,EACzB,yBAAyB,EACzB,wBAAwB,EACxB,yBAAyB,EAC1B,MAAM,kBAAkB,CAAC;AAS1B,MAAM,WAAW,kBAAkB;IACjC,OAAO,EAAE,MAAM,CAAC;IAChB,MAAM,EAAE,MAAM,CAAC;IACf,OAAO,CAAC,EAAE,MAAM,CAAC;IACjB,UAAU,CAAC,EAAE,MAAM,CAAC;CACrB;AAED,qBAAa,YAAY;IACvB,OAAO,CAAC,MAAM,CAAgB;IAC9B,OAAO,CAAC,UAAU,CAAS;IAC3B,OAAO,CAAC,OAAO,CAAS;IACxB,OAAO,CAAC,WAAW,CAA+B;IAClD,OAAO,CAAC,cAAc,CAA+C;gBAEzD,MAAM,EAAE,kBAAkB;IAqDhC,UAAU,IAAI,OAAO,CAAC,cAAc,GAAG,IAAI,CAAC;YAyBpC,gBAAgB;IAa9B,oBAAoB,IAAI,cAAc,GAAG,IAAI;IAKvC,WAAW,IAAI,OAAO,CAAC,mBAAmB,CAAC;IA6C3C,cAAc,CAAC,QAAQ,EAAE,OAAO,CAAC,QAAQ,CAAC,GAAG,OAAO,CAAC,QAAQ,CAAC;IAU9D,WAAW,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,QAAQ,CAAC;IAS1C,cAAc,CAAC,EAAE,EAAE,MAAM,EAAE,QAAQ,EAAE,OAAO,CAAC,QAAQ,CAAC,GAAG,OAAO,CAAC,QAAQ,CAAC;IAsC1E,cAAc,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,QAAQ,CAAC;IAS7C,gBAAgB,CAAC,EAAE,EAAE,MAAM,EAAE,oBAAoB,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAQzE,gBAAgB,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,QAAQ,CAAC;IAS/C,kBAAkB,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,QAAQ,CAAC;IAsBjD,aAAa,CAAC,MAAM,GAAE,kBAAuB,GAAG,OAAO,CAAC,oBAAoB,CAAC;IAU7E,YAAY,CAAC,EAAE,EAAE,MAAM,EAAE,WAAW,UAAQ,GAAG,OAAO,CAAC,SAAS,CAAC;IAwBjE,cAAc,CAAC,MAAM,GAAE,mBAAwB,GAAG,OAAO,CAAC,qBAAqB,CAAC;IAShF,eAAe,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAS1C,cAAc,CAAC,OAAO,EAAE,cAAc,GAAG,OAAO,CAAC,GAAG,CAAC;IAiErD,eAAe,CAAC,MAAM,GAAE,oBAAyB,GAAG,OAAO,CAAC,sBAAsB,CAAC;IASnF,aAAa,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,UAAU,CAAC;IAS9C,gBAAgB,CAAC,UAAU,EAAE,OAAO,CAAC,UAAU,CAAC,GAAG,OAAO,CAAC,UAAU,CAAC;IAStE,gBAAgB,CAAC,EAAE,EAAE,MAAM,EAAE,UAAU,EAAE,OAAO,CAAC,UAAU,CAAC,GAAG,OAAO,CAAC,UAAU,CAAC;I
ASlF,gBAAgB,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAsB3C,QAAQ,CAAC,MAAM,GAAE,aAAkB,GAAG,OAAO,CAAC,eAAe,CAAC;IAS9D,SAAS,CAAC,GAAG,EAAE,OAAO,CAAC,GAAG,CAAC,GAAG,OAAO,CAAC,GAAG,CAAC;IAS1C,SAAS,CAAC,EAAE,EAAE,MAAM,EAAE,GAAG,EAAE,OAAO,CAAC,GAAG,CAAC,GAAG,OAAO,CAAC,GAAG,CAAC;IAStD,SAAS,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAQpC,kBAAkB,CAAC,UAAU,EAAE,MAAM,EAAE,MAAM,EAAE,MAAM,EAAE,GAAG,OAAO,CAAC,GAAG,EAAE,CAAC;IAUxE,sBAAsB,IAAI,OAAO,CAAC,mBAAmB,CAAC;IAStD,iBAAiB,CAAC,KAAK,UAAQ,GAAG,OAAO,CAAC,uBAAuB,CAAC;IASlE,iBAAiB,CACrB,OAAO,EAAE,MAAM,EACf,SAAS,CAAC,EAAE,MAAM,EAAE,GACnB,OAAO,CAAC,uBAAuB,CAAC;IAa7B,YAAY,IAAI,OAAO,CAAC,QAAQ,EAAE,CAAC;IAWnC,cAAc,CAAC,QAAQ,EAAE,OAAO,CAAC,QAAQ,CAAC,GAAG,OAAO,CAAC,QAAQ,CAAC;IAS9D,cAAc,CAAC,EAAE,EAAE,MAAM,EAAE,QAAQ,EAAE,OAAO,CAAC,QAAQ,CAAC,GAAG,OAAO,CAAC,QAAQ,CAAC;IAS1E,cAAc,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAQzC,eAAe,CAAC,MAAM,EAAE;QAAE,IAAI,EAAE,MAAM,CAAC;QAAC,OAAO,CAAC,EAAE,eAAe,EAAE,CAAA;KAAE,GAAG,OAAO,CAAC,SAAS,CAAC;IAS1F,cAAc,CAAC,MAAM,GAAE,mBAAwB,GAAG,OAAO,CAAC;QAAE,IAAI,EAAE,SAAS,EAAE,CAAC;QAAC,UAAU,CAAC,EAAE,MAAM,GAAG,IAAI,CAAA;KAAE,CAAC;IAS5G,YAAY,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,SAAS,CAAC;IAS5C,eAAe,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,EAAE;QAAE,IAAI,EAAE,MAAM,CAAA;KAAE,GAAG,OAAO,CAAC,SAAS,CAAC;IASzE,eAAe,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAQ1C,gBAAgB,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,GAAE,sBAA2B,GAAG,OAAO,CAAC;QAAE,IAAI,EAAE,YAAY,EAAE,CAAC;QAAC,UAAU,CAAC,EAAE,MAAM,GAAG,IAAI,CAAA;KAAE,CAAC;IAYhI,mBAAmB,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,EAAE,yBAAyB,GAAG,OAAO,CAAC,GAAG,CAAC;IAShF,mBAAmB,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,EAAE,yBAAyB,GAAG,OAAO,CAAC,GAAG,CAAC;IAShF,kBAAkB,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,EAAE,wBAAwB,GAAG,OAAO,CAAC,GAAG,CAAC;IAS9E,mBAAmB,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,EAAE,yBAAyB,GAAG,OAAO,CAAC,GAAG,CAAC;IAgBtF,OAAO,CAAC,wBAAwB;IAkBhC,OAAO,CAAC,oBAAoB;CAmC7B"}
{"version":3,"file":"n8n-api-client.d.ts","sourceRoot":"","sources":["../../src/services/n8n-api-client.ts"],"names":[],"mappings":"AAEA,OAAO,EACL,QAAQ,EACR,kBAAkB,EAClB,oBAAoB,EACpB,SAAS,EACT,mBAAmB,EACnB,qBAAqB,EACrB,UAAU,EACV,oBAAoB,EACpB,sBAAsB,EACtB,GAAG,EACH,aAAa,EACb,eAAe,EACf,mBAAmB,EACnB,cAAc,EACd,QAAQ,EACR,cAAc,EAGd,mBAAmB,EACnB,uBAAuB,EACvB,uBAAuB,EACvB,SAAS,EACT,eAAe,EACf,mBAAmB,EACnB,YAAY,EACZ,sBAAsB,EACtB,yBAAyB,EACzB,yBAAyB,EACzB,wBAAwB,EACxB,yBAAyB,EAC1B,MAAM,kBAAkB,CAAC;AAS1B,MAAM,WAAW,kBAAkB;IACjC,OAAO,EAAE,MAAM,CAAC;IAChB,MAAM,EAAE,MAAM,CAAC;IACf,OAAO,CAAC,EAAE,MAAM,CAAC;IACjB,UAAU,CAAC,EAAE,MAAM,CAAC;CACrB;AAED,qBAAa,YAAY;IACvB,OAAO,CAAC,MAAM,CAAgB;IAC9B,OAAO,CAAC,UAAU,CAAS;IAC3B,OAAO,CAAC,OAAO,CAAS;IACxB,OAAO,CAAC,WAAW,CAA+B;IAClD,OAAO,CAAC,cAAc,CAA+C;gBAEzD,MAAM,EAAE,kBAAkB;IAuDhC,UAAU,IAAI,OAAO,CAAC,cAAc,GAAG,IAAI,CAAC;YAyBpC,gBAAgB;IAa9B,oBAAoB,IAAI,cAAc,GAAG,IAAI;IAKvC,WAAW,IAAI,OAAO,CAAC,mBAAmB,CAAC;IA6C3C,cAAc,CAAC,QAAQ,EAAE,OAAO,CAAC,QAAQ,CAAC,GAAG,OAAO,CAAC,QAAQ,CAAC;IAU9D,WAAW,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,QAAQ,CAAC;IAS1C,cAAc,CAAC,EAAE,EAAE,MAAM,EAAE,QAAQ,EAAE,OAAO,CAAC,QAAQ,CAAC,GAAG,OAAO,CAAC,QAAQ,CAAC;IAsC1E,cAAc,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,QAAQ,CAAC;IAS7C,gBAAgB,CAAC,EAAE,EAAE,MAAM,EAAE,oBAAoB,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAQzE,gBAAgB,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,QAAQ,CAAC;IAS/C,kBAAkB,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,QAAQ,CAAC;IAsBjD,aAAa,CAAC,MAAM,GAAE,kBAAuB,GAAG,OAAO,CAAC,oBAAoB,CAAC;IAU7E,aAAa,CAAC,OAAO,CAAC,EAAE;QAAE,UAAU,CAAC,EAAE,MAAM,EAAE,CAAC;QAAC,qBAAqB,CAAC,EAAE,MAAM,CAAA;KAAE,GAAG,OAAO,CAAC,GAAG,CAAC;IAehG,gBAAgB,IAAI,OAAO,CAAC,QAAQ,EAAE,CAAC;IAmBvC,YAAY,CAAC,EAAE,EAAE,MAAM,EAAE,WAAW,UAAQ,GAAG,OAAO,CAAC,SAAS,CAAC;IAwBjE,cAAc,CAAC,MAAM,GAAE,mBAAwB,GAAG,OAAO,CAAC,qBAAqB,CAAC;IAShF,eAAe,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAS1C,cAAc,CAAC,OAAO,EAAE,cAAc,GAAG,OAAO,CAAC,GAAG,CAAC;IAiErD,eAAe,CAAC,MAAM,GAAE,oBAAyB,GAAG,OAAO,CAAC,sBAAsB,CAAC;IASnF,aAAa,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,UAAU,
CAAC;IAS9C,gBAAgB,CAAC,UAAU,EAAE,OAAO,CAAC,UAAU,CAAC,GAAG,OAAO,CAAC,UAAU,CAAC;IAStE,gBAAgB,CAAC,EAAE,EAAE,MAAM,EAAE,UAAU,EAAE,OAAO,CAAC,UAAU,CAAC,GAAG,OAAO,CAAC,UAAU,CAAC;IASlF,gBAAgB,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAQ3C,mBAAmB,CAAC,QAAQ,EAAE,MAAM,GAAG,OAAO,CAAC,GAAG,CAAC;IAuBnD,QAAQ,CAAC,MAAM,GAAE,aAAkB,GAAG,OAAO,CAAC,eAAe,CAAC;IAS9D,SAAS,CAAC,GAAG,EAAE,OAAO,CAAC,GAAG,CAAC,GAAG,OAAO,CAAC,GAAG,CAAC;IAS1C,SAAS,CAAC,EAAE,EAAE,MAAM,EAAE,GAAG,EAAE,OAAO,CAAC,GAAG,CAAC,GAAG,OAAO,CAAC,GAAG,CAAC;IAStD,SAAS,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAQpC,kBAAkB,CAAC,UAAU,EAAE,MAAM,EAAE,MAAM,EAAE,MAAM,EAAE,GAAG,OAAO,CAAC,GAAG,EAAE,CAAC;IAUxE,sBAAsB,IAAI,OAAO,CAAC,mBAAmB,CAAC;IAStD,iBAAiB,CAAC,KAAK,UAAQ,GAAG,OAAO,CAAC,uBAAuB,CAAC;IASlE,iBAAiB,CACrB,OAAO,EAAE,MAAM,EACf,SAAS,CAAC,EAAE,MAAM,EAAE,GACnB,OAAO,CAAC,uBAAuB,CAAC;IAa7B,YAAY,IAAI,OAAO,CAAC,QAAQ,EAAE,CAAC;IAWnC,cAAc,CAAC,QAAQ,EAAE,OAAO,CAAC,QAAQ,CAAC,GAAG,OAAO,CAAC,QAAQ,CAAC;IAS9D,cAAc,CAAC,EAAE,EAAE,MAAM,EAAE,QAAQ,EAAE,OAAO,CAAC,QAAQ,CAAC,GAAG,OAAO,CAAC,QAAQ,CAAC;IAS1E,cAAc,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAQzC,eAAe,CAAC,MAAM,EAAE;QAAE,IAAI,EAAE,MAAM,CAAC;QAAC,OAAO,CAAC,EAAE,eAAe,EAAE,CAAA;KAAE,GAAG,OAAO,CAAC,SAAS,CAAC;IAS1F,cAAc,CAAC,MAAM,GAAE,mBAAwB,GAAG,OAAO,CAAC;QAAE,IAAI,EAAE,SAAS,EAAE,CAAC;QAAC,UAAU,CAAC,EAAE,MAAM,GAAG,IAAI,CAAA;KAAE,CAAC;IAS5G,YAAY,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,SAAS,CAAC;IAS5C,eAAe,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,EAAE;QAAE,IAAI,EAAE,MAAM,CAAA;KAAE,GAAG,OAAO,CAAC,SAAS,CAAC;IASzE,eAAe,CAAC,EAAE,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC;IAQ1C,gBAAgB,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,GAAE,sBAA2B,GAAG,OAAO,CAAC;QAAE,IAAI,EAAE,YAAY,EAAE,CAAC;QAAC,UAAU,CAAC,EAAE,MAAM,GAAG,IAAI,CAAA;KAAE,CAAC;IAYhI,mBAAmB,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,EAAE,yBAAyB,GAAG,OAAO,CAAC,GAAG,CAAC;IAShF,mBAAmB,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,EAAE,yBAAyB,GAAG,OAAO,CAAC,GAAG,CAAC;IAShF,kBAAkB,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,EAAE,wBAAwB,GAAG,OAAO,CAAC,GAAG,CAAC;IAS9E,mBAAmB,CAAC,EAAE,EAAE,MAAM,EAAE,MAAM,EAAE,yBAAyB,G
AAG,OAAO,CAAC,GAAG,CAAC;IAgBtF,OAAO,CAAC,wBAAwB;IAkBhC,OAAO,CAAC,oBAAoB;CAmC7B"}
44 dist/services/n8n-api-client.js vendored
@@ -61,9 +61,10 @@ class N8nApiClient {
},
});
this.client.interceptors.request.use((config) => {
const isSensitive = config.url?.includes('/credentials') && config.method !== 'get';
logger_1.logger.debug(`n8n API Request: ${config.method?.toUpperCase()} ${config.url}`, {
params: config.params,
data: config.data,
data: isSensitive ? '[REDACTED]' : config.data,
});
return config;
}, (error) => {
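The interceptor change in this hunk keeps credential payloads out of debug logs. The rule can be isolated as a pure function, a sketch of the logic above rather than the project's actual helper:

```javascript
// Decide what to log as the request body: bodies sent to /credentials
// endpoints (except GET) are replaced with a placeholder.
function loggableBody(url, method, data) {
    const isSensitive = typeof url === 'string' && url.includes('/credentials') && method !== 'get';
    return isSensitive ? '[REDACTED]' : data;
}
```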
@@ -229,6 +230,38 @@ class N8nApiClient {
throw (0, n8n_errors_1.handleN8nApiError)(error);
}
}
async generateAudit(options) {
try {
const additionalOptions = {};
if (options?.categories)
additionalOptions.categories = options.categories;
if (options?.daysAbandonedWorkflow !== undefined)
additionalOptions.daysAbandonedWorkflow = options.daysAbandonedWorkflow;
const body = Object.keys(additionalOptions).length > 0 ? { additionalOptions } : {};
const response = await this.client.post('/audit', body);
return response.data;
}
catch (error) {
throw (0, n8n_errors_1.handleN8nApiError)(error);
}
}
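`generateAudit` above wraps its options in an `additionalOptions` object only when at least one option is set, matching the body shape n8n's `POST /audit` endpoint expects. The body construction, extracted as a standalone function:

```javascript
// Build the POST /audit request body, wrapping options in `additionalOptions`
// only when at least one option is present (an empty object otherwise).
function buildAuditBody(options) {
    const additionalOptions = {};
    if (options?.categories) additionalOptions.categories = options.categories;
    if (options?.daysAbandonedWorkflow !== undefined)
        additionalOptions.daysAbandonedWorkflow = options.daysAbandonedWorkflow;
    return Object.keys(additionalOptions).length > 0 ? { additionalOptions } : {};
}
```

Note the explicit `!== undefined` check: it keeps a legitimate `daysAbandonedWorkflow: 0` from being dropped as falsy.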
async listAllWorkflows() {
const allWorkflows = [];
let cursor;
const seenCursors = new Set();
const PAGE_SIZE = 100;
const MAX_PAGES = 50;
for (let page = 0; page < MAX_PAGES; page++) {
const params = { limit: PAGE_SIZE, cursor };
const response = await this.listWorkflows(params);
allWorkflows.push(...response.data);
if (!response.nextCursor || seenCursors.has(response.nextCursor))
break;
seenCursors.add(response.nextCursor);
cursor = response.nextCursor;
}
return allWorkflows;
}
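`listAllWorkflows` above guards its cursor pagination three ways: stop on a missing cursor, stop on a cursor already seen, and cap the page count. A runnable sketch of the same guard with a stubbed page fetcher (names are illustrative):

```javascript
// Collect all items from a cursor-paginated source, stopping on a missing
// cursor, a repeated cursor, or after maxPages pages.
async function collectAll(listPage, maxPages = 50) {
    const all = [];
    const seen = new Set();
    let cursor;
    for (let page = 0; page < maxPages; page++) {
        const { data, nextCursor } = await listPage(cursor);
        all.push(...data);
        if (!nextCursor || seen.has(nextCursor)) break;
        seen.add(nextCursor);
        cursor = nextCursor;
    }
    return all;
}

// Stub of a misbehaving server that returns the same cursor forever:
// the seen-cursor check stops the loop after the second page.
const stuck = async () => ({ data: ['wf'], nextCursor: 'c1' });
```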
async getExecution(id, includeData = false) {
try {
const response = await this.client.get(`/executions/${id}`, {
@@ -338,6 +371,15 @@ class N8nApiClient {
throw (0, n8n_errors_1.handleN8nApiError)(error);
}
}
async getCredentialSchema(typeName) {
try {
const response = await this.client.get(`/credentials/schema/${typeName}`);
return response.data;
}
catch (error) {
throw (0, n8n_errors_1.handleN8nApiError)(error);
}
}
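Once `getCredentialSchema` returns a schema, an agent can pre-check candidate data before calling create. The schema shape here (a JSON-Schema-like object with a `required` list) is an assumption for illustration, not a guaranteed n8n response format:

```javascript
// List required schema fields that are absent from the candidate data.
// Assumes a JSON-Schema-like shape with a `required` string array.
function missingRequiredFields(schema, data) {
    const required = Array.isArray(schema.required) ? schema.required : [];
    return required.filter(key => !(key in data));
}

const schema = { type: 'object', required: ['user', 'password'] };
```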
async listTags(params = {}) {
try {
const response = await this.client.get('/tags', { params });
2 dist/services/n8n-api-client.js.map vendored
File diff suppressed because one or more lines are too long
2 dist/services/workflow-diff-engine.d.ts vendored
@@ -14,6 +14,7 @@ export declare class WorkflowDiffEngine {
private validateAddNode;
private validateRemoveNode;
private validateUpdateNode;
private validatePatchNodeField;
private validateMoveNode;
private validateToggleNode;
private validateAddConnection;
@@ -22,6 +23,7 @@ export declare class WorkflowDiffEngine {
private applyAddNode;
private applyRemoveNode;
private applyUpdateNode;
private applyPatchNodeField;
private applyMoveNode;
private applyEnableNode;
private applyDisableNode;
2 dist/services/workflow-diff-engine.d.ts.map vendored
@@ -1 +1 @@
{"version":3,"file":"workflow-diff-engine.d.ts","sourceRoot":"","sources":["../../src/services/workflow-diff-engine.ts"],"names":[],"mappings":"AAMA,OAAO,EAEL,mBAAmB,EACnB,kBAAkB,EAuBnB,MAAM,wBAAwB,CAAC;AAChC,OAAO,EAAE,QAAQ,EAAoC,MAAM,kBAAkB,CAAC;AAY9E,qBAAa,kBAAkB;IAE7B,OAAO,CAAC,SAAS,CAAkC;IAEnD,OAAO,CAAC,QAAQ,CAAqC;IAErD,OAAO,CAAC,eAAe,CAAqB;IAE5C,OAAO,CAAC,gBAAgB,CAAqB;IAE7C,OAAO,CAAC,SAAS,CAAgB;IACjC,OAAO,CAAC,YAAY,CAAgB;IAEpC,OAAO,CAAC,mBAAmB,CAAqB;IAK1C,SAAS,CACb,QAAQ,EAAE,QAAQ,EAClB,OAAO,EAAE,mBAAmB,GAC3B,OAAO,CAAC,kBAAkB,CAAC;IAgO9B,OAAO,CAAC,iBAAiB;IA0CzB,OAAO,CAAC,cAAc;IA4DtB,OAAO,CAAC,eAAe;IAwBvB,OAAO,CAAC,kBAAkB;IAuB1B,OAAO,CAAC,kBAAkB;IA6D1B,OAAO,CAAC,gBAAgB;IAQxB,OAAO,CAAC,kBAAkB;IAU1B,OAAO,CAAC,qBAAqB;IAkD7B,OAAO,CAAC,wBAAwB;IA6ChC,OAAO,CAAC,wBAAwB;IAmDhC,OAAO,CAAC,YAAY;IA4BpB,OAAO,CAAC,eAAe;IAwCvB,OAAO,CAAC,eAAe;IA6CvB,OAAO,CAAC,aAAa;IAOrB,OAAO,CAAC,eAAe;IAOvB,OAAO,CAAC,gBAAgB;IAWxB,OAAO,CAAC,sBAAsB;IA0D9B,OAAO,CAAC,kBAAkB;IAiD1B,OAAO,CAAC,qBAAqB;IAuC7B,OAAO,CAAC,qBAAqB;IA0B7B,OAAO,CAAC,mBAAmB;IAW3B,OAAO,CAAC,eAAe;IAIvB,OAAO,CAAC,WAAW;IAYnB,OAAO,CAAC,cAAc;IAatB,OAAO,CAAC,wBAAwB;IAchC,OAAO,CAAC,0BAA0B;IAMlC,OAAO,CAAC,qBAAqB;IAM7B,OAAO,CAAC,uBAAuB;IAO/B,OAAO,CAAC,wBAAwB;IAOhC,OAAO,CAAC,qBAAqB;IAK7B,OAAO,CAAC,6BAA6B;IAKrC,OAAO,CAAC,0BAA0B;IA0BlC,OAAO,CAAC,0BAA0B;IA+ElC,OAAO,CAAC,uBAAuB;IAe/B,OAAO,CAAC,0BAA0B;IAmElC,OAAO,CAAC,iBAAiB;IAkBzB,OAAO,CAAC,QAAQ;IAsChB,OAAO,CAAC,uBAAuB;IAW/B,OAAO,CAAC,iBAAiB;IAUzB,OAAO,CAAC,iBAAiB;CAoB1B"}
{"version":3,"file":"workflow-diff-engine.d.ts","sourceRoot":"","sources":["../../src/services/workflow-diff-engine.ts"],"names":[],"mappings":"AAMA,OAAO,EAEL,mBAAmB,EACnB,kBAAkB,EAwBnB,MAAM,wBAAwB,CAAC;AAChC,OAAO,EAAE,QAAQ,EAAoC,MAAM,kBAAkB,CAAC;AA6D9E,qBAAa,kBAAkB;IAE7B,OAAO,CAAC,SAAS,CAAkC;IAEnD,OAAO,CAAC,QAAQ,CAAqC;IAErD,OAAO,CAAC,eAAe,CAAqB;IAE5C,OAAO,CAAC,gBAAgB,CAAqB;IAE7C,OAAO,CAAC,SAAS,CAAgB;IACjC,OAAO,CAAC,YAAY,CAAgB;IAEpC,OAAO,CAAC,mBAAmB,CAAqB;IAK1C,SAAS,CACb,QAAQ,EAAE,QAAQ,EAClB,OAAO,EAAE,mBAAmB,GAC3B,OAAO,CAAC,kBAAkB,CAAC;IAgO9B,OAAO,CAAC,iBAAiB;IA4CzB,OAAO,CAAC,cAAc;IA+DtB,OAAO,CAAC,eAAe;IAwBvB,OAAO,CAAC,kBAAkB;IAuB1B,OAAO,CAAC,kBAAkB;IA6D1B,OAAO,CAAC,sBAAsB;IAuE9B,OAAO,CAAC,gBAAgB;IAQxB,OAAO,CAAC,kBAAkB;IAU1B,OAAO,CAAC,qBAAqB;IAkD7B,OAAO,CAAC,wBAAwB;IA6ChC,OAAO,CAAC,wBAAwB;IAmDhC,OAAO,CAAC,YAAY;IA4BpB,OAAO,CAAC,eAAe;IAwCvB,OAAO,CAAC,eAAe;IA6CvB,OAAO,CAAC,mBAAmB;IAgE3B,OAAO,CAAC,aAAa;IAOrB,OAAO,CAAC,eAAe;IAOvB,OAAO,CAAC,gBAAgB;IAWxB,OAAO,CAAC,sBAAsB;IA0D9B,OAAO,CAAC,kBAAkB;IAiD1B,OAAO,CAAC,qBAAqB;IAuC7B,OAAO,CAAC,qBAAqB;IA0B7B,OAAO,CAAC,mBAAmB;IAW3B,OAAO,CAAC,eAAe;IAIvB,OAAO,CAAC,WAAW;IAYnB,OAAO,CAAC,cAAc;IAatB,OAAO,CAAC,wBAAwB;IAchC,OAAO,CAAC,0BAA0B;IAMlC,OAAO,CAAC,qBAAqB;IAM7B,OAAO,CAAC,uBAAuB;IAO/B,OAAO,CAAC,wBAAwB;IAOhC,OAAO,CAAC,qBAAqB;IAK7B,OAAO,CAAC,6BAA6B;IAKrC,OAAO,CAAC,0BAA0B;IA0BlC,OAAO,CAAC,0BAA0B;IA+ElC,OAAO,CAAC,uBAAuB;IAe/B,OAAO,CAAC,0BAA0B;IAmElC,OAAO,CAAC,iBAAiB;IAkBzB,OAAO,CAAC,QAAQ;IAsChB,OAAO,CAAC,uBAAuB;IAW/B,OAAO,CAAC,iBAAiB;IAWzB,OAAO,CAAC,iBAAiB;CAyB1B"}
147 dist/services/workflow-diff-engine.js vendored
@@ -6,6 +6,39 @@ const logger_1 = require("../utils/logger");
const node_sanitizer_1 = require("./node-sanitizer");
const node_type_utils_1 = require("../utils/node-type-utils");
const logger = new logger_1.Logger({ prefix: '[WorkflowDiffEngine]' });
const PATCH_LIMITS = {
MAX_PATCHES: 50,
MAX_REGEX_LENGTH: 500,
MAX_FIELD_SIZE_REGEX: 512 * 1024,
};
const DANGEROUS_PATH_KEYS = new Set(['__proto__', 'constructor', 'prototype']);
function isUnsafeRegex(pattern) {
const nestedQuantifier = /\([^)]*[+*][^)]*\)[+*{]/;
if (nestedQuantifier.test(pattern))
return true;
const overlappingAlternation = /\([^)]*\|[^)]*\)[+*{]/;
if (overlappingAlternation.test(pattern)) {
const match = pattern.match(/\(([^)]*)\|([^)]*)\)[+*{]/);
if (match) {
const [, left, right] = match;
const broadClasses = ['.', '\\w', '\\d', '\\s', '\\S', '\\W', '\\D', '[^'];
const leftHasBroad = broadClasses.some(c => left.includes(c));
const rightHasBroad = broadClasses.some(c => right.includes(c));
if (leftHasBroad && rightHasBroad)
return true;
}
}
return false;
}
function countOccurrences(str, search) {
let count = 0;
let pos = 0;
while ((pos = str.indexOf(search, pos)) !== -1) {
count++;
pos += search.length;
}
return count;
}
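The two helpers above can be exercised directly: `countOccurrences` counts non-overlapping literal matches, and `isUnsafeRegex`'s first heuristic flags nested quantifiers like `(a+)+` that invite catastrophic backtracking. `countOccurrences` is reproduced verbatim; `hasNestedQuantifier` is a reduced version of only the first check:

```javascript
// Verbatim copy of the helper above: non-overlapping literal match count.
function countOccurrences(str, search) {
    let count = 0;
    let pos = 0;
    while ((pos = str.indexOf(search, pos)) !== -1) {
        count++;
        pos += search.length;
    }
    return count;
}

// Reduced version of isUnsafeRegex's first heuristic: a quantifier inside
// a group that is itself quantified, e.g. (a+)+ or (\d*){2}.
function hasNestedQuantifier(pattern) {
    return /\([^)]*[+*][^)]*\)[+*{]/.test(pattern);
}
```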
class WorkflowDiffEngine {
constructor() {
this.renameMap = new Map();
@@ -25,7 +58,7 @@ class WorkflowDiffEngine {
this.tagsToRemove = [];
this.transferToProjectId = undefined;
const workflowCopy = JSON.parse(JSON.stringify(workflow));
const nodeOperationTypes = ['addNode', 'removeNode', 'updateNode', 'moveNode', 'enableNode', 'disableNode'];
const nodeOperationTypes = ['addNode', 'removeNode', 'updateNode', 'patchNodeField', 'moveNode', 'enableNode', 'disableNode'];
const nodeOperations = [];
const otherOperations = [];
request.operations.forEach((operation, index) => {
@@ -213,6 +246,8 @@ class WorkflowDiffEngine {
return this.validateRemoveNode(workflow, operation);
case 'updateNode':
return this.validateUpdateNode(workflow, operation);
case 'patchNodeField':
return this.validatePatchNodeField(workflow, operation);
case 'moveNode':
return this.validateMoveNode(workflow, operation);
case 'enableNode':
@@ -254,6 +289,9 @@ class WorkflowDiffEngine {
case 'updateNode':
this.applyUpdateNode(workflow, operation);
break;
case 'patchNodeField':
this.applyPatchNodeField(workflow, operation);
break;
case 'moveNode':
this.applyMoveNode(workflow, operation);
break;
@@ -375,6 +413,63 @@ class WorkflowDiffEngine {
}
return null;
}
validatePatchNodeField(workflow, operation) {
if (!operation.nodeId && !operation.nodeName) {
return `patchNodeField requires either "nodeId" or "nodeName"`;
}
if (!operation.fieldPath || typeof operation.fieldPath !== 'string') {
return `patchNodeField requires a "fieldPath" string (e.g., "parameters.jsCode")`;
}
const pathSegments = operation.fieldPath.split('.');
if (pathSegments.some(k => DANGEROUS_PATH_KEYS.has(k))) {
return `patchNodeField: fieldPath "${operation.fieldPath}" contains a forbidden key (__proto__, constructor, or prototype)`;
}
if (!Array.isArray(operation.patches) || operation.patches.length === 0) {
return `patchNodeField requires a non-empty "patches" array of {find, replace} objects`;
}
if (operation.patches.length > PATCH_LIMITS.MAX_PATCHES) {
return `patchNodeField: too many patches (${operation.patches.length}). Maximum is ${PATCH_LIMITS.MAX_PATCHES} per operation. Split into multiple operations if needed.`;
}
for (let i = 0; i < operation.patches.length; i++) {
const patch = operation.patches[i];
if (!patch || typeof patch.find !== 'string' || typeof patch.replace !== 'string') {
return `Invalid patch entry at index ${i}: each entry must have "find" (string) and "replace" (string)`;
}
if (patch.find.length === 0) {
return `Invalid patch entry at index ${i}: "find" must not be empty`;
}
if (patch.regex) {
if (patch.find.length > PATCH_LIMITS.MAX_REGEX_LENGTH) {
return `Regex pattern at patch index ${i} is too long (${patch.find.length} chars). Maximum is ${PATCH_LIMITS.MAX_REGEX_LENGTH} characters.`;
}
try {
new RegExp(patch.find);
}
catch (e) {
return `Invalid regex pattern at patch index ${i}: ${e instanceof Error ? e.message : 'invalid regex'}`;
}
if (isUnsafeRegex(patch.find)) {
return `Potentially unsafe regex pattern at patch index ${i}: nested quantifiers or overlapping alternations can cause excessive backtracking. Simplify the pattern or use literal matching (regex: false).`;
}
}
}
const node = this.findNode(workflow, operation.nodeId, operation.nodeName);
if (!node) {
return this.formatNodeNotFoundError(workflow, operation.nodeId || operation.nodeName || '', 'patchNodeField');
}
const currentValue = this.getNestedProperty(node, operation.fieldPath);
if (currentValue === undefined) {
return `Cannot apply patchNodeField to "${operation.fieldPath}": property does not exist on node "${node.name}"`;
}
if (typeof currentValue !== 'string') {
return `Cannot apply patchNodeField to "${operation.fieldPath}": current value is ${typeof currentValue}, expected string`;
}
const hasRegex = operation.patches.some(p => p.regex);
if (hasRegex && typeof currentValue === 'string' && currentValue.length > PATCH_LIMITS.MAX_FIELD_SIZE_REGEX) {
return `Field "${operation.fieldPath}" is too large for regex operations (${Math.round(currentValue.length / 1024)}KB). Maximum is ${PATCH_LIMITS.MAX_FIELD_SIZE_REGEX / 1024}KB. Use literal matching (regex: false) for large fields.`;
}
return null;
}
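The validator above checks the patches array structurally before it ever looks up the node. A condensed sketch of just those array checks, with limits matching the `PATCH_LIMITS` constants in this file (50 patches max, non-empty `find`):

```javascript
// Return an error string for a malformed patches array, or null when valid,
// mirroring the validator's return-string-or-null convention above.
function checkPatches(patches, maxPatches = 50) {
    if (!Array.isArray(patches) || patches.length === 0) return 'patches must be a non-empty array';
    if (patches.length > maxPatches) return `too many patches (${patches.length})`;
    for (let i = 0; i < patches.length; i++) {
        const p = patches[i];
        if (!p || typeof p.find !== 'string' || typeof p.replace !== 'string') return `invalid entry at ${i}`;
        if (p.find.length === 0) return `empty find at ${i}`;
    }
    return null;
}
```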
validateMoveNode(workflow, operation) {
const node = this.findNode(workflow, operation.nodeId, operation.nodeName);
if (!node) {
@@ -586,6 +681,51 @@ class WorkflowDiffEngine {
const sanitized = (0, node_sanitizer_1.sanitizeNode)(node);
Object.assign(node, sanitized);
}
applyPatchNodeField(workflow, operation) {
const node = this.findNode(workflow, operation.nodeId, operation.nodeName);
if (!node)
return;
this.modifiedNodeIds.add(node.id);
let current = this.getNestedProperty(node, operation.fieldPath);
for (let i = 0; i < operation.patches.length; i++) {
const patch = operation.patches[i];
if (patch.regex) {
const globalRegex = new RegExp(patch.find, 'g');
const matches = current.match(globalRegex);
if (!matches || matches.length === 0) {
throw new Error(`patchNodeField: regex pattern "${patch.find}" not found in "${operation.fieldPath}" (patch index ${i}). ` +
`Use n8n_get_workflow to inspect the current value.`);
}
if (matches.length > 1 && !patch.replaceAll) {
throw new Error(`patchNodeField: regex pattern "${patch.find}" matches ${matches.length} times in "${operation.fieldPath}" (patch index ${i}). ` +
`Set "replaceAll": true to replace all occurrences, or refine the pattern to match exactly once.`);
}
const regex = patch.replaceAll ? globalRegex : new RegExp(patch.find);
current = current.replace(regex, patch.replace);
}
else {
const occurrences = countOccurrences(current, patch.find);
if (occurrences === 0) {
throw new Error(`patchNodeField: "${patch.find.substring(0, 80)}" not found in "${operation.fieldPath}" (patch index ${i}). ` +
`Ensure the find string exactly matches the current content (including whitespace and newlines). ` +
`Use n8n_get_workflow to inspect the current value.`);
}
if (occurrences > 1 && !patch.replaceAll) {
throw new Error(`patchNodeField: "${patch.find.substring(0, 80)}" found ${occurrences} times in "${operation.fieldPath}" (patch index ${i}). ` +
`Set "replaceAll": true to replace all occurrences, or use a more specific find string that matches exactly once.`);
}
if (patch.replaceAll) {
current = current.split(patch.find).join(patch.replace);
}
else {
current = current.replace(patch.find, patch.replace);
}
}
}
this.setNestedProperty(node, operation.fieldPath, current);
const sanitized = (0, node_sanitizer_1.sanitizeNode)(node);
Object.assign(node, sanitized);
}
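The literal branch of `applyPatchNodeField` above enforces an ambiguity rule: a `find` string matching more than once is an error unless `replaceAll` is set, and `replaceAll` uses split/join to replace every non-overlapping occurrence. A reduced sketch of those semantics (counting here uses `split` for brevity, which agrees with non-overlapping counting):

```javascript
// Apply one literal find/replace patch to a string value, refusing
// zero matches and ambiguous (multiple) matches without replaceAll.
function applyLiteralPatch(value, { find, replace, replaceAll }) {
    const occurrences = value.split(find).length - 1;
    if (occurrences === 0) throw new Error(`"${find}" not found`);
    if (occurrences > 1 && !replaceAll) throw new Error(`"${find}" is ambiguous (${occurrences} matches)`);
    return replaceAll ? value.split(find).join(replace) : value.replace(find, replace);
}
```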
applyMoveNode(workflow, operation) {
const node = this.findNode(workflow, operation.nodeId, operation.nodeName);
if (!node)
@@ -924,6 +1064,8 @@ class WorkflowDiffEngine {
const keys = path.split('.');
let current = obj;
for (const key of keys) {
if (DANGEROUS_PATH_KEYS.has(key))
return undefined;
if (current == null || typeof current !== 'object')
return undefined;
current = current[key];
@@ -933,6 +1075,9 @@ class WorkflowDiffEngine {
setNestedProperty(obj, path, value) {
const keys = path.split('.');
let current = obj;
if (keys.some(k => DANGEROUS_PATH_KEYS.has(k))) {
throw new Error(`Invalid property path: "${path}" contains a forbidden key`);
}
for (let i = 0; i < keys.length - 1; i++) {
const key = keys[i];
if (!(key in current) || typeof current[key] !== 'object' || current[key] === null) {
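Both nested-property helpers in the hunks above reject `__proto__`, `constructor`, and `prototype` path segments to block prototype pollution. A self-contained sketch of the same guard (helper names are illustrative):

```javascript
// Path segments that would let a caller write onto Object.prototype.
const FORBIDDEN = new Set(['__proto__', 'constructor', 'prototype']);

// Set a dot-separated nested property, creating intermediate objects,
// but throw before touching anything if the path contains a forbidden key.
function safeSet(obj, path, value) {
    const keys = path.split('.');
    if (keys.some(k => FORBIDDEN.has(k))) throw new Error(`forbidden key in path "${path}"`);
    let current = obj;
    for (let i = 0; i < keys.length - 1; i++) {
        if (typeof current[keys[i]] !== 'object' || current[keys[i]] === null) current[keys[i]] = {};
        current = current[keys[i]];
    }
    current[keys[keys.length - 1]] = value;
    return obj;
}
```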
2 dist/services/workflow-diff-engine.js.map vendored
File diff suppressed because one or more lines are too long
16 dist/types/workflow-diff.d.ts vendored
@@ -40,6 +40,18 @@ export interface DisableNodeOperation extends DiffOperation {
nodeId?: string;
nodeName?: string;
}
export interface PatchNodeFieldOperation extends DiffOperation {
type: 'patchNodeField';
nodeId?: string;
nodeName?: string;
fieldPath: string;
patches: Array<{
find: string;
replace: string;
replaceAll?: boolean;
regex?: boolean;
}>;
}
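An example value matching the `PatchNodeFieldOperation` shape declared above; the node name, field path, and patch contents are illustrative:

```javascript
// Hypothetical patchNodeField operation: one unambiguous literal patch
// and one regex patch that replaces every match.
const patchOp = {
    type: 'patchNodeField',
    nodeName: 'Code',
    fieldPath: 'parameters.jsCode',
    patches: [
        { find: 'const limit = 10;', replace: 'const limit = 25;' },
        { find: 'console\\.log\\([^)]*\\);', replace: '', regex: true, replaceAll: true },
    ],
};
```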
export interface AddConnectionOperation extends DiffOperation {
type: 'addConnection';
source: string;
@@ -114,7 +126,7 @@ export interface ReplaceConnectionsOperation extends DiffOperation {
};
};
}
export type WorkflowDiffOperation = AddNodeOperation | RemoveNodeOperation | UpdateNodeOperation | MoveNodeOperation | EnableNodeOperation | DisableNodeOperation | AddConnectionOperation | RemoveConnectionOperation | RewireConnectionOperation | UpdateSettingsOperation | UpdateNameOperation | AddTagOperation | RemoveTagOperation | ActivateWorkflowOperation | DeactivateWorkflowOperation | CleanStaleConnectionsOperation | ReplaceConnectionsOperation | TransferWorkflowOperation;
export type WorkflowDiffOperation = AddNodeOperation | RemoveNodeOperation | UpdateNodeOperation | PatchNodeFieldOperation | MoveNodeOperation | EnableNodeOperation | DisableNodeOperation | AddConnectionOperation | RemoveConnectionOperation | RewireConnectionOperation | UpdateSettingsOperation | UpdateNameOperation | AddTagOperation | RemoveTagOperation | ActivateWorkflowOperation | DeactivateWorkflowOperation | CleanStaleConnectionsOperation | ReplaceConnectionsOperation | TransferWorkflowOperation;
export interface WorkflowDiffRequest {
id: string;
operations: WorkflowDiffOperation[];
@@ -149,7 +161,7 @@ export interface NodeReference {
id?: string;
name?: string;
}
export declare function isNodeOperation(op: WorkflowDiffOperation): op is AddNodeOperation | RemoveNodeOperation | UpdateNodeOperation | MoveNodeOperation | EnableNodeOperation | DisableNodeOperation;
export declare function isNodeOperation(op: WorkflowDiffOperation): op is AddNodeOperation | RemoveNodeOperation | UpdateNodeOperation | PatchNodeFieldOperation | MoveNodeOperation | EnableNodeOperation | DisableNodeOperation;
export declare function isConnectionOperation(op: WorkflowDiffOperation): op is AddConnectionOperation | RemoveConnectionOperation | RewireConnectionOperation | CleanStaleConnectionsOperation | ReplaceConnectionsOperation;
export declare function isMetadataOperation(op: WorkflowDiffOperation): op is UpdateSettingsOperation | UpdateNameOperation | AddTagOperation | RemoveTagOperation;
//# sourceMappingURL=workflow-diff.d.ts.map
2 dist/types/workflow-diff.d.ts.map vendored
@@ -1 +1 @@
{"version":3,"file":"workflow-diff.d.ts","sourceRoot":"","sources":["../../src/types/workflow-diff.ts"],"names":[],"mappings":"AAKA,OAAO,EAAE,YAAY,EAAsB,MAAM,WAAW,CAAC;AAG7D,MAAM,WAAW,aAAa;IAC5B,IAAI,EAAE,MAAM,CAAC;IACb,WAAW,CAAC,EAAE,MAAM,CAAC;CACtB;AAGD,MAAM,WAAW,gBAAiB,SAAQ,aAAa;IACrD,IAAI,EAAE,SAAS,CAAC;IAChB,IAAI,EAAE,OAAO,CAAC,YAAY,CAAC,GAAG;QAC5B,IAAI,EAAE,MAAM,CAAC;QACb,IAAI,EAAE,MAAM,CAAC;QACb,QAAQ,EAAE,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;KAC5B,CAAC;CACH;AAED,MAAM,WAAW,mBAAoB,SAAQ,aAAa;IACxD,IAAI,EAAE,YAAY,CAAC;IACnB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,QAAQ,CAAC,EAAE,MAAM,CAAC;CACnB;AAED,MAAM,WAAW,mBAAoB,SAAQ,aAAa;IACxD,IAAI,EAAE,YAAY,CAAC;IACnB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,QAAQ,CAAC,EAAE,MAAM,CAAC;IAClB,OAAO,EAAE;QACP,CAAC,IAAI,EAAE,MAAM,GAAG,GAAG,CAAC;KACrB,CAAC;CACH;AAED,MAAM,WAAW,iBAAkB,SAAQ,aAAa;IACtD,IAAI,EAAE,UAAU,CAAC;IACjB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,QAAQ,CAAC,EAAE,MAAM,CAAC;IAClB,QAAQ,EAAE,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;CAC5B;AAED,MAAM,WAAW,mBAAoB,SAAQ,aAAa;IACxD,IAAI,EAAE,YAAY,CAAC;IACnB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,QAAQ,CAAC,EAAE,MAAM,CAAC;CACnB;AAED,MAAM,WAAW,oBAAqB,SAAQ,aAAa;IACzD,IAAI,EAAE,aAAa,CAAC;IACpB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,QAAQ,CAAC,EAAE,MAAM,CAAC;CACnB;AAGD,MAAM,WAAW,sBAAuB,SAAQ,aAAa;IAC3D,IAAI,EAAE,eAAe,CAAC;IACtB,MAAM,EAAE,MAAM,CAAC;IACf,MAAM,EAAE,MAAM,CAAC;IACf,YAAY,CAAC,EAAE,MAAM,CAAC;IACtB,WAAW,CAAC,EAAE,MAAM,CAAC;IACrB,WAAW,CAAC,EAAE,MAAM,CAAC;IACrB,WAAW,CAAC,EAAE,MAAM,CAAC;IAErB,MAAM,CAAC,EAAE,MAAM,GAAG,OAAO,CAAC;IAC1B,IAAI,CAAC,EAAE,MAAM,CAAC;CACf;AAED,MAAM,WAAW,yBAA0B,SAAQ,aAAa;IAC9D,IAAI,EAAE,kBAAkB,CAAC;IACzB,MAAM,EAAE,MAAM,CAAC;IACf,MAAM,EAAE,MAAM,CAAC;IACf,YAAY,CAAC,EAAE,MAAM,CAAC;IACtB,WAAW,CAAC,EAAE,MAAM,CAAC;IACrB,YAAY,CAAC,EAAE,OAAO,CAAC;CACxB;AAED,MAAM,WAAW,yBAA0B,SAAQ,aAAa;IAC9D,IAAI,EAAE,kBAAkB,CAAC;IACzB,MAAM,EAAE,MAAM,CAAC;IACf,IAAI,EAAE,MAAM,CAAC;IACb,EAAE,EAAE,MAAM,CAAC;IACX,YAAY,CAAC,EAAE,MAAM,CAAC;IACtB,WAAW,CAAC,EAAE,MAAM,CAAC;IACrB,WAAW,CAAC,EAAE,MAAM,CAAC;IAErB,MAAM,CAAC,EAAE,MAAM,GAAG,OAAO,CAAC;IAC1B,IAA
I,CAAC,EAAE,MAAM,CAAC;CACf;AAGD,MAAM,WAAW,uBAAwB,SAAQ,aAAa;IAC5D,IAAI,EAAE,gBAAgB,CAAC;IACvB,QAAQ,EAAE;QACR,CAAC,GAAG,EAAE,MAAM,GAAG,GAAG,CAAC;KACpB,CAAC;CACH;AAED,MAAM,WAAW,mBAAoB,SAAQ,aAAa;IACxD,IAAI,EAAE,YAAY,CAAC;IACnB,IAAI,EAAE,MAAM,CAAC;CACd;AAED,MAAM,WAAW,eAAgB,SAAQ,aAAa;IACpD,IAAI,EAAE,QAAQ,CAAC;IACf,GAAG,EAAE,MAAM,CAAC;CACb;AAED,MAAM,WAAW,kBAAmB,SAAQ,aAAa;IACvD,IAAI,EAAE,WAAW,CAAC;IAClB,GAAG,EAAE,MAAM,CAAC;CACb;AAED,MAAM,WAAW,yBAA0B,SAAQ,aAAa;IAC9D,IAAI,EAAE,kBAAkB,CAAC;CAE1B;AAED,MAAM,WAAW,2BAA4B,SAAQ,aAAa;IAChE,IAAI,EAAE,oBAAoB,CAAC;CAE5B;AAED,MAAM,WAAW,yBAA0B,SAAQ,aAAa;IAC9D,IAAI,EAAE,kBAAkB,CAAC;IACzB,oBAAoB,EAAE,MAAM,CAAC;CAC9B;AAGD,MAAM,WAAW,8BAA+B,SAAQ,aAAa;IACnE,IAAI,EAAE,uBAAuB,CAAC;IAC9B,MAAM,CAAC,EAAE,OAAO,CAAC;CAClB;AAED,MAAM,WAAW,2BAA4B,SAAQ,aAAa;IAChE,IAAI,EAAE,oBAAoB,CAAC;IAC3B,WAAW,EAAE;QACX,CAAC,QAAQ,EAAE,MAAM,GAAG;YAClB,CAAC,UAAU,EAAE,MAAM,GAAG,KAAK,CAAC,KAAK,CAAC;gBAChC,IAAI,EAAE,MAAM,CAAC;gBACb,IAAI,EAAE,MAAM,CAAC;gBACb,KAAK,EAAE,MAAM,CAAC;aACf,CAAC,CAAC,CAAC;SACL,CAAC;KACH,CAAC;CACH;AAGD,MAAM,MAAM,qBAAqB,GAC7B,gBAAgB,GAChB,mBAAmB,GACnB,mBAAmB,GACnB,iBAAiB,GACjB,mBAAmB,GACnB,oBAAoB,GACpB,sBAAsB,GACtB,yBAAyB,GACzB,yBAAyB,GACzB,uBAAuB,GACvB,mBAAmB,GACnB,eAAe,GACf,kBAAkB,GAClB,yBAAyB,GACzB,2BAA2B,GAC3B,8BAA8B,GAC9B,2BAA2B,GAC3B,yBAAyB,CAAC;AAG9B,MAAM,WAAW,mBAAmB;IAClC,EAAE,EAAE,MAAM,CAAC;IACX,UAAU,EAAE,qBAAqB,EAAE,CAAC;IACpC,YAAY,CAAC,EAAE,OAAO,CAAC;IACvB,eAAe,CAAC,EAAE,OAAO,CAAC;CAC3B;AAGD,MAAM,WAAW,2BAA2B;IAC1C,SAAS,EAAE,MAAM,CAAC;IAClB,OAAO,EAAE,MAAM,CAAC;IAChB,OAAO,CAAC,EAAE,GAAG,CAAC;CACf;AAED,MAAM,WAAW,kBAAkB;IACjC,OAAO,EAAE,OAAO,CAAC;IACjB,QAAQ,CAAC,EAAE,GAAG,CAAC;IACf,MAAM,CAAC,EAAE,2BAA2B,EAAE,CAAC;IACvC,QAAQ,CAAC,EAAE,2BAA2B,EAAE,CAAC;IACzC,iBAAiB,CAAC,EAAE,MAAM,CAAC;IAC3B,OAAO,CAAC,EAAE,MAAM,CAAC;IACjB,OAAO,CAAC,EAAE,MAAM,EAAE,CAAC;IACnB,MAAM,CAAC,EAAE,MAAM,EAAE,CAAC;IAClB,uBAAuB,CAAC,EAAE,KAAK,CAAC;QAAE,IAAI,EAAE,MAAM,CAAC;QAAC,EAAE,EAAE,MAAM,CAAA;KAAE,CAAC,CAAC;IAC9D,cAAc,CAAC,EAAE,OAAO,CAAC;IACzB,gBAAgB,CAAC,EAAE,OAAO,CAAC;IAC
3B,SAAS,CAAC,EAAE,MAAM,EAAE,CAAC;IACrB,YAAY,CAAC,EAAE,MAAM,EAAE,CAAC;IACxB,mBAAmB,CAAC,EAAE,MAAM,CAAC;CAC9B;AAGD,MAAM,WAAW,aAAa;IAC5B,EAAE,CAAC,EAAE,MAAM,CAAC;IACZ,IAAI,CAAC,EAAE,MAAM,CAAC;CACf;AAGD,wBAAgB,eAAe,CAAC,EAAE,EAAE,qBAAqB,GAAG,EAAE,IAC5D,gBAAgB,GAAG,mBAAmB,GAAG,mBAAmB,GAC5D,iBAAiB,GAAG,mBAAmB,GAAG,oBAAoB,CAE/D;AAED,wBAAgB,qBAAqB,CAAC,EAAE,EAAE,qBAAqB,GAAG,EAAE,IAClE,sBAAsB,GAAG,yBAAyB,GAAG,yBAAyB,GAAG,8BAA8B,GAAG,2BAA2B,CAE9I;AAED,wBAAgB,mBAAmB,CAAC,EAAE,EAAE,qBAAqB,GAAG,EAAE,IAChE,uBAAuB,GAAG,mBAAmB,GAAG,eAAe,GAAG,kBAAkB,CAErF"}
{"version":3,"file":"workflow-diff.d.ts","sourceRoot":"","sources":["../../src/types/workflow-diff.ts"],"names":[],"mappings":"AAKA,OAAO,EAAE,YAAY,EAAsB,MAAM,WAAW,CAAC;AAG7D,MAAM,WAAW,aAAa;IAC5B,IAAI,EAAE,MAAM,CAAC;IACb,WAAW,CAAC,EAAE,MAAM,CAAC;CACtB;AAGD,MAAM,WAAW,gBAAiB,SAAQ,aAAa;IACrD,IAAI,EAAE,SAAS,CAAC;IAChB,IAAI,EAAE,OAAO,CAAC,YAAY,CAAC,GAAG;QAC5B,IAAI,EAAE,MAAM,CAAC;QACb,IAAI,EAAE,MAAM,CAAC;QACb,QAAQ,EAAE,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;KAC5B,CAAC;CACH;AAED,MAAM,WAAW,mBAAoB,SAAQ,aAAa;IACxD,IAAI,EAAE,YAAY,CAAC;IACnB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,QAAQ,CAAC,EAAE,MAAM,CAAC;CACnB;AAED,MAAM,WAAW,mBAAoB,SAAQ,aAAa;IACxD,IAAI,EAAE,YAAY,CAAC;IACnB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,QAAQ,CAAC,EAAE,MAAM,CAAC;IAClB,OAAO,EAAE;QACP,CAAC,IAAI,EAAE,MAAM,GAAG,GAAG,CAAC;KACrB,CAAC;CACH;AAED,MAAM,WAAW,iBAAkB,SAAQ,aAAa;IACtD,IAAI,EAAE,UAAU,CAAC;IACjB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,QAAQ,CAAC,EAAE,MAAM,CAAC;IAClB,QAAQ,EAAE,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;CAC5B;AAED,MAAM,WAAW,mBAAoB,SAAQ,aAAa;IACxD,IAAI,EAAE,YAAY,CAAC;IACnB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,QAAQ,CAAC,EAAE,MAAM,CAAC;CACnB;AAED,MAAM,WAAW,oBAAqB,SAAQ,aAAa;IACzD,IAAI,EAAE,aAAa,CAAC;IACpB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,QAAQ,CAAC,EAAE,MAAM,CAAC;CACnB;AAED,MAAM,WAAW,uBAAwB,SAAQ,aAAa;IAC5D,IAAI,EAAE,gBAAgB,CAAC;IACvB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,QAAQ,CAAC,EAAE,MAAM,CAAC;IAClB,SAAS,EAAE,MAAM,CAAC;IAClB,OAAO,EAAE,KAAK,CAAC;QACb,IAAI,EAAE,MAAM,CAAC;QACb,OAAO,EAAE,MAAM,CAAC;QAChB,UAAU,CAAC,EAAE,OAAO,CAAC;QACrB,KAAK,CAAC,EAAE,OAAO,CAAC;KACjB,CAAC,CAAC;CACJ;AAGD,MAAM,WAAW,sBAAuB,SAAQ,aAAa;IAC3D,IAAI,EAAE,eAAe,CAAC;IACtB,MAAM,EAAE,MAAM,CAAC;IACf,MAAM,EAAE,MAAM,CAAC;IACf,YAAY,CAAC,EAAE,MAAM,CAAC;IACtB,WAAW,CAAC,EAAE,MAAM,CAAC;IACrB,WAAW,CAAC,EAAE,MAAM,CAAC;IACrB,WAAW,CAAC,EAAE,MAAM,CAAC;IAErB,MAAM,CAAC,EAAE,MAAM,GAAG,OAAO,CAAC;IAC1B,IAAI,CAAC,EAAE,MAAM,CAAC;CACf;AAED,MAAM,WAAW,yBAA0B,SAAQ,aAAa;IAC9D,IAAI,EAAE,kBAAkB,CAAC;IACzB,MAAM,EAAE,MAAM,CAAC;IACf,MAAM,EAAE,MAAM,CAAC;IACf,YAAY,CAAC,EAAE,MAAM,CAAC;IACtB,WAAW,CAAC,EAAE,MAAM,CAAC;IACrB,YA
AY,CAAC,EAAE,OAAO,CAAC;CACxB;AAED,MAAM,WAAW,yBAA0B,SAAQ,aAAa;IAC9D,IAAI,EAAE,kBAAkB,CAAC;IACzB,MAAM,EAAE,MAAM,CAAC;IACf,IAAI,EAAE,MAAM,CAAC;IACb,EAAE,EAAE,MAAM,CAAC;IACX,YAAY,CAAC,EAAE,MAAM,CAAC;IACtB,WAAW,CAAC,EAAE,MAAM,CAAC;IACrB,WAAW,CAAC,EAAE,MAAM,CAAC;IAErB,MAAM,CAAC,EAAE,MAAM,GAAG,OAAO,CAAC;IAC1B,IAAI,CAAC,EAAE,MAAM,CAAC;CACf;AAGD,MAAM,WAAW,uBAAwB,SAAQ,aAAa;IAC5D,IAAI,EAAE,gBAAgB,CAAC;IACvB,QAAQ,EAAE;QACR,CAAC,GAAG,EAAE,MAAM,GAAG,GAAG,CAAC;KACpB,CAAC;CACH;AAED,MAAM,WAAW,mBAAoB,SAAQ,aAAa;IACxD,IAAI,EAAE,YAAY,CAAC;IACnB,IAAI,EAAE,MAAM,CAAC;CACd;AAED,MAAM,WAAW,eAAgB,SAAQ,aAAa;IACpD,IAAI,EAAE,QAAQ,CAAC;IACf,GAAG,EAAE,MAAM,CAAC;CACb;AAED,MAAM,WAAW,kBAAmB,SAAQ,aAAa;IACvD,IAAI,EAAE,WAAW,CAAC;IAClB,GAAG,EAAE,MAAM,CAAC;CACb;AAED,MAAM,WAAW,yBAA0B,SAAQ,aAAa;IAC9D,IAAI,EAAE,kBAAkB,CAAC;CAE1B;AAED,MAAM,WAAW,2BAA4B,SAAQ,aAAa;IAChE,IAAI,EAAE,oBAAoB,CAAC;CAE5B;AAED,MAAM,WAAW,yBAA0B,SAAQ,aAAa;IAC9D,IAAI,EAAE,kBAAkB,CAAC;IACzB,oBAAoB,EAAE,MAAM,CAAC;CAC9B;AAGD,MAAM,WAAW,8BAA+B,SAAQ,aAAa;IACnE,IAAI,EAAE,uBAAuB,CAAC;IAC9B,MAAM,CAAC,EAAE,OAAO,CAAC;CAClB;AAED,MAAM,WAAW,2BAA4B,SAAQ,aAAa;IAChE,IAAI,EAAE,oBAAoB,CAAC;IAC3B,WAAW,EAAE;QACX,CAAC,QAAQ,EAAE,MAAM,GAAG;YAClB,CAAC,UAAU,EAAE,MAAM,GAAG,KAAK,CAAC,KAAK,CAAC;gBAChC,IAAI,EAAE,MAAM,CAAC;gBACb,IAAI,EAAE,MAAM,CAAC;gBACb,KAAK,EAAE,MAAM,CAAC;aACf,CAAC,CAAC,CAAC;SACL,CAAC;KACH,CAAC;CACH;AAGD,MAAM,MAAM,qBAAqB,GAC7B,gBAAgB,GAChB,mBAAmB,GACnB,mBAAmB,GACnB,uBAAuB,GACvB,iBAAiB,GACjB,mBAAmB,GACnB,oBAAoB,GACpB,sBAAsB,GACtB,yBAAyB,GACzB,yBAAyB,GACzB,uBAAuB,GACvB,mBAAmB,GACnB,eAAe,GACf,kBAAkB,GAClB,yBAAyB,GACzB,2BAA2B,GAC3B,8BAA8B,GAC9B,2BAA2B,GAC3B,yBAAyB,CAAC;AAG9B,MAAM,WAAW,mBAAmB;IAClC,EAAE,EAAE,MAAM,CAAC;IACX,UAAU,EAAE,qBAAqB,EAAE,CAAC;IACpC,YAAY,CAAC,EAAE,OAAO,CAAC;IACvB,eAAe,CAAC,EAAE,OAAO,CAAC;CAC3B;AAGD,MAAM,WAAW,2BAA2B;IAC1C,SAAS,EAAE,MAAM,CAAC;IAClB,OAAO,EAAE,MAAM,CAAC;IAChB,OAAO,CAAC,EAAE,GAAG,CAAC;CACf;AAED,MAAM,WAAW,kBAAkB;IACjC,OAAO,EAAE,OAAO,CAAC;IACjB,QAAQ,CAAC,EAAE,GAAG,CAAC;IACf,MAAM,CAAC,EAAE,2BAA2B,EAAE,CAAC;IACvC,QAAQ,CAAC,EAA
E,2BAA2B,EAAE,CAAC;IACzC,iBAAiB,CAAC,EAAE,MAAM,CAAC;IAC3B,OAAO,CAAC,EAAE,MAAM,CAAC;IACjB,OAAO,CAAC,EAAE,MAAM,EAAE,CAAC;IACnB,MAAM,CAAC,EAAE,MAAM,EAAE,CAAC;IAClB,uBAAuB,CAAC,EAAE,KAAK,CAAC;QAAE,IAAI,EAAE,MAAM,CAAC;QAAC,EAAE,EAAE,MAAM,CAAA;KAAE,CAAC,CAAC;IAC9D,cAAc,CAAC,EAAE,OAAO,CAAC;IACzB,gBAAgB,CAAC,EAAE,OAAO,CAAC;IAC3B,SAAS,CAAC,EAAE,MAAM,EAAE,CAAC;IACrB,YAAY,CAAC,EAAE,MAAM,EAAE,CAAC;IACxB,mBAAmB,CAAC,EAAE,MAAM,CAAC;CAC9B;AAGD,MAAM,WAAW,aAAa;IAC5B,EAAE,CAAC,EAAE,MAAM,CAAC;IACZ,IAAI,CAAC,EAAE,MAAM,CAAC;CACf;AAGD,wBAAgB,eAAe,CAAC,EAAE,EAAE,qBAAqB,GAAG,EAAE,IAC5D,gBAAgB,GAAG,mBAAmB,GAAG,mBAAmB,GAAG,uBAAuB,GACtF,iBAAiB,GAAG,mBAAmB,GAAG,oBAAoB,CAE/D;AAED,wBAAgB,qBAAqB,CAAC,EAAE,EAAE,qBAAqB,GAAG,EAAE,IAClE,sBAAsB,GAAG,yBAAyB,GAAG,yBAAyB,GAAG,8BAA8B,GAAG,2BAA2B,CAE9I;AAED,wBAAgB,mBAAmB,CAAC,EAAE,EAAE,qBAAqB,GAAG,EAAE,IAChE,uBAAuB,GAAG,mBAAmB,GAAG,eAAe,GAAG,kBAAkB,CAErF"}
2
dist/types/workflow-diff.js
vendored
@@ -4,7 +4,7 @@ exports.isNodeOperation = isNodeOperation;
exports.isConnectionOperation = isConnectionOperation;
exports.isMetadataOperation = isMetadataOperation;
function isNodeOperation(op) {
return ['addNode', 'removeNode', 'updateNode', 'moveNode', 'enableNode', 'disableNode'].includes(op.type);
return ['addNode', 'removeNode', 'updateNode', 'patchNodeField', 'moveNode', 'enableNode', 'disableNode'].includes(op.type);
}
function isConnectionOperation(op) {
return ['addConnection', 'removeConnection', 'rewireConnection', 'cleanStaleConnections', 'replaceConnections'].includes(op.type);
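The change above adds `'patchNodeField'` to the node-operation type guard. As a minimal sketch of the discriminated-union pattern behind it (the operation interfaces here are simplified, hypothetical stand-ins for the real ones in `src/types/workflow-diff.ts`):

```typescript
// Simplified stand-ins for the real operation types; only the `type`
// discriminant matters for the guard.
type DiffOperation =
  | { type: "addNode"; node: { name: string } }
  | { type: "patchNodeField"; nodeName: string; field: string; value: unknown }
  | { type: "updateName"; name: string };

// Mirrors the compiled isNodeOperation above: membership in a fixed list
// of node-level operation names, now including 'patchNodeField'.
const NODE_OPS: readonly string[] = [
  "addNode", "removeNode", "updateNode", "patchNodeField",
  "moveNode", "enableNode", "disableNode",
];

function isNodeOperation(op: DiffOperation): boolean {
  return NODE_OPS.includes(op.type);
}
```

Because the guard is a plain string-membership check, forgetting to add a new operation name to the list silently routes it away from the node-operation path, which is exactly what this commit corrects.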
2
dist/types/workflow-diff.js.map
vendored
@@ -1 +1 @@
{"version":3,"file":"workflow-diff.js","sourceRoot":"","sources":["../../src/types/workflow-diff.ts"],"names":[],"mappings":";;AAkNA,0CAIC;AAED,sDAGC;AAED,kDAGC;AAdD,SAAgB,eAAe,CAAC,EAAyB;IAGvD,OAAO,CAAC,SAAS,EAAE,YAAY,EAAE,YAAY,EAAE,UAAU,EAAE,YAAY,EAAE,aAAa,CAAC,CAAC,QAAQ,CAAC,EAAE,CAAC,IAAI,CAAC,CAAC;AAC5G,CAAC;AAED,SAAgB,qBAAqB,CAAC,EAAyB;IAE7D,OAAO,CAAC,eAAe,EAAE,kBAAkB,EAAE,kBAAkB,EAAE,uBAAuB,EAAE,oBAAoB,CAAC,CAAC,QAAQ,CAAC,EAAE,CAAC,IAAI,CAAC,CAAC;AACpI,CAAC;AAED,SAAgB,mBAAmB,CAAC,EAAyB;IAE3D,OAAO,CAAC,gBAAgB,EAAE,YAAY,EAAE,QAAQ,EAAE,WAAW,CAAC,CAAC,QAAQ,CAAC,EAAE,CAAC,IAAI,CAAC,CAAC;AACnF,CAAC"}
{"version":3,"file":"workflow-diff.js","sourceRoot":"","sources":["../../src/types/workflow-diff.ts"],"names":[],"mappings":";;AAgOA,0CAIC;AAED,sDAGC;AAED,kDAGC;AAdD,SAAgB,eAAe,CAAC,EAAyB;IAGvD,OAAO,CAAC,SAAS,EAAE,YAAY,EAAE,YAAY,EAAE,gBAAgB,EAAE,UAAU,EAAE,YAAY,EAAE,aAAa,CAAC,CAAC,QAAQ,CAAC,EAAE,CAAC,IAAI,CAAC,CAAC;AAC9H,CAAC;AAED,SAAgB,qBAAqB,CAAC,EAAyB;IAE7D,OAAO,CAAC,eAAe,EAAE,kBAAkB,EAAE,kBAAkB,EAAE,uBAAuB,EAAE,oBAAoB,CAAC,CAAC,QAAQ,CAAC,EAAE,CAAC,IAAI,CAAC,CAAC;AACpI,CAAC;AAED,SAAgB,mBAAmB,CAAC,EAAyB;IAE3D,OAAO,CAAC,gBAAgB,EAAE,YAAY,EAAE,QAAQ,EAAE,WAAW,CAAC,CAAC,QAAQ,CAAC,EAAE,CAAC,IAAI,CAAC,CAAC;AACnF,CAAC"}
4
package-lock.json
generated
4
package-lock.json
generated
@@ -1,12 +1,12 @@
{
"name": "n8n-mcp",
"version": "2.44.1",
"version": "2.47.1",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "n8n-mcp",
"version": "2.44.1",
"version": "2.47.1",
"license": "MIT",
"dependencies": {
"@modelcontextprotocol/sdk": "1.28.0",
@@ -1,6 +1,6 @@
{
"name": "n8n-mcp",
"version": "2.45.1",
"version": "2.47.1",
"description": "Integration between n8n workflow automation and Model Context Protocol (MCP)",
"main": "dist/index.js",
"types": "dist/index.d.ts",
@@ -1,6 +1,6 @@
{
"name": "n8n-mcp-runtime",
"version": "2.45.1",
"version": "2.46.0",
"description": "n8n MCP Server Runtime Dependencies Only",
"private": true,
"dependencies": {
@@ -46,15 +46,6 @@ interface MultiTenantHeaders {
const MAX_SESSIONS = Math.max(1, parseInt(process.env.N8N_MCP_MAX_SESSIONS || '100', 10));
const SESSION_CLEANUP_INTERVAL = 5 * 60 * 1000; // 5 minutes

interface Session {
server: N8NDocumentationMCPServer;
transport: StreamableHTTPServerTransport | SSEServerTransport;
lastAccess: Date;
sessionId: string;
initialized: boolean;
isSSE: boolean;
}

interface SessionMetrics {
totalSessions: number;
activeSessions: number;
@@ -104,12 +95,12 @@ export interface SingleSessionHTTPServerOptions {

export class SingleSessionHTTPServer {
// Map to store transports by session ID (following SDK pattern)
private transports: { [sessionId: string]: StreamableHTTPServerTransport } = {};
// Stores both StreamableHTTP and SSE transports; use instanceof to discriminate
private transports: { [sessionId: string]: StreamableHTTPServerTransport | SSEServerTransport } = {};
private servers: { [sessionId: string]: N8NDocumentationMCPServer } = {};
private sessionMetadata: { [sessionId: string]: { lastAccess: Date; createdAt: Date } } = {};
private sessionContexts: { [sessionId: string]: InstanceContext | undefined } = {};
private contextSwitchLocks: Map<string, Promise<void>> = new Map();
private session: Session | null = null; // Keep for SSE compatibility
private consoleManager = new ConsoleManager();
private expressServer: any;
// Session timeout — configurable via SESSION_TIMEOUT_MINUTES environment variable
@@ -319,6 +310,49 @@ export class SingleSessionHTTPServer {
}
}

/**
* Authenticate a request by validating the Bearer token.
* Returns true if authentication succeeds, false if it fails
* (and the response has already been sent with a 401 status).
*/
private authenticateRequest(req: express.Request, res: express.Response): boolean {
const authHeader = req.headers.authorization;

if (!authHeader || !authHeader.startsWith('Bearer ')) {
const reason = !authHeader ? 'no_auth_header' : 'invalid_auth_format';
logger.warn('Authentication failed', {
ip: req.ip,
userAgent: req.get('user-agent'),
reason
});
res.status(401).json({
jsonrpc: '2.0',
error: { code: -32001, message: 'Unauthorized' },
id: null
});
return false;
}

const token = authHeader.slice(7).trim();
const isValid = this.authToken && AuthManager.timingSafeCompare(token, this.authToken);

if (!isValid) {
logger.warn('Authentication failed: Invalid token', {
ip: req.ip,
userAgent: req.get('user-agent'),
reason: 'invalid_token'
});
res.status(401).json({
jsonrpc: '2.0',
error: { code: -32001, message: 'Unauthorized' },
id: null
});
return false;
}

return true;
}

/**
* Switch session context with locking to prevent race conditions
*/
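The `authenticateRequest` helper delegates the actual comparison to `AuthManager.timingSafeCompare`, whose implementation lives elsewhere in the repo. A sketch of the underlying technique using Node's `crypto.timingSafeEqual` (this is an illustrative reimplementation, not the project's actual code):

```typescript
import { timingSafeEqual } from "node:crypto";

// Constant-time string comparison: an early-exit equality check would let
// an attacker measure response latency to learn how many leading
// characters of the token match. Reject on length mismatch first because
// crypto.timingSafeEqual throws when buffer lengths differ. (Length is
// still observable, which is acceptable for fixed-length bearer tokens.)
function timingSafeCompare(a: string, b: string): boolean {
  const bufA = Buffer.from(a, "utf8");
  const bufB = Buffer.from(b, "utf8");
  if (bufA.length !== bufB.length) return false;
  return timingSafeEqual(bufA, bufB);
}
```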
@@ -636,7 +670,22 @@ export class SingleSessionHTTPServer {

// For non-initialize requests: reuse existing transport for this session
logger.info('handleRequest: Reusing existing transport for session', { sessionId });
transport = this.transports[sessionId];

// Guard: reject SSE transports on the StreamableHTTP path
if (this.transports[sessionId] instanceof SSEServerTransport) {
logger.warn('handleRequest: SSE session used on StreamableHTTP endpoint', { sessionId });
res.status(400).json({
jsonrpc: '2.0',
error: {
code: -32000,
message: 'Session uses SSE transport. Send messages to POST /messages?sessionId=<id> instead.'
},
id: req.body?.id || null
});
return;
}

transport = this.transports[sessionId] as StreamableHTTPServerTransport;

// TOCTOU guard: session may have been removed between the check above and here
if (!transport) {
@@ -751,73 +800,47 @@ export class SingleSessionHTTPServer {


/**
* Reset the session for SSE - clean up old and create new SSE transport
* Create a new SSE session and store it in the shared transports map.
* Following SDK pattern: SSE uses /messages endpoint, separate from /mcp.
*/
private async resetSessionSSE(res: express.Response): Promise<void> {
// Clean up old session if exists
if (this.session) {
const sessionId = this.session.sessionId;
logger.info('Closing previous session for SSE', { sessionId });

// Close server first to free resources (database, cache timer, etc.)
// This mirrors the cleanup pattern in removeSession() (issue #542)
// Handle server close errors separately so transport close still runs
if (this.session.server && typeof this.session.server.close === 'function') {
try {
await this.session.server.close();
} catch (serverError) {
logger.warn('Error closing server for SSE session', { sessionId, error: serverError });
}
}

// Close transport last - always attempt even if server.close() failed
try {
await this.session.transport.close();
} catch (transportError) {
logger.warn('Error closing transport for SSE session', { sessionId, error: transportError });
}
}

try {
// Create new session
logger.info('Creating new N8NDocumentationMCPServer for SSE...');
const server = new N8NDocumentationMCPServer(undefined, undefined, {
generateWorkflowHandler: this.generateWorkflowHandler,
private async createSSESession(res: express.Response): Promise<void> {
if (!this.canCreateSession()) {
logger.warn('SSE session creation rejected: session limit reached', {
currentSessions: this.getActiveSessionCount(),
maxSessions: MAX_SESSIONS
});

// Generate cryptographically secure session ID
const sessionId = uuidv4();

logger.info('Creating SSEServerTransport...');
const transport = new SSEServerTransport('/mcp', res);

logger.info('Connecting server to SSE transport...');
await server.connect(transport);

// Note: server.connect() automatically calls transport.start(), so we don't need to call it again

this.session = {
server,
transport,
lastAccess: new Date(),
sessionId,
initialized: false,
isSSE: true
};

logger.info('Created new SSE session successfully', { sessionId: this.session.sessionId });
} catch (error) {
logger.error('Failed to create SSE session:', error);
throw error;
throw new Error(`Session limit reached (${MAX_SESSIONS})`);
}
}

/**
* Check if current session is expired
*/
private isExpired(): boolean {
if (!this.session) return true;
return Date.now() - this.session.lastAccess.getTime() > this.sessionTimeout;

// Note: SSE sessions do not support multi-tenant context.
// The SaaS backend uses StreamableHTTP exclusively.
const server = new N8NDocumentationMCPServer(undefined, undefined, {
generateWorkflowHandler: this.generateWorkflowHandler,
});

const transport = new SSEServerTransport('/messages', res);
// Use the SDK-assigned session ID — the client receives this via the SSE
// `endpoint` event and sends it back as ?sessionId on POST /messages.
const sessionId = transport.sessionId;

this.transports[sessionId] = transport;
this.servers[sessionId] = server;
this.sessionMetadata[sessionId] = {
lastAccess: new Date(),
createdAt: new Date()
};

// Clean up on SSE disconnect
res.on('close', () => {
logger.info('SSE connection closed by client', { sessionId });
this.removeSession(sessionId, 'sse_disconnect').catch(err => {
logger.warn('Error cleaning up SSE session on disconnect', { sessionId, error: err });
});
});

await server.connect(transport);

logger.info('SSE session created', { sessionId, transport: 'SSEServerTransport' });
}

/**
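The removed `isExpired` method compares `lastAccess` against the configured session timeout. Extracted as a pure function (names here are illustrative, not the repo's), the rule reduces to:

```typescript
// A session is expired when its last access is older than timeoutMs.
// `now` is injectable so the rule can be tested deterministically;
// it defaults to the current time.
interface SessionMeta { lastAccess: Date }

function isSessionExpired(
  meta: SessionMeta,
  timeoutMs: number,
  now: number = Date.now(),
): boolean {
  return now - meta.lastAccess.getTime() > timeoutMs;
}
```

Keeping the clock injectable is what lets the periodic cleanup sweep and the per-request check share one expiry rule without flaky time-dependent tests.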
@@ -913,7 +936,7 @@ export class SingleSessionHTTPServer {
authentication: {
type: 'Bearer Token',
header: 'Authorization: Bearer <token>',
required_for: ['POST /mcp']
required_for: ['POST /mcp', 'GET /sse', 'POST /messages']
},
documentation: 'https://github.com/czlonkowski/n8n-mcp'
});
@@ -948,7 +971,7 @@ export class SingleSessionHTTPServer {
},
activeTransports: activeTransports.length, // Legacy field
activeServers: activeServers.length, // Legacy field
legacySessionActive: !!this.session, // For SSE compatibility
legacySessionActive: false, // Deprecated: SSE now uses shared transports map
memory: {
used: Math.round(process.memoryUsage().heapUsed / 1024 / 1024),
total: Math.round(process.memoryUsage().heapTotal / 1024 / 1024),
@@ -1005,10 +1028,11 @@ export class SingleSessionHTTPServer {
app.get('/mcp', async (req, res) => {
// Handle StreamableHTTP transport requests with new pattern
const sessionId = req.headers['mcp-session-id'] as string | undefined;
if (sessionId && this.transports[sessionId]) {
const existingTransport = sessionId ? this.transports[sessionId] : undefined;
if (existingTransport && existingTransport instanceof StreamableHTTPServerTransport) {
// Let the StreamableHTTPServerTransport handle the GET request
try {
await this.transports[sessionId].handleRequest(req, res, undefined);
await existingTransport.handleRequest(req, res, undefined);
return;
} catch (error) {
logger.error('StreamableHTTP GET request failed:', error);
@@ -1016,26 +1040,15 @@ export class SingleSessionHTTPServer {
}
}

// Check Accept header for text/event-stream (SSE support)
// SSE clients should use GET /sse instead (SDK pattern: separate endpoints)
const accept = req.headers.accept;
if (accept && accept.includes('text/event-stream')) {
logger.info('SSE stream request received - establishing SSE connection');

try {
// Create or reset session for SSE
await this.resetSessionSSE(res);
logger.info('SSE connection established successfully');
} catch (error) {
logger.error('Failed to establish SSE connection:', error);
res.status(500).json({
jsonrpc: '2.0',
error: {
code: -32603,
message: 'Failed to establish SSE connection'
},
id: null
});
}
logger.info('SSE request on /mcp redirected to /sse', { ip: req.ip });
res.status(400).json({
error: 'SSE transport uses /sse endpoint',
message: 'Connect via GET /sse for SSE streaming. POST messages to /messages?sessionId=<id>.',
documentation: 'https://github.com/czlonkowski/n8n-mcp'
});
return;
}
@@ -1072,9 +1085,23 @@ export class SingleSessionHTTPServer {
mcp: {
method: 'POST',
path: '/mcp',
description: 'Main MCP JSON-RPC endpoint',
description: 'Main MCP JSON-RPC endpoint (StreamableHTTP)',
authentication: 'Bearer token required'
},
sse: {
method: 'GET',
path: '/sse',
description: 'DEPRECATED: SSE stream for legacy clients. Migrate to StreamableHTTP (POST /mcp).',
authentication: 'Bearer token required',
deprecated: true
},
messages: {
method: 'POST',
path: '/messages',
description: 'DEPRECATED: Message delivery for SSE sessions. Migrate to StreamableHTTP (POST /mcp).',
authentication: 'Bearer token required',
deprecated: true
},
health: {
method: 'GET',
path: '/health',
@@ -1092,6 +1119,110 @@ export class SingleSessionHTTPServer {
});
});

// SECURITY: Rate limiting for authentication endpoints
// Prevents brute force attacks and DoS
// See: https://github.com/czlonkowski/n8n-mcp/issues/265 (HIGH-02)
const authLimiter = rateLimit({
windowMs: parseInt(process.env.AUTH_RATE_LIMIT_WINDOW || '900000'), // 15 minutes
max: parseInt(process.env.AUTH_RATE_LIMIT_MAX || '20'), // 20 authentication attempts per IP
message: {
jsonrpc: '2.0',
error: {
code: -32000,
message: 'Too many authentication attempts. Please try again later.'
},
id: null
},
standardHeaders: true, // Return rate limit info in `RateLimit-*` headers
legacyHeaders: false, // Disable `X-RateLimit-*` headers
skipSuccessfulRequests: true, // Only count failed auth attempts (#617)
handler: (req, res) => {
logger.warn('Rate limit exceeded', {
ip: req.ip,
userAgent: req.get('user-agent'),
event: 'rate_limit'
});
res.status(429).json({
jsonrpc: '2.0',
error: {
code: -32000,
message: 'Too many authentication attempts'
},
id: null
});
}
});

// Legacy SSE stream endpoint (protocol version 2024-11-05)
// DEPRECATED: SSE transport is deprecated in MCP SDK v1.x and removed in v2.x.
// Clients should migrate to StreamableHTTP (POST /mcp). This endpoint will be
// removed in a future major release.
app.get('/sse', authLimiter, async (req: express.Request, res: express.Response): Promise<void> => {
if (!this.authenticateRequest(req, res)) return;

logger.warn('SSE transport is deprecated and will be removed in a future release. Migrate to StreamableHTTP (POST /mcp).', {
ip: req.ip,
userAgent: req.get('user-agent')
});

try {
await this.createSSESession(res);
} catch (error) {
logger.error('Failed to create SSE session:', error);
if (!res.headersSent) {
res.status(error instanceof Error && error.message.includes('Session limit')
? 429 : 500
).json({
error: error instanceof Error ? error.message : 'Failed to establish SSE connection'
});
}
}
});

// SSE message delivery endpoint (receives JSON-RPC messages from SSE clients)
app.post('/messages', authLimiter, jsonParser, async (req: express.Request, res: express.Response): Promise<void> => {
if (!this.authenticateRequest(req, res)) return;

// SSE uses ?sessionId query param (not mcp-session-id header)
const sessionId = req.query.sessionId as string | undefined;

if (!sessionId) {
res.status(400).json({
jsonrpc: '2.0',
error: { code: -32602, message: 'Missing sessionId query parameter' },
id: req.body?.id || null
});
return;
}

const transport = this.transports[sessionId];

if (!transport || !(transport instanceof SSEServerTransport)) {
res.status(400).json({
jsonrpc: '2.0',
error: { code: -32000, message: 'SSE session not found or expired' },
id: req.body?.id || null
});
return;
}

// Update session access time
this.updateSessionAccess(sessionId);

try {
await transport.handlePostMessage(req, res, req.body);
} catch (error) {
logger.error('SSE message handling error', { sessionId, error });
if (!res.headersSent) {
res.status(500).json({
jsonrpc: '2.0',
error: { code: -32603, message: 'Internal error processing SSE message' },
id: req.body?.id || null
});
}
}
});

// Session termination endpoint
app.delete('/mcp', async (req: express.Request, res: express.Response): Promise<void> => {
const mcpSessionId = req.headers['mcp-session-id'] as string;
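The rate limiter above reads its window and max from environment variables via `parseInt` with a string default, which yields `NaN` (and a broken limiter) if the variable is set to a non-numeric value. A hardened variant (hypothetical helper, not present in the repo) guards against that:

```typescript
// Read an integer from the environment, falling back when the variable is
// unset or not a valid number (parseInt returns NaN for inputs like "abc").
function envInt(
  name: string,
  fallback: number,
  env: Record<string, string | undefined> = process.env,
): number {
  const parsed = parseInt(env[name] ?? "", 10);
  return Number.isFinite(parsed) ? parsed : fallback;
}
```

With this, `windowMs: envInt('AUTH_RATE_LIMIT_WINDOW', 900_000)` behaves like the existing default even for malformed values.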
@@ -1150,40 +1281,6 @@ export class SingleSessionHTTPServer {
}
});


// SECURITY: Rate limiting for authentication endpoint
// Prevents brute force attacks and DoS
// See: https://github.com/czlonkowski/n8n-mcp/issues/265 (HIGH-02)
const authLimiter = rateLimit({
windowMs: parseInt(process.env.AUTH_RATE_LIMIT_WINDOW || '900000'), // 15 minutes
max: parseInt(process.env.AUTH_RATE_LIMIT_MAX || '20'), // 20 authentication attempts per IP
message: {
jsonrpc: '2.0',
error: {
code: -32000,
message: 'Too many authentication attempts. Please try again later.'
},
id: null
},
standardHeaders: true, // Return rate limit info in `RateLimit-*` headers
legacyHeaders: false, // Disable `X-RateLimit-*` headers
handler: (req, res) => {
logger.warn('Rate limit exceeded', {
ip: req.ip,
userAgent: req.get('user-agent'),
event: 'rate_limit'
});
res.status(429).json({
jsonrpc: '2.0',
error: {
code: -32000,
message: 'Too many authentication attempts'
},
id: null
});
}
});

// Main MCP endpoint with authentication and rate limiting
app.post('/mcp', authLimiter, jsonParser, async (req: express.Request, res: express.Response): Promise<void> => {
// Log comprehensive debug info about the request
@@ -1234,76 +1331,10 @@ export class SingleSessionHTTPServer {
});
}

// Enhanced authentication check with specific logging
const authHeader = req.headers.authorization;

// Check if Authorization header is missing
if (!authHeader) {
logger.warn('Authentication failed: Missing Authorization header', {
ip: req.ip,
userAgent: req.get('user-agent'),
reason: 'no_auth_header'
});
res.status(401).json({
jsonrpc: '2.0',
error: {
code: -32001,
message: 'Unauthorized'
},
id: null
});
return;
}

// Check if Authorization header has Bearer prefix
if (!authHeader.startsWith('Bearer ')) {
logger.warn('Authentication failed: Invalid Authorization header format (expected Bearer token)', {
ip: req.ip,
userAgent: req.get('user-agent'),
reason: 'invalid_auth_format',
headerPrefix: authHeader.substring(0, Math.min(authHeader.length, 10)) + '...' // Log first 10 chars for debugging
});
res.status(401).json({
jsonrpc: '2.0',
error: {
code: -32001,
message: 'Unauthorized'
},
id: null
});
return;
}

// Extract token and trim whitespace
const token = authHeader.slice(7).trim();
if (!this.authenticateRequest(req, res)) return;

// SECURITY: Use timing-safe comparison to prevent timing attacks
// See: https://github.com/czlonkowski/n8n-mcp/issues/265 (CRITICAL-02)
const isValidToken = this.authToken &&
AuthManager.timingSafeCompare(token, this.authToken);

if (!isValidToken) {
logger.warn('Authentication failed: Invalid token', {
ip: req.ip,
userAgent: req.get('user-agent'),
reason: 'invalid_token'
});
res.status(401).json({
jsonrpc: '2.0',
error: {
code: -32001,
message: 'Unauthorized'
},
id: null
});
return;
}

// Handle request with single session
logger.info('Authentication successful - proceeding to handleRequest', {
hasSession: !!this.session,
sessionType: this.session?.isSSE ? 'SSE' : 'StreamableHTTP',
sessionInitialized: this.session?.initialized
activeSessions: this.getActiveSessionCount()
});

// Extract instance context from headers if present (for multi-tenant support)
@@ -1417,6 +1448,7 @@ export class SingleSessionHTTPServer {
console.log(`Session Limits: ${MAX_SESSIONS} max sessions, ${this.sessionTimeout / 1000 / 60}min timeout`);
console.log(`Health check: ${endpoints.health}`);
console.log(`MCP endpoint: ${endpoints.mcp}`);
console.log(`SSE endpoint: ${baseUrl}/sse (legacy clients)`);

if (isProduction) {
console.log('🔒 Running in PRODUCTION mode - enhanced security enabled');
@@ -1483,17 +1515,6 @@ export class SingleSessionHTTPServer {
}
}

// Clean up legacy session (for SSE compatibility)
if (this.session) {
try {
await this.session.transport.close();
logger.info('Legacy session closed');
} catch (error) {
logger.warn('Error closing legacy session:', error);
}
this.session = null;
}

// Close Express server
if (this.expressServer) {
await new Promise<void>((resolve) => {
@@ -1532,25 +1553,9 @@ export class SingleSessionHTTPServer {
|
||||
};
|
||||
} {
|
||||
const metrics = this.getSessionMetrics();
|
||||
|
||||
// Legacy SSE session info
|
||||
if (!this.session) {
|
||||
return {
|
||||
active: false,
|
||||
sessions: {
|
||||
total: metrics.totalSessions,
|
||||
active: metrics.activeSessions,
|
||||
expired: metrics.expiredSessions,
|
||||
max: MAX_SESSIONS,
|
||||
sessionIds: Object.keys(this.transports)
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
|
||||
return {
|
||||
active: true,
|
||||
sessionId: this.session.sessionId,
|
||||
age: Date.now() - this.session.lastAccess.getTime(),
|
||||
active: metrics.activeSessions > 0,
|
||||
sessions: {
|
||||
total: metrics.totalSessions,
|
||||
active: metrics.activeSessions,
|
||||
|
||||
@@ -1,4 +1,6 @@
import { N8nApiClient } from '../services/n8n-api-client';
import { scanWorkflows, type CustomCheckType } from '../services/workflow-security-scanner';
import { buildAuditReport } from '../services/audit-report-builder';
import { getN8nApiConfig, getN8nApiConfigFromContext } from '../config/n8n-api';
import {
  Workflow,
@@ -2789,7 +2791,8 @@ const deleteRowsSchema = tableIdSchema.extend({
  dryRun: z.boolean().optional(),
});

function handleDataTableError(error: unknown): McpToolResponse {
/** Shared error handler for data table and credential operations. */
function handleCrudError(error: unknown): McpToolResponse {
  if (error instanceof z.ZodError) {
    return { success: false, error: 'Invalid input', details: { errors: error.errors } };
  }
@@ -2818,7 +2821,7 @@ export async function handleCreateTable(args: unknown, context?: InstanceContext
    message: `Data table "${dataTable.name}" created with ID: ${dataTable.id}`,
  };
} catch (error) {
  return handleDataTableError(error);
  return handleCrudError(error);
}
}

@@ -2836,7 +2839,7 @@ export async function handleListTables(args: unknown, context?: InstanceContext)
  },
};
} catch (error) {
  return handleDataTableError(error);
  return handleCrudError(error);
}
}

@@ -2847,7 +2850,7 @@ export async function handleGetTable(args: unknown, context?: InstanceContext):
  const dataTable = await client.getDataTable(tableId);
  return { success: true, data: dataTable };
} catch (error) {
  return handleDataTableError(error);
  return handleCrudError(error);
}
}

@@ -2865,7 +2868,7 @@ export async function handleUpdateTable(args: unknown, context?: InstanceContext
  (hasColumns ? '. Note: columns parameter was ignored — table schema is immutable after creation via the public API' : ''),
};
} catch (error) {
  return handleDataTableError(error);
  return handleCrudError(error);
}
}

@@ -2876,7 +2879,7 @@ export async function handleDeleteTable(args: unknown, context?: InstanceContext
  await client.deleteDataTable(tableId);
  return { success: true, message: `Data table ${tableId} deleted successfully` };
} catch (error) {
  return handleDataTableError(error);
  return handleCrudError(error);
}
}

@@ -2901,7 +2904,7 @@ export async function handleGetRows(args: unknown, context?: InstanceContext): P
  },
};
} catch (error) {
  return handleDataTableError(error);
  return handleCrudError(error);
}
}

@@ -2916,7 +2919,7 @@ export async function handleInsertRows(args: unknown, context?: InstanceContext)
  message: `Rows inserted into data table ${tableId}`,
};
} catch (error) {
  return handleDataTableError(error);
  return handleCrudError(error);
}
}

@@ -2931,7 +2934,7 @@ export async function handleUpdateRows(args: unknown, context?: InstanceContext)
  message: params.dryRun ? 'Dry run: rows matched (no changes applied)' : 'Rows updated successfully',
};
} catch (error) {
  return handleDataTableError(error);
  return handleCrudError(error);
}
}

@@ -2946,7 +2949,7 @@ export async function handleUpsertRows(args: unknown, context?: InstanceContext)
  message: params.dryRun ? 'Dry run: upsert previewed (no changes applied)' : 'Row upserted successfully',
};
} catch (error) {
  return handleDataTableError(error);
  return handleCrudError(error);
}
}

@@ -2965,6 +2968,268 @@ export async function handleDeleteRows(args: unknown, context?: InstanceContext)
  message: params.dryRun ? 'Dry run: rows matched for deletion (no changes applied)' : 'Rows deleted successfully',
};
} catch (error) {
  return handleDataTableError(error);
  return handleCrudError(error);
}
}

// ========================================================================
// Credential Management Handlers
// ========================================================================

// SECURITY: Never log credential data values (they contain secrets like API keys, passwords).
// Only log credential name, type, and ID.

const listCredentialsSchema = z.object({}).passthrough();

const getCredentialSchema = z.object({
  id: z.string({ required_error: 'Credential ID is required' }),
});

const createCredentialSchema = z.object({
  name: z.string({ required_error: 'Credential name is required' }),
  type: z.string({ required_error: 'Credential type is required' }),
  data: z.record(z.any(), { required_error: 'Credential data is required' }),
});

const updateCredentialSchema = z.object({
  id: z.string({ required_error: 'Credential ID is required' }),
  name: z.string().optional(),
  type: z.string().optional(),
  data: z.record(z.any()).optional(),
});

const deleteCredentialSchema = z.object({
  id: z.string({ required_error: 'Credential ID is required' }),
});

const getCredentialSchemaTypeSchema = z.object({
  type: z.string({ required_error: 'Credential type is required' }),
});

export async function handleListCredentials(args: unknown, context?: InstanceContext): Promise<McpToolResponse> {
  try {
    const client = ensureApiConfigured(context);
    listCredentialsSchema.parse(args);
    const result = await client.listCredentials();
    return {
      success: true,
      data: {
        credentials: result.data,
        count: result.data.length,
        nextCursor: result.nextCursor || undefined,
      },
    };
  } catch (error) {
    return handleCrudError(error);
  }
}

export async function handleGetCredential(args: unknown, context?: InstanceContext): Promise<McpToolResponse> {
  try {
    const client = ensureApiConfigured(context);
    const { id } = getCredentialSchema.parse(args);
    let credential;
    try {
      credential = await client.getCredential(id);
    } catch (getError: unknown) {
      // GET /credentials/:id is not in the n8n public API — fall back to list + filter
      const status = (getError as { statusCode?: number }).statusCode;
      const msg = (getError as Error).message ?? '';
      const isUnsupported = status === 405 || status === 403 || msg.includes('not allowed');
      if (!isUnsupported) {
        throw getError;
      }
      const list = await client.listCredentials();
      credential = list.data.find((c) => c.id === id);
      if (!credential) {
        return { success: false, error: `Credential ${id} not found` };
      }
    }
    // Strip sensitive data field — defense in depth against future n8n versions returning decrypted values
    const { data: _sensitiveData, ...safeCred } = credential;
    return {
      success: true,
      data: safeCred,
    };
  } catch (error) {
    return handleCrudError(error);
  }
}

export async function handleCreateCredential(args: unknown, context?: InstanceContext): Promise<McpToolResponse> {
  try {
    const client = ensureApiConfigured(context);
    const { name, type, data } = createCredentialSchema.parse(args);
    logger.info(`Creating credential: name="${name}", type="${type}"`);
    const credential = await client.createCredential({ name, type, data });
    const { data: _sensitiveData, ...safeCred } = credential;
    return {
      success: true,
      data: safeCred,
      message: `Credential "${name}" (type: ${type}) created with ID ${credential.id}`,
    };
  } catch (error) {
    return handleCrudError(error);
  }
}

export async function handleUpdateCredential(args: unknown, context?: InstanceContext): Promise<McpToolResponse> {
  try {
    const client = ensureApiConfigured(context);
    const { id, name, type, data } = updateCredentialSchema.parse(args);
    logger.info(`Updating credential: id="${id}"${name ? `, name="${name}"` : ''}`);
    const updatePayload: Record<string, any> = {};
    if (name !== undefined) updatePayload.name = name;
    if (type !== undefined) updatePayload.type = type;
    if (data !== undefined) updatePayload.data = data;
    const credential = await client.updateCredential(id, updatePayload);
    const { data: _sensitiveData, ...safeCred } = credential;
    return {
      success: true,
      data: safeCred,
      message: `Credential ${id} updated successfully`,
    };
  } catch (error) {
    return handleCrudError(error);
  }
}

export async function handleDeleteCredential(args: unknown, context?: InstanceContext): Promise<McpToolResponse> {
  try {
    const client = ensureApiConfigured(context);
    const { id } = deleteCredentialSchema.parse(args);
    logger.info(`Deleting credential: id="${id}"`);
    await client.deleteCredential(id);
    return {
      success: true,
      message: `Credential ${id} deleted successfully`,
    };
  } catch (error) {
    return handleCrudError(error);
  }
}

export async function handleGetCredentialSchema(args: unknown, context?: InstanceContext): Promise<McpToolResponse> {
  try {
    const client = ensureApiConfigured(context);
    const { type } = getCredentialSchemaTypeSchema.parse(args);
    const schema = await client.getCredentialSchema(type);
    return {
      success: true,
      data: schema,
      message: `Schema for credential type "${type}"`,
    };
  } catch (error) {
    return handleCrudError(error);
  }
}

// ── Audit Instance ─────────────────────────────────────────────────────────

const auditInstanceSchema = z.object({
  categories: z.array(z.enum([
    'credentials', 'database', 'nodes', 'instance', 'filesystem',
  ])).optional(),
  includeCustomScan: z.boolean().optional().default(true),
  daysAbandonedWorkflow: z.number().optional(),
  customChecks: z.array(z.enum([
    'hardcoded_secrets', 'unauthenticated_webhooks', 'error_handling', 'data_retention',
  ])).optional(),
});

export async function handleAuditInstance(args: unknown, context?: InstanceContext): Promise<McpToolResponse> {
  try {
    const client = ensureApiConfigured(context);
    const input = auditInstanceSchema.parse(args);

    const totalStart = Date.now();
    const warnings: string[] = [];

    // Phase A: n8n built-in audit
    let builtinAudit: any = null;
    let builtinAuditMs = 0;
    try {
      const auditStart = Date.now();
      builtinAudit = await client.generateAudit({
        categories: input.categories,
        daysAbandonedWorkflow: input.daysAbandonedWorkflow,
      });
      builtinAuditMs = Date.now() - auditStart;
    } catch (auditError: any) {
      builtinAuditMs = Date.now() - totalStart;
      const msg = auditError?.statusCode === 404
        ? 'Built-in audit endpoint not available on this n8n version.'
        : `Built-in audit failed: ${auditError?.message || 'unknown error'}`;
      warnings.push(msg);
      logger.warn(`Audit: ${msg}`);
    }

    // Phase B: Custom workflow scanning
    let customReport = null;
    let workflowFetchMs = 0;
    let customScanMs = 0;

    if (input.includeCustomScan) {
      try {
        const fetchStart = Date.now();
        const allWorkflows = await client.listAllWorkflows();
        workflowFetchMs = Date.now() - fetchStart;

        logger.info(`Audit: fetched ${allWorkflows.length} workflows for scanning`);

        const scanStart = Date.now();
        customReport = scanWorkflows(
          allWorkflows,
          input.customChecks as CustomCheckType[] | undefined,
        );
        customScanMs = Date.now() - scanStart;

        logger.info(`Audit: custom scan found ${customReport.summary.total} findings across ${customReport.workflowsScanned} workflows`);
      } catch (scanError: any) {
        warnings.push(`Custom scan failed: ${scanError?.message || 'unknown error'}`);
        logger.warn(`Audit: custom scan failed: ${scanError?.message}`);
      }
    }

    const totalMs = Date.now() - totalStart;

    // Build the API URL for the report (mask the key)
    const apiConfig = context?.n8nApiUrl
      ? { baseUrl: context.n8nApiUrl }
      : getN8nApiConfig();
    const instanceUrl = apiConfig?.baseUrl || 'unknown';

    // Build unified markdown report
    const report = buildAuditReport({
      builtinAudit,
      customReport,
      performance: { builtinAuditMs, workflowFetchMs, customScanMs, totalMs },
      instanceUrl,
      warnings: warnings.length > 0 ? warnings : undefined,
    });

    return {
      success: true,
      data: {
        report: report.markdown,
        summary: report.summary,
      },
    };
  } catch (error) {
    if (error instanceof z.ZodError) {
      return {
        success: false,
        error: 'Invalid audit parameters',
        details: { issues: error.errors },
      };
    }
    if (error instanceof N8nApiError) {
      return {
        success: false,
        error: getUserFriendlyErrorMessage(error),
      };
    }
    const message = error instanceof Error ? error.message : String(error);
    return { success: false, error: message };
  }
}

@@ -33,7 +33,7 @@ function getValidator(repository: NodeRepository): WorkflowValidator {

// Operation types that identify nodes by nodeId/nodeName
const NODE_TARGETING_OPERATIONS = new Set([
  'updateNode', 'removeNode', 'moveNode', 'enableNode', 'disableNode'
  'updateNode', 'removeNode', 'moveNode', 'enableNode', 'disableNode', 'patchNodeField'
]);

// Zod schema for the diff request
@@ -47,6 +47,8 @@ const workflowDiffSchema = z.object({
  nodeId: z.string().optional(),
  nodeName: z.string().optional(),
  updates: z.any().optional(),
  fieldPath: z.string().optional(),
  patches: z.any().optional(),
  position: z.tuple([z.number(), z.number()]).optional(),
  // Connection operations
  source: z.string().optional(),
@@ -569,6 +571,8 @@ function inferIntentFromOperations(operations: any[]): string {
  return `Remove node ${op.nodeName || op.nodeId || ''}`.trim();
case 'updateNode':
  return `Update node ${op.nodeName || op.nodeId || ''}`.trim();
case 'patchNodeField':
  return `Patch field on node ${op.nodeName || op.nodeId || ''}`.trim();
case 'addConnection':
  return `Connect ${op.source || 'node'} to ${op.target || 'node'}`;
case 'removeConnection':
@@ -604,6 +608,10 @@ function inferIntentFromOperations(operations: any[]): string {
  const count = opTypes.filter((t) => t === 'updateNode').length;
  summary.push(`update ${count} node${count > 1 ? 's' : ''}`);
}
if (typeSet.has('patchNodeField')) {
  const count = opTypes.filter((t) => t === 'patchNodeField').length;
  summary.push(`patch ${count} field${count > 1 ? 's' : ''}`);
}
if (typeSet.has('addConnection') || typeSet.has('rewireConnection')) {
  summary.push('modify connections');
}

@@ -1048,6 +1048,15 @@ export class N8NDocumentationMCPServer {
  ? { valid: true, errors: [] }
  : { valid: false, errors: [{ field: 'action', message: 'action is required' }] };
break;
case 'n8n_manage_credentials':
  validationResult = args.action
    ? { valid: true, errors: [] }
    : { valid: false, errors: [{ field: 'action', message: 'action is required' }] };
  break;
case 'n8n_audit_instance':
  // No required parameters - all are optional
  validationResult = { valid: true, errors: [] };
  break;
case 'n8n_deploy_template':
  // Requires templateId parameter
  validationResult = args.templateId !== undefined
@@ -1538,6 +1547,25 @@ export class N8NDocumentationMCPServer {
  }
}

case 'n8n_manage_credentials': {
  this.validateToolParams(name, args, ['action']);
  const credAction = args.action;
  switch (credAction) {
    case 'list': return n8nHandlers.handleListCredentials(args, this.instanceContext);
    case 'get': return n8nHandlers.handleGetCredential(args, this.instanceContext);
    case 'create': return n8nHandlers.handleCreateCredential(args, this.instanceContext);
    case 'update': return n8nHandlers.handleUpdateCredential(args, this.instanceContext);
    case 'delete': return n8nHandlers.handleDeleteCredential(args, this.instanceContext);
    case 'getSchema': return n8nHandlers.handleGetCredentialSchema(args, this.instanceContext);
    default:
      throw new Error(`Unknown action: ${credAction}. Valid actions: list, get, create, update, delete, getSchema`);
  }
}

case 'n8n_audit_instance':
  // No required parameters - all are optional
  return n8nHandlers.handleAuditInstance(args, this.instanceContext);

case 'n8n_generate_workflow': {
  this.validateToolParams(name, args, ['description']);

@@ -7,7 +7,8 @@ import { validateNodeDoc, validateWorkflowDoc } from './validation';
import { getTemplateDoc, searchTemplatesDoc } from './templates';
import {
  toolsDocumentationDoc,
  n8nHealthCheckDoc
  n8nHealthCheckDoc,
  n8nAuditInstanceDoc
} from './system';
import { aiAgentsGuide } from './guides';
import {
@@ -24,7 +25,8 @@ import {
  n8nWorkflowVersionsDoc,
  n8nDeployTemplateDoc,
  n8nManageDatatableDoc,
  n8nGenerateWorkflowDoc
  n8nGenerateWorkflowDoc,
  n8nManageCredentialsDoc
} from './workflow_management';

// Combine all tool documentations into a single object
@@ -32,6 +34,7 @@ export const toolsDocumentation: Record<string, ToolDocumentation> = {
  // System tools
  tools_documentation: toolsDocumentationDoc,
  n8n_health_check: n8nHealthCheckDoc,
  n8n_audit_instance: n8nAuditInstanceDoc,

  // Guides
  ai_agents_guide: aiAgentsGuide,
@@ -64,7 +67,8 @@ export const toolsDocumentation: Record<string, ToolDocumentation> = {
  n8n_workflow_versions: n8nWorkflowVersionsDoc,
  n8n_deploy_template: n8nDeployTemplateDoc,
  n8n_manage_datatable: n8nManageDatatableDoc,
  n8n_generate_workflow: n8nGenerateWorkflowDoc
  n8n_generate_workflow: n8nGenerateWorkflowDoc,
  n8n_manage_credentials: n8nManageCredentialsDoc
};

// Re-export types

@@ -1,2 +1,3 @@
export { toolsDocumentationDoc } from './tools-documentation';
export { n8nHealthCheckDoc } from './n8n-health-check';
export { n8nAuditInstanceDoc } from './n8n-audit-instance';

106 src/mcp/tool-docs/system/n8n-audit-instance.ts Normal file
@@ -0,0 +1,106 @@
import { ToolDocumentation } from '../types';

export const n8nAuditInstanceDoc: ToolDocumentation = {
  name: 'n8n_audit_instance',
  category: 'system',
  essentials: {
    description: 'Security audit combining n8n built-in audit with deep workflow scanning',
    keyParameters: ['categories', 'includeCustomScan', 'customChecks'],
    example: 'n8n_audit_instance({}) for full audit, n8n_audit_instance({customChecks: ["hardcoded_secrets", "unauthenticated_webhooks"]}) for specific checks',
    performance: 'Moderate - fetches all workflows (2-30s depending on instance size)',
    tips: [
      'Returns actionable markdown with remediation steps',
      'Use n8n_manage_credentials to fix credential findings',
      'Custom scan covers 50+ secret patterns including API keys, tokens, and passwords',
      'Built-in audit checks credentials, database, nodes, instance, and filesystem risks',
    ]
  },
  full: {
    description: `Performs a comprehensive security audit of the configured n8n instance by combining two scanning approaches:

**Built-in Audit (via n8n API):**
- credentials: Unused credentials, shared credentials with elevated access
- database: Database-level security settings and exposure
- nodes: Community nodes with known vulnerabilities, outdated nodes
- instance: Instance configuration risks (e.g., public registration, weak auth)
- filesystem: File system access and permission risks

**Custom Deep Scan (workflow analysis):**
- hardcoded_secrets: Scans all workflow node parameters for hardcoded API keys, tokens, passwords, and connection strings using 50+ regex patterns
- unauthenticated_webhooks: Detects webhook nodes without authentication configured
- error_handling: Identifies workflows without error handling or notification on failure
- data_retention: Flags workflows with excessive data retention or no cleanup

The report is returned as actionable markdown with severity ratings, affected resources, and specific remediation steps referencing other MCP tools.`,
    parameters: {
      categories: {
        type: 'array of string',
        required: false,
        description: 'Built-in audit categories to check',
        default: ['credentials', 'database', 'nodes', 'instance', 'filesystem'],
        enum: ['credentials', 'database', 'nodes', 'instance', 'filesystem'],
      },
      includeCustomScan: {
        type: 'boolean',
        required: false,
        description: 'Run deep workflow scanning for secrets, webhooks, error handling',
        default: true,
      },
      daysAbandonedWorkflow: {
        type: 'number',
        required: false,
        description: 'Days threshold for abandoned workflow detection',
        default: 90,
      },
      customChecks: {
        type: 'array of string',
        required: false,
        description: 'Specific custom checks to run (defaults to all 4 if includeCustomScan is true)',
        default: ['hardcoded_secrets', 'unauthenticated_webhooks', 'error_handling', 'data_retention'],
        enum: ['hardcoded_secrets', 'unauthenticated_webhooks', 'error_handling', 'data_retention'],
      },
    },
    returns: `Markdown-formatted security audit report containing:
- Summary table with finding counts by severity (critical, high, medium, low)
- Findings grouped by workflow with per-workflow tables (ID, severity, finding, node, fix type)
- Built-in audit section with n8n's own risk assessments (nodes, instance, credentials, database, filesystem)
- Remediation Playbook aggregated by finding type: auto-fixable (secrets, webhooks), requires review (error handling, PII), requires user action (data retention, instance updates)
- Tool chains for auto-fixing reference n8n_get_workflow, n8n_manage_credentials, n8n_update_partial_workflow`,
    examples: [
      '// Full audit with all checks\nn8n_audit_instance({})',
      '// Built-in audit only (no workflow scanning)\nn8n_audit_instance({includeCustomScan: false})',
      '// Only check for hardcoded secrets and unauthenticated webhooks\nn8n_audit_instance({customChecks: ["hardcoded_secrets", "unauthenticated_webhooks"]})',
      '// Only run built-in credential and instance checks\nn8n_audit_instance({categories: ["credentials", "instance"], includeCustomScan: false})',
      '// Adjust abandoned workflow threshold\nn8n_audit_instance({daysAbandonedWorkflow: 30})',
    ],
    useCases: [
      'Regular security audits of n8n instances',
      'Detecting hardcoded secrets before they become a breach',
      'Identifying unauthenticated webhook endpoints exposed to the internet',
      'Compliance checks for data retention and error handling policies',
      'Pre-deployment security review of new workflows',
      'Remediation workflow: audit, review findings, fix with n8n_manage_credentials or n8n_update_partial_workflow',
    ],
    performance: `Execution time depends on instance size:
- Small instances (<20 workflows): 2-5s
- Medium instances (20-100 workflows): 5-15s
- Large instances (100+ workflows): 15-30s
The built-in audit is a single API call. The custom scan fetches all workflows and analyzes each one.`,
    bestPractices: [
      'Run a full audit periodically (e.g., weekly) to catch new issues',
      'Use customChecks to focus on specific concerns when time is limited',
      'Address critical and high severity findings first',
      'After fixing findings, re-run the audit to verify remediation',
      'Combine with n8n_health_check for a complete instance health picture',
      'Use n8n_manage_credentials to rotate or replace exposed credentials',
    ],
    pitfalls: [
      'Large instances with many workflows may take 30+ seconds to scan',
      'Built-in audit API may not be available on older n8n versions (pre-1.x)',
      'Custom scan analyzes stored workflow definitions only, not runtime values from expressions',
      'Requires N8N_API_URL and N8N_API_KEY to be configured',
      'Findings from the built-in audit depend on n8n version and may vary',
    ],
    relatedTools: ['n8n_manage_credentials', 'n8n_update_partial_workflow', 'n8n_health_check'],
  }
};
@@ -12,3 +12,4 @@ export { n8nWorkflowVersionsDoc } from './n8n-workflow-versions';
export { n8nDeployTemplateDoc } from './n8n-deploy-template';
export { n8nManageDatatableDoc } from './n8n-manage-datatable';
export { n8nGenerateWorkflowDoc } from './n8n-generate-workflow';
export { n8nManageCredentialsDoc } from './n8n-manage-credentials';

106 src/mcp/tool-docs/workflow_management/n8n-manage-credentials.ts Normal file
@@ -0,0 +1,106 @@
import { ToolDocumentation } from '../types';
|
||||
|
||||
export const n8nManageCredentialsDoc: ToolDocumentation = {
|
||||
name: 'n8n_manage_credentials',
|
||||
category: 'workflow_management',
|
||||
essentials: {
|
||||
description: 'CRUD operations for n8n credentials with schema discovery',
|
||||
keyParameters: ['action', 'type', 'name', 'data'],
|
||||
example: 'n8n_manage_credentials({action: "getSchema", type: "httpHeaderAuth"}) then n8n_manage_credentials({action: "create", name: "My Auth", type: "httpHeaderAuth", data: {name: "X-API-Key", value: "secret"}})',
|
||||
performance: 'Fast - single API call per action',
|
||||
tips: [
|
||||
'Always use getSchema first to discover required fields before creating credentials',
|
||||
'Credential data values are never logged for security',
|
||||
'Use with n8n_audit_instance to fix security findings',
|
||||
'Actions: list, get, create, update, delete, getSchema',
|
||||
]
|
||||
},
|
||||
full: {
|
||||
description: `Manage n8n credentials through a unified interface. Supports full lifecycle operations:
|
||||
|
||||
**Discovery:**
|
||||
- **getSchema**: Retrieve the schema for a credential type, showing all required and optional fields with their types and descriptions. Always call this before creating credentials to know the exact field names and formats.
|
||||
|
||||
**Read Operations:**
|
||||
- **list**: List all credentials with their names, types, and IDs. Does not return credential data values.
|
||||
- **get**: Get a specific credential by ID, including its metadata and data fields.
|
||||
|
||||
**Write Operations:**
|
||||
- **create**: Create a new credential with a name, type, and data fields. Requires name, type, and data.
|
||||
- **update**: Update an existing credential by ID. Can update name and/or data fields.
|
||||
- **delete**: Permanently delete a credential by ID.
|
||||
|
||||
**Security:** Credential data values (API keys, passwords, tokens) are never written to logs. The n8n API encrypts stored credential data at rest.`,
|
||||
parameters: {
|
||||
action: {
|
||||
type: 'string',
|
||||
required: true,
|
||||
description: 'Operation to perform on credentials',
|
||||
enum: ['list', 'get', 'create', 'update', 'delete', 'getSchema'],
|
||||
},
|
||||
id: {
|
||||
type: 'string',
|
        required: false,
        description: 'Credential ID (required for get, update, delete)',
      },
      name: {
        type: 'string',
        required: false,
        description: 'Credential display name (required for create, optional for update)',
      },
      type: {
        type: 'string',
        required: false,
        description: 'Credential type identifier, e.g. httpHeaderAuth, httpBasicAuth, oAuth2Api (required for create and getSchema)',
        examples: ['httpHeaderAuth', 'httpBasicAuth', 'oAuth2Api', 'slackApi', 'gmailOAuth2Api'],
      },
      data: {
        type: 'object',
        required: false,
        description: 'Credential data fields as key-value pairs. Use getSchema to discover required fields (required for create, optional for update)',
      },
    },
    returns: `Depends on action:
- list: Array of credentials with id, name, type, createdAt, updatedAt
- get: Full credential object with id, name, type, and data fields
- create: Created credential object with id, name, type
- update: Updated credential object
- delete: Success confirmation message
- getSchema: Schema object with field definitions including name, type, required status, description, and default values`,
    examples: [
      '// Discover schema before creating\nn8n_manage_credentials({action: "getSchema", type: "httpHeaderAuth"})',
      '// Create an HTTP header auth credential\nn8n_manage_credentials({action: "create", name: "My API Key", type: "httpHeaderAuth", data: {name: "X-API-Key", value: "sk-abc123"}})',
      '// List all credentials\nn8n_manage_credentials({action: "list"})',
      '// Get a specific credential\nn8n_manage_credentials({action: "get", id: "123"})',
      '// Update credential data\nn8n_manage_credentials({action: "update", id: "123", data: {value: "new-secret-value"}})',
      '// Rename a credential\nn8n_manage_credentials({action: "update", id: "123", name: "Renamed Credential"})',
      '// Delete a credential\nn8n_manage_credentials({action: "delete", id: "123"})',
      '// Create basic auth credential\nn8n_manage_credentials({action: "create", name: "Service Auth", type: "httpBasicAuth", data: {user: "admin", password: "secret"}})',
    ],
    useCases: [
      'Provisioning credentials for new workflow integrations',
      'Rotating API keys and secrets on a schedule',
      'Remediating security findings from n8n_audit_instance',
      'Discovering available credential types and their required fields',
      'Bulk credential management across n8n instances',
      'Replacing hardcoded secrets with proper credential references',
    ],
    performance: 'Fast response expected: single HTTP API call per action, typically <200ms.',
    bestPractices: [
      'Always call getSchema before create to discover required fields and their formats',
      'Use descriptive names that identify the service and purpose (e.g., "Slack - Production Bot")',
      'Rotate credentials regularly by updating data fields',
      'After creating credentials, reference them in workflows instead of hardcoding secrets',
      'Use n8n_audit_instance to find credentials that need rotation or cleanup',
      'Verify credential validity by testing the workflow after creation',
    ],
    pitfalls: [
      'delete is permanent and cannot be undone - workflows using the credential will break',
      'Credential type must match exactly (case-sensitive) - use getSchema to verify',
      'OAuth2 credentials may require a browser-based authorization flow that cannot be completed via the API alone',
      'The list action does not return credential data values, for security',
      'Requires N8N_API_URL and N8N_API_KEY to be configured',
    ],
    relatedTools: ['n8n_audit_instance', 'n8n_create_workflow', 'n8n_update_partial_workflow', 'n8n_health_check'],
  }
};
@@ -4,7 +4,7 @@ export const n8nUpdatePartialWorkflowDoc: ToolDocumentation = {
   name: 'n8n_update_partial_workflow',
   category: 'workflow_management',
   essentials: {
-    description: 'Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, moveNode, enable/disableNode, addConnection, removeConnection, rewireConnection, cleanStaleConnections, replaceConnections, updateSettings, updateName, add/removeTag, activateWorkflow, deactivateWorkflow, transferWorkflow. Supports smart parameters (branch, case) for multi-output nodes. Full support for AI connections (ai_languageModel, ai_tool, ai_memory, ai_embedding, ai_vectorStore, ai_document, ai_textSplitter, ai_outputParser).',
+    description: 'Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, patchNodeField, moveNode, enable/disableNode, addConnection, removeConnection, rewireConnection, cleanStaleConnections, replaceConnections, updateSettings, updateName, add/removeTag, activateWorkflow, deactivateWorkflow, transferWorkflow. Supports smart parameters (branch, case) for multi-output nodes. Full support for AI connections (ai_languageModel, ai_tool, ai_memory, ai_embedding, ai_vectorStore, ai_document, ai_textSplitter, ai_outputParser).',
     keyParameters: ['id', 'operations', 'continueOnError'],
     example: 'n8n_update_partial_workflow({id: "wf_123", operations: [{type: "rewireConnection", source: "IF", from: "Old", to: "New", branch: "true"}]})',
     performance: 'Fast (50-200ms)',
@@ -27,14 +27,15 @@ export const n8nUpdatePartialWorkflowDoc: ToolDocumentation = {
     ]
   },
   full: {
-    description: `Updates workflows using surgical diff operations instead of full replacement. Supports 17 operation types for precise modifications. Operations are validated and applied atomically by default - all succeed or none are applied.
+    description: `Updates workflows using surgical diff operations instead of full replacement. Supports 18 operation types for precise modifications. Operations are validated and applied atomically by default - all succeed or none are applied.
 
 ## Available Operations:
 
-### Node Operations (6 types):
+### Node Operations (7 types):
 - **addNode**: Add a new node with name, type, and position (required)
 - **removeNode**: Remove a node by ID or name
 - **updateNode**: Update node properties using dot notation (e.g., 'parameters.url')
+- **patchNodeField**: Surgically edit string fields using find/replace patches. Strict mode: errors if the find string is not found, and errors on multiple matches (ambiguity) unless replaceAll is set. Supports replaceAll and regex flags.
 - **moveNode**: Change node position [x, y]
 - **enableNode**: Enable a disabled node
 - **disableNode**: Disable an active node
@@ -335,6 +336,11 @@ n8n_update_partial_workflow({
       '// Validate before applying\nn8n_update_partial_workflow({id: "bcd", operations: [{type: "removeNode", nodeName: "Old Process"}], validateOnly: true})',
       '// Surgically edit code using __patch_find_replace (avoids replacing entire code block)\nn8n_update_partial_workflow({id: "pfr1", operations: [{type: "updateNode", nodeName: "Code", updates: {"parameters.jsCode": {"__patch_find_replace": [{"find": "const limit = 10;", "replace": "const limit = 50;"}]}}}]})',
       '// Multiple sequential patches on the same property\nn8n_update_partial_workflow({id: "pfr2", operations: [{type: "updateNode", nodeName: "Code", updates: {"parameters.jsCode": {"__patch_find_replace": [{"find": "api.old-domain.com", "replace": "api.new-domain.com"}, {"find": "Authorization: Bearer old_token", "replace": "Authorization: Bearer new_token"}]}}}]})',
+      '\n// ============ PATCHNODEFIELD EXAMPLES (strict find/replace) ============',
+      '// Surgical code edit with patchNodeField (errors if not found)\nn8n_update_partial_workflow({id: "pnf1", operations: [{type: "patchNodeField", nodeName: "Code", fieldPath: "parameters.jsCode", patches: [{find: "const limit = 10;", replace: "const limit = 50;"}]}]})',
+      '// Replace all occurrences of a string\nn8n_update_partial_workflow({id: "pnf2", operations: [{type: "patchNodeField", nodeName: "Code", fieldPath: "parameters.jsCode", patches: [{find: "api.old.com", replace: "api.new.com", replaceAll: true}]}]})',
+      '// Multiple sequential patches\nn8n_update_partial_workflow({id: "pnf3", operations: [{type: "patchNodeField", nodeName: "Set Email", fieldPath: "parameters.assignments.assignments.6.value", patches: [{find: "© 2025 n8n-mcp", replace: "© 2026 n8n-mcp"}, {find: "<p>Unsubscribe</p>", replace: ""}]}]})',
+      '// Regex-based replacement\nn8n_update_partial_workflow({id: "pnf4", operations: [{type: "patchNodeField", nodeName: "Code", fieldPath: "parameters.jsCode", patches: [{find: "const\\\\s+limit\\\\s*=\\\\s*\\\\d+", replace: "const limit = 100", regex: true}]}]})',
       '\n// ============ AI CONNECTION EXAMPLES ============',
       '// Connect language model to AI Agent\nn8n_update_partial_workflow({id: "ai1", operations: [{type: "addConnection", source: "OpenAI Chat Model", target: "AI Agent", sourceOutput: "ai_languageModel"}]})',
       '// Connect tool to AI Agent\nn8n_update_partial_workflow({id: "ai2", operations: [{type: "addConnection", source: "HTTP Request Tool", target: "AI Agent", sourceOutput: "ai_tool"}]})',
@@ -373,7 +379,10 @@ n8n_update_partial_workflow({
       'Configure Vector Store retrieval systems',
       'Swap language models in existing AI workflows',
       'Batch-update AI tool connections',
-      'Transfer workflows between team projects (enterprise)'
+      'Transfer workflows between team projects (enterprise)',
+      'Surgical string edits in email templates, code, or JSON bodies (patchNodeField)',
+      'Fix typos or update URLs in large HTML content without re-transmitting the full string',
+      'Bulk find/replace across node field content (replaceAll flag)'
     ],
     performance: 'Very fast - typically 50-200ms. Much faster than full updates as only changes are processed.',
     bestPractices: [
@@ -396,7 +405,10 @@ n8n_update_partial_workflow({
       'To remove properties, set them to null in the updates object',
       'When migrating from deprecated properties, remove the old property and add the new one in the same operation',
       'Use null to resolve mutual exclusivity validation errors between properties',
-      'Batch multiple property removals in a single updateNode operation for efficiency'
+      'Batch multiple property removals in a single updateNode operation for efficiency',
+      'Prefer patchNodeField over __patch_find_replace for strict error handling — patchNodeField errors on not-found and detects ambiguous matches',
+      'Use replaceAll: true in patchNodeField when you want to replace all occurrences of a string',
+      'Use regex: true in patchNodeField for pattern-based replacements (e.g., whitespace-insensitive matching)'
     ],
     pitfalls: [
       '**REQUIRES N8N_API_URL and N8N_API_KEY environment variables** - will not work without n8n API access',
@@ -419,6 +431,9 @@ n8n_update_partial_workflow({
       '**Corrupted workflows beyond repair**: Workflows in paradoxical states (API returns corrupt, API rejects updates) cannot be fixed via API - must be recreated',
       '**__patch_find_replace for code edits**: Instead of replacing entire code blocks, use `{"parameters.jsCode": {"__patch_find_replace": [{"find": "old text", "replace": "new text"}]}}` to surgically edit string properties',
       '__patch_find_replace replaces the FIRST occurrence of each find string. Patches are applied sequentially — order matters',
+      '**patchNodeField is strict**: it ERRORS if the find string is not found (unlike __patch_find_replace, which only warns)',
+      '**patchNodeField detects ambiguity**: if find matches multiple times, it ERRORS unless replaceAll: true is set',
+      'When using regex: true in patchNodeField, escape special regex characters (., *, +, etc.) if you want literal matching',
       'To remove a property, set it to null in the updates object',
       'When properties are mutually exclusive (e.g., continueOnFail and onError), setting only the new property will fail - you must remove the old one with null',
       'Removing a required property may cause validation errors - check node documentation first',
|
||||
@@ -147,7 +147,7 @@ export const n8nManagementTools: ToolDefinition[] = [
|
||||
},
|
||||
{
|
||||
name: 'n8n_update_partial_workflow',
|
||||
description: `Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, moveNode, enable/disableNode, addConnection, removeConnection, updateSettings, updateName, add/removeTag, activate/deactivateWorkflow, transferWorkflow. See tools_documentation("n8n_update_partial_workflow", "full") for details.`,
|
||||
description: `Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, patchNodeField, moveNode, enable/disableNode, addConnection, removeConnection, updateSettings, updateName, add/removeTag, activate/deactivateWorkflow, transferWorkflow. See tools_documentation("n8n_update_partial_workflow", "full") for details.`,
|
||||
inputSchema: {
|
||||
type: 'object',
|
||||
additionalProperties: true, // Allow any extra properties Claude Desktop might add
|
||||
@@ -654,6 +654,27 @@ export const n8nManagementTools: ToolDefinition[] = [
|
||||
openWorldHint: true,
|
||||
},
|
||||
},
|
||||
{
|
||||
name: 'n8n_manage_credentials',
|
||||
description: 'Manage n8n credentials. Actions: list, get, create, update, delete, getSchema. Use getSchema to discover required fields before creating. SECURITY: credential data values are never logged.',
|
||||
inputSchema: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
action: { type: 'string', enum: ['list', 'get', 'create', 'update', 'delete', 'getSchema'], description: 'Action to perform' },
|
||||
id: { type: 'string', description: 'Credential ID (required for get, update, delete)' },
|
||||
name: { type: 'string', description: 'Credential name (required for create)' },
|
||||
type: { type: 'string', description: 'Credential type e.g. httpHeaderAuth, httpBasicAuth, oAuth2Api (required for create, getSchema)' },
|
||||
data: { type: 'object', description: 'Credential data fields - use getSchema to discover required fields (required for create, optional for update)' },
|
||||
},
|
||||
required: ['action'],
|
||||
},
|
||||
annotations: {
|
||||
title: 'Manage Credentials',
|
||||
readOnlyHint: false,
|
||||
destructiveHint: false,
|
||||
openWorldHint: true,
|
||||
},
|
||||
},
|
||||
{
|
||||
name: 'n8n_generate_workflow',
|
||||
description: 'Generate an n8n workflow from a natural language description using AI. ' +
|
||||
@@ -694,4 +715,44 @@ export const n8nManagementTools: ToolDefinition[] = [
|
||||
openWorldHint: true,
|
||||
},
|
||||
},
|
||||
{
|
||||
name: 'n8n_audit_instance',
|
||||
description: `Security audit of n8n instance. Combines n8n's built-in audit API (credentials, database, nodes, instance, filesystem risks) with deep workflow scanning (hardcoded secrets via 50+ regex patterns, unauthenticated webhooks, error handling gaps, data retention risks). Returns actionable markdown report with remediation steps using n8n_manage_credentials and n8n_update_partial_workflow.`,
|
||||
inputSchema: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
categories: {
|
||||
type: 'array',
|
||||
items: {
|
||||
type: 'string',
|
||||
enum: ['credentials', 'database', 'nodes', 'instance', 'filesystem'],
|
||||
},
|
||||
description: 'Built-in audit categories to check (default: all 5)',
|
||||
},
|
||||
includeCustomScan: {
|
||||
type: 'boolean',
|
||||
description: 'Run deep workflow scanning for secrets, webhooks, error handling (default: true)',
|
||||
},
|
||||
daysAbandonedWorkflow: {
|
||||
type: 'number',
|
||||
description: 'Days threshold for abandoned workflow detection (default: 90)',
|
||||
},
|
||||
customChecks: {
|
||||
type: 'array',
|
||||
items: {
|
||||
type: 'string',
|
||||
enum: ['hardcoded_secrets', 'unauthenticated_webhooks', 'error_handling', 'data_retention'],
|
||||
},
|
||||
description: 'Specific custom checks to run (default: all 4)',
|
||||
},
|
||||
},
|
||||
},
|
||||
annotations: {
|
||||
title: 'Audit Instance Security',
|
||||
readOnlyHint: true,
|
||||
destructiveHint: false,
|
||||
idempotentHint: true,
|
||||
openWorldHint: true,
|
||||
},
|
||||
},
|
||||
];
|
||||
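Taken together, `n8n_manage_credentials` and `n8n_update_partial_workflow` are meant to compose into a remediation flow. The sketch below spells out the payloads such a flow would send. The `ToolCall` shape, the workflow id, the node name, and the exact `updates` path are illustrative assumptions, not part of this diff; only the tool names and argument schemas come from the definitions above.

```typescript
// Hypothetical payload sequence for replacing a hardcoded API key with a
// credential reference. Placeholders throughout; the schemas above define
// only the tool names and argument shapes.
type ToolCall = { tool: string; args: Record<string, unknown> };

export const remediationFlow: ToolCall[] = [
  // 1. Discover which fields httpHeaderAuth requires.
  { tool: 'n8n_manage_credentials', args: { action: 'getSchema', type: 'httpHeaderAuth' } },
  // 2. Create a credential holding the value extracted from the node.
  {
    tool: 'n8n_manage_credentials',
    args: {
      action: 'create',
      name: 'Acme API - Production', // descriptive name, per the bestPractices list
      type: 'httpHeaderAuth',
      data: { name: 'X-API-Key', value: '<value extracted from the node>' },
    },
  },
  // 3. Drop the hardcoded header from the node; null removes a property.
  {
    tool: 'n8n_update_partial_workflow',
    args: {
      id: 'wf_123',
      operations: [
        { type: 'updateNode', nodeName: 'HTTP Request', updates: { 'parameters.headerParameters': null } },
      ],
    },
  },
];
```

Setting a property to null to remove it follows the tool's own bestPractices; attaching the created credential to the node's credentials block is a further step this sketch leaves out.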
459  src/services/audit-report-builder.ts  Normal file
@@ -0,0 +1,459 @@
/**
 * Audit Report Builder
 *
 * Builds an actionable markdown security audit report that unifies
 * findings from both the n8n built-in audit endpoint and the custom
 * workflow security scanner. Produces a structured summary alongside
 * the markdown so callers can branch on severity counts.
 */

// ---------------------------------------------------------------------------
// Types – imported from the workflow security scanner
// ---------------------------------------------------------------------------

import type {
  AuditSeverity,
  RemediationType,
  AuditFinding,
  WorkflowSecurityReport,
} from './workflow-security-scanner';

// ---------------------------------------------------------------------------
// Public types
// ---------------------------------------------------------------------------

export interface AuditReportInput {
  builtinAudit: any; // Raw response from n8n POST /audit (object with report keys, or [] if empty)
  customReport: WorkflowSecurityReport | null;
  performance: {
    builtinAuditMs: number;
    workflowFetchMs: number;
    customScanMs: number;
    totalMs: number;
  };
  instanceUrl: string;
  warnings?: string[];
}

export interface UnifiedAuditReport {
  markdown: string;
  summary: {
    critical: number;
    high: number;
    medium: number;
    low: number;
    totalFindings: number;
    workflowsScanned: number;
  };
}

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

/** Severity sort order — most severe first. */
const SEVERITY_ORDER: Record<AuditSeverity, number> = {
  critical: 0,
  high: 1,
  medium: 2,
  low: 3,
};

/** Map remediation type to a short fix label for the table column. */
const FIX_LABEL: Record<RemediationType, string> = {
  auto_fixable: 'Auto-fix',
  user_input_needed: 'User input',
  user_action_needed: 'User action',
  review_recommended: 'Review',
};

/** Returns true if the built-in audit response contains report data. */
function isPopulatedAudit(builtinAudit: any): builtinAudit is Record<string, any> {
  if (Array.isArray(builtinAudit)) return false;
  return typeof builtinAudit === 'object' && builtinAudit !== null;
}

/**
 * Iterates over all (reportKey, section) pairs in the built-in audit response.
 * Handles both `{ sections: [...] }` and direct array formats.
 */
function forEachBuiltinSection(
  builtinAudit: Record<string, any>,
  callback: (reportKey: string, section: any) => void,
): void {
  for (const reportKey of Object.keys(builtinAudit)) {
    const report = builtinAudit[reportKey];
    const sections = Array.isArray(report) ? report : (report?.sections ?? []);
    if (!Array.isArray(sections)) continue;
    for (const section of sections) {
      callback(reportKey, section);
    }
  }
}

/** Get the location array from a section (n8n uses both "location" and "locations"). */
function getSectionLocations(section: any): any[] | null {
  const arr = section?.location ?? section?.locations;
  return Array.isArray(arr) ? arr : null;
}

/**
 * Count location items across all sections of the built-in audit response.
 */
function countBuiltinLocations(builtinAudit: any): number {
  if (!isPopulatedAudit(builtinAudit)) return 0;

  let count = 0;
  forEachBuiltinSection(builtinAudit, (_key, section) => {
    const locations = getSectionLocations(section);
    if (locations) count += locations.length;
  });
  return count;
}

/**
 * Group findings by workflow, returning a Map keyed by workflowId.
 * Each value includes workflow metadata and sorted findings.
 */
interface WorkflowGroup {
  workflowId: string;
  workflowName: string;
  workflowActive: boolean;
  findings: AuditFinding[];
  /** Worst severity in the group (lower = more severe). */
  worstSeverity: number;
}

function groupByWorkflow(findings: AuditFinding[]): WorkflowGroup[] {
  const map = new Map<string, WorkflowGroup>();

  for (const f of findings) {
    const wfId = f.location.workflowId;
    let group = map.get(wfId);
    if (!group) {
      group = {
        workflowId: wfId,
        workflowName: f.location.workflowName,
        workflowActive: f.location.workflowActive ?? false,
        findings: [],
        worstSeverity: SEVERITY_ORDER[f.severity],
      };
      map.set(wfId, group);
    }
    group.findings.push(f);
    const sev = SEVERITY_ORDER[f.severity];
    if (sev < group.worstSeverity) {
      group.worstSeverity = sev;
    }
  }

  // Sort workflows: worst severity first, then by name
  const groups = Array.from(map.values());
  groups.sort((a, b) => a.worstSeverity - b.worstSeverity || a.workflowName.localeCompare(b.workflowName));

  // Sort findings within each workflow by severity
  for (const g of groups) {
    g.findings.sort((a, b) => SEVERITY_ORDER[a.severity] - SEVERITY_ORDER[b.severity]);
  }

  return groups;
}

/**
 * Render the built-in audit section. Handles both empty (no issues) and
 * populated responses with report keys.
 */
function renderBuiltinAudit(builtinAudit: any): string {
  if (!isPopulatedAudit(builtinAudit)) {
    return 'No issues found by n8n built-in audit.';
  }

  const lines: string[] = [];
  let currentKey = '';

  forEachBuiltinSection(builtinAudit, (reportKey, section) => {
    if (reportKey !== currentKey) {
      if (currentKey) lines.push('');
      lines.push(`### ${reportKey}`);
      currentKey = reportKey;
    }

    const title = section.title || section.name || 'Unknown';
    const description = section.description || '';
    const recommendation = section.recommendation || '';

    lines.push(`- **${title}:** ${description}`);
    if (recommendation) {
      lines.push(`  - Recommendation: ${recommendation}`);
    }

    const locations = getSectionLocations(section);
    if (locations && locations.length > 0) {
      lines.push(`  - Affected: ${locations.length} items`);
    }

    // Special handling for Instance Risk Report fields
    if (reportKey === 'Instance Risk Report') {
      if (Array.isArray(section.nextVersions) && section.nextVersions.length > 0) {
        const versionNames = section.nextVersions
          .map((v: any) => (typeof v === 'string' ? v : v.name || String(v)))
          .join(', ');
        lines.push(`  - Available versions: ${versionNames}`);
      }

      if (section.settings && typeof section.settings === 'object') {
        const entries = Object.entries(section.settings);
        if (entries.length > 0) {
          lines.push('  - Security settings:');
          for (const [key, value] of entries) {
            lines.push(`    - ${key}: ${JSON.stringify(value)}`);
          }
        }
      }
    }
  });

  if (lines.length === 0) {
    return 'No issues found by n8n built-in audit.';
  }

  lines.push('');
  return lines.join('\n');
}

/**
 * Extract actionable items from the built-in audit for the playbook section.
 */
interface BuiltinActionables {
  outdatedInstance: boolean;
  communityNodeCount: number;
}

function extractBuiltinActionables(builtinAudit: any): BuiltinActionables {
  const result: BuiltinActionables = { outdatedInstance: false, communityNodeCount: 0 };
  if (!isPopulatedAudit(builtinAudit)) return result;

  forEachBuiltinSection(builtinAudit, (_key, section) => {
    const title = (section.title || section.name || '').toLowerCase();

    if (title.includes('outdated') || title.includes('update')) {
      result.outdatedInstance = true;
    }

    if (title.includes('community') || title.includes('custom node')) {
      const locations = getSectionLocations(section);
      result.communityNodeCount += locations ? locations.length : 1;
    }
  });

  return result;
}

/**
 * Render the Remediation Playbook — aggregated by finding type with the tool
 * flow described once per type.
 */
function renderRemediationPlaybook(
  findings: AuditFinding[],
  builtinAudit: any,
): string {
  const lines: string[] = [];

  // Count findings by category
  const byCat: Record<string, AuditFinding[]> = {};
  for (const f of findings) {
    if (!byCat[f.category]) byCat[f.category] = [];
    byCat[f.category].push(f);
  }

  // Unique workflow count per category
  const uniqueWorkflows = (items: AuditFinding[]): number =>
    new Set(items.map(f => f.location.workflowId)).size;

  // --- Auto-fixable by agent ---
  const autoFixCategories = ['hardcoded_secrets', 'unauthenticated_webhooks'];
  const hasAutoFix = autoFixCategories.some(cat => byCat[cat] && byCat[cat].length > 0);

  if (hasAutoFix) {
    lines.push('### Auto-fixable by agent');
    lines.push('');

    if (byCat['hardcoded_secrets']?.length) {
      const autoFixSecrets = byCat['hardcoded_secrets'].filter(f => f.remediationType === 'auto_fixable');
      if (autoFixSecrets.length > 0) {
        const wfCount = uniqueWorkflows(autoFixSecrets);
        lines.push(`**Hardcoded secrets** (${autoFixSecrets.length} across ${wfCount} workflow${wfCount !== 1 ? 's' : ''}):`);
        lines.push('Steps: `n8n_get_workflow` -> extract value -> `n8n_manage_credentials({action: "create"})` -> `n8n_update_partial_workflow({operations: [{type: "updateNode"}]})` to reference credential.');
      }
      lines.push('');
    }

    if (byCat['unauthenticated_webhooks']?.length) {
      const items = byCat['unauthenticated_webhooks'];
      const wfCount = uniqueWorkflows(items);
      lines.push(`**Unauthenticated webhooks** (${items.length} across ${wfCount} workflow${wfCount !== 1 ? 's' : ''}):`);
      lines.push('Steps: `n8n_manage_credentials({action: "create", type: "httpHeaderAuth"})` with random secret -> `n8n_update_partial_workflow` to set `authentication: "headerAuth"` and assign credential.');
      lines.push('');
    }
  }

  // --- Requires review ---
  const reviewCategories = ['error_handling'];
  const piiFindings = findings.filter(f => f.category === 'hardcoded_secrets' && f.remediationType === 'review_recommended');
  const hasReview = reviewCategories.some(cat => byCat[cat] && byCat[cat].length > 0) || piiFindings.length > 0;

  if (hasReview) {
    lines.push('### Requires review');

    if (byCat['error_handling']?.length) {
      const wfCount = uniqueWorkflows(byCat['error_handling']);
      lines.push(`**Error handling gaps** (${wfCount} workflow${wfCount !== 1 ? 's' : ''}): Add Error Trigger nodes or set continueOnFail on critical nodes.`);
    }

    if (piiFindings.length > 0) {
      lines.push(`**PII in parameters** (${piiFindings.length} finding${piiFindings.length !== 1 ? 's' : ''}): Review whether hardcoded PII (emails, phones) is necessary or should use expressions.`);
    }

    lines.push('');
  }

  // --- Requires your action ---
  const retentionFindings = byCat['data_retention'] || [];
  const builtinActions = extractBuiltinActionables(builtinAudit);
  const hasUserAction = retentionFindings.length > 0 || builtinActions.outdatedInstance || builtinActions.communityNodeCount > 0;

  if (hasUserAction) {
    lines.push('### Requires your action');

    if (retentionFindings.length > 0) {
      const wfCount = uniqueWorkflows(retentionFindings);
      lines.push(`**Data retention** (${wfCount} workflow${wfCount !== 1 ? 's' : ''}): Configure execution data pruning in n8n Settings -> Executions.`);
    }

    if (builtinActions.outdatedInstance) {
      lines.push('**Outdated instance**: Update n8n to the latest version.');
    }

    if (builtinActions.communityNodeCount > 0) {
      lines.push(`**Community nodes** (${builtinActions.communityNodeCount}): Review installed community packages.`);
    }

    lines.push('');
  }

  return lines.join('\n');
}

// ---------------------------------------------------------------------------
// Main export
// ---------------------------------------------------------------------------

export function buildAuditReport(input: AuditReportInput): UnifiedAuditReport {
  const { builtinAudit, customReport, performance, instanceUrl, warnings } = input;

  // --- Compute summary counts ---
  const customSummary = customReport?.summary ?? { critical: 0, high: 0, medium: 0, low: 0, total: 0 };
  const builtinCount = countBuiltinLocations(builtinAudit);

  const summary: UnifiedAuditReport['summary'] = {
    critical: customSummary.critical,
    high: customSummary.high,
    medium: customSummary.medium,
    low: customSummary.low + builtinCount, // built-in issues counted as low by default
    totalFindings: customSummary.total + builtinCount,
    workflowsScanned: customReport?.workflowsScanned ?? 0,
  };

  // --- Build markdown ---
  const md: string[] = [];

  // Header
  md.push('# n8n Security Audit Report');
  md.push(`Generated: ${new Date().toISOString()} | Instance: ${instanceUrl}`);
  md.push('');

  // Summary table
  md.push('## Summary');
  md.push('| Severity | Count |');
  md.push('|----------|-------|');
  md.push(`| Critical | ${summary.critical} |`);
  md.push(`| High | ${summary.high} |`);
  md.push(`| Medium | ${summary.medium} |`);
  md.push(`| Low | ${summary.low} |`);
  md.push(`| **Total** | **${summary.totalFindings}** |`);
  md.push('');
  md.push(
    `Workflows scanned: ${summary.workflowsScanned} | Scan duration: ${(performance.totalMs / 1000).toFixed(1)}s`,
  );
  md.push('');

  // Warnings
  if (warnings && warnings.length > 0) {
    for (const w of warnings) {
      md.push(`- ${w}`);
    }
    md.push('');
  }

  md.push('---');
  md.push('');

  // --- Findings by Workflow ---
  if (customReport && customReport.findings.length > 0) {
    md.push('## Findings by Workflow');
    md.push('');

    const workflowGroups = groupByWorkflow(customReport.findings);

    for (const group of workflowGroups) {
      const activeTag = group.workflowActive ? ' [ACTIVE]' : '';
      md.push(`### "${group.workflowName}" (id: ${group.workflowId})${activeTag} — ${group.findings.length} finding${group.findings.length !== 1 ? 's' : ''}`);
      md.push('');
      md.push('| ID | Severity | Finding | Node | Fix |');
      md.push('|----|----------|---------|------|-----|');

      for (const f of group.findings) {
        const node = f.location.nodeName || '\u2014';
        const fix = FIX_LABEL[f.remediationType];
        const sevLabel = f.severity.charAt(0).toUpperCase() + f.severity.slice(1);
        md.push(`| ${f.id} | ${sevLabel} | ${f.title} | ${node} | ${fix} |`);
      }

      md.push('');
    }

    md.push('---');
    md.push('');
  }

  // --- Built-in audit ---
  md.push('## n8n Built-in Audit Results');
  md.push('');
  md.push(renderBuiltinAudit(builtinAudit));
  md.push('');
  md.push('---');
  md.push('');

  // --- Remediation Playbook ---
  md.push('## Remediation Playbook');
  md.push('');
  const playbook = renderRemediationPlaybook(customReport?.findings ?? [], builtinAudit);
  if (playbook.trim().length > 0) {
    md.push(playbook);
  } else {
    md.push('No remediation actions needed.');
    md.push('');
  }
  md.push('---');
  md.push('');

  // Performance footer
  md.push(
    `Scan performance: built-in ${performance.builtinAuditMs}ms | fetch ${performance.workflowFetchMs}ms | custom ${performance.customScanMs}ms`,
  );

  return {
    markdown: md.join('\n'),
    summary,
  };
}
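The sort contract in `groupByWorkflow` (most severe workflow first, ties broken by name) can be checked in isolation. The snippet below is a reduced re-implementation for illustration only: the `Group` shape is simplified, but the comparator and the `SEVERITY_ORDER` values match the code above.

```typescript
type Severity = 'critical' | 'high' | 'medium' | 'low';

// Same values as SEVERITY_ORDER in audit-report-builder.ts: lower = more severe.
const ORDER: Record<Severity, number> = { critical: 0, high: 1, medium: 2, low: 3 };

interface Group { name: string; worst: Severity }

// Mirrors the comparator used in groupByWorkflow: most severe first, ties by name.
function sortGroups(groups: Group[]): Group[] {
  return [...groups].sort(
    (a, b) => ORDER[a.worst] - ORDER[b.worst] || a.name.localeCompare(b.name),
  );
}

const sorted = sortGroups([
  { name: 'Billing sync', worst: 'low' },
  { name: 'Webhook intake', worst: 'critical' },
  { name: 'Archive job', worst: 'critical' },
]);
// Critical groups come first, alphabetically: Archive job, Webhook intake, then Billing sync.
```

The `|| localeCompare` fallback only fires when severities tie, which is what keeps the report ordering deterministic across runs.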
297  src/services/credential-scanner.ts  Normal file
@@ -0,0 +1,297 @@
|
||||
/**
 * Regex-based credential and PII scanner for n8n workflows.
 *
 * TypeScript port of pii_prescreen.py. Catches API keys, secrets, and PII
 * with deterministic patterns. Covers 50+ service-specific key prefixes
 * plus generic PII patterns (email, phone, credit card).
 *
 * SECURITY: Raw secret values are never stored in detection results.
 * The maskSecret() function is called at scan time so only masked
 * snippets appear in the output.
 */

// ---------------------------------------------------------------------------
// Types
// ---------------------------------------------------------------------------

export interface SecretPattern {
  regex: RegExp;
  label: string;
  category: string;
  severity: 'critical' | 'high' | 'medium';
}

export interface ScanDetection {
  label: string;
  category: string;
  severity: 'critical' | 'high' | 'medium';
  location: {
    workflowId: string;
    workflowName: string;
    nodeName?: string;
    nodeType?: string;
  };
  maskedSnippet?: string;
}

// ---------------------------------------------------------------------------
// Skip fields - structural / template fields that should not be scanned
// ---------------------------------------------------------------------------

const SKIP_FIELDS = new Set<string>([
  'expression',
  'id',
  'typeVersion',
  'position',
  'credentials',
]);

// ---------------------------------------------------------------------------
// Secret patterns (instant reject)
// ---------------------------------------------------------------------------

export const SECRET_PATTERNS: SecretPattern[] = [
  // ── AI / ML ──────────────────────────────────────────────────────────────
  { regex: /sk-(?:proj-)?[A-Za-z0-9]{20,}/, label: 'openai_key', category: 'AI/ML', severity: 'critical' },
  { regex: /sk-ant-[A-Za-z0-9_-]{20,}/, label: 'anthropic_key', category: 'AI/ML', severity: 'critical' },
  { regex: /gsk_[a-zA-Z0-9]{48,}/, label: 'groq_key', category: 'AI/ML', severity: 'critical' },
  { regex: /r8_[a-zA-Z0-9]{37}/, label: 'replicate_key', category: 'AI/ML', severity: 'critical' },
  { regex: /hf_[a-zA-Z]{34}/, label: 'huggingface_key', category: 'AI/ML', severity: 'critical' },
  { regex: /pplx-[a-zA-Z0-9]{48}/, label: 'perplexity_key', category: 'AI/ML', severity: 'critical' },

  // ── Cloud / DevOps ───────────────────────────────────────────────────────
  { regex: /AKIA[A-Z0-9]{16}/, label: 'aws_key', category: 'Cloud/DevOps', severity: 'critical' },
  { regex: /AIza[A-Za-z0-9_-]{35}/, label: 'google_api_key', category: 'Cloud/DevOps', severity: 'critical' },
  { regex: /dop_v1_[a-f0-9]{64}/, label: 'digitalocean_pat', category: 'Cloud/DevOps', severity: 'critical' },
  { regex: /do[or]_v1_[a-f0-9]{64}/, label: 'digitalocean_token', category: 'Cloud/DevOps', severity: 'critical' },
  { regex: /v(?:cp|ci|ck)_[a-zA-Z0-9]{24,}/, label: 'vercel_token', category: 'Cloud/DevOps', severity: 'critical' },
  { regex: /nfp_[a-zA-Z0-9]{40,}/, label: 'netlify_pat', category: 'Cloud/DevOps', severity: 'critical' },
  { regex: /HRKU-AA[0-9a-zA-Z_-]{58}/, label: 'heroku_key', category: 'Cloud/DevOps', severity: 'critical' },
  { regex: /glpat-[\w-]{20}/, label: 'gitlab_pat', category: 'Cloud/DevOps', severity: 'critical' },
  { regex: /npm_[a-z0-9]{36}/, label: 'npm_token', category: 'Cloud/DevOps', severity: 'critical' },

  // ── GitHub ───────────────────────────────────────────────────────────────
  { regex: /ghp_[A-Za-z0-9]{36,}/, label: 'github_pat', category: 'GitHub', severity: 'critical' },
  { regex: /gho_[A-Za-z0-9]{36,}/, label: 'github_oauth', category: 'GitHub', severity: 'critical' },
  { regex: /ghs_[A-Za-z0-9]{36,}/, label: 'github_server', category: 'GitHub', severity: 'critical' },
  { regex: /ghr_[A-Za-z0-9]{36,}/, label: 'github_refresh', category: 'GitHub', severity: 'critical' },

  // ── Auth tokens ──────────────────────────────────────────────────────────
  { regex: /eyJ[A-Za-z0-9_-]{10,}\.eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}/, label: 'jwt_token', category: 'Auth tokens', severity: 'critical' },
  { regex: /sbp_[a-f0-9]{40,}/, label: 'supabase_secret', category: 'Auth tokens', severity: 'critical' },

  // ── Communication ────────────────────────────────────────────────────────
  { regex: /xox[bps]-[0-9]{10,}-[A-Za-z0-9-]+/, label: 'slack_token', category: 'Communication', severity: 'critical' },
  { regex: /\b\d{8,10}:A[a-zA-Z0-9_-]{34}\b/, label: 'telegram_bot', category: 'Communication', severity: 'critical' },

  // ── Payment ──────────────────────────────────────────────────────────────
  { regex: /[sr]k_(?:live|test)_[A-Za-z0-9]{20,}/, label: 'stripe_key', category: 'Payment', severity: 'critical' },
  { regex: /sq0(?:atp|csp)-[0-9A-Za-z_-]{22,}/, label: 'square_key', category: 'Payment', severity: 'critical' },
  { regex: /rzp_(?:live|test)_[a-zA-Z0-9]{14,}/, label: 'razorpay_key', category: 'Payment', severity: 'critical' },

  // ── Email / Marketing ────────────────────────────────────────────────────
  { regex: /SG\.[A-Za-z0-9_-]{22}\.[A-Za-z0-9_-]{43}/, label: 'sendgrid_key', category: 'Email/Marketing', severity: 'critical' },
  { regex: /key-[a-f0-9]{32}/, label: 'mailgun_key', category: 'Email/Marketing', severity: 'high' },
  { regex: /xkeysib-[a-f0-9]{64}-[a-zA-Z0-9]{16}/, label: 'brevo_key', category: 'Email/Marketing', severity: 'critical' },
  { regex: /(?<!\w)re_[a-zA-Z0-9_]{32,}/, label: 'resend_key', category: 'Email/Marketing', severity: 'critical' },
  { regex: /[a-f0-9]{32}-us\d{1,2}\b/, label: 'mailchimp_key', category: 'Email/Marketing', severity: 'critical' },

  // ── E-commerce ───────────────────────────────────────────────────────────
  { regex: /shp(?:at|ca|pa|ss)_[a-fA-F0-9]{32,}/, label: 'shopify_token', category: 'E-commerce', severity: 'critical' },

  // ── Productivity / CRM ───────────────────────────────────────────────────
  { regex: /ntn_[0-9]{11}[A-Za-z0-9]{35}/, label: 'notion_token', category: 'Productivity/CRM', severity: 'critical' },
  { regex: /secret_[a-zA-Z0-9]{43}\b/, label: 'notion_legacy', category: 'Productivity/CRM', severity: 'critical' },
  { regex: /lin_api_[a-zA-Z0-9]{40}/, label: 'linear_key', category: 'Productivity/CRM', severity: 'critical' },
  { regex: /CFPAT-[a-zA-Z0-9_-]{43,}/, label: 'contentful_pat', category: 'Productivity/CRM', severity: 'critical' },
  { regex: /ATATT[a-zA-Z0-9_-]{50,}/, label: 'atlassian_token', category: 'Productivity/CRM', severity: 'critical' },
  { regex: /pat-(?:na1|eu1)-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}/, label: 'hubspot_pat', category: 'Productivity/CRM', severity: 'critical' },

  // ── Monitoring / Analytics ───────────────────────────────────────────────
  { regex: /sntr[ysu]_[a-zA-Z0-9+/=]{40,}/, label: 'sentry_token', category: 'Monitoring/Analytics', severity: 'critical' },
  { regex: /ph[cx]_[a-zA-Z0-9]{32,}/, label: 'posthog_key', category: 'Monitoring/Analytics', severity: 'critical' },
  { regex: /gl(?:c|sa)_[A-Za-z0-9+/=_]{32,}/, label: 'grafana_key', category: 'Monitoring/Analytics', severity: 'critical' },
  { regex: /NRAK-[A-Z0-9]{27}/, label: 'newrelic_key', category: 'Monitoring/Analytics', severity: 'critical' },

  // ── Database ─────────────────────────────────────────────────────────────
  { regex: /pscale_(?:tkn|pw|oauth)_[a-zA-Z0-9=._-]{32,}/, label: 'planetscale_key', category: 'Database', severity: 'critical' },
  { regex: /dapi[a-f0-9]{32}/, label: 'databricks_key', category: 'Database', severity: 'critical' },

  // ── Other services ───────────────────────────────────────────────────────
  { regex: /SK[a-f0-9]{32}/, label: 'twilio_key', category: 'Other', severity: 'critical' },
  { regex: /\bpat[A-Za-z0-9]{10,}\.[A-Za-z0-9]{20,}/, label: 'airtable_pat', category: 'Other', severity: 'critical' },
  { regex: /apify_api_[A-Za-z0-9]{20,}/, label: 'apify_key', category: 'Other', severity: 'critical' },
  { regex: /figd_[a-zA-Z0-9_-]{40,}/, label: 'figma_pat', category: 'Other', severity: 'critical' },
  { regex: /PMAK-[a-f0-9]{24}-[a-f0-9]{34}/, label: 'postman_key', category: 'Other', severity: 'critical' },
  { regex: /dp\.(?:pt|st|sa)\.[a-zA-Z0-9._-]{40,}/, label: 'doppler_token', category: 'Other', severity: 'critical' },

  // ── Generic patterns (keep last - catch-all) ────────────────────────────
  { regex: /-----BEGIN (?:RSA |EC |DSA )?PRIVATE KEY-----/, label: 'private_key', category: 'Generic', severity: 'critical' },
  { regex: /Bearer\s+[A-Za-z0-9._-]{32,}/i, label: 'bearer_token', category: 'Generic', severity: 'high' },
  { regex: /(?:https?|postgres|mysql|mongodb|redis|amqp):\/\/[^:"\s]+:[^@"\s]+@[^\s"]+/, label: 'url_with_auth', category: 'Generic', severity: 'critical' },
];
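As a quick illustration of how these entries behave, one pattern can be tested against a fabricated key (the value below is made up and matches only by construction):

```typescript
// One entry from the table above, exercised standalone. The key is fabricated:
// 'AKIA' followed by exactly 16 uppercase alphanumerics matches the AWS pattern.
const awsKey = { regex: /AKIA[A-Z0-9]{16}/, label: 'aws_key' };

console.log(awsKey.regex.test('AKIAABCDEFGHIJKLMNOP')); // true
console.log(awsKey.regex.test('not-a-key'));            // false
```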

// ---------------------------------------------------------------------------
// PII patterns (instant reject)
// ---------------------------------------------------------------------------

export const PII_PATTERNS: SecretPattern[] = [
  // Email addresses (but not template expressions like {{$json.email}})
  { regex: /(?<!\{)\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b(?!\})/, label: 'email', category: 'PII', severity: 'medium' },
  // Phone numbers (international formats)
  { regex: /(?<!\d)\+?\d{1,3}[-.\s]?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}(?!\d)/, label: 'phone', category: 'PII', severity: 'medium' },
  // Credit card numbers
  { regex: /\b\d{4}[-\s]?\d{4}[-\s]?\d{4}[-\s]?\d{4}\b/, label: 'credit_card', category: 'PII', severity: 'high' },
];

// Combined patterns for internal use
const ALL_PATTERNS: SecretPattern[] = [...SECRET_PATTERNS, ...PII_PATTERNS];

// ---------------------------------------------------------------------------
// maskSecret - shows first 6 + last 4 chars, masks the rest with ****
// ---------------------------------------------------------------------------

/**
 * Masks a secret value, showing only the first 6 and last 4 characters.
 * Values shorter than 14 characters get fully masked to avoid leaking
 * most of the original content.
 */
export function maskSecret(value: string): string {
  if (value.length < 14) {
    return '****';
  }
  const head = value.slice(0, 6);
  const tail = value.slice(-4);
  return `${head}****${tail}`;
}
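The masking rules can be checked standalone; the function is re-declared here so the snippet runs on its own:

```typescript
// Same semantics as the module's maskSecret, re-declared for a standalone check.
function maskSecret(value: string): string {
  if (value.length < 14) return '****';
  return `${value.slice(0, 6)}****${value.slice(-4)}`;
}

console.log(maskSecret('sk-proj-abcdef1234567890')); // "sk-pro****7890"
console.log(maskSecret('short'));                    // "****"
```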

// ---------------------------------------------------------------------------
// collectStrings - recursively collect scannable string values
// ---------------------------------------------------------------------------

function collectStrings(
  obj: unknown,
  parts: string[],
  depth: number = 0,
): void {
  if (depth > 10) {
    return;
  }

  if (typeof obj === 'string') {
    // Skip pure expression strings like "={{ $json.email }}" or "{{ ... }}"
    if (obj.startsWith('=') || obj.startsWith('{{')) {
      return;
    }
    // Skip very short strings (booleans, ops like "get")
    if (obj.length > 8) {
      parts.push(obj);
    }
    return;
  }

  if (Array.isArray(obj)) {
    for (const item of obj) {
      collectStrings(item, parts, depth + 1);
    }
    return;
  }

  if (obj !== null && typeof obj === 'object') {
    for (const [key, val] of Object.entries(obj as Record<string, unknown>)) {
      if (SKIP_FIELDS.has(key)) {
        continue;
      }
      collectStrings(val, parts, depth + 1);
    }
  }
}

// ---------------------------------------------------------------------------
// Workflow scanning input type (loose, to accept various workflow shapes)
// ---------------------------------------------------------------------------

interface ScanWorkflowInput {
  id?: string;
  name: string;
  nodes: Array<{
    id?: string;
    name: string;
    type: string;
    parameters?: Record<string, unknown>;
    notes?: string;
    [key: string]: unknown;
  }>;
  settings?: Record<string, unknown>;
  staticData?: Record<string, unknown>;
  pinData?: Record<string, unknown>;
}

// ---------------------------------------------------------------------------
// scanText - match collected strings against all patterns
// ---------------------------------------------------------------------------

function scanText(
  parts: string[],
  location: ScanDetection['location'],
  detections: ScanDetection[],
): void {
  const text = parts.join('\n');
  for (const pattern of ALL_PATTERNS) {
    const match = pattern.regex.exec(text);
    if (match) {
      detections.push({
        label: pattern.label,
        category: pattern.category,
        severity: pattern.severity,
        location,
        maskedSnippet: maskSecret(match[0]),
      });
    }
  }
}

// ---------------------------------------------------------------------------
// scanWorkflow - scan a workflow for secrets and PII
// ---------------------------------------------------------------------------

/**
 * Scans an n8n workflow for embedded secrets and PII.
 *
 * Scans per-node so detections include the specific node name/type.
 * Top-level fields (pinData, staticData, settings) are attributed to
 * the workflow level (no specific node).
 *
 * SECURITY: Raw secret values are never stored in the returned detections.
 * Only masked snippets (via maskSecret()) appear in the output.
 */
export function scanWorkflow(workflow: ScanWorkflowInput): ScanDetection[] {
  const detections: ScanDetection[] = [];
  const baseLocation = {
    workflowId: workflow.id ?? '',
    workflowName: workflow.name ?? '',
  };

  // Scan each node individually for precise location reporting
  for (const node of workflow.nodes ?? []) {
    const parts: string[] = [];

    if (node.name && node.name.length > 8) parts.push(node.name);
    if (node.notes && node.notes.length > 8) parts.push(node.notes);
    if (node.parameters) collectStrings(node.parameters, parts);

    scanText(parts, { ...baseLocation, nodeName: node.name, nodeType: node.type }, detections);
  }

  // Scan top-level fields: pinData, staticData, settings
  for (const key of ['pinData', 'staticData', 'settings'] as const) {
    const data = (workflow as unknown as Record<string, unknown>)[key];
    if (data != null && typeof data === 'object') {
      const parts: string[] = [];
      collectStrings(data, parts);
      scanText(parts, { ...baseLocation }, detections);
    }
  }

  return detections;
}
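The per-node flow above boils down to "match a pattern, then store only the masked snippet". An end-to-end sketch of that step, self-contained with one inlined pattern and a fabricated key:

```typescript
// End-to-end sketch of one detection: pattern match on a fake node parameter,
// then masking. The Stripe-style key below is fabricated; the real scanWorkflow
// runs every pattern against every node's collected strings.
const stripePattern = /[sr]k_(?:live|test)_[A-Za-z0-9]{20,}/;

function maskSecret(value: string): string {
  return value.length < 14 ? '****' : `${value.slice(0, 6)}****${value.slice(-4)}`;
}

const paramValue = 'sk_test_FAKE1234567890abcdefg';
const match = paramValue.match(stripePattern);
console.log(match && maskSecret(match[0])); // masked snippet, never the raw key
```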
@@ -77,9 +77,11 @@ export class N8nApiClient {
     // Request interceptor for logging
     this.client.interceptors.request.use(
       (config: InternalAxiosRequestConfig) => {
+        // Redact request body for credential endpoints to prevent secret leakage
+        const isSensitive = config.url?.includes('/credentials') && config.method !== 'get';
         logger.debug(`n8n API Request: ${config.method?.toUpperCase()} ${config.url}`, {
           params: config.params,
-          data: config.data,
+          data: isSensitive ? '[REDACTED]' : config.data,
         });
         return config;
       },
@@ -133,13 +135,7 @@
    * Internal method to fetch version once
    */
   private async fetchVersionOnce(): Promise<N8nVersionInfo | null> {
-    // Check if already cached globally
-    let version = getCachedVersion(this.baseUrl);
-    if (!version) {
-      // Fetch from server
-      version = await fetchN8nVersion(this.baseUrl);
-    }
-    return version;
+    return getCachedVersion(this.baseUrl) ?? await fetchN8nVersion(this.baseUrl);
   }
  /**
@@ -309,6 +305,40 @@
    }
  }

  // Audit
  async generateAudit(options?: { categories?: string[]; daysAbandonedWorkflow?: number }): Promise<any> {
    try {
      const additionalOptions: Record<string, unknown> = {};
      if (options?.categories) additionalOptions.categories = options.categories;
      if (options?.daysAbandonedWorkflow !== undefined) additionalOptions.daysAbandonedWorkflow = options.daysAbandonedWorkflow;

      const body = Object.keys(additionalOptions).length > 0 ? { additionalOptions } : {};
      const response = await this.client.post('/audit', body);
      return response.data;
    } catch (error) {
      throw handleN8nApiError(error);
    }
  }

  // Fetch all workflows with pagination (for audit scanning)
  async listAllWorkflows(): Promise<Workflow[]> {
    const allWorkflows: Workflow[] = [];
    let cursor: string | undefined;
    const seenCursors = new Set<string>();
    const PAGE_SIZE = 100;
    const MAX_PAGES = 50; // Safety limit: 5000 workflows max

    for (let page = 0; page < MAX_PAGES; page++) {
      const params: WorkflowListParams = { limit: PAGE_SIZE, cursor };
      const response = await this.listWorkflows(params);
      allWorkflows.push(...response.data);
      if (!response.nextCursor || seenCursors.has(response.nextCursor)) break;
      seenCursors.add(response.nextCursor);
      cursor = response.nextCursor;
    }
    return allWorkflows;
  }
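The cursor loop above terminates on a missing cursor, a repeated cursor (defense against a server returning the same page forever), or the page cap. A standalone sketch of that loop against two fake in-memory pages:

```typescript
// Standalone sketch of the listAllWorkflows pagination loop, using fake pages
// instead of API calls. Names and data here are invented for the example.
type Page = { data: number[]; nextCursor?: string };
const pages: Record<string, Page> = {
  start: { data: [1, 2], nextCursor: 'c1' },
  c1: { data: [3], nextCursor: undefined },
};

const all: number[] = [];
let cursor = 'start';
const seen = new Set<string>();
for (let i = 0; i < 50; i++) {
  const page = pages[cursor];
  all.push(...page.data);
  // Stop on end-of-pages or a cursor we've already followed (loop guard)
  if (!page.nextCursor || seen.has(page.nextCursor)) break;
  seen.add(page.nextCursor);
  cursor = page.nextCursor;
}
console.log(all); // [1, 2, 3]
```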
  // Execution Management
  async getExecution(id: string, includeData = false): Promise<Execution> {
    try {
@@ -461,6 +491,15 @@
    }
  }

  async getCredentialSchema(typeName: string): Promise<any> {
    try {
      const response = await this.client.get(`/credentials/schema/${typeName}`);
      return response.data;
    } catch (error) {
      throw handleN8nApiError(error);
    }
  }

  // Tag Management
  /**
   * Lists tags from n8n instance.
@@ -29,7 +29,8 @@ import {
   DeactivateWorkflowOperation,
   CleanStaleConnectionsOperation,
   ReplaceConnectionsOperation,
-  TransferWorkflowOperation
+  TransferWorkflowOperation,
+  PatchNodeFieldOperation
 } from '../types/workflow-diff';
 import { Workflow, WorkflowNode, WorkflowConnection } from '../types/n8n-api';
 import { Logger } from '../utils/logger';
@@ -39,6 +40,55 @@ import { isActivatableTrigger } from '../utils/node-type-utils';

const logger = new Logger({ prefix: '[WorkflowDiffEngine]' });

// Safety limits for patchNodeField operations
const PATCH_LIMITS = {
  MAX_PATCHES: 50, // Max patches per operation
  MAX_REGEX_LENGTH: 500, // Max regex pattern length (chars)
  MAX_FIELD_SIZE_REGEX: 512 * 1024, // Max field size for regex operations (512KB)
};

// Keys that must never appear in property paths (prototype pollution prevention)
const DANGEROUS_PATH_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

/**
 * Check if a regex pattern contains constructs known to cause catastrophic backtracking.
 * Detects nested quantifiers like (a+)+, (a*)+, (a+)*, (a|b+)+ etc.
 */
function isUnsafeRegex(pattern: string): boolean {
  // Detect nested quantifiers: a quantifier applied to a group that itself contains a quantifier
  // Examples: (a+)+, (a+)*, (.*)+, (\w+)*, (a|b+)+
  // This catches the most common ReDoS patterns
  const nestedQuantifier = /\([^)]*[+*][^)]*\)[+*{]/;
  if (nestedQuantifier.test(pattern)) return true;

  // Detect overlapping alternations with quantifiers: (a|a)+, (\w|\d)+
  const overlappingAlternation = /\([^)]*\|[^)]*\)[+*{]/;
  // Only flag if alternation branches share characters (heuristic: both contain \w, ., or same literal)
  if (overlappingAlternation.test(pattern)) {
    const match = pattern.match(/\(([^)]*)\|([^)]*)\)[+*{]/);
    if (match) {
      const [, left, right] = match;
      // Flag if both branches use broad character classes
      const broadClasses = ['.', '\\w', '\\d', '\\s', '\\S', '\\W', '\\D', '[^'];
      const leftHasBroad = broadClasses.some(c => left.includes(c));
      const rightHasBroad = broadClasses.some(c => right.includes(c));
      if (leftHasBroad && rightHasBroad) return true;
    }
  }

  return false;
}

function countOccurrences(str: string, search: string): number {
  let count = 0;
  let pos = 0;
  while ((pos = str.indexOf(search, pos)) !== -1) {
    count++;
    pos += search.length;
  }
  return count;
}
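The nested-quantifier heuristic above can be demonstrated directly; the classic ReDoS shape `(a+)+` trips it, while a flat pattern does not:

```typescript
// The nested-quantifier heuristic from isUnsafeRegex above, tested standalone:
// a quantified group that itself contains a quantifier is flagged as unsafe.
const nestedQuantifier = /\([^)]*[+*][^)]*\)[+*{]/;

console.log(nestedQuantifier.test('(a+)+')); // true  (classic catastrophic-backtracking shape)
console.log(nestedQuantifier.test('a+b*'));  // false (no quantified group)
```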

/**
 * Not safe for concurrent use — create a new instance per request.
 * Instance state is reset at the start of each applyDiff() call.
@@ -79,7 +129,7 @@
     const workflowCopy = JSON.parse(JSON.stringify(workflow));

     // Group operations by type for two-pass processing
-    const nodeOperationTypes = ['addNode', 'removeNode', 'updateNode', 'moveNode', 'enableNode', 'disableNode'];
+    const nodeOperationTypes = ['addNode', 'removeNode', 'updateNode', 'patchNodeField', 'moveNode', 'enableNode', 'disableNode'];
     const nodeOperations: Array<{ operation: WorkflowDiffOperation; index: number }> = [];
     const otherOperations: Array<{ operation: WorkflowDiffOperation; index: number }> = [];
@@ -296,6 +346,8 @@
        return this.validateRemoveNode(workflow, operation);
      case 'updateNode':
        return this.validateUpdateNode(workflow, operation);
      case 'patchNodeField':
        return this.validatePatchNodeField(workflow, operation as PatchNodeFieldOperation);
      case 'moveNode':
        return this.validateMoveNode(workflow, operation);
      case 'enableNode':
@@ -341,6 +393,9 @@
      case 'updateNode':
        this.applyUpdateNode(workflow, operation);
        break;
      case 'patchNodeField':
        this.applyPatchNodeField(workflow, operation as PatchNodeFieldOperation);
        break;
      case 'moveNode':
        this.applyMoveNode(workflow, operation);
        break;
@@ -498,6 +553,77 @@
    return null;
  }
  private validatePatchNodeField(workflow: Workflow, operation: PatchNodeFieldOperation): string | null {
    if (!operation.nodeId && !operation.nodeName) {
      return `patchNodeField requires either "nodeId" or "nodeName"`;
    }

    if (!operation.fieldPath || typeof operation.fieldPath !== 'string') {
      return `patchNodeField requires a "fieldPath" string (e.g., "parameters.jsCode")`;
    }

    // Prototype pollution protection
    const pathSegments = operation.fieldPath.split('.');
    if (pathSegments.some(k => DANGEROUS_PATH_KEYS.has(k))) {
      return `patchNodeField: fieldPath "${operation.fieldPath}" contains a forbidden key (__proto__, constructor, or prototype)`;
    }

    if (!Array.isArray(operation.patches) || operation.patches.length === 0) {
      return `patchNodeField requires a non-empty "patches" array of {find, replace} objects`;
    }

    // Resource limit: max patches per operation
    if (operation.patches.length > PATCH_LIMITS.MAX_PATCHES) {
      return `patchNodeField: too many patches (${operation.patches.length}). Maximum is ${PATCH_LIMITS.MAX_PATCHES} per operation. Split into multiple operations if needed.`;
    }

    for (let i = 0; i < operation.patches.length; i++) {
      const patch = operation.patches[i];
      if (!patch || typeof patch.find !== 'string' || typeof patch.replace !== 'string') {
        return `Invalid patch entry at index ${i}: each entry must have "find" (string) and "replace" (string)`;
      }
      if (patch.find.length === 0) {
        return `Invalid patch entry at index ${i}: "find" must not be empty`;
      }
      if (patch.regex) {
        // Resource limit: max regex pattern length
        if (patch.find.length > PATCH_LIMITS.MAX_REGEX_LENGTH) {
          return `Regex pattern at patch index ${i} is too long (${patch.find.length} chars). Maximum is ${PATCH_LIMITS.MAX_REGEX_LENGTH} characters.`;
        }
        try {
          new RegExp(patch.find);
        } catch (e) {
          return `Invalid regex pattern at patch index ${i}: ${e instanceof Error ? e.message : 'invalid regex'}`;
        }
        // ReDoS protection: reject patterns with nested quantifiers
        if (isUnsafeRegex(patch.find)) {
          return `Potentially unsafe regex pattern at patch index ${i}: nested quantifiers or overlapping alternations can cause excessive backtracking. Simplify the pattern or use literal matching (regex: false).`;
        }
      }
    }

    const node = this.findNode(workflow, operation.nodeId, operation.nodeName);
    if (!node) {
      return this.formatNodeNotFoundError(workflow, operation.nodeId || operation.nodeName || '', 'patchNodeField');
    }

    const currentValue = this.getNestedProperty(node, operation.fieldPath);
    if (currentValue === undefined) {
      return `Cannot apply patchNodeField to "${operation.fieldPath}": property does not exist on node "${node.name}"`;
    }
    if (typeof currentValue !== 'string') {
      return `Cannot apply patchNodeField to "${operation.fieldPath}": current value is ${typeof currentValue}, expected string`;
    }

    // Resource limit: cap field size for regex operations
    const hasRegex = operation.patches.some(p => p.regex);
    if (hasRegex && typeof currentValue === 'string' && currentValue.length > PATCH_LIMITS.MAX_FIELD_SIZE_REGEX) {
      return `Field "${operation.fieldPath}" is too large for regex operations (${Math.round(currentValue.length / 1024)}KB). Maximum is ${PATCH_LIMITS.MAX_FIELD_SIZE_REGEX / 1024}KB. Use literal matching (regex: false) for large fields.`;
    }

    return null;
  }
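A hypothetical operation object that passes these checks (field names follow the validation above; the node name and code strings are invented):

```typescript
// Hypothetical patchNodeField operation satisfying the validation rules:
// a target node, a string fieldPath with no forbidden keys, and a non-empty
// patches array within the 50-entry limit, using literal (non-regex) matching.
const op = {
  type: 'patchNodeField',
  nodeName: 'Code',
  fieldPath: 'parameters.jsCode',
  patches: [
    { find: 'console.log', replace: 'logger.info', regex: false, replaceAll: true },
  ],
};
console.log(op.patches.length <= 50); // true, within MAX_PATCHES
```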
  private validateMoveNode(workflow: Workflow, operation: MoveNodeOperation): string | null {
    const node = this.findNode(workflow, operation.nodeId, operation.nodeName);
    if (!node) {
@@ -775,10 +901,74 @@
      Object.assign(node, sanitized);
    }
  private applyPatchNodeField(workflow: Workflow, operation: PatchNodeFieldOperation): void {
    const node = this.findNode(workflow, operation.nodeId, operation.nodeName);
    if (!node) return;

    this.modifiedNodeIds.add(node.id);

    let current = this.getNestedProperty(node, operation.fieldPath) as string;

    for (let i = 0; i < operation.patches.length; i++) {
      const patch = operation.patches[i];

      if (patch.regex) {
        const globalRegex = new RegExp(patch.find, 'g');
        const matches = current.match(globalRegex);

        if (!matches || matches.length === 0) {
          throw new Error(
            `patchNodeField: regex pattern "${patch.find}" not found in "${operation.fieldPath}" (patch index ${i}). ` +
            `Use n8n_get_workflow to inspect the current value.`
          );
        }

        if (matches.length > 1 && !patch.replaceAll) {
          throw new Error(
            `patchNodeField: regex pattern "${patch.find}" matches ${matches.length} times in "${operation.fieldPath}" (patch index ${i}). ` +
            `Set "replaceAll": true to replace all occurrences, or refine the pattern to match exactly once.`
          );
        }

        const regex = patch.replaceAll ? globalRegex : new RegExp(patch.find);
        current = current.replace(regex, patch.replace);
      } else {
        const occurrences = countOccurrences(current, patch.find);

        if (occurrences === 0) {
          throw new Error(
            `patchNodeField: "${patch.find.substring(0, 80)}" not found in "${operation.fieldPath}" (patch index ${i}). ` +
            `Ensure the find string exactly matches the current content (including whitespace and newlines). ` +
            `Use n8n_get_workflow to inspect the current value.`
          );
        }

        if (occurrences > 1 && !patch.replaceAll) {
          throw new Error(
            `patchNodeField: "${patch.find.substring(0, 80)}" found ${occurrences} times in "${operation.fieldPath}" (patch index ${i}). ` +
            `Set "replaceAll": true to replace all occurrences, or use a more specific find string that matches exactly once.`
          );
        }

        if (patch.replaceAll) {
          current = current.split(patch.find).join(patch.replace);
        } else {
          current = current.replace(patch.find, patch.replace);
        }
      }
    }

    this.setNestedProperty(node, operation.fieldPath, current);

    // Sanitize node after updates
    const sanitized = sanitizeNode(node);
    Object.assign(node, sanitized);
  }
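The literal-match branch above counts occurrences first, then requires `replaceAll` when there is more than one hit. That behavior can be sketched standalone (with `countOccurrences` re-declared so the snippet runs on its own):

```typescript
// Standalone sketch of the literal-match semantics: count first, then the
// split/join replace-all used when replaceAll is set.
function countOccurrences(str: string, search: string): number {
  let count = 0;
  let pos = 0;
  while ((pos = str.indexOf(search, pos)) !== -1) {
    count++;
    pos += search.length;
  }
  return count;
}

const code = 'console.log(a); console.log(b);';
console.log(countOccurrences(code, 'console.log'));          // 2 — would require replaceAll: true
console.log(code.split('console.log').join('logger.info'));  // "logger.info(a); logger.info(b);"
```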
  private applyMoveNode(workflow: Workflow, operation: MoveNodeOperation): void {
    const node = this.findNode(workflow, operation.nodeId, operation.nodeName);
    if (!node) return;

    node.position = operation.position;
  }
@@ -1320,6 +1510,7 @@
    const keys = path.split('.');
    let current = obj;
    for (const key of keys) {
      if (DANGEROUS_PATH_KEYS.has(key)) return undefined;
      if (current == null || typeof current !== 'object') return undefined;
      current = current[key];
    }
@@ -1330,6 +1521,11 @@
    const keys = path.split('.');
    let current = obj;

    // Prototype pollution protection
    if (keys.some(k => DANGEROUS_PATH_KEYS.has(k))) {
      throw new Error(`Invalid property path: "${path}" contains a forbidden key`);
    }

    for (let i = 0; i < keys.length - 1; i++) {
      const key = keys[i];
      if (!(key in current) || typeof current[key] !== 'object' || current[key] === null) {
347
src/services/workflow-security-scanner.ts
Normal file
@@ -0,0 +1,347 @@
/**
 * Workflow security scanner that orchestrates 4 security checks on n8n workflows:
 * 1. Hardcoded secrets (via credential-scanner)
 * 2. Unauthenticated webhooks
 * 3. Error handling gaps
 * 4. Data retention settings
 */

import { scanWorkflow, type ScanDetection } from './credential-scanner';

// ---------------------------------------------------------------------------
// Types
// ---------------------------------------------------------------------------

export type AuditSeverity = 'critical' | 'high' | 'medium' | 'low';
export type RemediationType = 'auto_fixable' | 'user_input_needed' | 'user_action_needed' | 'review_recommended';
export type CustomCheckType = 'hardcoded_secrets' | 'unauthenticated_webhooks' | 'error_handling' | 'data_retention';

export interface AuditFinding {
  id: string;
  severity: AuditSeverity;
  category: CustomCheckType;
  title: string;
  description: string;
  recommendation: string;
  remediationType: RemediationType;
  remediation?: {
    tool: string;
    args: Record<string, unknown>;
    description: string;
  }[];
  location: {
    workflowId: string;
    workflowName: string;
    workflowActive?: boolean;
    nodeName?: string;
    nodeType?: string;
  };
}

export interface WorkflowSecurityReport {
  findings: AuditFinding[];
  workflowsScanned: number;
  scanDurationMs: number;
  summary: {
    critical: number;
    high: number;
    medium: number;
    low: number;
    total: number;
  };
}
|
||||
// ---------------------------------------------------------------------------
|
||||
// Workflow input type (loose, to accept various workflow shapes)
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
interface WorkflowInput {
|
||||
id?: string;
|
||||
name: string;
|
||||
nodes: Array<{
|
||||
id?: string;
|
||||
name: string;
|
||||
type: string;
|
||||
parameters?: Record<string, unknown>;
|
||||
notes?: string;
|
||||
continueOnFail?: boolean;
|
||||
onError?: string;
|
||||
[key: string]: unknown;
|
||||
}>;
|
||||
active?: boolean;
|
||||
settings?: Record<string, unknown>;
|
||||
staticData?: Record<string, unknown>;
|
||||
connections?: unknown;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Check 1: Hardcoded secrets
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
function checkHardcodedSecrets(workflow: WorkflowInput): AuditFinding[] {
|
||||
const detections: ScanDetection[] = scanWorkflow({
|
||||
id: workflow.id,
|
||||
name: workflow.name,
|
||||
nodes: workflow.nodes,
|
||||
settings: workflow.settings,
|
||||
staticData: workflow.staticData,
|
||||
});
|
||||
|
||||
return detections.map((detection, index): AuditFinding => {
|
||||
const workflowId = workflow.id ?? '';
|
||||
const nodeName = detection.location.nodeName ?? '';
|
||||
const isPii = detection.category.toLowerCase() === 'pii';
|
||||
|
||||
return {
|
||||
id: `CRED-${String(index + 1).padStart(3, '0')}`,
|
||||
severity: detection.severity as AuditSeverity,
|
||||
category: 'hardcoded_secrets',
|
||||
title: `Hardcoded ${detection.label} detected`,
|
||||
description: `Found a hardcoded ${detection.label} (${detection.category}) in ${nodeName ? `node "${nodeName}"` : 'workflow-level settings'}. Masked value: ${detection.maskedSnippet ?? 'N/A'}.`,
|
||||
recommendation: isPii
|
||||
? 'Review whether this PII is necessary in the workflow. If it is test data or a placeholder, consider using n8n expressions or environment variables instead of hardcoded values.'
|
||||
: 'Move this secret into n8n credentials. The agent can extract the hardcoded value from the workflow, create a credential, and update the node automatically.',
|
||||
remediationType: isPii ? 'review_recommended' : 'auto_fixable',
|
||||
remediation: isPii
|
||||
? []
|
||||
: [
|
||||
{
|
||||
tool: 'n8n_get_workflow',
|
||||
args: { id: workflowId },
|
||||
description: `Fetch workflow to extract the hardcoded ${detection.label} from node "${nodeName}"`,
|
||||
},
|
||||
{
|
||||
tool: 'n8n_manage_credentials',
|
||||
args: { action: 'create', type: 'httpHeaderAuth' },
|
||||
description: `Create credential with the extracted value (choose appropriate type for ${detection.label})`,
|
||||
},
|
||||
{
|
||||
tool: 'n8n_update_partial_workflow',
|
||||
args: { id: workflowId, operations: [{ type: 'updateNode', nodeName }] },
|
||||
description: `Update node to use credential and remove hardcoded value`,
|
||||
},
|
||||
],
|
||||
location: {
|
||||
workflowId,
|
||||
workflowName: workflow.name,
|
||||
workflowActive: workflow.active,
|
||||
nodeName: detection.location.nodeName,
|
||||
nodeType: detection.location.nodeType,
|
||||
},
|
||||
};
|
||||
});
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Check 2: Unauthenticated webhooks
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
function checkUnauthenticatedWebhooks(workflow: WorkflowInput): AuditFinding[] {
|
||||
const findings: AuditFinding[] = [];
|
||||
let sequence = 0;
|
||||
|
||||
for (const node of workflow.nodes) {
|
||||
const nodeTypeLower = (node.type ?? '').toLowerCase();
|
||||
// respondToWebhook is a response helper, not a trigger — skip it
|
||||
if (nodeTypeLower.includes('respondtowebhook')) {
|
||||
continue;
|
||||
}
|
||||
if (!nodeTypeLower.includes('webhook') && !nodeTypeLower.includes('formtrigger')) {
|
||||
continue;
|
||||
}
|
||||
|
||||
const auth = node.parameters?.authentication;
|
||||
|
||||
// Skip nodes that already have authentication configured
|
||||
if (typeof auth === 'string' && auth !== '' && auth !== 'none') {
|
||||
continue;
|
||||
}
|
||||
|
||||
sequence++;
|
||||
const workflowId = workflow.id ?? '';
|
||||
const isActive = workflow.active === true;
|
||||
|
||||
findings.push({
|
||||
id: `WEBHOOK-${String(sequence).padStart(3, '0')}`,
|
||||
severity: isActive ? 'high' : 'medium',
|
||||
category: 'unauthenticated_webhooks',
|
||||
title: `Unauthenticated webhook: "${node.name}"`,
|
||||
description: `Webhook node "${node.name}" (${node.type}) has no authentication configured.${isActive ? ' This workflow is active and publicly accessible.' : ''} Anyone with the webhook URL can trigger this workflow.`,
|
||||
recommendation: 'Add authentication to the webhook node. Header-based authentication with a random secret is the simplest approach.',
|
||||
remediationType: 'auto_fixable',
|
||||
remediation: [
|
||||
{
|
||||
tool: 'n8n_manage_credentials',
|
||||
args: { action: 'create', type: 'httpHeaderAuth' },
|
||||
description: `Create httpHeaderAuth credential with a generated random secret`,
|
||||
},
|
||||
{
|
||||
tool: 'n8n_update_partial_workflow',
|
||||
args: { id: workflowId, operations: [{ type: 'updateNode', nodeName: node.name }] },
|
||||
description: `Set authentication to "headerAuth" and assign the credential`,
|
||||
},
|
||||
],
|
||||
location: {
|
||||
workflowId,
|
||||
workflowName: workflow.name,
|
||||
workflowActive: isActive,
|
||||
nodeName: node.name,
|
||||
nodeType: node.type,
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
return findings;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Check 3: Error handling gaps
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
function checkErrorHandlingGaps(workflow: WorkflowInput): AuditFinding[] {
|
||||
// Only flag workflows with 3+ nodes
|
||||
if (workflow.nodes.length < 3) {
|
||||
return [];
|
||||
}
|
||||
|
||||
const hasContinueOnFail = workflow.nodes.some(
|
||||
(node) => node.continueOnFail === true,
|
||||
);
|
||||
|
||||
const hasOnErrorHandling = workflow.nodes.some(
|
||||
(node) => typeof node.onError === 'string' && node.onError !== 'stopWorkflow',
|
||||
);
|
||||
|
||||
const hasErrorTrigger = workflow.nodes.some(
|
||||
(node) => (node.type ?? '').toLowerCase() === 'n8n-nodes-base.errortrigger',
|
||||
);
|
||||
|
||||
if (hasContinueOnFail || hasOnErrorHandling || hasErrorTrigger) {
|
||||
return [];
|
||||
}
|
||||
|
||||
return [
|
||||
{
|
||||
id: 'ERR-001',
|
||||
severity: 'medium',
|
||||
category: 'error_handling',
|
||||
title: `No error handling in workflow "${workflow.name}"`,
|
||||
description: `Workflow "${workflow.name}" has ${workflow.nodes.length} nodes but no error handling configured. There are no nodes with continueOnFail enabled, no custom onError behavior, and no Error Trigger node.`,
|
||||
recommendation: 'Add error handling to prevent silent failures. Consider adding an Error Trigger node for global error notifications, or set continueOnFail on critical nodes that should not block the workflow.',
|
||||
remediationType: 'review_recommended',
|
||||
location: {
|
||||
workflowId: workflow.id ?? '',
|
||||
workflowName: workflow.name,
|
||||
workflowActive: workflow.active,
|
||||
},
|
||||
},
|
||||
];
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Check 4: Data retention settings
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
function checkDataRetentionSettings(workflow: WorkflowInput): AuditFinding[] {
|
||||
const settings = workflow.settings;
|
||||
if (!settings) {
|
||||
return [];
|
||||
}
|
||||
|
||||
const savesAllData =
|
||||
settings.saveDataErrorExecution === 'all' &&
|
||||
settings.saveDataSuccessExecution === 'all';
|
||||
|
||||
if (!savesAllData) {
|
||||
return [];
|
||||
}
|
||||
|
||||
return [
|
||||
{
|
||||
id: 'RETENTION-001',
|
||||
severity: 'low',
|
||||
category: 'data_retention',
|
||||
title: `Excessive data retention in workflow "${workflow.name}"`,
|
||||
description: `Workflow "${workflow.name}" is configured to save execution data for both successful and failed executions. This may store sensitive data in the n8n database longer than necessary.`,
|
||||
recommendation: 'Review data retention settings. Consider setting saveDataSuccessExecution to "none" for workflows that process sensitive data, or configure execution data pruning at the instance level.',
|
||||
remediationType: 'user_action_needed',
|
||||
location: {
|
||||
workflowId: workflow.id ?? '',
|
||||
workflowName: workflow.name,
|
||||
workflowActive: workflow.active,
|
||||
},
|
||||
},
|
||||
];
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Check dispatcher
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
const CHECK_MAP: Record<CustomCheckType, (workflow: WorkflowInput) => AuditFinding[]> = {
|
||||
hardcoded_secrets: checkHardcodedSecrets,
|
||||
unauthenticated_webhooks: checkUnauthenticatedWebhooks,
|
||||
error_handling: checkErrorHandlingGaps,
|
||||
data_retention: checkDataRetentionSettings,
|
||||
};
|
||||
|
||||
const ALL_CHECKS = Object.keys(CHECK_MAP) as CustomCheckType[];
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Main export
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/**
|
||||
* Scans one or more n8n workflows for security issues.
|
||||
*
|
||||
* Runs up to 4 checks: hardcoded secrets, unauthenticated webhooks,
|
||||
* error handling gaps, and data retention settings.
|
||||
*
|
||||
* @param workflows - Array of workflow objects to scan
|
||||
* @param checks - Optional subset of checks to run (defaults to all 4)
|
||||
* @returns A security report with all findings and summary counts
|
||||
*/
|
||||
export function scanWorkflows(
|
||||
workflows: Array<{
|
||||
id?: string;
|
||||
name: string;
|
||||
nodes: any[];
|
||||
active?: boolean;
|
||||
settings?: any;
|
||||
staticData?: any;
|
||||
connections?: any;
|
||||
}>,
|
||||
checks?: CustomCheckType[],
|
||||
): WorkflowSecurityReport {
|
||||
const startTime = Date.now();
|
||||
const checksToRun = checks ?? ALL_CHECKS;
|
||||
const allFindings: AuditFinding[] = [];
|
||||
|
||||
for (const workflow of workflows) {
|
||||
for (const checkType of checksToRun) {
|
||||
const findings = CHECK_MAP[checkType](workflow);
|
||||
allFindings.push(...findings);
|
||||
}
|
||||
}
|
||||
|
||||
const scanDurationMs = Date.now() - startTime;
|
||||
|
||||
const summary = {
|
||||
critical: 0,
|
||||
high: 0,
|
||||
medium: 0,
|
||||
low: 0,
|
||||
total: allFindings.length,
|
||||
};
|
||||
|
||||
for (const finding of allFindings) {
|
||||
summary[finding.severity]++;
|
||||
}
|
||||
|
||||
return {
|
||||
findings: allFindings,
|
||||
workflowsScanned: workflows.length,
|
||||
scanDurationMs,
|
||||
summary,
|
||||
};
|
||||
}
|
||||
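The webhook check's filtering rules (skip `respondToWebhook`, match `webhook`/`formTrigger` triggers, treat empty or `'none'` authentication as unauthenticated) can be illustrated as a standalone predicate. This is a hypothetical extraction for illustration, not code from the file above:

```typescript
// Standalone sketch (hypothetical): the trigger/authentication filter used by
// checkUnauthenticatedWebhooks in workflow-security-scanner.ts, isolated.
type NodeLike = { name: string; type: string; parameters?: Record<string, unknown> };

function isUnauthenticatedWebhook(node: NodeLike): boolean {
  const t = (node.type ?? '').toLowerCase();
  if (t.includes('respondtowebhook')) return false; // response helper, not a trigger
  if (!t.includes('webhook') && !t.includes('formtrigger')) return false;
  const auth = node.parameters?.authentication;
  // Authenticated only when a non-empty mode other than "none" is configured
  return !(typeof auth === 'string' && auth !== '' && auth !== 'none');
}

console.log(isUnauthenticatedWebhook({ name: 'Hook', type: 'n8n-nodes-base.webhook' })); // true
console.log(isUnauthenticatedWebhook({
  name: 'Hook',
  type: 'n8n-nodes-base.webhook',
  parameters: { authentication: 'headerAuth' },
})); // false
```

Matching on lowercase substrings rather than exact type names lets one predicate cover base and community webhook-style nodes.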
@@ -55,6 +55,19 @@ export interface DisableNodeOperation extends DiffOperation {
  nodeName?: string;
}

export interface PatchNodeFieldOperation extends DiffOperation {
  type: 'patchNodeField';
  nodeId?: string;
  nodeName?: string;
  fieldPath: string; // Dot-notation path, e.g. "parameters.jsCode"
  patches: Array<{
    find: string;
    replace: string;
    replaceAll?: boolean; // Default: false. Replace all occurrences.
    regex?: boolean; // Default: false. Treat find as a regex pattern.
  }>;
}

// Connection Operations
export interface AddConnectionOperation extends DiffOperation {
  type: 'addConnection';
@@ -153,6 +166,7 @@ export type WorkflowDiffOperation =
  | AddNodeOperation
  | RemoveNodeOperation
  | UpdateNodeOperation
  | PatchNodeFieldOperation
  | MoveNodeOperation
  | EnableNodeOperation
  | DisableNodeOperation
@@ -208,10 +222,10 @@ export interface NodeReference {
}

// Utility functions type guards
export function isNodeOperation(op: WorkflowDiffOperation): op is
  AddNodeOperation | RemoveNodeOperation | UpdateNodeOperation | PatchNodeFieldOperation |
  MoveNodeOperation | EnableNodeOperation | DisableNodeOperation {
  return ['addNode', 'removeNode', 'updateNode', 'patchNodeField', 'moveNode', 'enableNode', 'disableNode'].includes(op.type);
}

export function isConnectionOperation(op: WorkflowDiffOperation): op is
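The `patches` array on `PatchNodeFieldOperation` carries `find`/`replace` pairs with optional `replaceAll` and `regex` flags. A minimal sketch of how such patches could be applied to a string field follows; this is a hypothetical illustration of the declared semantics, not the engine's actual implementation (which lives in workflow-diff-engine.ts):

```typescript
// Hypothetical sketch: apply a PatchNodeFieldOperation's patches to a string
// field value, honoring the replaceAll and regex flags declared above.
interface FieldPatch { find: string; replace: string; replaceAll?: boolean; regex?: boolean }

function applyFieldPatches(value: string, patches: FieldPatch[]): string {
  for (const p of patches) {
    if (p.regex) {
      // Regex mode: the "g" flag implements replaceAll semantics
      value = value.replace(new RegExp(p.find, p.replaceAll ? 'g' : ''), p.replace);
    } else if (p.replaceAll) {
      value = value.split(p.find).join(p.replace); // literal replace-all
    } else {
      value = value.replace(p.find, p.replace);    // first occurrence only
    }
  }
  return value;
}

const jsCode = 'const key = "sk-123"; log("sk-123");';
console.log(applyFieldPatches(jsCode, [
  { find: '"sk-123"', replace: 'process.env.KEY', replaceAll: true },
]));
// const key = process.env.KEY; log(process.env.KEY);
```

String-level patching like this is what lets the audit remediation flow strip a hardcoded secret out of a `parameters.jsCode` field without resending the whole node.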
@@ -219,9 +219,23 @@ describe('HTTP Server n8n Mode', () => {
        mcp: {
          method: 'POST',
          path: '/mcp',
          description: 'Main MCP JSON-RPC endpoint (StreamableHTTP)',
          authentication: 'Bearer token required'
        },
        sse: {
          method: 'GET',
          path: '/sse',
          description: 'DEPRECATED: SSE stream for legacy clients. Migrate to StreamableHTTP (POST /mcp).',
          authentication: 'Bearer token required',
          deprecated: true
        },
        messages: {
          method: 'POST',
          path: '/messages',
          description: 'DEPRECATED: Message delivery for SSE sessions. Migrate to StreamableHTTP (POST /mcp).',
          authentication: 'Bearer token required',
          deprecated: true
        },
        health: {
          method: 'GET',
          path: '/health',
@@ -59,11 +59,24 @@ vi.mock('@modelcontextprotocol/sdk/server/streamableHttp.js', () => ({
  })
}));

vi.mock('@modelcontextprotocol/sdk/server/sse.js', () => {
  class MockSSEServerTransport {
    sessionId: string;
    onclose: (() => void) | null = null;
    onerror: ((error: Error) => void) | null = null;
    close = vi.fn().mockResolvedValue(undefined);
    handlePostMessage = vi.fn().mockImplementation(async (_req: any, res: any) => {
      res.writeHead(202);
      res.end('Accepted');
    });
    start = vi.fn().mockResolvedValue(undefined);

    constructor(_endpoint: string, _res: any) {
      this.sessionId = 'sse-' + Math.random().toString(36).substring(2, 11);
    }
  }
  return { SSEServerTransport: MockSSEServerTransport };
});
vi.mock('../../src/mcp/server', () => ({
  N8NDocumentationMCPServer: vi.fn().mockImplementation(() => ({
@@ -1100,24 +1113,16 @@ describe('HTTP Server Session Management', () => {
        'session-2': { lastAccess: new Date(), createdAt: new Date() }
      };

      await server.shutdown();

      // All transports should be closed
      expect(mockTransport1.close).toHaveBeenCalled();
      expect(mockTransport2.close).toHaveBeenCalled();

      // All data structures should be cleared
      expect(Object.keys((server as any).transports)).toHaveLength(0);
      expect(Object.keys((server as any).servers)).toHaveLength(0);
      expect(Object.keys((server as any).sessionMetadata)).toHaveLength(0);
    });

    it('should handle transport close errors during shutdown', async () => {
@@ -1169,22 +1174,21 @@ describe('HTTP Server Session Management', () => {
      expect(Array.isArray(sessionInfo.sessions!.sessionIds)).toBe(true);
    });

    it('should show active when transports exist', async () => {
      server = new SingleSessionHTTPServer();

      // Add a transport to simulate an active session
      (server as any).transports['session-123'] = { close: vi.fn() };
      (server as any).sessionMetadata['session-123'] = {
        lastAccess: new Date(),
        createdAt: new Date()
      };

      const sessionInfo = server.getSessionInfo();

      expect(sessionInfo.active).toBe(true);
      expect(sessionInfo.age).toBeGreaterThanOrEqual(0);
      expect(sessionInfo.sessions!.total).toBe(1);
      expect(sessionInfo.sessions!.sessionIds).toContain('session-123');
    });
  });

tests/unit/services/audit-report-builder.test.ts (new file, 340 lines)
@@ -0,0 +1,340 @@
import { describe, it, expect } from 'vitest';
import {
  buildAuditReport,
  type AuditReportInput,
  type UnifiedAuditReport,
} from '@/services/audit-report-builder';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

const DEFAULT_PERFORMANCE = {
  builtinAuditMs: 100,
  workflowFetchMs: 50,
  customScanMs: 200,
  totalMs: 350,
};

function makeInput(overrides: Partial<AuditReportInput> = {}): AuditReportInput {
  return {
    builtinAudit: [],
    customReport: null,
    performance: DEFAULT_PERFORMANCE,
    instanceUrl: 'https://n8n.example.com',
    ...overrides,
  };
}

function makeFinding(overrides: Record<string, unknown> = {}) {
  return {
    id: 'CRED-001',
    severity: 'critical' as const,
    category: 'hardcoded_secrets',
    title: 'Hardcoded openai_key detected',
    description: 'Found a hardcoded openai_key in node "HTTP Request".',
    recommendation: 'Move this secret into n8n credentials.',
    remediationType: 'auto_fixable' as const,
    remediation: [
      {
        tool: 'n8n_manage_credentials',
        args: { action: 'create' },
        description: 'Create credential',
      },
    ],
    location: {
      workflowId: 'wf-1',
      workflowName: 'Test Workflow',
      workflowActive: true,
      nodeName: 'HTTP Request',
      nodeType: 'n8n-nodes-base.httpRequest',
    },
    ...overrides,
  };
}

function makeCustomReport(findings: any[], workflowsScanned = 1) {
  const summary = { critical: 0, high: 0, medium: 0, low: 0, total: findings.length };
  for (const f of findings) {
    summary[f.severity as keyof typeof summary]++;
  }
  return {
    findings,
    workflowsScanned,
    scanDurationMs: 150,
    summary,
  };
}

// ===========================================================================
// Tests
// ===========================================================================

describe('audit-report-builder', () => {
  describe('empty reports', () => {
    it('should produce "No issues found" when built-in audit is empty and no custom findings', () => {
      const input = makeInput({ builtinAudit: [], customReport: null });
      const result = buildAuditReport(input);

      expect(result.markdown).toContain('No issues found');
      expect(result.summary.totalFindings).toBe(0);
      expect(result.summary.critical).toBe(0);
      expect(result.summary.high).toBe(0);
      expect(result.summary.medium).toBe(0);
      expect(result.summary.low).toBe(0);
    });

    it('should produce "No issues found" when built-in audit is null-like', () => {
      const input = makeInput({ builtinAudit: null });
      const result = buildAuditReport(input);
      expect(result.markdown).toContain('No issues found');
    });
  });

  describe('built-in audit rendering', () => {
    it('should render built-in audit with Nodes Risk Report', () => {
      // Real n8n API uses { risk: "nodes", sections: [...] } format
      const builtinAudit = {
        'Nodes Risk Report': {
          risk: 'nodes',
          sections: [
            {
              title: 'Insecure node detected',
              description: 'Node X uses deprecated API',
              recommendation: 'Update to latest version',
              location: [{ id: 'node-1' }, { id: 'node-2' }],
            },
          ],
        },
      };

      const input = makeInput({ builtinAudit });
      const result = buildAuditReport(input);

      expect(result.markdown).toContain('Nodes Risk Report');
      expect(result.markdown).toContain('Insecure node detected');
      expect(result.markdown).toContain('deprecated API');
      expect(result.markdown).toContain('Affected: 2 items');
      // Built-in locations are counted as low severity
      expect(result.summary.low).toBe(2);
      expect(result.summary.totalFindings).toBe(2);
    });

    it('should render Instance Risk Report with version and settings info', () => {
      const builtinAudit = {
        'Instance Risk Report': {
          risk: 'instance',
          sections: [
            {
              title: 'Outdated instance',
              description: 'Running an old version',
              recommendation: 'Update n8n',
              nextVersions: [{ name: '1.20.0' }, { name: '1.21.0' }],
              settings: {
                authenticationMethod: 'none',
                publicApiDisabled: false,
              },
            },
          ],
        },
      };

      const input = makeInput({ builtinAudit });
      const result = buildAuditReport(input);

      expect(result.markdown).toContain('Instance Risk Report');
      expect(result.markdown).toContain('Available versions: 1.20.0, 1.21.0');
      expect(result.markdown).toContain('authenticationMethod');
    });
  });

  describe('grouped by workflow', () => {
    it('should group findings by workflow with table format', () => {
      const findings = [
        makeFinding({ id: 'CRED-001', severity: 'critical', title: 'Critical issue' }),
        makeFinding({ id: 'ERR-001', severity: 'medium', title: 'Medium issue', category: 'error_handling' }),
      ];

      const input = makeInput({ customReport: makeCustomReport(findings) });
      const result = buildAuditReport(input);

      // Should have a workflow heading
      expect(result.markdown).toContain('Test Workflow');
      // Should have a table with findings
      expect(result.markdown).toContain('| ID | Severity | Finding | Node | Fix |');
      expect(result.markdown).toContain('CRED-001');
      expect(result.markdown).toContain('ERR-001');
    });

    it('should sort findings within workflow by severity', () => {
      const findings = [
        makeFinding({ id: 'LOW-001', severity: 'low', title: 'Low issue' }),
        makeFinding({ id: 'CRIT-001', severity: 'critical', title: 'Critical issue' }),
      ];

      const input = makeInput({ customReport: makeCustomReport(findings) });
      const result = buildAuditReport(input);

      const critIdx = result.markdown.indexOf('CRIT-001');
      const lowIdx = result.markdown.indexOf('LOW-001');
      expect(critIdx).toBeLessThan(lowIdx);
    });

    it('should sort workflows by worst severity first', () => {
      const findings = [
        makeFinding({ id: 'LOW-001', severity: 'low', title: 'Low issue', location: { workflowId: 'wf-2', workflowName: 'Safe Workflow', nodeName: 'Set', nodeType: 'n8n-nodes-base.set' } }),
        makeFinding({ id: 'CRIT-001', severity: 'critical', title: 'Critical issue', location: { workflowId: 'wf-1', workflowName: 'Danger Workflow', nodeName: 'HTTP', nodeType: 'n8n-nodes-base.httpRequest' } }),
      ];

      const input = makeInput({ customReport: makeCustomReport(findings, 2) });
      const result = buildAuditReport(input);

      const dangerIdx = result.markdown.indexOf('Danger Workflow');
      const safeIdx = result.markdown.indexOf('Safe Workflow');
      expect(dangerIdx).toBeLessThan(safeIdx);
    });
  });

  describe('remediation playbook', () => {
    it('should show auto-fixable section for secrets and webhooks', () => {
      const findings = [
        makeFinding({ remediationType: 'auto_fixable', category: 'hardcoded_secrets' }),
        makeFinding({ id: 'WEBHOOK-001', severity: 'medium', remediationType: 'auto_fixable', category: 'unauthenticated_webhooks', title: 'Unauthenticated webhook' }),
      ];

      const input = makeInput({ customReport: makeCustomReport(findings) });
      const result = buildAuditReport(input);

      expect(result.markdown).toContain('Auto-fixable by agent');
      expect(result.markdown).toContain('Hardcoded secrets');
      expect(result.markdown).toContain('Unauthenticated webhooks');
      expect(result.markdown).toContain('n8n_manage_credentials');
    });

    it('should show review section for error handling and PII', () => {
      const findings = [
        makeFinding({ id: 'ERR-001', severity: 'medium', remediationType: 'review_recommended', category: 'error_handling', title: 'No error handling' }),
        makeFinding({ id: 'PII-001', severity: 'medium', remediationType: 'review_recommended', category: 'hardcoded_secrets', title: 'PII found' }),
      ];

      const input = makeInput({ customReport: makeCustomReport(findings) });
      const result = buildAuditReport(input);

      expect(result.markdown).toContain('Requires review');
      expect(result.markdown).toContain('Error handling gaps');
      expect(result.markdown).toContain('PII in parameters');
    });

    it('should show user action section for data retention', () => {
      const findings = [
        makeFinding({ id: 'RET-001', severity: 'low', remediationType: 'user_action_needed', category: 'data_retention', title: 'Excessive retention' }),
      ];

      const input = makeInput({ customReport: makeCustomReport(findings) });
      const result = buildAuditReport(input);

      expect(result.markdown).toContain('Requires your action');
      expect(result.markdown).toContain('Data retention');
    });

    it('should surface built-in audit actionables in playbook', () => {
      const builtinAudit = {
        'Instance Risk Report': {
          risk: 'instance',
          sections: [
            { title: 'Outdated instance', description: 'Old version', recommendation: 'Update' },
          ],
        },
        'Nodes Risk Report': {
          risk: 'nodes',
          sections: [
            { title: 'Community nodes', description: 'Unvetted', recommendation: 'Review', location: [{ id: '1' }, { id: '2' }] },
          ],
        },
      };

      const input = makeInput({ builtinAudit });
      const result = buildAuditReport(input);

      expect(result.markdown).toContain('Outdated instance');
      expect(result.markdown).toContain('Community nodes');
    });
  });

  describe('warnings', () => {
    it('should include warnings in the report when provided', () => {
      const input = makeInput({
        warnings: [
          'Could not fetch 2 workflows due to permissions',
          'Built-in audit endpoint returned partial results',
        ],
      });
      const result = buildAuditReport(input);

      expect(result.markdown).toContain('Could not fetch 2 workflows');
      expect(result.markdown).toContain('partial results');
    });

    it('should not include warnings when none are provided', () => {
      const input = makeInput({ warnings: undefined });
      const result = buildAuditReport(input);
      expect(result.markdown).not.toContain('Warning');
    });
  });

  describe('performance timing', () => {
    it('should include scan performance metrics in the report', () => {
      const input = makeInput({
        performance: {
          builtinAuditMs: 120,
          workflowFetchMs: 80,
          customScanMs: 250,
          totalMs: 450,
        },
      });
      const result = buildAuditReport(input);

      expect(result.markdown).toContain('120ms');
      expect(result.markdown).toContain('80ms');
      expect(result.markdown).toContain('250ms');
    });
  });

  describe('summary counts', () => {
    it('should aggregate counts across both built-in and custom sources', () => {
      const builtinAudit = {
        'Nodes Risk Report': {
          risk: 'nodes',
          sections: [
            {
              title: 'Issue',
              description: 'Desc',
              location: [{ id: '1' }, { id: '2' }, { id: '3' }],
            },
          ],
        },
      };

      const findings = [
        makeFinding({ severity: 'critical' }),
        makeFinding({ id: 'CRED-002', severity: 'high' }),
        makeFinding({ id: 'CRED-003', severity: 'medium' }),
      ];

      const input = makeInput({
        builtinAudit,
        customReport: makeCustomReport(findings, 5),
      });

      const result = buildAuditReport(input);

      expect(result.summary.critical).toBe(1);
      expect(result.summary.high).toBe(1);
      expect(result.summary.medium).toBe(1);
      // 3 built-in locations counted as low
      expect(result.summary.low).toBe(3);
      expect(result.summary.totalFindings).toBe(6);
      expect(result.summary.workflowsScanned).toBe(5);
    });
  });
});
tests/unit/services/credential-scanner.test.ts (new file, 558 lines)
@@ -0,0 +1,558 @@
import { describe, it, expect } from 'vitest';
import {
  scanWorkflow,
  maskSecret,
  SECRET_PATTERNS,
  PII_PATTERNS,
  type ScanDetection,
} from '@/services/credential-scanner';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

/** Minimal workflow wrapper for single-node tests. */
function makeWorkflow(
  nodeParams: Record<string, unknown>,
  opts?: {
    nodeName?: string;
    nodeType?: string;
    workflowId?: string;
    workflowName?: string;
    pinData?: Record<string, unknown>;
    staticData?: Record<string, unknown>;
    settings?: Record<string, unknown>;
  },
) {
  return {
    id: opts?.workflowId ?? 'wf-1',
    name: opts?.workflowName ?? 'Test Workflow',
    nodes: [
      {
        name: opts?.nodeName ?? 'HTTP Request',
        type: opts?.nodeType ?? 'n8n-nodes-base.httpRequest',
        parameters: nodeParams,
      },
    ],
    pinData: opts?.pinData,
    staticData: opts?.staticData,
    settings: opts?.settings,
  };
}

/** Helper that returns the first detection label, or null. */
function firstLabel(detections: ScanDetection[]): string | null {
  return detections.length > 0 ? detections[0].label : null;
}

// ===========================================================================
// Pattern matching — true positives
// ===========================================================================

describe('credential-scanner', () => {
  describe('pattern matching — true positives', () => {
    it('should detect OpenAI key (sk-proj- prefix)', () => {
      const wf = makeWorkflow({ apiKey: 'sk-proj-abc123def456ghi789jkl0' });
      const detections = scanWorkflow(wf);
      expect(firstLabel(detections)).toBe('openai_key');
    });

    it('should detect OpenAI key (sk- prefix without proj)', () => {
      const wf = makeWorkflow({ apiKey: 'sk-abcdefghij1234567890abcdefghij' });
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('openai_key');
|
||||
});
|
||||
|
||||
it('should detect AWS access key', () => {
|
||||
const wf = makeWorkflow({ accessKeyId: 'AKIA1234567890ABCDEF' });
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('aws_key');
|
||||
});
|
||||
|
||||
it('should detect GitHub PAT (ghp_ prefix)', () => {
|
||||
const wf = makeWorkflow({ token: 'ghp_1234567890abcdefghijklmnopqrstuvwxyz' });
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('github_pat');
|
||||
});
|
||||
|
||||
it('should detect Stripe secret key', () => {
|
||||
const wf = makeWorkflow({ stripeKey: 'sk_live_1234567890abcdef12345' });
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('stripe_key');
|
||||
});
|
||||
|
||||
it('should detect JWT token', () => {
|
||||
const jwt =
|
||||
'eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxMjM0NTY3ODkwIn0.dozjgNryP4J3jVmNHl0w5N_XgL0n3I9PlFUP0THsR8U';
|
||||
const wf = makeWorkflow({ token: jwt });
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('jwt_token');
|
||||
});
|
||||
|
||||
it('should detect Slack bot token', () => {
|
||||
const wf = makeWorkflow({ token: 'xoxb-1234567890-abcdefghij' });
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('slack_token');
|
||||
});
|
||||
|
||||
it('should detect SendGrid API key', () => {
|
||||
const key =
|
||||
'SG.abcdefghijklmnopqrstuv.abcdefghijklmnopqrstuvwxyz0123456789abcdefg';
|
||||
const wf = makeWorkflow({ apiKey: key });
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('sendgrid_key');
|
||||
});
|
||||
|
||||
it('should detect private key header', () => {
|
||||
const wf = makeWorkflow({
|
||||
privateKey: '-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQ...',
|
||||
});
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('private_key');
|
||||
});
|
||||
|
||||
it('should detect Bearer token', () => {
|
||||
const wf = makeWorkflow({
|
||||
header: 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9abcdef',
|
||||
});
|
||||
const detections = scanWorkflow(wf);
|
||||
// Could match bearer_token or jwt_token; at minimum one detection exists
|
||||
const labels = detections.map((d) => d.label);
|
||||
expect(labels).toContain('bearer_token');
|
||||
});
|
||||
|
||||
it('should detect URL with embedded credentials', () => {
|
||||
const wf = makeWorkflow({
|
||||
connectionString: 'postgres://admin:secret_password@db.example.com:5432/mydb',
|
||||
});
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('url_with_auth');
|
||||
});
|
||||
|
||||
it('should detect Anthropic key', () => {
|
||||
const wf = makeWorkflow({ apiKey: 'sk-ant-abcdefghijklmnopqrstuvwxyz1234' });
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('anthropic_key');
|
||||
});
|
||||
|
||||
it('should detect GitHub OAuth token (gho_ prefix)', () => {
|
||||
const wf = makeWorkflow({ token: 'gho_1234567890abcdefghijklmnopqrstuvwxyz' });
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('github_oauth');
|
||||
});
|
||||
|
||||
it('should detect Stripe restricted key (rk_live)', () => {
|
||||
const wf = makeWorkflow({ stripeKey: 'rk_live_1234567890abcdef12345' });
|
||||
const detections = scanWorkflow(wf);
|
||||
expect(firstLabel(detections)).toBe('stripe_key');
|
||||
});
|
||||
});
|
||||
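The true-positive suite above, together with the "pattern definitions" suite near the end of this file, pins down the shape of a pattern entry: a `label`, a `category`, and a `severity` of `critical`, `high`, or `medium`. A minimal sketch of what such entries could look like — the regexes and the `pattern` field name here are illustrative assumptions, not the module's actual definitions (only `aws_key`'s `Cloud/DevOps` category is confirmed by the metadata tests below):

```typescript
// Illustrative pattern entries; regexes and field names are assumptions.
const examplePatternEntries = [
  {
    label: 'aws_key',
    category: 'Cloud/DevOps', // confirmed by the detection-metadata suite
    severity: 'critical',
    pattern: /AKIA[0-9A-Z]{16}/, // AWS access key IDs: AKIA + 16 uppercase alphanumerics
  },
  {
    label: 'openai_key',
    category: 'AI', // assumed category, not asserted by any test
    severity: 'critical',
    pattern: /sk-(?:proj-)?[A-Za-z0-9_-]{20,}/, // sk- or sk-proj- prefix + 20+ key chars
  },
];
```

Note how the `{20,}` quantifier is what makes the short string `sk-abc` from the true-negative suite a non-match.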

  // ===========================================================================
  // PII patterns — true positives
  // ===========================================================================

  describe('PII pattern matching — true positives', () => {
    it('should detect email address', () => {
      const wf = makeWorkflow({ recipient: 'john.doe@example.com' });
      const detections = scanWorkflow(wf);
      expect(firstLabel(detections)).toBe('email');
    });

    it('should detect credit card number with spaces', () => {
      const wf = makeWorkflow({ cardNumber: '4111 1111 1111 1111' });
      const detections = scanWorkflow(wf);
      expect(firstLabel(detections)).toBe('credit_card');
    });

    it('should detect credit card number with dashes', () => {
      const wf = makeWorkflow({ cardNumber: '4111-1111-1111-1111' });
      const detections = scanWorkflow(wf);
      expect(firstLabel(detections)).toBe('credit_card');
    });
  });

  // ===========================================================================
  // True negatives — strings that should NOT be detected
  // ===========================================================================

  describe('true negatives', () => {
    it('should not flag a short string that looks like a key prefix', () => {
      const wf = makeWorkflow({ key: 'sk-abc' });
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(0);
    });

    it('should not flag normal URLs without embedded auth', () => {
      const wf = makeWorkflow({ url: 'https://example.com/api/v1/path' });
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(0);
    });

    it('should not flag a safe short string', () => {
      const wf = makeWorkflow({ value: 'hello world, this is a normal string' });
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(0);
    });

    it('should not flag strings shorter than 9 characters', () => {
      // collectStrings skips strings with length <= 8
      const wf = makeWorkflow({ key: '12345678' });
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(0);
    });
  });

  // ===========================================================================
  // Expression skipping
  // ===========================================================================

  describe('expression skipping', () => {
    it('should skip strings starting with = even if they contain a key pattern', () => {
      const wf = makeWorkflow({
        apiKey: '={{ $json.apiKey }}',
        header: '={{ "sk-proj-" + $json.secret123456789 }}',
      });
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(0);
    });

    it('should skip strings starting with {{ even if they contain a key pattern', () => {
      const wf = makeWorkflow({
        token: '{{ $json.token }}',
        auth: '{{ "Bearer " + $json.accessToken12345678 }}',
      });
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(0);
    });

    it('should skip mixed expression and literal if expression comes first', () => {
      const wf = makeWorkflow({
        mixed: '={{ "AKIA" + "1234567890ABCDEF" }}',
      });
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(0);
    });
  });

  // ===========================================================================
  // Field skipping
  // ===========================================================================

  describe('field skipping', () => {
    it('should not scan values under the credentials key', () => {
      const wf = makeWorkflow({
        credentials: {
          httpHeaderAuth: {
            id: 'cred-123',
            name: 'sk-proj-abc123def456ghi789jkl0',
          },
        },
        url: 'https://api.example.com',
      });
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(0);
    });

    it('should not scan values under the expression key', () => {
      const wf = makeWorkflow({
        expression: 'sk-proj-abc123def456ghi789jkl0',
        url: 'https://api.example.com',
      });
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(0);
    });

    it('should not scan values under the id key', () => {
      const wf = makeWorkflow({
        id: 'AKIA1234567890ABCDEF',
        url: 'https://api.example.com',
      });
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(0);
    });
  });

  // ===========================================================================
  // Depth limit
  // ===========================================================================

  describe('depth limit', () => {
    it('should stop traversing structures nested deeper than 10 levels', () => {
      // Build a nested structure 12 levels deep with a secret at the bottom
      let nested: Record<string, unknown> = {
        secret: 'sk-proj-abc123def456ghi789jkl0',
      };
      for (let i = 0; i < 12; i++) {
        nested = { level: nested };
      }

      const wf = makeWorkflow(nested);
      const detections = scanWorkflow(wf);
      // The secret is beyond depth 10, so it should not be found
      expect(detections).toHaveLength(0);
    });

    it('should detect secrets at exactly depth 10', () => {
      // Build a structure that puts the secret at depth 10 from the
      // parameters level. collectStrings is called with depth=0 for
      // node.parameters, so 10 nesting levels should still be traversed.
      let nested: Record<string, unknown> = {
        secret: 'sk-proj-abc123def456ghi789jkl0',
      };
      for (let i = 0; i < 9; i++) {
        nested = { level: nested };
      }

      const wf = makeWorkflow(nested);
      const detections = scanWorkflow(wf);
      expect(detections.length).toBeGreaterThanOrEqual(1);
      expect(firstLabel(detections)).toBe('openai_key');
    });
  });
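The true-negative and depth-limit suites together imply a string-collection pass that skips strings of 8 characters or fewer and stops traversing below 10 levels of nesting. A minimal sketch of that traversal, assuming only the behavior these tests assert (`collectStringsSketch` is an illustrative stand-in, not the module's export):

```typescript
// Collects candidate strings for pattern scanning. Assumed rules, inferred
// from the tests: strings of length <= 8 are skipped, and objects/arrays
// nested beyond depth 10 are not traversed.
function collectStringsSketch(value: unknown, depth = 0, out: string[] = []): string[] {
  if (depth > 10) return out; // depth limit: give up past 10 levels
  if (typeof value === 'string') {
    if (value.length > 8) out.push(value); // too-short strings can't be real keys
  } else if (Array.isArray(value)) {
    for (const item of value) collectStringsSketch(item, depth + 1, out);
  } else if (value && typeof value === 'object') {
    for (const v of Object.values(value)) collectStringsSketch(v, depth + 1, out);
  }
  return out;
}
```

Under these rules a secret wrapped in 9 `level` objects sits at depth 10 and is still collected, while 12 wrappers push it past the cutoff — matching the two depth-limit cases above.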

  // ===========================================================================
  // maskSecret()
  // ===========================================================================

  describe('maskSecret()', () => {
    it('should mask a long value showing first 6 and last 4 characters', () => {
      const result = maskSecret('sk-proj-abc123def456ghi789jkl0');
      expect(result).toBe('sk-pro****jkl0');
    });

    it('should mask a 14-character value with head and tail', () => {
      // Exactly at boundary: 14 chars >= 14, so head+tail format
      const result = maskSecret('abcdefghijklmn');
      expect(result).toBe('abcdef****klmn');
    });

    it('should fully mask a value shorter than 14 characters', () => {
      expect(maskSecret('1234567890')).toBe('****');
      expect(maskSecret('short')).toBe('****');
      expect(maskSecret('a')).toBe('****');
      expect(maskSecret('abcdefghijk')).toBe('****'); // 11 chars
      expect(maskSecret('abcdefghijklm')).toBe('****'); // 13 chars
    });

    it('should handle empty string', () => {
      expect(maskSecret('')).toBe('****');
    });
  });
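The four cases above fully determine the masking rule: values of 14+ characters keep the first 6 and last 4 characters around a fixed `****`, and anything shorter (including the empty string) is masked entirely. A sketch consistent with those assertions (`maskSecretSketch` is illustrative, not the actual export):

```typescript
// Masking rule inferred from the maskSecret() tests: head + '****' + tail
// for values long enough that revealing 10 characters is safe; full mask
// otherwise.
export function maskSecretSketch(value: string): string {
  if (value.length < 14) return '****'; // too short: revealing any part leaks most of it
  return `${value.slice(0, 6)}****${value.slice(-4)}`;
}
```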

  // ===========================================================================
  // Full workflow scan — realistic workflow JSON
  // ===========================================================================

  describe('full workflow scan', () => {
    it('should detect a hardcoded key in a realistic HTTP Request node', () => {
      const workflow = {
        id: 'wf-42',
        name: 'Send Slack Message',
        nodes: [
          {
            name: 'Webhook Trigger',
            type: 'n8n-nodes-base.webhook',
            parameters: {
              path: '/incoming',
              method: 'POST',
            },
          },
          {
            name: 'HTTP Request',
            type: 'n8n-nodes-base.httpRequest',
            parameters: {
              url: 'https://api.openai.com/v1/chat/completions',
              method: 'POST',
              headers: {
                values: [
                  {
                    name: 'Authorization',
                    value: 'Bearer sk-proj-RealKeyThatShouldNotBeHere1234567890',
                  },
                ],
              },
              body: {
                json: {
                  model: 'gpt-4',
                  messages: [{ role: 'user', content: 'Hello' }],
                },
              },
            },
          },
          {
            name: 'Slack',
            type: 'n8n-nodes-base.slack',
            parameters: {
              channel: '#general',
              text: 'Response received',
            },
          },
        ],
      };

      const detections = scanWorkflow(workflow);
      expect(detections.length).toBeGreaterThanOrEqual(1);

      const openaiDetection = detections.find((d) => d.label === 'openai_key');
      expect(openaiDetection).toBeDefined();
      expect(openaiDetection!.location.workflowId).toBe('wf-42');
      expect(openaiDetection!.location.workflowName).toBe('Send Slack Message');
      expect(openaiDetection!.location.nodeName).toBe('HTTP Request');
      expect(openaiDetection!.location.nodeType).toBe('n8n-nodes-base.httpRequest');
      // maskedSnippet should not contain the full key
      expect(openaiDetection!.maskedSnippet).toContain('****');
    });

    it('should return empty detections for a clean workflow', () => {
      const workflow = {
        id: 'wf-clean',
        name: 'Clean Workflow',
        nodes: [
          {
            name: 'Manual Trigger',
            type: 'n8n-nodes-base.manualTrigger',
            parameters: {},
          },
          {
            name: 'Set',
            type: 'n8n-nodes-base.set',
            parameters: {
              values: {
                string: [{ name: 'greeting', value: 'Hello World this is safe' }],
              },
            },
          },
        ],
      };

      const detections = scanWorkflow(workflow);
      expect(detections).toHaveLength(0);
    });
  });

  // ===========================================================================
  // pinData / staticData / settings scanning
  // ===========================================================================

  describe('pinData / staticData / settings scanning', () => {
    it('should detect secrets embedded in pinData', () => {
      const wf = makeWorkflow(
        { url: 'https://example.com' },
        {
          pinData: {
            'HTTP Request': [
              { json: { apiKey: 'sk-proj-abc123def456ghi789jkl0' } },
            ],
          },
        },
      );
      const detections = scanWorkflow(wf);
      const pinDetection = detections.find(
        (d) => d.label === 'openai_key' && d.location.nodeName === undefined,
      );
      expect(pinDetection).toBeDefined();
    });

    it('should detect secrets embedded in staticData', () => {
      const wf = makeWorkflow(
        { url: 'https://example.com' },
        {
          staticData: {
            lastProcessed: {
              token: 'ghp_1234567890abcdefghijklmnopqrstuvwxyz',
            },
          },
        },
      );
      const detections = scanWorkflow(wf);
      const staticDetection = detections.find(
        (d) => d.label === 'github_pat' && d.location.nodeName === undefined,
      );
      expect(staticDetection).toBeDefined();
    });

    it('should detect secrets in workflow settings', () => {
      const wf = makeWorkflow(
        { url: 'https://example.com' },
        {
          settings: {
            webhookSecret: 'sk_live_1234567890abcdef12345',
          },
        },
      );
      const detections = scanWorkflow(wf);
      const settingsDetection = detections.find(
        (d) => d.label === 'stripe_key' && d.location.nodeName === undefined,
      );
      expect(settingsDetection).toBeDefined();
    });

    it('should not flag pinData / staticData / settings when they are empty', () => {
      const wf = makeWorkflow(
        { url: 'https://example.com' },
        {
          pinData: {},
          staticData: {},
          settings: {},
        },
      );
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(0);
    });
  });

  // ===========================================================================
  // Detection metadata
  // ===========================================================================

  describe('detection metadata', () => {
    it('should include category and severity on each detection', () => {
      const wf = makeWorkflow({ key: 'AKIA1234567890ABCDEF' });
      const detections = scanWorkflow(wf);
      expect(detections).toHaveLength(1);
      expect(detections[0].category).toBe('Cloud/DevOps');
      expect(detections[0].severity).toBe('critical');
    });

    it('should set workflowId to empty string when id is missing', () => {
      const wf = {
        name: 'No ID Workflow',
        nodes: [
          {
            name: 'HTTP Request',
            type: 'n8n-nodes-base.httpRequest',
            parameters: { key: 'AKIA1234567890ABCDEF' },
          },
        ],
      };
      const detections = scanWorkflow(wf);
      expect(detections[0].location.workflowId).toBe('');
    });
  });

  // ===========================================================================
  // Pattern completeness sanity check
  // ===========================================================================

  describe('pattern definitions', () => {
    it('should have at least 40 secret patterns defined', () => {
      expect(SECRET_PATTERNS.length).toBeGreaterThanOrEqual(40);
    });

    it('should have PII patterns for email, phone, and credit card', () => {
      const labels = PII_PATTERNS.map((p) => p.label);
      expect(labels).toContain('email');
      expect(labels).toContain('phone');
      expect(labels).toContain('credit_card');
    });

    it('should have every pattern with a non-empty label and category', () => {
      for (const p of [...SECRET_PATTERNS, ...PII_PATTERNS]) {
        expect(p.label).toBeTruthy();
        expect(p.category).toBeTruthy();
        expect(['critical', 'high', 'medium']).toContain(p.severity);
      }
    });
  });
});
@@ -428,6 +428,22 @@ describe('WorkflowDiffEngine', () => {
      expect(result.errors![0].message).toContain('Correct structure:');
    });

    it('should reject prototype pollution via update path', async () => {
      const result = await diffEngine.applyDiff(baseWorkflow, {
        id: 'test',
        operations: [{
          type: 'updateNode' as const,
          nodeId: 'http-1',
          updates: {
            '__proto__.polluted': 'malicious'
          }
        }]
      });

      expect(result.success).toBe(false);
      expect(result.errors?.[0]?.message).toContain('forbidden key');
    });

    it('should apply __patch_find_replace to string properties (#642)', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
@@ -581,6 +597,520 @@ describe('WorkflowDiffEngine', () => {
    });
  });

  describe('PatchNodeField Operation', () => {
    it('should apply single find/replace patch', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'const x = 1;\nreturn x + 2;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.jsCode',
          patches: [{ find: 'x + 2', replace: 'x + 3' }]
        }]
      });

      expect(result.success).toBe(true);
      const codeNode = result.workflow.nodes.find((n: any) => n.name === 'Code');
      expect(codeNode?.parameters.jsCode).toBe('const x = 1;\nreturn x + 3;');
    });
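The tests that follow exercise the not-found, ambiguous-match, `replaceAll`, and `regex` variants of the patch semantics. A sketch of the find/replace rule they assert — a patch must match exactly once unless `replaceAll` is set, and `regex: true` treats `find` as a pattern (`applyPatchSketch` is an illustrative stand-in for the engine's internal logic, not its actual code):

```typescript
// Applies one find/replace patch to a string field, with the match-count
// rules the tests assert. Literal finds are regex-escaped; regex finds are
// compiled as-is with the global flag so all occurrences can be counted.
function applyPatchSketch(
  text: string,
  patch: { find: string; replace: string; regex?: boolean; replaceAll?: boolean },
): string {
  const re = patch.regex
    ? new RegExp(patch.find, 'g')
    : new RegExp(patch.find.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'), 'g');
  const count = (text.match(re) ?? []).length;
  if (count === 0) throw new Error(`"${patch.find}" not found`);
  if (count > 1 && !patch.replaceAll) {
    throw new Error(`"${patch.find}" found ${count} times; use replaceAll`);
  }
  // Replacement via callback so '$' in the replacement is taken literally.
  return text.replace(re, () => patch.replace);
}
```

Counting matches before replacing is what lets the engine turn an ambiguous patch into a hard error instead of silently rewriting the first occurrence.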

    it('should error when find string not found', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'const x = 1;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.jsCode',
          patches: [{ find: 'nonexistent text', replace: 'something' }]
        }]
      });

      expect(result.success).toBe(false);
      expect(result.errors?.[0]?.message).toContain('not found');
    });

    it('should error on ambiguous match (multiple occurrences)', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'const a = 1;\nconst b = 1;\nconst c = 1;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.jsCode',
          patches: [{ find: 'const', replace: 'let' }]
        }]
      });

      expect(result.success).toBe(false);
      expect(result.errors?.[0]?.message).toContain('3 times');
      expect(result.errors?.[0]?.message).toContain('replaceAll');
    });

    it('should replace all occurrences with replaceAll flag', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'const a = 1;\nconst b = 2;\nconst c = 3;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.jsCode',
          patches: [{ find: 'const', replace: 'let', replaceAll: true }]
        }]
      });

      expect(result.success).toBe(true);
      const codeNode = result.workflow.nodes.find((n: any) => n.name === 'Code');
      expect(codeNode?.parameters.jsCode).toBe('let a = 1;\nlet b = 2;\nlet c = 3;');
    });

    it('should apply multiple sequential patches', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'const a = 1;\nconst b = 2;\nreturn a + b;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.jsCode',
          patches: [
            { find: 'const a = 1', replace: 'const a = 10' },
            { find: 'const b = 2', replace: 'const b = 20' }
          ]
        }]
      });

      expect(result.success).toBe(true);
      const codeNode = result.workflow.nodes.find((n: any) => n.name === 'Code');
      expect(codeNode?.parameters.jsCode).toBe('const a = 10;\nconst b = 20;\nreturn a + b;');
    });

    it('should support regex pattern matching', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'const limit = 42;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.jsCode',
          patches: [{ find: 'const limit = \\d+', replace: 'const limit = 100', regex: true }]
        }]
      });

      expect(result.success).toBe(true);
      const codeNode = result.workflow.nodes.find((n: any) => n.name === 'Code');
      expect(codeNode?.parameters.jsCode).toBe('const limit = 100;');
    });

    it('should support regex with replaceAll', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'item1 = 10;\nitem2 = 20;\nitem3 = 30;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.jsCode',
          patches: [{ find: 'item\\d+', replace: 'val', regex: true, replaceAll: true }]
        }]
      });

      expect(result.success).toBe(true);
      const codeNode = result.workflow.nodes.find((n: any) => n.name === 'Code');
      expect(codeNode?.parameters.jsCode).toBe('val = 10;\nval = 20;\nval = 30;');
    });

    it('should error on ambiguous regex match without replaceAll', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'item1 = 10;\nitem2 = 20;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.jsCode',
          patches: [{ find: 'item\\d+', replace: 'val', regex: true }]
        }]
      });

      expect(result.success).toBe(false);
      expect(result.errors?.[0]?.message).toContain('2 times');
    });

    it('should reject invalid regex pattern in validation', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'const x = 1;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.jsCode',
          patches: [{ find: '(unclosed', replace: 'x', regex: true }]
        }]
      });

      expect(result.success).toBe(false);
      expect(result.errors?.[0]?.message).toContain('Invalid regex');
    });

    it('should error on non-existent field', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'const x = 1;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.nonExistent',
          patches: [{ find: 'x', replace: 'y' }]
        }]
      });

      expect(result.success).toBe(false);
      expect(result.errors?.[0]?.message).toContain('does not exist');
    });

    it('should error on non-string field', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { retryCount: 3 }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.retryCount',
          patches: [{ find: '3', replace: '5' }]
        }]
      });

      expect(result.success).toBe(false);
      expect(result.errors?.[0]?.message).toContain('expected string');
    });

    it('should error on missing node', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'NonExistent',
          fieldPath: 'parameters.jsCode',
          patches: [{ find: 'x', replace: 'y' }]
        }]
      });

      expect(result.success).toBe(false);
      expect(result.errors?.[0]?.message).toContain('not found');
    });

    it('should reject empty patches array', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'const x = 1;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.jsCode',
          patches: []
        }]
      });

      expect(result.success).toBe(false);
      expect(result.errors?.[0]?.message).toContain('non-empty');
    });

    it('should reject empty find string', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'const x = 1;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Code',
          fieldPath: 'parameters.jsCode',
          patches: [{ find: '', replace: 'y' }]
        }]
      });

      expect(result.success).toBe(false);
      expect(result.errors?.[0]?.message).toContain('must not be empty');
    });

    it('should work with nested fieldPath using dot notation', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'set-1',
        name: 'Set',
        type: 'n8n-nodes-base.set',
        typeVersion: 3,
        position: [900, 300],
        parameters: {
          options: {
            template: '<p>Hello World</p>'
          }
        }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
          type: 'patchNodeField' as const,
          nodeName: 'Set',
          fieldPath: 'parameters.options.template',
          patches: [{ find: 'Hello World', replace: 'Goodbye World' }]
        }]
      });

      expect(result.success).toBe(true);
      const setNode = result.workflow.nodes.find((n: any) => n.name === 'Set');
      expect(setNode?.parameters.options.template).toBe('<p>Goodbye World</p>');
    });

    it('should reject prototype pollution via fieldPath', async () => {
      const workflow = JSON.parse(JSON.stringify(baseWorkflow));
      workflow.nodes.push({
        id: 'code-1',
        name: 'Code',
        type: 'n8n-nodes-base.code',
        typeVersion: 1,
        position: [900, 300],
        parameters: { jsCode: 'const x = 1;' }
      });

      const result = await diffEngine.applyDiff(workflow, {
        id: 'test',
        operations: [{
|
||||
type: 'patchNodeField' as const,
|
||||
nodeName: 'Code',
|
||||
fieldPath: '__proto__.polluted',
|
||||
patches: [{ find: 'x', replace: 'y' }]
|
||||
}]
|
||||
});
|
||||
|
||||
expect(result.success).toBe(false);
|
||||
expect(result.errors?.[0]?.message).toContain('forbidden key');
|
||||
});
|
||||
|
||||
it('should reject unsafe regex patterns (ReDoS)', async () => {
|
||||
const workflow = JSON.parse(JSON.stringify(baseWorkflow));
|
||||
workflow.nodes.push({
|
||||
id: 'code-1',
|
||||
name: 'Code',
|
||||
type: 'n8n-nodes-base.code',
|
||||
typeVersion: 1,
|
||||
position: [900, 300],
|
||||
parameters: { jsCode: 'const x = 1;' }
|
||||
});
|
||||
|
||||
const result = await diffEngine.applyDiff(workflow, {
|
||||
id: 'test',
|
||||
operations: [{
|
||||
type: 'patchNodeField' as const,
|
||||
nodeName: 'Code',
|
||||
fieldPath: 'parameters.jsCode',
|
||||
patches: [{ find: '(a+)+$', replace: 'safe', regex: true }]
|
||||
}]
|
||||
});
|
||||
|
||||
expect(result.success).toBe(false);
|
||||
expect(result.errors?.[0]?.message).toContain('unsafe regex');
|
||||
});
|
||||
|
||||
it('should reject too many patches', async () => {
|
||||
const workflow = JSON.parse(JSON.stringify(baseWorkflow));
|
||||
workflow.nodes.push({
|
||||
id: 'code-1',
|
||||
name: 'Code',
|
||||
type: 'n8n-nodes-base.code',
|
||||
typeVersion: 1,
|
||||
position: [900, 300],
|
||||
parameters: { jsCode: 'const x = 1;' }
|
||||
});
|
||||
|
||||
const patches = Array.from({ length: 51 }, (_, i) => ({
|
||||
find: `pattern${i}`,
|
||||
replace: `replacement${i}`
|
||||
}));
|
||||
|
||||
const result = await diffEngine.applyDiff(workflow, {
|
||||
id: 'test',
|
||||
operations: [{
|
||||
type: 'patchNodeField' as const,
|
||||
nodeName: 'Code',
|
||||
fieldPath: 'parameters.jsCode',
|
||||
patches
|
||||
}]
|
||||
});
|
||||
|
||||
expect(result.success).toBe(false);
|
||||
expect(result.errors?.[0]?.message).toContain('too many patches');
|
||||
});
|
||||
|
||||
it('should reject overly long regex patterns', async () => {
|
||||
const workflow = JSON.parse(JSON.stringify(baseWorkflow));
|
||||
workflow.nodes.push({
|
||||
id: 'code-1',
|
||||
name: 'Code',
|
||||
type: 'n8n-nodes-base.code',
|
||||
typeVersion: 1,
|
||||
position: [900, 300],
|
||||
parameters: { jsCode: 'const x = 1;' }
|
||||
});
|
||||
|
||||
const result = await diffEngine.applyDiff(workflow, {
|
||||
id: 'test',
|
||||
operations: [{
|
||||
type: 'patchNodeField' as const,
|
||||
nodeName: 'Code',
|
||||
fieldPath: 'parameters.jsCode',
|
||||
patches: [{ find: 'a'.repeat(501), replace: 'b', regex: true }]
|
||||
}]
|
||||
});
|
||||
|
||||
expect(result.success).toBe(false);
|
||||
expect(result.errors?.[0]?.message).toContain('too long');
|
||||
});
|
||||
|
||||
it('should work with nodeId reference', async () => {
|
||||
const workflow = JSON.parse(JSON.stringify(baseWorkflow));
|
||||
workflow.nodes.push({
|
||||
id: 'code-1',
|
||||
name: 'Code',
|
||||
type: 'n8n-nodes-base.code',
|
||||
typeVersion: 1,
|
||||
position: [900, 300],
|
||||
parameters: { jsCode: 'const x = 1;' }
|
||||
});
|
||||
|
||||
const result = await diffEngine.applyDiff(workflow, {
|
||||
id: 'test',
|
||||
operations: [{
|
||||
type: 'patchNodeField' as const,
|
||||
nodeId: 'code-1',
|
||||
fieldPath: 'parameters.jsCode',
|
||||
patches: [{ find: 'const x = 1', replace: 'const x = 2' }]
|
||||
}]
|
||||
});
|
||||
|
||||
expect(result.success).toBe(true);
|
||||
const codeNode = result.workflow.nodes.find((n: any) => n.id === 'code-1');
|
||||
expect(codeNode?.parameters.jsCode).toBe('const x = 2;');
|
||||
});
|
||||
});
|
||||
|
||||
describe('MoveNode Operation', () => {
|
||||
it('should move node to new position', async () => {
|
||||
const operation: MoveNodeOperation = {
|
||||
|
||||
487 tests/unit/services/workflow-security-scanner.test.ts Normal file
@@ -0,0 +1,487 @@
import { describe, it, expect } from 'vitest';
import {
  scanWorkflows,
  type WorkflowSecurityReport,
  type AuditFinding,
  type CustomCheckType,
} from '@/services/workflow-security-scanner';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function makeWorkflow(overrides: Record<string, unknown> = {}) {
  return {
    id: 'wf-1',
    name: 'Test Workflow',
    active: false,
    nodes: [] as any[],
    settings: {},
    ...overrides,
  };
}

/** Shortcut to scan a single workflow and return its report. */
function scanOne(
  workflow: Record<string, unknown>,
  checks?: CustomCheckType[],
): WorkflowSecurityReport {
  return scanWorkflows([workflow as any], checks);
}

/** Return findings for a given category. */
function findingsOf(report: WorkflowSecurityReport, category: CustomCheckType): AuditFinding[] {
  return report.findings.filter((f) => f.category === category);
}

// ===========================================================================
// Check 1: Hardcoded secrets
// ===========================================================================

describe('workflow-security-scanner', () => {
  describe('hardcoded secrets check', () => {
    it('should detect a hardcoded secret in node parameters', () => {
      const wf = makeWorkflow({
        nodes: [
          {
            name: 'HTTP Request',
            type: 'n8n-nodes-base.httpRequest',
            parameters: {
              url: 'https://api.example.com',
              headers: {
                values: [{ name: 'Authorization', value: 'sk-proj-RealKey1234567890abcdef' }],
              },
            },
          },
        ],
      });
      const report = scanOne(wf, ['hardcoded_secrets']);
      const secrets = findingsOf(report, 'hardcoded_secrets');
      expect(secrets.length).toBeGreaterThanOrEqual(1);
      expect(secrets[0].title).toContain('openai_key');
      expect(secrets[0].id).toMatch(/^CRED-\d{3}$/);
    });

    it('should mark PII detections as review_recommended, not auto_fixable', () => {
      const wf = makeWorkflow({
        nodes: [
          {
            name: 'Send Email',
            type: 'n8n-nodes-base.httpRequest',
            parameters: { body: { json: { to: 'john.doe@example.com' } } },
          },
        ],
      });
      const report = scanOne(wf, ['hardcoded_secrets']);
      const piiFindings = findingsOf(report, 'hardcoded_secrets').filter(
        (f) => f.title.toLowerCase().includes('email'),
      );
      expect(piiFindings.length).toBeGreaterThanOrEqual(1);
      expect(piiFindings[0].remediationType).toBe('review_recommended');
      expect(piiFindings[0].remediation).toHaveLength(0);
    });

    it('should return no findings for a clean workflow', () => {
      const wf = makeWorkflow({
        nodes: [
          {
            name: 'Set',
            type: 'n8n-nodes-base.set',
            parameters: { values: { string: [{ name: 'greeting', value: 'hello world is safe' }] } },
          },
        ],
      });
      const report = scanOne(wf, ['hardcoded_secrets']);
      expect(findingsOf(report, 'hardcoded_secrets')).toHaveLength(0);
    });
  });

  // ===========================================================================
  // Check 2: Unauthenticated webhooks
  // ===========================================================================

  describe('unauthenticated webhooks check', () => {
    it('should flag a webhook with authentication set to "none"', () => {
      const wf = makeWorkflow({
        nodes: [
          {
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            parameters: { path: '/hook', authentication: 'none' },
          },
        ],
      });
      const report = scanOne(wf, ['unauthenticated_webhooks']);
      const webhooks = findingsOf(report, 'unauthenticated_webhooks');
      expect(webhooks).toHaveLength(1);
      expect(webhooks[0].title).toContain('Webhook');
    });

    it('should flag a webhook with no authentication parameter at all', () => {
      const wf = makeWorkflow({
        nodes: [
          {
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            parameters: { path: '/hook' },
          },
        ],
      });
      const report = scanOne(wf, ['unauthenticated_webhooks']);
      expect(findingsOf(report, 'unauthenticated_webhooks')).toHaveLength(1);
    });

    it('should NOT flag a webhook with headerAuth configured', () => {
      const wf = makeWorkflow({
        nodes: [
          {
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            parameters: { path: '/hook', authentication: 'headerAuth' },
          },
        ],
      });
      const report = scanOne(wf, ['unauthenticated_webhooks']);
      expect(findingsOf(report, 'unauthenticated_webhooks')).toHaveLength(0);
    });

    it('should NOT flag a webhook with basicAuth configured', () => {
      const wf = makeWorkflow({
        nodes: [
          {
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            parameters: { path: '/hook', authentication: 'basicAuth' },
          },
        ],
      });
      const report = scanOne(wf, ['unauthenticated_webhooks']);
      expect(findingsOf(report, 'unauthenticated_webhooks')).toHaveLength(0);
    });

    it('should assign severity high when the workflow is active', () => {
      const wf = makeWorkflow({
        active: true,
        nodes: [
          {
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            parameters: { path: '/hook', authentication: 'none' },
          },
        ],
      });
      const report = scanOne(wf, ['unauthenticated_webhooks']);
      const findings = findingsOf(report, 'unauthenticated_webhooks');
      expect(findings[0].severity).toBe('high');
      expect(findings[0].description).toContain('active');
    });

    it('should assign severity medium when the workflow is inactive', () => {
      const wf = makeWorkflow({
        active: false,
        nodes: [
          {
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            parameters: { path: '/hook', authentication: 'none' },
          },
        ],
      });
      const report = scanOne(wf, ['unauthenticated_webhooks']);
      expect(findingsOf(report, 'unauthenticated_webhooks')[0].severity).toBe('medium');
    });

    it('should NOT flag respondToWebhook nodes (they are response helpers, not triggers)', () => {
      const wf = makeWorkflow({
        nodes: [
          {
            name: 'Respond to Webhook',
            type: 'n8n-nodes-base.respondToWebhook',
            parameters: { respondWith: 'text', responseBody: 'OK' },
          },
        ],
      });
      const report = scanOne(wf, ['unauthenticated_webhooks']);
      expect(findingsOf(report, 'unauthenticated_webhooks')).toHaveLength(0);
    });

    it('should also detect formTrigger nodes', () => {
      const wf = makeWorkflow({
        nodes: [
          {
            name: 'Form Trigger',
            type: 'n8n-nodes-base.formTrigger',
            parameters: { path: '/form' },
          },
        ],
      });
      const report = scanOne(wf, ['unauthenticated_webhooks']);
      expect(findingsOf(report, 'unauthenticated_webhooks')).toHaveLength(1);
    });

    it('should include remediation steps with auto_fixable type', () => {
      const wf = makeWorkflow({
        nodes: [
          {
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            parameters: { path: '/hook' },
          },
        ],
      });
      const report = scanOne(wf, ['unauthenticated_webhooks']);
      const finding = findingsOf(report, 'unauthenticated_webhooks')[0];
      expect(finding.remediationType).toBe('auto_fixable');
      expect(finding.remediation).toBeDefined();
      expect(finding.remediation!.length).toBeGreaterThanOrEqual(1);
    });
  });

  // ===========================================================================
  // Check 3: Error handling gaps
  // ===========================================================================

  describe('error handling gaps check', () => {
    it('should flag a workflow with 3+ nodes and no error handling', () => {
      const wf = makeWorkflow({
        nodes: [
          { name: 'Trigger', type: 'n8n-nodes-base.manualTrigger', parameters: {} },
          { name: 'Step 1', type: 'n8n-nodes-base.set', parameters: {} },
          { name: 'Step 2', type: 'n8n-nodes-base.httpRequest', parameters: {} },
        ],
      });
      const report = scanOne(wf, ['error_handling']);
      const findings = findingsOf(report, 'error_handling');
      expect(findings).toHaveLength(1);
      expect(findings[0].id).toBe('ERR-001');
      expect(findings[0].severity).toBe('medium');
    });

    it('should NOT flag a workflow with continueOnFail enabled', () => {
      const wf = makeWorkflow({
        nodes: [
          { name: 'Trigger', type: 'n8n-nodes-base.manualTrigger', parameters: {} },
          { name: 'Step 1', type: 'n8n-nodes-base.set', parameters: {}, continueOnFail: true },
          { name: 'Step 2', type: 'n8n-nodes-base.httpRequest', parameters: {} },
        ],
      });
      const report = scanOne(wf, ['error_handling']);
      expect(findingsOf(report, 'error_handling')).toHaveLength(0);
    });

    it('should NOT flag a workflow with onError set to continueErrorOutput', () => {
      const wf = makeWorkflow({
        nodes: [
          { name: 'Trigger', type: 'n8n-nodes-base.manualTrigger', parameters: {} },
          { name: 'Step 1', type: 'n8n-nodes-base.set', parameters: {}, onError: 'continueErrorOutput' },
          { name: 'Step 2', type: 'n8n-nodes-base.httpRequest', parameters: {} },
        ],
      });
      const report = scanOne(wf, ['error_handling']);
      expect(findingsOf(report, 'error_handling')).toHaveLength(0);
    });

    it('should NOT flag a workflow with an errorTrigger node', () => {
      const wf = makeWorkflow({
        nodes: [
          { name: 'Trigger', type: 'n8n-nodes-base.manualTrigger', parameters: {} },
          { name: 'Step 1', type: 'n8n-nodes-base.set', parameters: {} },
          { name: 'Error Handler', type: 'n8n-nodes-base.errorTrigger', parameters: {} },
        ],
      });
      const report = scanOne(wf, ['error_handling']);
      expect(findingsOf(report, 'error_handling')).toHaveLength(0);
    });

    it('should NOT flag a workflow with fewer than 3 nodes', () => {
      const wf = makeWorkflow({
        nodes: [
          { name: 'Trigger', type: 'n8n-nodes-base.manualTrigger', parameters: {} },
          { name: 'Step 1', type: 'n8n-nodes-base.set', parameters: {} },
        ],
      });
      const report = scanOne(wf, ['error_handling']);
      expect(findingsOf(report, 'error_handling')).toHaveLength(0);
    });

    it('should NOT treat onError=stopWorkflow as valid error handling', () => {
      // stopWorkflow is the default and does NOT count as custom error handling,
      // so the workflow is still flagged
      const wf = makeWorkflow({
        nodes: [
          { name: 'Trigger', type: 'n8n-nodes-base.manualTrigger', parameters: {} },
          { name: 'Step 1', type: 'n8n-nodes-base.set', parameters: {}, onError: 'stopWorkflow' },
          { name: 'Step 2', type: 'n8n-nodes-base.httpRequest', parameters: {} },
        ],
      });
      const report = scanOne(wf, ['error_handling']);
      expect(findingsOf(report, 'error_handling')).toHaveLength(1);
    });
  });

  // ===========================================================================
  // Check 4: Data retention settings
  // ===========================================================================

  describe('data retention settings check', () => {
    it('should flag when both save settings are set to all', () => {
      const wf = makeWorkflow({
        nodes: [{ name: 'Trigger', type: 'n8n-nodes-base.manualTrigger', parameters: {} }],
        settings: {
          saveDataErrorExecution: 'all',
          saveDataSuccessExecution: 'all',
        },
      });
      const report = scanOne(wf, ['data_retention']);
      const findings = findingsOf(report, 'data_retention');
      expect(findings).toHaveLength(1);
      expect(findings[0].id).toBe('RETENTION-001');
      expect(findings[0].severity).toBe('low');
    });

    it('should NOT flag when only error execution is set to all', () => {
      const wf = makeWorkflow({
        nodes: [{ name: 'Trigger', type: 'n8n-nodes-base.manualTrigger', parameters: {} }],
        settings: {
          saveDataErrorExecution: 'all',
          saveDataSuccessExecution: 'none',
        },
      });
      const report = scanOne(wf, ['data_retention']);
      expect(findingsOf(report, 'data_retention')).toHaveLength(0);
    });

    it('should NOT flag when only success execution is set to all', () => {
      const wf = makeWorkflow({
        nodes: [{ name: 'Trigger', type: 'n8n-nodes-base.manualTrigger', parameters: {} }],
        settings: {
          saveDataErrorExecution: 'none',
          saveDataSuccessExecution: 'all',
        },
      });
      const report = scanOne(wf, ['data_retention']);
      expect(findingsOf(report, 'data_retention')).toHaveLength(0);
    });

    it('should NOT flag when no settings are present', () => {
      const wf = makeWorkflow({
        nodes: [{ name: 'Trigger', type: 'n8n-nodes-base.manualTrigger', parameters: {} }],
      });
      const report = scanOne(wf, ['data_retention']);
      expect(findingsOf(report, 'data_retention')).toHaveLength(0);
    });
  });

  // ===========================================================================
  // Selective checks (customChecks filter)
  // ===========================================================================

  describe('selective checks', () => {
    it('should only run the requested checks', () => {
      const wf = makeWorkflow({
        active: true,
        nodes: [
          {
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            parameters: { path: '/hook', authentication: 'none' },
          },
          { name: 'Step 1', type: 'n8n-nodes-base.set', parameters: {} },
          {
            name: 'HTTP Request',
            type: 'n8n-nodes-base.httpRequest',
            parameters: {
              headers: { values: [{ name: 'Auth', value: 'sk-proj-RealKey1234567890abcdef' }] },
            },
          },
        ],
        settings: { saveDataErrorExecution: 'all', saveDataSuccessExecution: 'all' },
      });

      // Only run webhook check
      const report = scanOne(wf, ['unauthenticated_webhooks']);
      const categories = new Set(report.findings.map((f) => f.category));
      expect(categories.has('unauthenticated_webhooks')).toBe(true);
      expect(categories.has('hardcoded_secrets')).toBe(false);
      expect(categories.has('error_handling')).toBe(false);
      expect(categories.has('data_retention')).toBe(false);
    });

    it('should run all checks when no filter is provided', () => {
      const wf = makeWorkflow({
        nodes: [
          {
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            parameters: { path: '/hook' },
          },
          { name: 'Step 1', type: 'n8n-nodes-base.set', parameters: {} },
          {
            name: 'HTTP Request',
            type: 'n8n-nodes-base.httpRequest',
            parameters: {
              headers: { values: [{ name: 'Auth', value: 'sk-proj-RealKey1234567890abcdef' }] },
            },
          },
        ],
        settings: { saveDataErrorExecution: 'all', saveDataSuccessExecution: 'all' },
      });

      const report = scanWorkflows([wf as any]);
      const categories = new Set(report.findings.map((f) => f.category));
      // Should have findings from at least webhook and secrets checks
      expect(categories.has('unauthenticated_webhooks')).toBe(true);
      expect(categories.has('hardcoded_secrets')).toBe(true);
    });
  });

  // ===========================================================================
  // Summary counts
  // ===========================================================================

  describe('summary counts', () => {
    it('should correctly aggregate severity counts', () => {
      const wf = makeWorkflow({
        active: true,
        nodes: [
          {
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            parameters: { path: '/hook', authentication: 'none' },
          },
          { name: 'Step 1', type: 'n8n-nodes-base.set', parameters: {} },
          {
            name: 'HTTP Request',
            type: 'n8n-nodes-base.httpRequest',
            parameters: {
              headers: { values: [{ name: 'Auth', value: 'sk-proj-RealKey1234567890abcdef' }] },
            },
          },
        ],
        settings: { saveDataErrorExecution: 'all', saveDataSuccessExecution: 'all' },
      });

      const report = scanOne(wf);

      expect(report.summary.total).toBe(report.findings.length);
      expect(
        report.summary.critical +
          report.summary.high +
          report.summary.medium +
          report.summary.low,
      ).toBe(report.summary.total);
    });

    it('should report correct workflowsScanned count', () => {
      const wf1 = makeWorkflow({ id: 'wf-1', name: 'WF1', nodes: [] });
      const wf2 = makeWorkflow({ id: 'wf-2', name: 'WF2', nodes: [] });
      const report = scanWorkflows([wf1, wf2] as any[]);
      expect(report.workflowsScanned).toBe(2);
    });

    it('should track scan duration in milliseconds', () => {
      const wf = makeWorkflow({ nodes: [] });
      const report = scanOne(wf);
      expect(report.scanDurationMs).toBeGreaterThanOrEqual(0);
    });
  });
});