Mirror of https://github.com/czlonkowski/n8n-mcp.git
Synced 2026-01-30 22:42:04 +00:00

Compare commits: 12 commits
| Author | SHA1 | Date |
|---|---|---|
|  | 481d74c249 |  |
|  | 6f21a717cd |  |
|  | 75b55776f2 |  |
|  | fa04ece8ea |  |
|  | acfffbb0f2 |  |
|  | 3b2be46119 |  |
|  | 671c175d71 |  |
|  | 09e69df5a7 |  |
|  | f150802bed |  |
|  | 5960d2826e |  |
|  | 78abda601a |  |
|  | 247c8d74af |  |
.gitignore (vendored): 3 additions

```diff
@@ -130,3 +130,6 @@ n8n-mcp-wrapper.sh
 # MCP configuration files
 .mcp.json
+
+# Telemetry configuration (user-specific)
+~/.n8n-mcp/
```
CHANGELOG.md (new file, 50 lines)

# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [2.14.0] - 2025-09-26

### Added
- Anonymous telemetry system with Supabase integration to understand usage patterns
  - Tracks active users with deterministic anonymous IDs
  - Records MCP tool usage frequency and error rates
  - Captures sanitized workflow structures on successful validation
  - Monitors common error patterns for improvement insights
  - Zero-configuration design with opt-out support via the `N8N_MCP_TELEMETRY_DISABLED` environment variable
- Enhanced telemetry tracking methods:
  - `trackSearchQuery` - Records search patterns and result counts
  - `trackValidationDetails` - Captures validation errors and warnings
  - `trackToolSequence` - Tracks AI agent tool usage sequences
  - `trackNodeConfiguration` - Records common node configuration patterns
  - `trackPerformanceMetric` - Monitors operation performance
- Privacy-focused workflow sanitization:
  - Removes all sensitive data (URLs, API keys, credentials)
  - Generates workflow hashes for deduplication
  - Preserves only structural information
- Comprehensive test coverage for telemetry components (91%+ coverage)
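The hash-based deduplication described in the changelog can be sketched as follows. This is an illustration only; `workflowHash` and the `SanitizedWorkflow` shape are assumptions, not the project's actual implementation:

```typescript
import { createHash } from "node:crypto";

// Illustrative input shape: a workflow already stripped of parameter values.
interface SanitizedWorkflow {
  nodes: { type: string }[];
  connections: Record<string, unknown>;
}

// Hash only the structural shape, so identical structures deduplicate while
// no parameter values are retained. Sorting makes the hash independent of
// the order in which nodes happen to be listed.
export function workflowHash(workflow: SanitizedWorkflow): string {
  const shape = JSON.stringify({
    nodes: workflow.nodes.map(n => n.type).sort(),
    connections: Object.keys(workflow.connections).sort(),
  });
  return createHash("sha256").update(shape).digest("hex");
}
```

Because the hash is one-way, a stored hash cannot be reversed into the original workflow; it only answers "have we seen this structure before?".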
### Fixed
- Fixed TypeErrors in `get_node_info`, `get_node_essentials`, and `get_node_documentation` tools that were affecting 50% of calls
  - Added null safety checks for undefined node properties
- Fixed multi-process telemetry issues with immediate flush strategy
- Resolved RLS policy and permission issues with Supabase
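The null-safety fixes above follow the standard optional-chaining pattern; a minimal sketch, assuming a simplified `NodeRecord` shape (the real node records and function names differ):

```typescript
// Illustrative shape only; actual node records carry many more fields.
interface NodeRecord {
  properties?: { displayName?: string }[];
}

// Optional chaining plus nullish coalescing: a missing or undefined
// `properties` array yields a default instead of throwing
// "Cannot read properties of undefined".
export function displayNames(node: NodeRecord | undefined): string[] {
  return (node?.properties ?? []).map(p => p.displayName ?? "(unnamed)");
}
```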
### Changed
- Updated Docker configuration to include Supabase client for telemetry support
- Enhanced workflow validation tools to track validated workflows
- Improved error handling with proper null coalescing operators

### Documentation
- Added PRIVACY.md with comprehensive privacy policy
- Added telemetry configuration instructions to README
- Updated CLAUDE.md with telemetry system architecture

## Previous Versions

For changes in previous versions, please refer to the git history and release notes.
```diff
@@ -15,7 +15,7 @@ RUN --mount=type=cache,target=/root/.npm \
     npm install --no-save typescript@^5.8.3 @types/node@^22.15.30 @types/express@^5.0.3 \
     @modelcontextprotocol/sdk@^1.12.1 dotenv@^16.5.0 express@^5.1.0 axios@^1.10.0 \
     n8n-workflow@^1.96.0 uuid@^11.0.5 @types/uuid@^10.0.0 \
-    openai@^4.77.0 zod@^3.24.1 lru-cache@^11.2.1
+    openai@^4.77.0 zod@^3.24.1 lru-cache@^11.2.1 @supabase/supabase-js@^2.57.4

 # Copy source and build
 COPY src ./src
@@ -74,6 +74,10 @@ USER nodejs
 # Set Docker environment flag
 ENV IS_DOCKER=true

+# Telemetry: Anonymous usage statistics are ENABLED by default
+# To opt-out, uncomment the following line:
+# ENV N8N_MCP_TELEMETRY_DISABLED=true

 # Expose HTTP port
 EXPOSE 3000
```
PRIVACY.md (new file, 69 lines)

# Privacy Policy for n8n-mcp Telemetry

## Overview
n8n-mcp collects anonymous usage statistics to help improve the tool. This data collection is designed to respect user privacy while providing valuable insights into how the tool is used.

## What We Collect
- **Anonymous User ID**: A hashed identifier derived from your machine characteristics (no personal information)
- **Tool Usage**: Which MCP tools are used and their performance metrics
- **Workflow Patterns**: Sanitized workflow structures (all sensitive data removed)
- **Error Types**: Categories of errors encountered (no error messages with user data)
- **System Information**: Platform, architecture, Node.js version, and n8n-mcp version
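A deterministic anonymous ID of this kind is typically derived by hashing coarse machine characteristics; the sketch below shows the general technique and is an assumption, not the project's actual code:

```typescript
import { createHash } from "node:crypto";
import * as os from "node:os";

// Hypothetical sketch: combine coarse machine characteristics and hash them.
// The hash is one-way, so the inputs cannot be recovered from the ID, and the
// same machine always produces the same ID with no state stored anywhere.
export function anonymousUserId(): string {
  const fingerprint = [os.hostname(), os.platform(), os.arch()].join("|");
  return createHash("sha256").update(fingerprint).digest("hex").slice(0, 32);
}
```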
## What We DON'T Collect
- Personal information or usernames
- API keys, tokens, or credentials
- URLs, endpoints, or hostnames
- Email addresses or contact information
- File paths or directory structures
- Actual workflow data or parameters
- Database connection strings
- Any authentication information

## Data Sanitization
All collected data undergoes automatic sanitization:
- URLs are replaced with `[URL]` or `[REDACTED]`
- Long alphanumeric strings (potential keys) are replaced with `[KEY]`
- Email addresses are replaced with `[EMAIL]`
- Authentication-related fields are completely removed
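The replacement rules above amount to a chain of regex substitutions. The patterns below are assumptions chosen for illustration, not the project's actual sanitizer:

```typescript
// Hypothetical sketch of the documented rules. Order matters: URLs first
// (they may contain key-like strings), then emails, then long tokens.
export function sanitize(value: string): string {
  return value
    .replace(/https?:\/\/[^\s"']+/g, "[URL]")
    .replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[EMAIL]")
    .replace(/[A-Za-z0-9_-]{20,}/g, "[KEY]");
}
```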
## Data Storage
- Data is stored securely using Supabase
- Anonymous users have write-only access (cannot read data back)
- Row Level Security (RLS) policies prevent data access by anonymous users

## Opt-Out
You can disable telemetry at any time:
```bash
npx n8n-mcp telemetry disable
```

To re-enable:
```bash
npx n8n-mcp telemetry enable
```

To check status:
```bash
npx n8n-mcp telemetry status
```

## Data Usage
Collected data is used solely to:
- Understand which features are most used
- Identify common error patterns
- Improve tool performance and reliability
- Guide development priorities

## Data Retention
- Data is retained for analysis purposes
- No personal identification is possible from the collected data

## Changes to This Policy
We may update this privacy policy from time to time. Updates will be reflected in this document.

## Contact
For questions about telemetry or privacy, please open an issue on GitHub:
https://github.com/czlonkowski/n8n-mcp/issues

Last updated: 2025-09-25
README.md (45 lines changed)

@@ -211,6 +211,51 @@ Add to Claude Desktop config:

**Restart Claude Desktop after updating configuration** - That's it! 🎉

## 🔐 Privacy & Telemetry

n8n-mcp collects anonymous usage statistics to improve the tool. [View our privacy policy](./PRIVACY.md).

### Opting Out

**For npx users:**
```bash
npx n8n-mcp telemetry disable
```

**For Docker users:**
Add the following environment variable to your Docker configuration:
```json
"-e", "N8N_MCP_TELEMETRY_DISABLED=true"
```

Example in Claude Desktop config:
```json
{
  "mcpServers": {
    "n8n-mcp": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "--init",
        "-e", "MCP_MODE=stdio",
        "-e", "LOG_LEVEL=error",
        "-e", "N8N_MCP_TELEMETRY_DISABLED=true",
        "ghcr.io/czlonkowski/n8n-mcp:latest"
      ]
    }
  }
}
```

**For docker-compose users:**
Set in your environment file or docker-compose.yml:
```yaml
environment:
  N8N_MCP_TELEMETRY_DISABLED: "true"
```
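All of these opt-out paths resolve to the same environment check. A minimal sketch of that resolution, with the function name assumed and the alternative variable names taken from the project's test scripts:

```typescript
// Hypothetical sketch: any documented opt-out variable set to "true" disables
// telemetry and takes precedence over any persisted configuration.
const OPT_OUT_VARS = [
  "N8N_MCP_TELEMETRY_DISABLED",
  "TELEMETRY_DISABLED",
  "DISABLE_TELEMETRY",
];

export function telemetryEnabled(
  env: Record<string, string | undefined>,
  configEnabled = true,
): boolean {
  if (OPT_OUT_VARS.some(name => env[name] === "true")) return false;
  return configEnabled;
}
```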
## 💖 Support This Project

<div align="center">
```diff
@@ -23,7 +23,11 @@ services:
       # Database
       NODE_DB_PATH: ${NODE_DB_PATH:-/app/data/nodes.db}
       REBUILD_ON_START: ${REBUILD_ON_START:-false}

+      # Telemetry: Anonymous usage statistics are ENABLED by default
+      # To opt-out, uncomment and set to 'true':
+      # N8N_MCP_TELEMETRY_DISABLED: ${N8N_MCP_TELEMETRY_DISABLED:-true}

       # Optional: n8n API configuration (enables 16 additional management tools)
       # Uncomment and configure to enable n8n workflow management
       # N8N_API_URL: ${N8N_API_URL}
```
@@ -14,8 +14,6 @@ args = ["n8n-mcp"]
env = { "MCP_MODE" = "stdio", "LOG_LEVEL" = "error", "DISABLE_CONSOLE_OUTPUT" = "true" }
```



### Full configuration (with n8n management tools):
```toml
[mcp_servers.n8n]
package-lock.json (generated, 113 lines changed)

```diff
@@ -1,16 +1,17 @@
 {
   "name": "n8n-mcp",
-  "version": "2.12.1",
+  "version": "2.13.2",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "n8n-mcp",
-      "version": "2.12.1",
+      "version": "2.13.2",
       "license": "MIT",
       "dependencies": {
         "@modelcontextprotocol/sdk": "^1.13.2",
         "@n8n/n8n-nodes-langchain": "^1.111.1",
+        "@supabase/supabase-js": "^2.57.4",
         "dotenv": "^16.5.0",
         "express": "^5.1.0",
         "lru-cache": "^11.2.1",
@@ -12328,6 +12329,68 @@
         "@opentelemetry/semantic-conventions": "^1.28.0"
       }
     },
+    "node_modules/@n8n/n8n-nodes-langchain/node_modules/@supabase/auth-js": {
+      "version": "2.69.1",
+      "resolved": "https://registry.npmjs.org/@supabase/auth-js/-/auth-js-2.69.1.tgz",
+      "integrity": "sha512-FILtt5WjCNzmReeRLq5wRs3iShwmnWgBvxHfqapC/VoljJl+W8hDAyFmf1NVw3zH+ZjZ05AKxiKxVeb0HNWRMQ==",
+      "license": "MIT",
+      "dependencies": {
+        "@supabase/node-fetch": "^2.6.14"
+      }
+    },
+    "node_modules/@n8n/n8n-nodes-langchain/node_modules/@supabase/functions-js": {
+      "version": "2.4.4",
+      "resolved": "https://registry.npmjs.org/@supabase/functions-js/-/functions-js-2.4.4.tgz",
+      "integrity": "sha512-WL2p6r4AXNGwop7iwvul2BvOtuJ1YQy8EbOd0dhG1oN1q8el/BIRSFCFnWAMM/vJJlHWLi4ad22sKbKr9mvjoA==",
+      "license": "MIT",
+      "dependencies": {
+        "@supabase/node-fetch": "^2.6.14"
+      }
+    },
+    "node_modules/@n8n/n8n-nodes-langchain/node_modules/@supabase/postgrest-js": {
+      "version": "1.19.4",
+      "resolved": "https://registry.npmjs.org/@supabase/postgrest-js/-/postgrest-js-1.19.4.tgz",
+      "integrity": "sha512-O4soKqKtZIW3olqmbXXbKugUtByD2jPa8kL2m2c1oozAO11uCcGrRhkZL0kVxjBLrXHE0mdSkFsMj7jDSfyNpw==",
+      "license": "MIT",
+      "dependencies": {
+        "@supabase/node-fetch": "^2.6.14"
+      }
+    },
+    "node_modules/@n8n/n8n-nodes-langchain/node_modules/@supabase/realtime-js": {
+      "version": "2.11.9",
+      "resolved": "https://registry.npmjs.org/@supabase/realtime-js/-/realtime-js-2.11.9.tgz",
+      "integrity": "sha512-fLseWq8tEPCO85x3TrV9Hqvk7H4SGOqnFQ223NPJSsxjSYn0EmzU1lvYO6wbA0fc8DE94beCAiiWvGvo4g33lQ==",
+      "license": "MIT",
+      "dependencies": {
+        "@supabase/node-fetch": "^2.6.13",
+        "@types/phoenix": "^1.6.6",
+        "@types/ws": "^8.18.1",
+        "ws": "^8.18.2"
+      }
+    },
+    "node_modules/@n8n/n8n-nodes-langchain/node_modules/@supabase/storage-js": {
+      "version": "2.7.1",
+      "resolved": "https://registry.npmjs.org/@supabase/storage-js/-/storage-js-2.7.1.tgz",
+      "integrity": "sha512-asYHcyDR1fKqrMpytAS1zjyEfvxuOIp1CIXX7ji4lHHcJKqyk+sLl/Vxgm4sN6u8zvuUtae9e4kDxQP2qrwWBA==",
+      "license": "MIT",
+      "dependencies": {
+        "@supabase/node-fetch": "^2.6.14"
+      }
+    },
+    "node_modules/@n8n/n8n-nodes-langchain/node_modules/@supabase/supabase-js": {
+      "version": "2.49.9",
+      "resolved": "https://registry.npmjs.org/@supabase/supabase-js/-/supabase-js-2.49.9.tgz",
+      "integrity": "sha512-lB2A2X8k1aWAqvlpO4uZOdfvSuZ2s0fCMwJ1Vq6tjWsi3F+au5lMbVVn92G0pG8gfmis33d64Plkm6eSDs6jRA==",
+      "license": "MIT",
+      "dependencies": {
+        "@supabase/auth-js": "2.69.1",
+        "@supabase/functions-js": "2.4.4",
+        "@supabase/node-fetch": "2.6.15",
+        "@supabase/postgrest-js": "1.19.4",
+        "@supabase/realtime-js": "2.11.9",
+        "@supabase/storage-js": "2.7.1"
+      }
+    },
     "node_modules/@n8n/n8n-nodes-langchain/node_modules/@types/connect": {
       "version": "3.4.36",
       "resolved": "https://registry.npmjs.org/@types/connect/-/connect-3.4.36.tgz",
@@ -15647,18 +15710,18 @@
       "license": "MIT"
     },
     "node_modules/@supabase/auth-js": {
-      "version": "2.69.1",
-      "resolved": "https://registry.npmjs.org/@supabase/auth-js/-/auth-js-2.69.1.tgz",
-      "integrity": "sha512-FILtt5WjCNzmReeRLq5wRs3iShwmnWgBvxHfqapC/VoljJl+W8hDAyFmf1NVw3zH+ZjZ05AKxiKxVeb0HNWRMQ==",
+      "version": "2.71.1",
+      "resolved": "https://registry.npmjs.org/@supabase/auth-js/-/auth-js-2.71.1.tgz",
+      "integrity": "sha512-mMIQHBRc+SKpZFRB2qtupuzulaUhFYupNyxqDj5Jp/LyPvcWvjaJzZzObv6URtL/O6lPxkanASnotGtNpS3H2Q==",
       "license": "MIT",
       "dependencies": {
         "@supabase/node-fetch": "^2.6.14"
       }
     },
     "node_modules/@supabase/functions-js": {
-      "version": "2.4.4",
-      "resolved": "https://registry.npmjs.org/@supabase/functions-js/-/functions-js-2.4.4.tgz",
-      "integrity": "sha512-WL2p6r4AXNGwop7iwvul2BvOtuJ1YQy8EbOd0dhG1oN1q8el/BIRSFCFnWAMM/vJJlHWLi4ad22sKbKr9mvjoA==",
+      "version": "2.4.6",
+      "resolved": "https://registry.npmjs.org/@supabase/functions-js/-/functions-js-2.4.6.tgz",
+      "integrity": "sha512-bhjZ7rmxAibjgmzTmQBxJU6ZIBCCJTc3Uwgvdi4FewueUTAGO5hxZT1Sj6tiD+0dSXf9XI87BDdJrg12z8Uaew==",
       "license": "MIT",
       "dependencies": {
         "@supabase/node-fetch": "^2.6.14"
@@ -15677,18 +15740,18 @@
       }
     },
     "node_modules/@supabase/postgrest-js": {
-      "version": "1.19.4",
-      "resolved": "https://registry.npmjs.org/@supabase/postgrest-js/-/postgrest-js-1.19.4.tgz",
-      "integrity": "sha512-O4soKqKtZIW3olqmbXXbKugUtByD2jPa8kL2m2c1oozAO11uCcGrRhkZL0kVxjBLrXHE0mdSkFsMj7jDSfyNpw==",
+      "version": "1.21.4",
+      "resolved": "https://registry.npmjs.org/@supabase/postgrest-js/-/postgrest-js-1.21.4.tgz",
+      "integrity": "sha512-TxZCIjxk6/dP9abAi89VQbWWMBbybpGWyvmIzTd79OeravM13OjR/YEYeyUOPcM1C3QyvXkvPZhUfItvmhY1IQ==",
       "license": "MIT",
       "dependencies": {
         "@supabase/node-fetch": "^2.6.14"
       }
     },
     "node_modules/@supabase/realtime-js": {
-      "version": "2.11.9",
-      "resolved": "https://registry.npmjs.org/@supabase/realtime-js/-/realtime-js-2.11.9.tgz",
-      "integrity": "sha512-fLseWq8tEPCO85x3TrV9Hqvk7H4SGOqnFQ223NPJSsxjSYn0EmzU1lvYO6wbA0fc8DE94beCAiiWvGvo4g33lQ==",
+      "version": "2.15.5",
+      "resolved": "https://registry.npmjs.org/@supabase/realtime-js/-/realtime-js-2.15.5.tgz",
+      "integrity": "sha512-/Rs5Vqu9jejRD8ZeuaWXebdkH+J7V6VySbCZ/zQM93Ta5y3mAmocjioa/nzlB6qvFmyylUgKVS1KpE212t30OA==",
       "license": "MIT",
       "dependencies": {
         "@supabase/node-fetch": "^2.6.13",
@@ -15698,26 +15761,26 @@
       }
     },
     "node_modules/@supabase/storage-js": {
-      "version": "2.7.1",
-      "resolved": "https://registry.npmjs.org/@supabase/storage-js/-/storage-js-2.7.1.tgz",
-      "integrity": "sha512-asYHcyDR1fKqrMpytAS1zjyEfvxuOIp1CIXX7ji4lHHcJKqyk+sLl/Vxgm4sN6u8zvuUtae9e4kDxQP2qrwWBA==",
+      "version": "2.12.1",
+      "resolved": "https://registry.npmjs.org/@supabase/storage-js/-/storage-js-2.12.1.tgz",
+      "integrity": "sha512-QWg3HV6Db2J81VQx0PqLq0JDBn4Q8B1FYn1kYcbla8+d5WDmTdwwMr+EJAxNOSs9W4mhKMv+EYCpCrTFlTj4VQ==",
       "license": "MIT",
       "dependencies": {
         "@supabase/node-fetch": "^2.6.14"
       }
     },
     "node_modules/@supabase/supabase-js": {
-      "version": "2.49.9",
-      "resolved": "https://registry.npmjs.org/@supabase/supabase-js/-/supabase-js-2.49.9.tgz",
-      "integrity": "sha512-lB2A2X8k1aWAqvlpO4uZOdfvSuZ2s0fCMwJ1Vq6tjWsi3F+au5lMbVVn92G0pG8gfmis33d64Plkm6eSDs6jRA==",
+      "version": "2.57.4",
+      "resolved": "https://registry.npmjs.org/@supabase/supabase-js/-/supabase-js-2.57.4.tgz",
+      "integrity": "sha512-LcbTzFhHYdwfQ7TRPfol0z04rLEyHabpGYANME6wkQ/kLtKNmI+Vy+WEM8HxeOZAtByUFxoUTTLwhXmrh+CcVw==",
       "license": "MIT",
       "dependencies": {
-        "@supabase/auth-js": "2.69.1",
-        "@supabase/functions-js": "2.4.4",
+        "@supabase/auth-js": "2.71.1",
+        "@supabase/functions-js": "2.4.6",
         "@supabase/node-fetch": "2.6.15",
-        "@supabase/postgrest-js": "1.19.4",
-        "@supabase/realtime-js": "2.11.9",
-        "@supabase/storage-js": "2.7.1"
+        "@supabase/postgrest-js": "1.21.4",
+        "@supabase/realtime-js": "2.15.5",
+        "@supabase/storage-js": "2.12.1"
       }
     },
     "node_modules/@supercharge/promise-pool": {
```
```diff
@@ -1,6 +1,6 @@
 {
   "name": "n8n-mcp",
-  "version": "2.13.2",
+  "version": "2.14.0",
   "description": "Integration between n8n workflow automation and Model Context Protocol (MCP)",
   "main": "dist/index.js",
   "bin": {
@@ -129,6 +129,7 @@
   "dependencies": {
     "@modelcontextprotocol/sdk": "^1.13.2",
     "@n8n/n8n-nodes-langchain": "^1.111.1",
+    "@supabase/supabase-js": "^2.57.4",
     "dotenv": "^16.5.0",
     "express": "^5.1.0",
     "lru-cache": "^11.2.1",
```
```diff
@@ -1,10 +1,11 @@
 {
   "name": "n8n-mcp-runtime",
-  "version": "2.13.2",
+  "version": "2.14.0",
   "description": "n8n MCP Server Runtime Dependencies Only",
   "private": true,
   "dependencies": {
     "@modelcontextprotocol/sdk": "^1.13.2",
+    "@supabase/supabase-js": "^2.57.4",
     "express": "^5.1.0",
     "dotenv": "^16.5.0",
     "lru-cache": "^11.2.1",
```
scripts/test-telemetry-debug.ts (new file, 118 lines)

```typescript
#!/usr/bin/env npx tsx
/**
 * Debug script for telemetry integration
 * Tests direct Supabase connection
 */

import { createClient } from '@supabase/supabase-js';
import dotenv from 'dotenv';

// Load environment variables
dotenv.config();

async function debugTelemetry() {
  console.log('🔍 Debugging Telemetry Integration\n');

  const supabaseUrl = process.env.SUPABASE_URL;
  const supabaseAnonKey = process.env.SUPABASE_ANON_KEY;

  if (!supabaseUrl || !supabaseAnonKey) {
    console.error('❌ Missing SUPABASE_URL or SUPABASE_ANON_KEY');
    process.exit(1);
  }

  console.log('Environment:');
  console.log('  URL:', supabaseUrl);
  console.log('  Key:', supabaseAnonKey.substring(0, 30) + '...');

  // Create Supabase client
  const supabase = createClient(supabaseUrl, supabaseAnonKey, {
    auth: {
      persistSession: false,
      autoRefreshToken: false,
    }
  });

  // Test 1: Direct insert to telemetry_events
  console.log('\n📝 Test 1: Direct insert to telemetry_events...');
  const testEvent = {
    user_id: 'test-user-123',
    event: 'test_event',
    properties: {
      test: true,
      timestamp: new Date().toISOString()
    }
  };

  const { data: eventData, error: eventError } = await supabase
    .from('telemetry_events')
    .insert([testEvent])
    .select();

  if (eventError) {
    console.error('❌ Event insert failed:', eventError);
  } else {
    console.log('✅ Event inserted successfully:', eventData);
  }

  // Test 2: Direct insert to telemetry_workflows
  console.log('\n📝 Test 2: Direct insert to telemetry_workflows...');
  const testWorkflow = {
    user_id: 'test-user-123',
    workflow_hash: 'test-hash-' + Date.now(),
    node_count: 3,
    node_types: ['webhook', 'http', 'slack'],
    has_trigger: true,
    has_webhook: true,
    complexity: 'simple',
    sanitized_workflow: {
      nodes: [],
      connections: {}
    }
  };

  const { data: workflowData, error: workflowError } = await supabase
    .from('telemetry_workflows')
    .insert([testWorkflow])
    .select();

  if (workflowError) {
    console.error('❌ Workflow insert failed:', workflowError);
  } else {
    console.log('✅ Workflow inserted successfully:', workflowData);
  }

  // Test 3: Try to read data (should fail with anon key due to RLS)
  console.log('\n📖 Test 3: Attempting to read data (should fail due to RLS)...');
  const { data: readData, error: readError } = await supabase
    .from('telemetry_events')
    .select('*')
    .limit(1);

  if (readError) {
    console.log('✅ Read correctly blocked by RLS:', readError.message);
  } else {
    console.log('⚠️ Unexpected: Read succeeded (RLS may not be working):', readData);
  }

  // Test 4: Check table existence
  console.log('\n🔍 Test 4: Verifying tables exist...');
  const { data: tables, error: tablesError } = await supabase
    .rpc('get_tables', { schema_name: 'public' })
    .select('*');

  if (tablesError) {
    // This is expected - the RPC function might not exist
    console.log('ℹ️ Cannot list tables (RPC function not available)');
  } else {
    console.log('Tables found:', tables);
  }

  console.log('\n✨ Debug completed! Check your Supabase dashboard for the test data.');
  console.log('Dashboard: https://supabase.com/dashboard/project/ydyufsohxdfpopqbubwk/editor');
}

debugTelemetry().catch(error => {
  console.error('❌ Debug failed:', error);
  process.exit(1);
});
```
scripts/test-telemetry-direct.ts (new file, 46 lines)

```typescript
#!/usr/bin/env npx tsx
/**
 * Direct telemetry test with hardcoded credentials
 */

import { createClient } from '@supabase/supabase-js';

const TELEMETRY_BACKEND = {
  URL: 'https://ydyufsohxdfpopqbubwk.supabase.co',
  ANON_KEY: 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6InlkeXVmc29oeGRmcG9wcWJ1YndrIiwicm9sZSI6ImFub24iLCJpYXQiOjE3Mzc2MzAxMDgsImV4cCI6MjA1MzIwNjEwOH0.LsUTx9OsNtnqg-jxXaJPc84aBHVDehHiMaFoF2Ir8s0'
};

async function testDirect() {
  console.log('🧪 Direct Telemetry Test\n');

  const supabase = createClient(TELEMETRY_BACKEND.URL, TELEMETRY_BACKEND.ANON_KEY, {
    auth: {
      persistSession: false,
      autoRefreshToken: false,
    }
  });

  const testEvent = {
    user_id: 'direct-test-' + Date.now(),
    event: 'direct_test',
    properties: {
      source: 'test-telemetry-direct.ts',
      timestamp: new Date().toISOString()
    }
  };

  console.log('Sending event:', testEvent);

  const { data, error } = await supabase
    .from('telemetry_events')
    .insert([testEvent]);

  if (error) {
    console.error('❌ Failed:', error);
  } else {
    console.log('✅ Success! Event sent directly to Supabase');
    console.log('Response:', data);
  }
}

testDirect().catch(console.error);
```
scripts/test-telemetry-env.ts (new file, 62 lines)

```typescript
#!/usr/bin/env npx tsx
/**
 * Test telemetry environment variable override
 */

import { TelemetryConfigManager } from '../src/telemetry/config-manager';
import { telemetry } from '../src/telemetry/telemetry-manager';

async function testEnvOverride() {
  console.log('🧪 Testing Telemetry Environment Variable Override\n');

  const configManager = TelemetryConfigManager.getInstance();

  // Test 1: Check current status without env var
  console.log('Test 1: Without environment variable');
  console.log('Is Enabled:', configManager.isEnabled());
  console.log('Status:', configManager.getStatus());

  // Test 2: Set environment variable and check again
  console.log('\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n');
  console.log('Test 2: With N8N_MCP_TELEMETRY_DISABLED=true');
  process.env.N8N_MCP_TELEMETRY_DISABLED = 'true';

  // Force reload by creating new instance (for testing)
  const newConfigManager = TelemetryConfigManager.getInstance();
  console.log('Is Enabled:', newConfigManager.isEnabled());
  console.log('Status:', newConfigManager.getStatus());

  // Test 3: Try tracking with env disabled
  console.log('\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n');
  console.log('Test 3: Attempting to track with telemetry disabled');
  telemetry.trackToolUsage('test_tool', true, 100);
  console.log('Tool usage tracking attempted (should be ignored)');

  // Test 4: Alternative env vars
  console.log('\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n');
  console.log('Test 4: Alternative environment variables');

  delete process.env.N8N_MCP_TELEMETRY_DISABLED;
  process.env.TELEMETRY_DISABLED = 'true';
  console.log('With TELEMETRY_DISABLED=true:', newConfigManager.isEnabled());

  delete process.env.TELEMETRY_DISABLED;
  process.env.DISABLE_TELEMETRY = 'true';
  console.log('With DISABLE_TELEMETRY=true:', newConfigManager.isEnabled());

  // Test 5: Env var takes precedence over config
  console.log('\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n');
  console.log('Test 5: Environment variable precedence');

  // Enable via config
  newConfigManager.enable();
  console.log('After enabling via config:', newConfigManager.isEnabled());

  // But env var should still override
  process.env.N8N_MCP_TELEMETRY_DISABLED = 'true';
  console.log('With env var set (should override config):', newConfigManager.isEnabled());

  console.log('\n✅ All tests completed!');
}

testEnvOverride().catch(console.error);
```
scripts/test-telemetry-integration.ts (new file, 94 lines)

```typescript
#!/usr/bin/env npx tsx
/**
 * Integration test for the telemetry manager
 */

import { telemetry } from '../src/telemetry/telemetry-manager';

async function testIntegration() {
  console.log('🧪 Testing Telemetry Manager Integration\n');

  // Check status
  console.log('Status:', telemetry.getStatus());

  // Track session start
  console.log('\nTracking session start...');
  telemetry.trackSessionStart();

  // Track tool usage
  console.log('Tracking tool usage...');
  telemetry.trackToolUsage('search_nodes', true, 150);
  telemetry.trackToolUsage('get_node_info', true, 75);
  telemetry.trackToolUsage('validate_workflow', false, 200);

  // Track errors
  console.log('Tracking errors...');
  telemetry.trackError('ValidationError', 'workflow_validation', 'validate_workflow');

  // Track a test workflow
  console.log('Tracking workflow creation...');
  const testWorkflow = {
    nodes: [
      {
        id: '1',
        type: 'n8n-nodes-base.webhook',
        name: 'Webhook',
        position: [0, 0],
        parameters: {
          path: '/test-webhook',
          httpMethod: 'POST'
        }
      },
      {
        id: '2',
        type: 'n8n-nodes-base.httpRequest',
        name: 'HTTP Request',
        position: [250, 0],
        parameters: {
          url: 'https://api.example.com/endpoint',
          method: 'POST',
          authentication: 'genericCredentialType',
          genericAuthType: 'httpHeaderAuth',
          sendHeaders: true,
          headerParameters: {
            parameters: [
              {
                name: 'Authorization',
                value: 'Bearer sk-1234567890abcdef'
              }
            ]
          }
        }
      },
      {
        id: '3',
        type: 'n8n-nodes-base.slack',
        name: 'Slack',
        position: [500, 0],
        parameters: {
          channel: '#notifications',
          text: 'Workflow completed!'
        }
      }
    ],
    connections: {
      '1': {
        main: [[{ node: '2', type: 'main', index: 0 }]]
      },
      '2': {
        main: [[{ node: '3', type: 'main', index: 0 }]]
      }
    }
  };

  telemetry.trackWorkflowCreation(testWorkflow, true);

  // Force flush
  console.log('\nFlushing telemetry data...');
  await telemetry.flush();

  console.log('\n✅ Telemetry integration test completed!');
  console.log('Check your Supabase dashboard for the telemetry data.');
}

testIntegration().catch(console.error);
```
scripts/test-telemetry-no-select.ts (new file, 68 lines)

```typescript
#!/usr/bin/env npx tsx
/**
 * Test telemetry without requesting data back
 */

import { createClient } from '@supabase/supabase-js';
import dotenv from 'dotenv';

dotenv.config();

async function testNoSelect() {
  const supabaseUrl = process.env.SUPABASE_URL!;
  const supabaseAnonKey = process.env.SUPABASE_ANON_KEY!;

  console.log('🧪 Telemetry Test (No Select)\n');

  const supabase = createClient(supabaseUrl, supabaseAnonKey, {
    auth: {
      persistSession: false,
      autoRefreshToken: false,
    }
  });

  // Insert WITHOUT .select() - just fire and forget
  const testData = {
    user_id: 'test-' + Date.now(),
    event: 'test_event',
    properties: { test: true }
  };

  console.log('Inserting:', testData);

  const { error } = await supabase
    .from('telemetry_events')
    .insert([testData]); // No .select() here!

  if (error) {
    console.error('❌ Failed:', error);
  } else {
    console.log('✅ Success! Data inserted (no response data)');
  }

  // Test workflow insert too
  const testWorkflow = {
    user_id: 'test-' + Date.now(),
    workflow_hash: 'hash-' + Date.now(),
    node_count: 3,
    node_types: ['webhook', 'http', 'slack'],
    has_trigger: true,
    has_webhook: true,
    complexity: 'simple',
    sanitized_workflow: { nodes: [], connections: {} }
  };

  console.log('\nInserting workflow:', testWorkflow);

  const { error: workflowError } = await supabase
    .from('telemetry_workflows')
    .insert([testWorkflow]); // No .select() here!

  if (workflowError) {
    console.error('❌ Workflow failed:', workflowError);
  } else {
    console.log('✅ Workflow inserted successfully!');
  }
}

testNoSelect().catch(console.error);
```
87
scripts/test-telemetry-security.ts
Normal file
87
scripts/test-telemetry-security.ts
Normal file
@@ -0,0 +1,87 @@
#!/usr/bin/env npx tsx
/**
 * Test that RLS properly protects data
 */

import { createClient } from '@supabase/supabase-js';
import dotenv from 'dotenv';

dotenv.config();

async function testSecurity() {
  const supabaseUrl = process.env.SUPABASE_URL!;
  const supabaseAnonKey = process.env.SUPABASE_ANON_KEY!;

  console.log('🔒 Testing Telemetry Security (RLS)\n');

  const supabase = createClient(supabaseUrl, supabaseAnonKey, {
    auth: {
      persistSession: false,
      autoRefreshToken: false,
    }
  });

  // Test 1: Verify anon can INSERT
  console.log('Test 1: Anonymous INSERT (should succeed)...');
  const testData = {
    user_id: 'security-test-' + Date.now(),
    event: 'security_test',
    properties: { test: true }
  };

  const { error: insertError } = await supabase
    .from('telemetry_events')
    .insert([testData]);

  if (insertError) {
    console.error('❌ Insert failed:', insertError.message);
  } else {
    console.log('✅ Insert succeeded (as expected)');
  }

  // Test 2: Verify anon CANNOT SELECT
  console.log('\nTest 2: Anonymous SELECT (should fail)...');
  const { data, error: selectError } = await supabase
    .from('telemetry_events')
    .select('*')
    .limit(1);

  if (selectError) {
    console.log('✅ Select blocked by RLS (as expected):', selectError.message);
  } else if (data && data.length > 0) {
    console.error('❌ SECURITY ISSUE: Anon can read data!', data);
  } else if (data && data.length === 0) {
    console.log('⚠️ Select returned empty array (might be RLS working)');
  }

  // Test 3: Verify anon CANNOT UPDATE
  console.log('\nTest 3: Anonymous UPDATE (should fail)...');
  const { error: updateError } = await supabase
    .from('telemetry_events')
    .update({ event: 'hacked' })
    .eq('user_id', 'test');

  if (updateError) {
    console.log('✅ Update blocked (as expected):', updateError.message);
  } else {
    console.error('❌ SECURITY ISSUE: Anon can update data!');
  }

  // Test 4: Verify anon CANNOT DELETE
  console.log('\nTest 4: Anonymous DELETE (should fail)...');
  const { error: deleteError } = await supabase
    .from('telemetry_events')
    .delete()
    .eq('user_id', 'test');

  if (deleteError) {
    console.log('✅ Delete blocked (as expected):', deleteError.message);
  } else {
    console.error('❌ SECURITY ISSUE: Anon can delete data!');
  }

  console.log('\n✨ Security test completed!');
  console.log('Summary: Anonymous users can INSERT (for telemetry) but cannot READ/UPDATE/DELETE');
}

testSecurity().catch(console.error);
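Note that Supabase RLS often blocks an anonymous SELECT by silently filtering rows (an empty array) rather than raising an error, which is why Test 2 above treats an empty result as ambiguous. A minimal sketch of that interpretation logic — the helper name and types are illustrative, not part of the n8n-mcp codebase:

```typescript
// Classify a Supabase select result the way the security test interprets it.
type SelectResult = { data: unknown[] | null; error: { message: string } | null };

function classifyRlsSelect(result: SelectResult): 'blocked' | 'possibly-blocked' | 'exposed' {
  if (result.error) return 'blocked';                          // RLS rejected the query outright
  if (result.data && result.data.length > 0) return 'exposed'; // anon can actually read rows
  return 'possibly-blocked';                                   // empty array: RLS may be filtering silently
}

console.log(classifyRlsSelect({ data: [], error: null }));
```

The "possibly-blocked" case is worth keeping separate: an empty table and a working RLS policy look identical to the anon client.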
45
scripts/test-telemetry-simple.ts
Normal file
@@ -0,0 +1,45 @@
#!/usr/bin/env npx tsx
/**
 * Simple test to verify telemetry works
 */

import { createClient } from '@supabase/supabase-js';
import dotenv from 'dotenv';

dotenv.config();

async function testSimple() {
  const supabaseUrl = process.env.SUPABASE_URL!;
  const supabaseAnonKey = process.env.SUPABASE_ANON_KEY!;

  console.log('🧪 Simple Telemetry Test\n');

  const supabase = createClient(supabaseUrl, supabaseAnonKey, {
    auth: {
      persistSession: false,
      autoRefreshToken: false,
    }
  });

  // Simple insert
  const testData = {
    user_id: 'simple-test-' + Date.now(),
    event: 'test_event',
    properties: { test: true }
  };

  console.log('Inserting:', testData);

  const { data, error } = await supabase
    .from('telemetry_events')
    .insert([testData])
    .select();

  if (error) {
    console.error('❌ Failed:', error);
  } else {
    console.log('✅ Success! Inserted:', data);
  }
}

testSimple().catch(console.error);
55
scripts/test-workflow-insert.ts
Normal file
@@ -0,0 +1,55 @@
#!/usr/bin/env npx tsx
/**
 * Test direct workflow insert to Supabase
 */

import { createClient } from '@supabase/supabase-js';

const TELEMETRY_BACKEND = {
  URL: 'https://ydyufsohxdfpopqbubwk.supabase.co',
  ANON_KEY: 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6InlkeXVmc29oeGRmcG9wcWJ1YndrIiwicm9sZSI6ImFub24iLCJpYXQiOjE3NTg3OTYyMDAsImV4cCI6MjA3NDM3MjIwMH0.xESphg6h5ozaDsm4Vla3QnDJGc6Nc_cpfoqTHRynkCk'
};

async function testWorkflowInsert() {
  const supabase = createClient(TELEMETRY_BACKEND.URL, TELEMETRY_BACKEND.ANON_KEY, {
    auth: {
      persistSession: false,
      autoRefreshToken: false,
    }
  });

  const testWorkflow = {
    user_id: 'direct-test-' + Date.now(),
    workflow_hash: 'hash-direct-' + Date.now(),
    node_count: 2,
    node_types: ['webhook', 'http'],
    has_trigger: true,
    has_webhook: true,
    complexity: 'simple' as const,
    sanitized_workflow: {
      nodes: [
        { id: '1', type: 'webhook', parameters: {} },
        { id: '2', type: 'http', parameters: {} }
      ],
      connections: {}
    }
  };

  console.log('Attempting direct insert to telemetry_workflows...');
  console.log('Data:', JSON.stringify(testWorkflow, null, 2));

  const { data, error } = await supabase
    .from('telemetry_workflows')
    .insert([testWorkflow]);

  if (error) {
    console.error('\n❌ Error:', error);
  } else {
    console.log('\n✅ Success! Workflow inserted');
    if (data) {
      console.log('Response:', data);
    }
  }
}

testWorkflowInsert().catch(console.error);
67
scripts/test-workflow-sanitizer.ts
Normal file
@@ -0,0 +1,67 @@
#!/usr/bin/env npx tsx
/**
 * Test workflow sanitizer
 */

import { WorkflowSanitizer } from '../src/telemetry/workflow-sanitizer';

const testWorkflow = {
  nodes: [
    {
      id: 'webhook1',
      type: 'n8n-nodes-base.webhook',
      name: 'Webhook',
      position: [0, 0],
      parameters: {
        path: '/test-webhook',
        httpMethod: 'POST'
      }
    },
    {
      id: 'http1',
      type: 'n8n-nodes-base.httpRequest',
      name: 'HTTP Request',
      position: [250, 0],
      parameters: {
        url: 'https://api.example.com/endpoint',
        method: 'GET',
        authentication: 'genericCredentialType',
        sendHeaders: true,
        headerParameters: {
          parameters: [
            {
              name: 'Authorization',
              value: 'Bearer sk-1234567890abcdef'
            }
          ]
        }
      }
    }
  ],
  connections: {
    'webhook1': {
      main: [[{ node: 'http1', type: 'main', index: 0 }]]
    }
  }
};

console.log('🧪 Testing Workflow Sanitizer\n');
console.log('Original workflow has', testWorkflow.nodes.length, 'nodes');

try {
  const sanitized = WorkflowSanitizer.sanitizeWorkflow(testWorkflow);

  console.log('\n✅ Sanitization successful!');
  console.log('\nSanitized output:');
  console.log(JSON.stringify(sanitized, null, 2));

  console.log('\n📊 Metrics:');
  console.log('- Workflow Hash:', sanitized.workflowHash);
  console.log('- Node Count:', sanitized.nodeCount);
  console.log('- Node Types:', sanitized.nodeTypes);
  console.log('- Has Trigger:', sanitized.hasTrigger);
  console.log('- Has Webhook:', sanitized.hasWebhook);
  console.log('- Complexity:', sanitized.complexity);
} catch (error) {
  console.error('❌ Sanitization failed:', error);
}
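The test above feeds a workflow containing a URL and a bearer token through `WorkflowSanitizer`; the sanitizer's exact rules are not shown in this diff. As an illustration only, a recursive redaction pass over node parameters might look like the following — the pattern list and function name are assumptions, not the real `WorkflowSanitizer` implementation:

```typescript
// Illustrative only: recursively redact obviously sensitive parameter values
// (bearer tokens, sk-style API keys, URLs) before a workflow leaves the machine.
const SECRET_PATTERN = /(bearer\s+\S+|sk-[a-z0-9]+|https?:\/\/\S+)/gi;

function redactValue(value: unknown): unknown {
  if (typeof value === 'string') return value.replace(SECRET_PATTERN, '[REDACTED]');
  if (Array.isArray(value)) return value.map(redactValue);
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, redactValue(v)])
    );
  }
  return value; // numbers, booleans, null pass through unchanged
}

console.log(redactValue({ url: 'https://api.example.com/endpoint', method: 'GET' }));
```

Structural metadata (node count, types, connections) survives this kind of pass intact, which is what makes the hashed, sanitized workflow useful for aggregate analysis.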
71
scripts/test-workflow-tracking-debug.ts
Normal file
@@ -0,0 +1,71 @@
#!/usr/bin/env npx tsx
/**
 * Debug workflow tracking in telemetry manager
 */

import { TelemetryManager } from '../src/telemetry/telemetry-manager';

// Get the singleton instance
const telemetry = TelemetryManager.getInstance();

const testWorkflow = {
  nodes: [
    {
      id: 'webhook1',
      type: 'n8n-nodes-base.webhook',
      name: 'Webhook',
      position: [0, 0],
      parameters: {
        path: '/test-' + Date.now(),
        httpMethod: 'POST'
      }
    },
    {
      id: 'http1',
      type: 'n8n-nodes-base.httpRequest',
      name: 'HTTP Request',
      position: [250, 0],
      parameters: {
        url: 'https://api.example.com/data',
        method: 'GET'
      }
    },
    {
      id: 'slack1',
      type: 'n8n-nodes-base.slack',
      name: 'Slack',
      position: [500, 0],
      parameters: {
        channel: '#general',
        text: 'Workflow complete!'
      }
    }
  ],
  connections: {
    'webhook1': {
      main: [[{ node: 'http1', type: 'main', index: 0 }]]
    },
    'http1': {
      main: [[{ node: 'slack1', type: 'main', index: 0 }]]
    }
  }
};

console.log('🧪 Testing Workflow Tracking\n');
console.log('Workflow has', testWorkflow.nodes.length, 'nodes');

// Track the workflow
console.log('Calling trackWorkflowCreation...');
telemetry.trackWorkflowCreation(testWorkflow, true);

console.log('Waiting for async processing...');

// Wait for setImmediate to process
setTimeout(async () => {
  console.log('\nForcing flush...');
  await telemetry.flush();
  console.log('✅ Flush complete!');

  console.log('\nWorkflow should now be in the telemetry_workflows table.');
  console.log('Check with: SELECT * FROM telemetry_workflows ORDER BY created_at DESC LIMIT 1;');
}, 2000);
@@ -27,6 +27,7 @@ import { InstanceContext, validateInstanceContext } from '../types/instance-cont
 import { WorkflowAutoFixer, AutoFixConfig } from '../services/workflow-auto-fixer';
 import { ExpressionFormatValidator } from '../services/expression-format-validator';
 import { handleUpdatePartialWorkflow } from './handlers-workflow-diff';
+import { telemetry } from '../telemetry';
 import {
   createCacheKey,
   createInstanceCache,
@@ -280,16 +281,22 @@ export async function handleCreateWorkflow(args: unknown, context?: InstanceCont
     // Validate workflow structure
     const errors = validateWorkflowStructure(input);
     if (errors.length > 0) {
+      // Track validation failure
+      telemetry.trackWorkflowCreation(input, false);
+
       return {
         success: false,
         error: 'Workflow validation failed',
         details: { errors }
       };
     }

     // Create workflow
     const workflow = await client.createWorkflow(input);

+    // Track successful workflow creation
+    telemetry.trackWorkflowCreation(workflow, true);
+
     return {
       success: true,
       data: workflow,
@@ -724,7 +731,12 @@ export async function handleValidateWorkflow(
     if (validationResult.suggestions.length > 0) {
       response.suggestions = validationResult.suggestions;
     }

+    // Track successfully validated workflows in telemetry
+    if (validationResult.valid) {
+      telemetry.trackWorkflowCreation(workflow, true);
+    }
+
     return {
       success: true,
       data: response
@@ -2,6 +2,7 @@

 import { N8NDocumentationMCPServer } from './server';
 import { logger } from '../utils/logger';
+import { TelemetryConfigManager } from '../telemetry/config-manager';

 // Add error details to stderr for Claude Desktop debugging
 process.on('uncaughtException', (error) => {
@@ -21,8 +22,42 @@ process.on('unhandledRejection', (reason, promise) => {
 });

 async function main() {
+  // Handle telemetry CLI commands
+  const args = process.argv.slice(2);
+  if (args.length > 0 && args[0] === 'telemetry') {
+    const telemetryConfig = TelemetryConfigManager.getInstance();
+    const action = args[1];
+
+    switch (action) {
+      case 'enable':
+        telemetryConfig.enable();
+        process.exit(0);
+        break;
+      case 'disable':
+        telemetryConfig.disable();
+        process.exit(0);
+        break;
+      case 'status':
+        console.log(telemetryConfig.getStatus());
+        process.exit(0);
+        break;
+      default:
+        console.log(`
+Usage: n8n-mcp telemetry [command]
+
+Commands:
+  enable    Enable anonymous telemetry
+  disable   Disable anonymous telemetry
+  status    Show current telemetry status
+
+Learn more: https://github.com/czlonkowski/n8n-mcp/blob/main/PRIVACY.md
+`);
+        process.exit(args[1] ? 1 : 0);
+    }
+  }
+
   const mode = process.env.MCP_MODE || 'stdio';

   try {
     // Only show debug messages in HTTP mode to avoid corrupting stdio communication
     if (mode === 'http') {
@@ -35,6 +35,7 @@ import {
   STANDARD_PROTOCOL_VERSION
 } from '../utils/protocol-version';
 import { InstanceContext } from '../types/instance-context';
+import { telemetry } from '../telemetry';

 interface NodeRow {
   node_type: string;
@@ -63,6 +64,8 @@ export class N8NDocumentationMCPServer {
   private cache = new SimpleCache();
   private clientInfo: any = null;
   private instanceContext?: InstanceContext;
+  private previousTool: string | null = null;
+  private previousToolTimestamp: number = Date.now();

   constructor(instanceContext?: InstanceContext) {
     this.instanceContext = instanceContext;
@@ -180,7 +183,10 @@ export class N8NDocumentationMCPServer {
       clientCapabilities,
       clientInfo
     });

+    // Track session start
+    telemetry.trackSessionStart();
+
     // Store client info for later use
     this.clientInfo = clientInfo;
@@ -322,8 +328,23 @@ export class N8NDocumentationMCPServer {

     try {
       logger.debug(`Executing tool: ${name}`, { args: processedArgs });
+      const startTime = Date.now();
       const result = await this.executeTool(name, processedArgs);
+      const duration = Date.now() - startTime;
       logger.debug(`Tool ${name} executed successfully`);

+      // Track tool usage and sequence
+      telemetry.trackToolUsage(name, true, duration);
+
+      // Track tool sequence if there was a previous tool
+      if (this.previousTool) {
+        const timeDelta = Date.now() - this.previousToolTimestamp;
+        telemetry.trackToolSequence(this.previousTool, name, timeDelta);
+      }
+
+      // Update previous tool tracking
+      this.previousTool = name;
+      this.previousToolTimestamp = Date.now();
+
       // Ensure the result is properly formatted for MCP
       let responseText: string;
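The `previousTool`/`previousToolTimestamp` bookkeeping above runs on every call, success or failure, so consecutive tool invocations form (from, to, delta) transitions. A minimal standalone sketch of that state machine — the class name is hypothetical, for illustration only:

```typescript
// Minimal sketch of the tool-sequence state the server keeps between calls.
type SequenceEvent = { from: string; to: string; timeDeltaMs: number };

class ToolSequenceTracker {
  private previousTool: string | null = null;
  private previousTimestamp = Date.now();
  readonly events: SequenceEvent[] = [];

  // Called after every tool execution, whether it succeeded or failed.
  record(tool: string, now: number = Date.now()): void {
    if (this.previousTool) {
      // Only the second and later calls produce a transition.
      this.events.push({ from: this.previousTool, to: tool, timeDeltaMs: now - this.previousTimestamp });
    }
    this.previousTool = tool;
    this.previousTimestamp = now;
  }
}
```

Recording failed calls too is deliberate: a sequence like `validate_workflow → validate_workflow` with short deltas is exactly the retry pattern the telemetry is meant to surface.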
@@ -370,7 +391,25 @@ export class N8NDocumentationMCPServer {
     } catch (error) {
       logger.error(`Error executing tool ${name}`, error);
       const errorMessage = error instanceof Error ? error.message : 'Unknown error';

+      // Track tool error
+      telemetry.trackToolUsage(name, false);
+      telemetry.trackError(
+        error instanceof Error ? error.constructor.name : 'UnknownError',
+        `tool_execution`,
+        name
+      );
+
+      // Track tool sequence even for errors
+      if (this.previousTool) {
+        const timeDelta = Date.now() - this.previousToolTimestamp;
+        telemetry.trackToolSequence(this.previousTool, name, timeDelta);
+      }
+
+      // Update previous tool tracking (even for failed tools)
+      this.previousTool = name;
+      this.previousToolTimestamp = Date.now();
+
       // Provide more helpful error messages for common n8n issues
       let helpfulMessage = `Error executing tool ${name}: ${errorMessage}`;
@@ -954,36 +993,36 @@ export class N8NDocumentationMCPServer {
       throw new Error(`Node ${nodeType} not found`);
     }

-    // Add AI tool capabilities information
+    // Add AI tool capabilities information with null safety
     const aiToolCapabilities = {
       canBeUsedAsTool: true, // Any node can be used as a tool in n8n
-      hasUsableAsToolProperty: node.isAITool,
-      requiresEnvironmentVariable: !node.isAITool && node.package !== 'n8n-nodes-base',
+      hasUsableAsToolProperty: node.isAITool ?? false,
+      requiresEnvironmentVariable: !(node.isAITool ?? false) && node.package !== 'n8n-nodes-base',
       toolConnectionType: 'ai_tool',
       commonToolUseCases: this.getCommonAIToolUseCases(node.nodeType),
-      environmentRequirement: node.package !== 'n8n-nodes-base' ?
-        'N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true' :
+      environmentRequirement: node.package && node.package !== 'n8n-nodes-base' ?
+        'N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true' :
         null
     };

-    // Process outputs to provide clear mapping
+    // Process outputs to provide clear mapping with null safety
     let outputs = undefined;
-    if (node.outputNames && node.outputNames.length > 0) {
+    if (node.outputNames && Array.isArray(node.outputNames) && node.outputNames.length > 0) {
       outputs = node.outputNames.map((name: string, index: number) => {
         // Special handling for loop nodes like SplitInBatches
         const descriptions = this.getOutputDescriptions(node.nodeType, name, index);
         return {
           index,
           name,
-          description: descriptions.description,
-          connectionGuidance: descriptions.connectionGuidance
+          description: descriptions?.description ?? '',
+          connectionGuidance: descriptions?.connectionGuidance ?? ''
         };
       });
     }

     return {
       ...node,
-      workflowNodeType: getWorkflowNodeType(node.package, node.nodeType),
+      workflowNodeType: getWorkflowNodeType(node.package ?? 'n8n-nodes-base', node.nodeType),
       aiToolCapabilities,
       outputs
     };
@@ -1133,7 +1172,10 @@ export class N8NDocumentationMCPServer {
       if (mode !== 'OR') {
         result.mode = mode;
       }

+      // Track search query telemetry
+      telemetry.trackSearchQuery(query, scoredNodes.length, mode ?? 'OR');
+
       return result;

     } catch (error: any) {
@@ -1146,6 +1188,10 @@ export class N8NDocumentationMCPServer {

       // For problematic queries, use LIKE search with mode info
       const likeResult = await this.searchNodesLIKE(query, limit);

+      // Track search query telemetry for fallback
+      telemetry.trackSearchQuery(query, likeResult.results?.length ?? 0, `${mode}_LIKE_FALLBACK`);
+
       return {
         ...likeResult,
         mode
@@ -1595,23 +1641,25 @@ export class N8NDocumentationMCPServer {
       throw new Error(`Node ${nodeType} not found`);
     }

-    // If no documentation, generate fallback
+    // If no documentation, generate fallback with null safety
     if (!node.documentation) {
       const essentials = await this.getNodeEssentials(nodeType);

       return {
         nodeType: node.node_type,
-        displayName: node.display_name,
+        displayName: node.display_name || 'Unknown Node',
         documentation: `
-# ${node.display_name}
+# ${node.display_name || 'Unknown Node'}

${node.description || 'No description available.'}

## Common Properties

-${essentials.commonProperties.map((p: any) =>
-  `### ${p.displayName}\n${p.description || `Type: ${p.type}`}`
-).join('\n\n')}
+${essentials?.commonProperties?.length > 0 ?
+  essentials.commonProperties.map((p: any) =>
+    `### ${p.displayName || 'Property'}\n${p.description || `Type: ${p.type || 'unknown'}`}`
+  ).join('\n\n') :
+  'No common properties available.'}

## Note
Full documentation is being prepared. For now, use get_node_essentials for configuration help.
@@ -1619,10 +1667,10 @@ Full documentation is being prepared. For now, use get_node_essentials for confi
         hasDocumentation: false
       };
     }

     return {
       nodeType: node.node_type,
-      displayName: node.display_name,
+      displayName: node.display_name || 'Unknown Node',
       documentation: node.documentation,
       hasDocumentation: true,
     };
@@ -1731,12 +1779,12 @@ Full documentation is being prepared. For now, use get_node_essentials for confi

     const result = {
       nodeType: node.nodeType,
-      workflowNodeType: getWorkflowNodeType(node.package, node.nodeType),
+      workflowNodeType: getWorkflowNodeType(node.package ?? 'n8n-nodes-base', node.nodeType),
       displayName: node.displayName,
       description: node.description,
       category: node.category,
-      version: node.version || '1',
-      isVersioned: node.isVersioned || false,
+      version: node.version ?? '1',
+      isVersioned: node.isVersioned ?? false,
       requiredProperties: essentials.required,
       commonProperties: essentials.common,
       operations: operations.map((op: any) => ({
@@ -1748,12 +1796,12 @@ Full documentation is being prepared. For now, use get_node_essentials for confi
       // Examples removed - use validate_node_operation for working configurations
       metadata: {
         totalProperties: allProperties.length,
-        isAITool: node.isAITool,
-        isTrigger: node.isTrigger,
-        isWebhook: node.isWebhook,
+        isAITool: node.isAITool ?? false,
+        isTrigger: node.isTrigger ?? false,
+        isWebhook: node.isWebhook ?? false,
         hasCredentials: node.credentials ? true : false,
-        package: node.package,
-        developmentStyle: node.developmentStyle || 'programmatic'
+        package: node.package ?? 'n8n-nodes-base',
+        developmentStyle: node.developmentStyle ?? 'programmatic'
       }
     };
@@ -2633,7 +2681,28 @@ Full documentation is being prepared. For now, use get_node_essentials for confi
       if (result.suggestions.length > 0) {
         response.suggestions = result.suggestions;
       }

+      // Track validation details in telemetry
+      if (!result.valid && result.errors.length > 0) {
+        // Track each validation error for analysis
+        result.errors.forEach(error => {
+          telemetry.trackValidationDetails(
+            error.nodeName || 'workflow',
+            error.type || 'validation_error',
+            {
+              message: error.message,
+              nodeCount: workflow.nodes?.length ?? 0,
+              hasConnections: Object.keys(workflow.connections || {}).length > 0
+            }
+          );
+        });
+      }
+
+      // Track successfully validated workflows in telemetry
+      if (result.valid) {
+        telemetry.trackWorkflowCreation(workflow, true);
+      }
+
       return response;
     } catch (error) {
       logger.error('Error validating workflow:', error);
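Emitting one event per validation error, with `'validation_error'` as the fallback type, lends itself to simple aggregation on the backend. A hedged sketch of the counting side (not actual n8n-mcp code; the types mirror the loop above):

```typescript
// Count validation error types the way the telemetry loop above emits them.
type ValidationError = { nodeName?: string; type?: string; message: string };

function countErrorTypes(errors: ValidationError[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const err of errors) {
    const key = err.type ?? 'validation_error'; // same fallback as the handler
    counts[key] = (counts[key] ?? 0) + 1;
  }
  return counts;
}
```

Grouping by error type rather than message keeps the aggregation stable across wording changes and avoids shipping message text anywhere it isn't needed.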
301
src/telemetry/config-manager.ts
Normal file
@@ -0,0 +1,301 @@
|
||||
/**
|
||||
* Telemetry Configuration Manager
|
||||
* Handles telemetry settings, opt-in/opt-out, and first-run detection
|
||||
*/
|
||||
|
||||
import { existsSync, readFileSync, writeFileSync, mkdirSync } from 'fs';
|
||||
import { join, resolve, dirname } from 'path';
|
||||
import { homedir } from 'os';
|
||||
import { createHash } from 'crypto';
|
||||
import { hostname, platform, arch } from 'os';
|
||||
|
||||
export interface TelemetryConfig {
|
||||
enabled: boolean;
|
||||
userId: string;
|
||||
firstRun?: string;
|
||||
lastModified?: string;
|
||||
version?: string;
|
||||
}
|
||||
|
||||
export class TelemetryConfigManager {
|
||||
private static instance: TelemetryConfigManager;
|
||||
private readonly configDir: string;
|
||||
private readonly configPath: string;
|
||||
private config: TelemetryConfig | null = null;
|
||||
|
||||
private constructor() {
|
||||
this.configDir = join(homedir(), '.n8n-mcp');
|
||||
this.configPath = join(this.configDir, 'telemetry.json');
|
||||
}
|
||||
|
||||
static getInstance(): TelemetryConfigManager {
|
||||
if (!TelemetryConfigManager.instance) {
|
||||
TelemetryConfigManager.instance = new TelemetryConfigManager();
|
||||
}
|
||||
return TelemetryConfigManager.instance;
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate a deterministic anonymous user ID based on machine characteristics
|
||||
*/
|
||||
private generateUserId(): string {
|
||||
const machineId = `${hostname()}-${platform()}-${arch()}-${homedir()}`;
|
||||
return createHash('sha256').update(machineId).digest('hex').substring(0, 16);
|
||||
}
|
||||
|
||||
/**
|
||||
* Load configuration from disk or create default
|
||||
*/
|
||||
loadConfig(): TelemetryConfig {
|
||||
if (this.config) {
|
||||
return this.config;
|
||||
}
|
||||
|
||||
if (!existsSync(this.configPath)) {
|
||||
// First run - create default config
|
||||
const version = this.getPackageVersion();
|
||||
|
||||
// Check if telemetry is disabled via environment variable
|
||||
const envDisabled = this.isDisabledByEnvironment();
|
||||
|
||||
this.config = {
|
||||
enabled: !envDisabled, // Respect env var on first run
|
||||
userId: this.generateUserId(),
|
||||
firstRun: new Date().toISOString(),
|
||||
version
|
||||
};
|
||||
|
||||
this.saveConfig();
|
||||
|
||||
// Only show notice if not disabled via environment
|
||||
if (!envDisabled) {
|
||||
this.showFirstRunNotice();
|
||||
}
|
||||
|
||||
return this.config;
|
||||
}
|
||||
|
||||
try {
|
||||
const rawConfig = readFileSync(this.configPath, 'utf-8');
|
||||
this.config = JSON.parse(rawConfig);
|
||||
|
||||
// Ensure userId exists (for upgrades from older versions)
|
||||
if (!this.config!.userId) {
|
||||
this.config!.userId = this.generateUserId();
|
||||
this.saveConfig();
|
||||
}
|
||||
|
||||
return this.config!;
|
||||
} catch (error) {
|
||||
console.error('Failed to load telemetry config, using defaults:', error);
|
||||
this.config = {
|
||||
enabled: false,
|
||||
userId: this.generateUserId()
|
||||
};
|
||||
return this.config;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Save configuration to disk
|
||||
*/
|
||||
private saveConfig(): void {
|
||||
if (!this.config) return;
|
||||
|
||||
try {
|
||||
if (!existsSync(this.configDir)) {
|
||||
mkdirSync(this.configDir, { recursive: true });
|
||||
}
|
||||
|
||||
this.config.lastModified = new Date().toISOString();
|
||||
writeFileSync(this.configPath, JSON.stringify(this.config, null, 2));
|
||||
} catch (error) {
|
||||
console.error('Failed to save telemetry config:', error);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if telemetry is enabled
|
||||
* Priority: Environment variable > Config file > Default (true)
|
||||
*/
|
||||
isEnabled(): boolean {
|
||||
// Check environment variables first (for Docker users)
|
||||
if (this.isDisabledByEnvironment()) {
|
||||
return false;
|
||||
}
|
||||
|
||||
const config = this.loadConfig();
|
||||
return config.enabled;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if telemetry is disabled via environment variable
|
||||
*/
|
||||
private isDisabledByEnvironment(): boolean {
|
||||
const envVars = [
|
||||
'N8N_MCP_TELEMETRY_DISABLED',
|
||||
'TELEMETRY_DISABLED',
|
||||
'DISABLE_TELEMETRY'
|
||||
];
|
||||
|
||||
for (const varName of envVars) {
|
||||
const value = process.env[varName];
|
||||
if (value !== undefined) {
|
||||
const normalized = value.toLowerCase().trim();
|
||||
|
||||
// Warn about invalid values
|
||||
if (!['true', 'false', '1', '0', ''].includes(normalized)) {
|
||||
console.warn(
|
||||
`⚠️ Invalid telemetry environment variable value: ${varName}="${value}"\n` +
|
||||
` Use "true" to disable or "false" to enable telemetry.`
|
||||
);
|
||||
}
|
||||
|
||||
// Accept common truthy values
|
||||
if (normalized === 'true' || normalized === '1') {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the anonymous user ID
|
||||
*/
|
||||
getUserId(): string {
|
||||
const config = this.loadConfig();
|
||||
return config.userId;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if this is the first run
|
||||
*/
|
||||
isFirstRun(): boolean {
|
||||
return !existsSync(this.configPath);
|
||||
}
|
||||
|
||||
/**
|
||||
* Enable telemetry
|
||||
*/
|
||||
enable(): void {
|
||||
const config = this.loadConfig();
|
||||
config.enabled = true;
|
||||
this.config = config;
|
||||
this.saveConfig();
|
||||
console.log('✓ Anonymous telemetry enabled');
|
||||
}
|
||||
|
||||
/**
|
||||
* Disable telemetry
|
||||
*/
|
||||
disable(): void {
|
||||
const config = this.loadConfig();
|
||||
config.enabled = false;
|
||||
this.config = config;
|
||||
this.saveConfig();
|
||||
console.log('✓ Anonymous telemetry disabled');
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current status
|
||||
*/
|
||||
getStatus(): string {
|
||||
const config = this.loadConfig();
|
||||
|
||||
// Check if disabled by environment
|
||||
const envDisabled = this.isDisabledByEnvironment();
|
||||
|
||||
let status = config.enabled ? 'ENABLED' : 'DISABLED';
|
||||
if (envDisabled) {
|
||||
status = 'DISABLED (via environment variable)';
|
||||
}
|
||||
|
||||
return `
|
||||
Telemetry Status: ${status}
|
||||
Anonymous ID: ${config.userId}
|
||||
First Run: ${config.firstRun || 'Unknown'}
|
||||
Config Path: ${this.configPath}

To opt-out: npx n8n-mcp telemetry disable
To opt-in: npx n8n-mcp telemetry enable

For Docker: Set N8N_MCP_TELEMETRY_DISABLED=true
`;
  }

  /**
   * Show first-run notice to user
   */
  private showFirstRunNotice(): void {
    console.log(`
╔════════════════════════════════════════════════════════════╗
║ Anonymous Usage Statistics ║
╠════════════════════════════════════════════════════════════╣
║ ║
║ n8n-mcp collects anonymous usage data to improve the ║
║ tool and understand how it's being used. ║
║ ║
║ We track: ║
║ • Which MCP tools are used (no parameters) ║
║ • Workflow structures (sanitized, no sensitive data) ║
║ • Error patterns (hashed, no details) ║
║ • Performance metrics (timing, success rates) ║
║ ║
║ We NEVER collect: ║
║ • URLs, API keys, or credentials ║
║ • Workflow content or actual data ║
║ • Personal or identifiable information ║
║ • n8n instance details or locations ║
║ ║
║ Your anonymous ID: ${this.config?.userId || 'generating...'} ║
║ ║
║ This helps me understand usage patterns and improve ║
║ n8n-mcp for everyone. Thank you for your support! ║
║ ║
║ To opt-out at any time: ║
║ npx n8n-mcp telemetry disable ║
║ ║
║ Learn more: ║
║ https://github.com/czlonkowski/n8n-mcp/blob/main/PRIVACY.md ║
║ ║
╚════════════════════════════════════════════════════════════╝
`);
  }

  /**
   * Get package version safely
   */
  private getPackageVersion(): string {
    try {
      // Try multiple approaches to find package.json
      const possiblePaths = [
        resolve(__dirname, '..', '..', 'package.json'),
        resolve(process.cwd(), 'package.json'),
        resolve(__dirname, '..', '..', '..', 'package.json')
      ];

      for (const packagePath of possiblePaths) {
        if (existsSync(packagePath)) {
          const packageJson = JSON.parse(readFileSync(packagePath, 'utf-8'));
          if (packageJson.version) {
            return packageJson.version;
          }
        }
      }

      // Fallback: try require (works in some environments)
      try {
        const packageJson = require('../../package.json');
        return packageJson.version || 'unknown';
      } catch {
        // Ignore require error
      }

      return 'unknown';
    } catch (error) {
      return 'unknown';
    }
  }
}
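The changelog above mentions "deterministic anonymous IDs", and the unit tests further down expect the stored `userId` to match `/^[a-f0-9]{16}$/`. A minimal sketch of how such an ID could be derived — the actual derivation in `config-manager.ts` is not shown in this diff, so the inputs used here (`hostname`, `platform`, `arch`) are assumptions; only the output shape comes from the tests:

```typescript
// Hypothetical sketch: hash stable machine attributes into a 16-hex-char ID.
// The concrete inputs used by config-manager.ts may differ.
import { createHash } from 'crypto';
import { hostname, platform, arch } from 'os';

function deriveAnonymousId(): string {
  const seed = `${hostname()}|${platform()}|${arch()}`;
  // SHA-256 digest, truncated to 16 hex characters
  return createHash('sha256').update(seed).digest('hex').substring(0, 16);
}
```

Because the inputs are stable, the same machine always maps to the same ID, so active users can be counted without storing anything identifiable.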
src/telemetry/index.ts (new file, 9 lines)
@@ -0,0 +1,9 @@
/**
 * Telemetry Module
 * Exports for anonymous usage statistics
 */

export { TelemetryManager, telemetry } from './telemetry-manager';
export { TelemetryConfigManager } from './config-manager';
export { WorkflowSanitizer } from './workflow-sanitizer';
export type { TelemetryConfig } from './config-manager';
src/telemetry/telemetry-manager.ts (new file, 636 lines)
@@ -0,0 +1,636 @@
/**
 * Telemetry Manager
 * Main telemetry class for anonymous usage statistics
 */

import { createClient, SupabaseClient } from '@supabase/supabase-js';
import { TelemetryConfigManager } from './config-manager';
import { WorkflowSanitizer } from './workflow-sanitizer';
import { logger } from '../utils/logger';
import { resolve } from 'path';
import { existsSync, readFileSync } from 'fs';

interface TelemetryEvent {
  user_id: string;
  event: string;
  properties: Record<string, any>;
  created_at?: string;
}

interface WorkflowTelemetry {
  user_id: string;
  workflow_hash: string;
  node_count: number;
  node_types: string[];
  has_trigger: boolean;
  has_webhook: boolean;
  complexity: 'simple' | 'medium' | 'complex';
  sanitized_workflow: any;
  created_at?: string;
}

// Configuration constants
const TELEMETRY_CONFIG = {
  BATCH_FLUSH_INTERVAL: 5000,  // 5 seconds - reduced for multi-process
  EVENT_QUEUE_THRESHOLD: 1,    // Immediate flush for multi-process compatibility
  WORKFLOW_QUEUE_THRESHOLD: 1, // Immediate flush for multi-process compatibility
  MAX_RETRIES: 3,
  RETRY_DELAY: 1000,           // 1 second
  OPERATION_TIMEOUT: 5000,     // 5 seconds
} as const;

// Hardcoded telemetry backend configuration
// IMPORTANT: This is intentionally hardcoded for zero-configuration telemetry
// The anon key is PUBLIC and SAFE to expose because:
// 1. It only allows INSERT operations (write-only)
// 2. Row Level Security (RLS) policies prevent reading/updating/deleting data
// 3. This is standard practice for anonymous telemetry collection
// 4. No sensitive user data is ever sent
const TELEMETRY_BACKEND = {
  URL: 'https://ydyufsohxdfpopqbubwk.supabase.co',
  ANON_KEY: 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6InlkeXVmc29oeGRmcG9wcWJ1YndrIiwicm9sZSI6ImFub24iLCJpYXQiOjE3NTg3OTYyMDAsImV4cCI6MjA3NDM3MjIwMH0.xESphg6h5ozaDsm4Vla3QnDJGc6Nc_cpfoqTHRynkCk'
} as const;
export class TelemetryManager {
  private static instance: TelemetryManager;
  private supabase: SupabaseClient | null = null;
  private configManager: TelemetryConfigManager;
  private eventQueue: TelemetryEvent[] = [];
  private workflowQueue: WorkflowTelemetry[] = [];
  private flushTimer?: NodeJS.Timeout;
  private isInitialized: boolean = false;
  private isFlushingWorkflows: boolean = false;

  private constructor() {
    this.configManager = TelemetryConfigManager.getInstance();
    this.initialize();
  }

  static getInstance(): TelemetryManager {
    if (!TelemetryManager.instance) {
      TelemetryManager.instance = new TelemetryManager();
    }
    return TelemetryManager.instance;
  }

  /**
   * Initialize telemetry if enabled
   */
  private initialize(): void {
    if (!this.configManager.isEnabled()) {
      logger.debug('Telemetry disabled by user preference');
      return;
    }

    // Use hardcoded credentials for zero-configuration telemetry
    // Environment variables can override for development/testing
    const supabaseUrl = process.env.SUPABASE_URL || TELEMETRY_BACKEND.URL;
    const supabaseAnonKey = process.env.SUPABASE_ANON_KEY || TELEMETRY_BACKEND.ANON_KEY;

    try {
      this.supabase = createClient(supabaseUrl, supabaseAnonKey, {
        auth: {
          persistSession: false,
          autoRefreshToken: false,
        },
        realtime: {
          params: {
            eventsPerSecond: 1,
          },
        },
      });

      this.isInitialized = true;
      this.startBatchProcessor();

      // Flush on exit
      process.on('beforeExit', () => this.flush());
      process.on('SIGINT', () => {
        this.flush();
        process.exit(0);
      });
      process.on('SIGTERM', () => {
        this.flush();
        process.exit(0);
      });

      logger.debug('Telemetry initialized successfully');
    } catch (error) {
      logger.debug('Failed to initialize telemetry:', error);
      this.isInitialized = false;
    }
  }
  /**
   * Track a tool usage event
   */
  trackToolUsage(toolName: string, success: boolean, duration?: number): void {
    if (!this.isEnabled()) return;

    // Sanitize tool name (remove any potential sensitive data)
    const sanitizedToolName = toolName.replace(/[^a-zA-Z0-9_-]/g, '_');

    this.trackEvent('tool_used', {
      tool: sanitizedToolName,
      success,
      duration: duration || 0,
    });
  }

  /**
   * Track workflow creation (fire-and-forget)
   */
  trackWorkflowCreation(workflow: any, validationPassed: boolean): void {
    if (!this.isEnabled()) return;

    // Only store workflows that pass validation
    if (!validationPassed) {
      this.trackEvent('workflow_validation_failed', {
        nodeCount: workflow.nodes?.length || 0,
      });
      return;
    }

    // Process asynchronously without blocking
    setImmediate(async () => {
      try {
        const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

        const telemetryData: WorkflowTelemetry = {
          user_id: this.configManager.getUserId(),
          workflow_hash: sanitized.workflowHash,
          node_count: sanitized.nodeCount,
          node_types: sanitized.nodeTypes,
          has_trigger: sanitized.hasTrigger,
          has_webhook: sanitized.hasWebhook,
          complexity: sanitized.complexity,
          sanitized_workflow: {
            nodes: sanitized.nodes,
            connections: sanitized.connections,
          },
        };

        // Add to queue synchronously to avoid race conditions
        const queueLength = this.addToWorkflowQueue(telemetryData);

        // Also track as event
        this.trackEvent('workflow_created', {
          nodeCount: sanitized.nodeCount,
          nodeTypes: sanitized.nodeTypes.length,
          complexity: sanitized.complexity,
          hasTrigger: sanitized.hasTrigger,
          hasWebhook: sanitized.hasWebhook,
        });

        // Flush if queue reached threshold
        if (queueLength >= TELEMETRY_CONFIG.WORKFLOW_QUEUE_THRESHOLD) {
          await this.flush();
        }
      } catch (error) {
        logger.debug('Failed to track workflow creation:', error);
      }
    });
  }
  /**
   * Thread-safe method to add workflow to queue
   * Returns the new queue length after adding
   */
  private addToWorkflowQueue(telemetryData: WorkflowTelemetry): number {
    // Don't add to queue if we're currently flushing workflows
    // This prevents race conditions where items are added during flush
    if (this.isFlushingWorkflows) {
      // Queue the flush for later to ensure we don't lose data
      setImmediate(() => {
        this.workflowQueue.push(telemetryData);
        if (this.workflowQueue.length >= TELEMETRY_CONFIG.WORKFLOW_QUEUE_THRESHOLD) {
          this.flush();
        }
      });
      return 0; // Don't trigger immediate flush
    }

    this.workflowQueue.push(telemetryData);
    return this.workflowQueue.length;
  }
  /**
   * Track an error event
   */
  trackError(errorType: string, context: string, toolName?: string): void {
    if (!this.isEnabled()) return;

    this.trackEvent('error_occurred', {
      errorType: this.sanitizeErrorType(errorType),
      context: this.sanitizeContext(context),
      tool: toolName ? toolName.replace(/[^a-zA-Z0-9_-]/g, '_') : undefined,
    });
  }

  /**
   * Track a generic event
   */
  trackEvent(eventName: string, properties: Record<string, any>): void {
    if (!this.isEnabled()) return;

    const event: TelemetryEvent = {
      user_id: this.configManager.getUserId(),
      event: eventName,
      properties: this.sanitizeProperties(properties),
    };

    this.eventQueue.push(event);

    // Flush if queue is getting large
    if (this.eventQueue.length >= TELEMETRY_CONFIG.EVENT_QUEUE_THRESHOLD) {
      this.flush();
    }
  }

  /**
   * Track session start
   */
  trackSessionStart(): void {
    if (!this.isEnabled()) return;

    this.trackEvent('session_start', {
      version: this.getPackageVersion(),
      platform: process.platform,
      arch: process.arch,
      nodeVersion: process.version,
    });
  }
  /**
   * Track search queries to identify documentation gaps
   */
  trackSearchQuery(query: string, resultsFound: number, searchType: string): void {
    if (!this.isEnabled()) return;

    this.trackEvent('search_query', {
      query: this.sanitizeString(query).substring(0, 100),
      resultsFound,
      searchType,
      hasResults: resultsFound > 0,
      isZeroResults: resultsFound === 0
    });
  }

  /**
   * Track validation failure details for improvement insights
   */
  trackValidationDetails(nodeType: string, errorType: string, details: Record<string, any>): void {
    if (!this.isEnabled()) return;

    this.trackEvent('validation_details', {
      nodeType: nodeType.replace(/[^a-zA-Z0-9_.-]/g, '_'),
      errorType: this.sanitizeErrorType(errorType),
      errorCategory: this.categorizeError(errorType),
      details: this.sanitizeProperties(details)
    });
  }

  /**
   * Track tool usage sequences to understand workflows
   */
  trackToolSequence(previousTool: string, currentTool: string, timeDelta: number): void {
    if (!this.isEnabled()) return;

    const prev = previousTool.replace(/[^a-zA-Z0-9_-]/g, '_');
    const curr = currentTool.replace(/[^a-zA-Z0-9_-]/g, '_');

    this.trackEvent('tool_sequence', {
      previousTool: prev,
      currentTool: curr,
      timeDelta: Math.min(timeDelta, 300000), // Cap at 5 minutes
      isSlowTransition: timeDelta > 10000, // More than 10 seconds
      sequence: `${prev}->${curr}` // Use the sanitized names here as well
    });
  }

  /**
   * Track node configuration patterns
   */
  trackNodeConfiguration(nodeType: string, propertiesSet: number, usedDefaults: boolean): void {
    if (!this.isEnabled()) return;

    this.trackEvent('node_configuration', {
      nodeType: nodeType.replace(/[^a-zA-Z0-9_.-]/g, '_'),
      propertiesSet,
      usedDefaults,
      complexity: this.categorizeConfigComplexity(propertiesSet)
    });
  }

  /**
   * Track performance metrics for optimization
   */
  trackPerformanceMetric(operation: string, duration: number, metadata?: Record<string, any>): void {
    if (!this.isEnabled()) return;

    this.trackEvent('performance_metric', {
      operation: operation.replace(/[^a-zA-Z0-9_-]/g, '_'),
      duration,
      isSlow: duration > 1000,
      isVerySlow: duration > 5000,
      metadata: metadata ? this.sanitizeProperties(metadata) : undefined
    });
  }
  /**
   * Categorize error types for better analysis
   */
  private categorizeError(errorType: string): string {
    const lowerError = errorType.toLowerCase();
    if (lowerError.includes('type')) return 'type_error';
    if (lowerError.includes('validation')) return 'validation_error';
    if (lowerError.includes('required')) return 'required_field_error';
    if (lowerError.includes('connection')) return 'connection_error';
    if (lowerError.includes('expression')) return 'expression_error';
    return 'other_error';
  }

  /**
   * Categorize configuration complexity
   */
  private categorizeConfigComplexity(propertiesSet: number): string {
    if (propertiesSet === 0) return 'defaults_only';
    if (propertiesSet <= 3) return 'simple';
    if (propertiesSet <= 10) return 'moderate';
    return 'complex';
  }
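The bucket boundaries of the complexity categorizer are easiest to see with its edge values. A standalone copy of the method above:

```typescript
// Standalone copy of categorizeConfigComplexity, to show the boundaries.
function categorizeConfigComplexity(propertiesSet: number): string {
  if (propertiesSet === 0) return 'defaults_only';
  if (propertiesSet <= 3) return 'simple';
  if (propertiesSet <= 10) return 'moderate';
  return 'complex';
}

// Edge values: 0 -> defaults_only, 3 -> simple, 10 -> moderate, 11 -> complex
```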
  /**
   * Get package version safely
   */
  private getPackageVersion(): string {
    try {
      // Try multiple approaches to find package.json
      const possiblePaths = [
        resolve(__dirname, '..', '..', 'package.json'),
        resolve(process.cwd(), 'package.json'),
        resolve(__dirname, '..', '..', '..', 'package.json')
      ];

      for (const packagePath of possiblePaths) {
        if (existsSync(packagePath)) {
          const packageJson = JSON.parse(readFileSync(packagePath, 'utf-8'));
          if (packageJson.version) {
            return packageJson.version;
          }
        }
      }

      // Fallback: try require (works in some environments)
      try {
        const packageJson = require('../../package.json');
        return packageJson.version || 'unknown';
      } catch {
        // Ignore require error
      }

      return 'unknown';
    } catch (error) {
      logger.debug('Failed to get package version:', error);
      return 'unknown';
    }
  }
  /**
   * Execute Supabase operation with retry and timeout
   */
  private async executeWithRetry<T>(
    operation: () => Promise<T>,
    operationName: string
  ): Promise<T | null> {
    let lastError: Error | null = null;

    for (let attempt = 1; attempt <= TELEMETRY_CONFIG.MAX_RETRIES; attempt++) {
      try {
        // Create a timeout promise
        const timeoutPromise = new Promise<never>((_, reject) => {
          setTimeout(() => reject(new Error('Operation timed out')), TELEMETRY_CONFIG.OPERATION_TIMEOUT);
        });

        // Race between operation and timeout
        const result = await Promise.race([operation(), timeoutPromise]) as T;
        return result;
      } catch (error) {
        lastError = error as Error;
        logger.debug(`${operationName} attempt ${attempt} failed:`, error);

        if (attempt < TELEMETRY_CONFIG.MAX_RETRIES) {
          // Wait before retrying
          await new Promise(resolve => setTimeout(resolve, TELEMETRY_CONFIG.RETRY_DELAY * attempt));
        }
      }
    }

    logger.debug(`${operationName} failed after ${TELEMETRY_CONFIG.MAX_RETRIES} attempts:`, lastError);
    return null;
  }
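The retry helper above combines two patterns: a `Promise.race` against a timeout, and a linear backoff (`RETRY_DELAY * attempt`) between attempts. A condensed, self-contained sketch of the same shape (the name `withRetry` and its defaults are illustrative, not part of the codebase):

```typescript
// Race the operation against a timer; retry with a linearly growing delay.
async function withRetry<T>(
  op: () => Promise<T>,
  maxRetries = 3,
  retryDelayMs = 1000,
  timeoutMs = 5000
): Promise<T | null> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const timeout = new Promise<never>((_, reject) =>
        setTimeout(() => reject(new Error('Operation timed out')), timeoutMs)
      );
      return await Promise.race([op(), timeout]);
    } catch {
      if (attempt < maxRetries) {
        await new Promise(r => setTimeout(r, retryDelayMs * attempt));
      }
    }
  }
  return null; // all attempts exhausted; caller treats null as "gave up"
}
```

One subtlety shared with the original: the timeout's `setTimeout` keeps ticking even when the operation wins the race, which can hold the event loop open briefly unless the timer is cleared or `unref()`ed.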
  /**
   * Flush queued events to Supabase
   */
  async flush(): Promise<void> {
    if (!this.isEnabled() || !this.supabase) return;

    // Flush events
    if (this.eventQueue.length > 0) {
      const events = [...this.eventQueue];
      this.eventQueue = [];

      await this.executeWithRetry(async () => {
        const { error } = await this.supabase!
          .from('telemetry_events')
          .insert(events); // No .select() - we don't need the response

        if (error) {
          throw error;
        }

        logger.debug(`Flushed ${events.length} telemetry events`);
        return true;
      }, 'Flush telemetry events');
    }

    // Flush workflows
    if (this.workflowQueue.length > 0) {
      this.isFlushingWorkflows = true;

      try {
        const workflows = [...this.workflowQueue];
        this.workflowQueue = [];

        const result = await this.executeWithRetry(async () => {
          // Deduplicate workflows by hash before inserting
          const uniqueWorkflows = workflows.reduce((acc, workflow) => {
            if (!acc.some(w => w.workflow_hash === workflow.workflow_hash)) {
              acc.push(workflow);
            }
            return acc;
          }, [] as WorkflowTelemetry[]);

          logger.debug(`Deduplicating workflows: ${workflows.length} -> ${uniqueWorkflows.length} unique`);

          // Use insert (same as events) - duplicates are handled by deduplication above
          const { error } = await this.supabase!
            .from('telemetry_workflows')
            .insert(uniqueWorkflows); // No .select() - we don't need the response

          if (error) {
            logger.debug('Detailed workflow flush error:', {
              error: error,
              workflowCount: workflows.length,
              firstWorkflow: workflows[0] ? {
                user_id: workflows[0].user_id,
                workflow_hash: workflows[0].workflow_hash,
                node_count: workflows[0].node_count
              } : null
            });
            throw error;
          }

          logger.debug(`Flushed ${uniqueWorkflows.length} unique telemetry workflows (${workflows.length} total processed)`);
          return true;
        }, 'Flush telemetry workflows');

        if (!result) {
          logger.debug('Failed to flush workflows after retries');
        }
      } finally {
        this.isFlushingWorkflows = false;
      }
    }
  }
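The deduplication inside the flush uses `Array.some` within a `reduce`, which is O(n²) in queue length; with `WORKFLOW_QUEUE_THRESHOLD` set to 1 this is harmless. An equivalent standalone sketch using a `Map` keyed by hash, which does the same in O(n) while preserving first-occurrence order:

```typescript
// Keep the first occurrence of each workflow_hash, preserving queue order.
interface Hashed { workflow_hash: string }

function dedupeByHash<T extends Hashed>(items: T[]): T[] {
  const seen = new Map<string, T>();
  for (const item of items) {
    if (!seen.has(item.workflow_hash)) seen.set(item.workflow_hash, item);
  }
  return [...seen.values()];
}
```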
  /**
   * Start batch processor for periodic flushing
   */
  private startBatchProcessor(): void {
    // Flush periodically
    this.flushTimer = setInterval(() => {
      this.flush();
    }, TELEMETRY_CONFIG.BATCH_FLUSH_INTERVAL);

    // Prevent timer from keeping process alive
    this.flushTimer.unref();
  }

  /**
   * Check if telemetry is enabled
   */
  private isEnabled(): boolean {
    return this.isInitialized && this.configManager.isEnabled();
  }
  /**
   * Sanitize properties to remove sensitive data
   */
  private sanitizeProperties(properties: Record<string, any>): Record<string, any> {
    const sanitized: Record<string, any> = {};

    for (const [key, value] of Object.entries(properties)) {
      // Skip sensitive keys
      if (this.isSensitiveKey(key)) {
        continue;
      }

      // Sanitize values
      if (typeof value === 'string') {
        sanitized[key] = this.sanitizeString(value);
      } else if (typeof value === 'object' && value !== null) {
        sanitized[key] = this.sanitizeProperties(value);
      } else {
        sanitized[key] = value;
      }
    }

    return sanitized;
  }

  /**
   * Check if a key is sensitive
   */
  private isSensitiveKey(key: string): boolean {
    const sensitiveKeys = [
      'password', 'token', 'key', 'secret', 'credential',
      'auth', 'url', 'endpoint', 'host', 'database',
    ];

    const lowerKey = key.toLowerCase();
    return sensitiveKeys.some(sensitive => lowerKey.includes(sensitive));
  }
  /**
   * Sanitize string values
   */
  private sanitizeString(value: string): string {
    // Remove URLs
    let sanitized = value.replace(/https?:\/\/[^\s]+/gi, '[URL]');

    // Remove potential API keys (long alphanumeric strings)
    sanitized = sanitized.replace(/[a-zA-Z0-9_-]{32,}/g, '[KEY]');

    // Remove email addresses
    sanitized = sanitized.replace(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g, '[EMAIL]');

    return sanitized;
  }
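The three replacements above run in a fixed order: URLs first, then long key-like strings, then email addresses. A standalone copy makes the behavior easy to verify:

```typescript
// Same three substitutions as sanitizeString above, applied in the same order.
function sanitizeTelemetryString(value: string): string {
  return value
    .replace(/https?:\/\/[^\s]+/gi, '[URL]')                                // URLs
    .replace(/[a-zA-Z0-9_-]{32,}/g, '[KEY]')                                // long key-like strings
    .replace(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g, '[EMAIL]'); // emails
}
```

Order matters: collapsing URLs to `[URL]` first prevents the 32-character key pattern from matching inside a long URL path and leaving a half-redacted fragment behind.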
  /**
   * Sanitize error type
   */
  private sanitizeErrorType(errorType: string): string {
    // Remove any potential sensitive data from error type
    return errorType
      .replace(/[^a-zA-Z0-9_-]/g, '_')
      .substring(0, 50);
  }

  /**
   * Sanitize context
   */
  private sanitizeContext(context: string): string {
    // Remove any potential sensitive data from context
    return context
      .replace(/https?:\/\/[^\s]+/gi, '[URL]')
      .replace(/[a-zA-Z0-9_-]{32,}/g, '[KEY]')
      .substring(0, 100);
  }
  /**
   * Disable telemetry
   */
  disable(): void {
    this.configManager.disable();
    if (this.flushTimer) {
      clearInterval(this.flushTimer);
    }
    this.isInitialized = false;
    this.supabase = null;
  }

  /**
   * Enable telemetry
   */
  enable(): void {
    this.configManager.enable();
    this.initialize();
  }

  /**
   * Get telemetry status
   */
  getStatus(): string {
    return this.configManager.getStatus();
  }
}

// Create a global singleton to ensure only one instance across all imports
const globalAny = global as any;

if (!globalAny.__telemetryManager) {
  globalAny.__telemetryManager = TelemetryManager.getInstance();
}

// Export singleton instance
export const telemetry = globalAny.__telemetryManager as TelemetryManager;
src/telemetry/workflow-sanitizer.ts (new file, 299 lines)
@@ -0,0 +1,299 @@
/**
 * Workflow Sanitizer
 * Removes sensitive data from workflows before telemetry storage
 */

import { createHash } from 'crypto';

interface WorkflowNode {
  id: string;
  name: string;
  type: string;
  position: [number, number];
  parameters: any;
  credentials?: any;
  disabled?: boolean;
  typeVersion?: number;
}

interface SanitizedWorkflow {
  nodes: WorkflowNode[];
  connections: any;
  nodeCount: number;
  nodeTypes: string[];
  hasTrigger: boolean;
  hasWebhook: boolean;
  complexity: 'simple' | 'medium' | 'complex';
  workflowHash: string;
}
export class WorkflowSanitizer {
  private static readonly SENSITIVE_PATTERNS = [
    // Webhook URLs (replace with placeholder but keep structure) - MUST BE FIRST
    /https?:\/\/[^\s/]+\/webhook\/[^\s]+/g,
    /https?:\/\/[^\s/]+\/hook\/[^\s]+/g,

    // API keys and tokens
    /sk-[a-zA-Z0-9]{16,}/g,   // OpenAI keys
    /Bearer\s+[^\s]+/gi,      // Bearer tokens
    /[a-zA-Z0-9_-]{20,}/g,    // Long alphanumeric strings (API keys) - reduced threshold
    /token['":\s]+[^,}]+/gi,  // Token fields
    /apikey['":\s]+[^,}]+/gi, // API key fields
    /api_key['":\s]+[^,}]+/gi,
    /secret['":\s]+[^,}]+/gi,
    /password['":\s]+[^,}]+/gi,
    /credential['":\s]+[^,}]+/gi,

    // URLs with authentication
    /https?:\/\/[^:]+:[^@]+@[^\s/]+/g, // URLs with auth
    /wss?:\/\/[^:]+:[^@]+@[^\s/]+/g,

    // Email addresses (optional - uncomment if needed)
    // /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g,
  ];

  private static readonly SENSITIVE_FIELDS = [
    'apiKey',
    'api_key',
    'token',
    'secret',
    'password',
    'credential',
    'auth',
    'authorization',
    'webhook',
    'webhookUrl',
    'url',
    'endpoint',
    'host',
    'server',
    'database',
    'connectionString',
    'privateKey',
    'publicKey',
    'certificate',
  ];
  /**
   * Sanitize a complete workflow
   */
  static sanitizeWorkflow(workflow: any): SanitizedWorkflow {
    // Create a deep copy to avoid modifying original
    const sanitized = JSON.parse(JSON.stringify(workflow));

    // Sanitize nodes
    if (sanitized.nodes && Array.isArray(sanitized.nodes)) {
      sanitized.nodes = sanitized.nodes.map((node: WorkflowNode) =>
        this.sanitizeNode(node)
      );
    }

    // Sanitize connections (keep structure only)
    if (sanitized.connections) {
      sanitized.connections = this.sanitizeConnections(sanitized.connections);
    }

    // Remove other potentially sensitive data
    delete sanitized.settings?.errorWorkflow;
    delete sanitized.staticData;
    delete sanitized.pinData;
    delete sanitized.credentials;
    delete sanitized.sharedWorkflows;
    delete sanitized.ownedBy;
    delete sanitized.createdBy;
    delete sanitized.updatedBy;

    // Calculate metrics
    const nodeTypes = sanitized.nodes?.map((n: WorkflowNode) => n.type) || [];
    const uniqueNodeTypes = [...new Set(nodeTypes)] as string[];

    const hasTrigger = nodeTypes.some((type: string) =>
      type.includes('trigger') || type.includes('webhook')
    );

    const hasWebhook = nodeTypes.some((type: string) =>
      type.includes('webhook')
    );

    // Calculate complexity
    const nodeCount = sanitized.nodes?.length || 0;
    let complexity: 'simple' | 'medium' | 'complex' = 'simple';
    if (nodeCount > 20) {
      complexity = 'complex';
    } else if (nodeCount > 10) {
      complexity = 'medium';
    }

    // Generate workflow hash (for deduplication)
    const workflowStructure = JSON.stringify({
      nodeTypes: uniqueNodeTypes.sort(),
      connections: sanitized.connections
    });
    const workflowHash = createHash('sha256')
      .update(workflowStructure)
      .digest('hex')
      .substring(0, 16);

    return {
      nodes: sanitized.nodes || [],
      connections: sanitized.connections || {},
      nodeCount,
      nodeTypes: uniqueNodeTypes,
      hasTrigger,
      hasWebhook,
      complexity,
      workflowHash
    };
  }
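The hash at the end of `sanitizeWorkflow` depends only on the sorted unique node types and the connection structure, so two workflows with the same shape but different parameters deduplicate to the same hash. The core of that computation as a standalone sketch:

```typescript
import { createHash } from 'crypto';

// Structure-only fingerprint: node parameters and names do not affect it.
function structureHash(nodeTypes: string[], connections: object): string {
  const structure = JSON.stringify({
    nodeTypes: [...new Set(nodeTypes)].sort(), // unique, order-independent
    connections,
  });
  return createHash('sha256').update(structure).digest('hex').substring(0, 16);
}
```

Truncating SHA-256 to 16 hex characters (64 bits) keeps the stored hash short while leaving accidental collisions vanishingly unlikely at telemetry scale.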
  /**
   * Sanitize a single node
   */
  private static sanitizeNode(node: WorkflowNode): WorkflowNode {
    const sanitized = { ...node };

    // Remove credentials entirely
    delete sanitized.credentials;

    // Sanitize parameters
    if (sanitized.parameters) {
      sanitized.parameters = this.sanitizeObject(sanitized.parameters);
    }

    return sanitized;
  }
  /**
   * Recursively sanitize an object
   */
  private static sanitizeObject(obj: any): any {
    if (!obj || typeof obj !== 'object') {
      return obj;
    }

    if (Array.isArray(obj)) {
      return obj.map(item => this.sanitizeObject(item));
    }

    const sanitized: any = {};

    for (const [key, value] of Object.entries(obj)) {
      // Check if key is sensitive
      if (this.isSensitiveField(key)) {
        sanitized[key] = '[REDACTED]';
        continue;
      }

      // Recursively sanitize nested objects
      if (typeof value === 'object' && value !== null) {
        sanitized[key] = this.sanitizeObject(value);
      }
      // Sanitize string values
      else if (typeof value === 'string') {
        sanitized[key] = this.sanitizeString(value, key);
      }
      // Keep other types as-is
      else {
        sanitized[key] = value;
      }
    }

    return sanitized;
  }
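The recursion above has three arms: redact sensitive keys wholesale, descend into objects and arrays, and pattern-scrub strings. A reduced sketch with a deliberately tiny key list shows the wholesale-redaction and descent arms (string scrubbing omitted for brevity):

```typescript
// Depth-first walk: sensitive keys are replaced with '[REDACTED]';
// other values are visited recursively. Key list shortened for illustration.
const SENSITIVE = ['password', 'token', 'apikey'];

function redact(value: any): any {
  if (!value || typeof value !== 'object') return value; // primitives pass through
  if (Array.isArray(value)) return value.map(redact);
  const out: Record<string, any> = {};
  for (const [key, child] of Object.entries(value)) {
    out[key] = SENSITIVE.some(s => key.toLowerCase().includes(s))
      ? '[REDACTED]'
      : redact(child);
  }
  return out;
}
```

Matching on lowercased substrings is what lets a single entry like `'apikey'` catch `apiKey`, `api_key` spelled `apikey`, and `openAiApiKey` alike.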
  /**
   * Sanitize string values
   */
  private static sanitizeString(value: string, fieldName: string): string {
    // First check if this is a webhook URL
    if (value.includes('/webhook/') || value.includes('/hook/')) {
      return 'https://[webhook-url]';
    }

    let sanitized = value;

    // Apply all sensitive patterns
    for (const pattern of this.SENSITIVE_PATTERNS) {
      // Skip webhook patterns - already handled above
      if (pattern.toString().includes('webhook')) {
        continue;
      }
      sanitized = sanitized.replace(pattern, '[REDACTED]');
    }

    // Additional sanitization for specific field types
    if (fieldName.toLowerCase().includes('url') ||
        fieldName.toLowerCase().includes('endpoint')) {
      // Keep URL structure but remove domain details
      if (sanitized.startsWith('http://') || sanitized.startsWith('https://')) {
        // If value has been redacted, leave it as is
        if (sanitized.includes('[REDACTED]')) {
          return '[REDACTED]';
        }
        const urlParts = sanitized.split('/');
        if (urlParts.length > 2) {
          urlParts[2] = '[domain]';
          sanitized = urlParts.join('/');
        }
      }
    }

    return sanitized;
  }
  /**
   * Check if a field name is sensitive
   */
  private static isSensitiveField(fieldName: string): boolean {
    const lowerFieldName = fieldName.toLowerCase();
    return this.SENSITIVE_FIELDS.some(sensitive =>
      lowerFieldName.includes(sensitive.toLowerCase())
    );
  }
|
||||
* Sanitize connections (keep structure only)
|
||||
*/
|
||||
private static sanitizeConnections(connections: any): any {
|
||||
if (!connections || typeof connections !== 'object') {
|
||||
return connections;
|
||||
}
|
||||
|
||||
const sanitized: any = {};
|
||||
|
||||
for (const [nodeId, nodeConnections] of Object.entries(connections)) {
|
||||
if (typeof nodeConnections === 'object' && nodeConnections !== null) {
|
||||
sanitized[nodeId] = {};
|
||||
|
||||
for (const [connType, connArray] of Object.entries(nodeConnections as any)) {
|
||||
if (Array.isArray(connArray)) {
|
||||
sanitized[nodeId][connType] = connArray.map((conns: any) => {
|
||||
if (Array.isArray(conns)) {
|
||||
return conns.map((conn: any) => ({
|
||||
node: conn.node,
|
||||
type: conn.type,
|
||||
index: conn.index
|
||||
}));
|
||||
}
|
||||
return conns;
|
||||
});
|
||||
} else {
|
||||
sanitized[nodeId][connType] = connArray;
|
||||
}
|
||||
}
|
||||
} else {
|
||||
sanitized[nodeId] = nodeConnections;
|
||||
}
|
||||
}
|
||||
|
||||
return sanitized;
|
||||
}
|
||||
|
||||
  /**
   * Generate a hash for workflow deduplication
   */
  static generateWorkflowHash(workflow: any): string {
    const sanitized = this.sanitizeWorkflow(workflow);
    return sanitized.workflowHash;
  }
}
tests/unit/telemetry/config-manager.test.ts (new file, 507 lines)
@@ -0,0 +1,507 @@
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { TelemetryConfigManager } from '../../../src/telemetry/config-manager';
import { existsSync, readFileSync, writeFileSync, mkdirSync, rmSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';

// Mock fs module
vi.mock('fs', async () => {
  const actual = await vi.importActual<typeof import('fs')>('fs');
  return {
    ...actual,
    existsSync: vi.fn(),
    readFileSync: vi.fn(),
    writeFileSync: vi.fn(),
    mkdirSync: vi.fn()
  };
});

describe('TelemetryConfigManager', () => {
  let manager: TelemetryConfigManager;

  beforeEach(() => {
    vi.clearAllMocks();
    // Clear singleton instance
    (TelemetryConfigManager as any).instance = null;

    // Mock console.log to suppress first-run notice in tests
    vi.spyOn(console, 'log').mockImplementation(() => {});
  });

  afterEach(() => {
    vi.restoreAllMocks();
  });

  describe('getInstance', () => {
    it('should return singleton instance', () => {
      const instance1 = TelemetryConfigManager.getInstance();
      const instance2 = TelemetryConfigManager.getInstance();
      expect(instance1).toBe(instance2);
    });
  });

  describe('loadConfig', () => {
    it('should create default config on first run', () => {
      vi.mocked(existsSync).mockReturnValue(false);

      manager = TelemetryConfigManager.getInstance();
      const config = manager.loadConfig();

      expect(config.enabled).toBe(true);
      expect(config.userId).toMatch(/^[a-f0-9]{16}$/);
      expect(config.firstRun).toBeDefined();
      expect(vi.mocked(mkdirSync)).toHaveBeenCalledWith(
        join(homedir(), '.n8n-mcp'),
        { recursive: true }
      );
      expect(vi.mocked(writeFileSync)).toHaveBeenCalled();
    });

    it('should load existing config from disk', () => {
      const mockConfig = {
        enabled: false,
        userId: 'test-user-id',
        firstRun: '2024-01-01T00:00:00Z'
      };

      vi.mocked(existsSync).mockReturnValue(true);
      vi.mocked(readFileSync).mockReturnValue(JSON.stringify(mockConfig));

      manager = TelemetryConfigManager.getInstance();
      const config = manager.loadConfig();

      expect(config).toEqual(mockConfig);
    });

    it('should handle corrupted config file gracefully', () => {
      vi.mocked(existsSync).mockReturnValue(true);
      vi.mocked(readFileSync).mockReturnValue('invalid json');

      manager = TelemetryConfigManager.getInstance();
      const config = manager.loadConfig();
|
||||
|
||||
expect(config.enabled).toBe(false);
|
||||
expect(config.userId).toMatch(/^[a-f0-9]{16}$/);
|
||||
});
|
||||
|
||||
it('should add userId to config if missing', () => {
|
||||
const mockConfig = {
|
||||
enabled: true,
|
||||
firstRun: '2024-01-01T00:00:00Z'
|
||||
};
|
||||
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify(mockConfig));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
const config = manager.loadConfig();
|
||||
|
||||
expect(config.userId).toMatch(/^[a-f0-9]{16}$/);
|
||||
expect(vi.mocked(writeFileSync)).toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('isEnabled', () => {
|
||||
it('should return true when telemetry is enabled', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: true,
|
||||
userId: 'test-id'
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
expect(manager.isEnabled()).toBe(true);
|
||||
});
|
||||
|
||||
it('should return false when telemetry is disabled', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: false,
|
||||
userId: 'test-id'
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
expect(manager.isEnabled()).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getUserId', () => {
|
||||
it('should return consistent user ID', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: true,
|
||||
userId: 'test-user-id-123'
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
expect(manager.getUserId()).toBe('test-user-id-123');
|
||||
});
|
||||
});
|
||||
|
||||
describe('isFirstRun', () => {
|
||||
it('should return true if config file does not exist', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(false);
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
expect(manager.isFirstRun()).toBe(true);
|
||||
});
|
||||
|
||||
it('should return false if config file exists', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
expect(manager.isFirstRun()).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('enable/disable', () => {
|
||||
beforeEach(() => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: false,
|
||||
userId: 'test-id'
|
||||
}));
|
||||
});
|
||||
|
||||
it('should enable telemetry', () => {
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
manager.enable();
|
||||
|
||||
const calls = vi.mocked(writeFileSync).mock.calls;
|
||||
expect(calls.length).toBeGreaterThan(0);
|
||||
const lastCall = calls[calls.length - 1];
|
||||
expect(lastCall[1]).toContain('"enabled": true');
|
||||
});
|
||||
|
||||
it('should disable telemetry', () => {
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
manager.disable();
|
||||
|
||||
const calls = vi.mocked(writeFileSync).mock.calls;
|
||||
expect(calls.length).toBeGreaterThan(0);
|
||||
const lastCall = calls[calls.length - 1];
|
||||
expect(lastCall[1]).toContain('"enabled": false');
|
||||
});
|
||||
});
|
||||
|
||||
describe('getStatus', () => {
|
||||
it('should return formatted status string', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: true,
|
||||
userId: 'test-id',
|
||||
firstRun: '2024-01-01T00:00:00Z'
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
const status = manager.getStatus();
|
||||
|
||||
expect(status).toContain('ENABLED');
|
||||
expect(status).toContain('test-id');
|
||||
expect(status).toContain('2024-01-01T00:00:00Z');
|
||||
expect(status).toContain('npx n8n-mcp telemetry');
|
||||
});
|
||||
});
|
||||
|
||||
describe('edge cases and error handling', () => {
|
||||
it('should handle file system errors during config creation', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(false);
|
||||
vi.mocked(mkdirSync).mockImplementation(() => {
|
||||
throw new Error('Permission denied');
|
||||
});
|
||||
|
||||
// Should not crash on file system errors
|
||||
expect(() => TelemetryConfigManager.getInstance()).not.toThrow();
|
||||
});
|
||||
|
||||
it('should handle write errors during config save', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: false,
|
||||
userId: 'test-id'
|
||||
}));
|
||||
vi.mocked(writeFileSync).mockImplementation(() => {
|
||||
throw new Error('Disk full');
|
||||
});
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
|
||||
// Should not crash on write errors
|
||||
expect(() => manager.enable()).not.toThrow();
|
||||
expect(() => manager.disable()).not.toThrow();
|
||||
});
|
||||
|
||||
it('should handle missing home directory', () => {
|
||||
// Mock homedir to return empty string
|
||||
const originalHomedir = require('os').homedir;
|
||||
vi.doMock('os', () => ({
|
||||
homedir: () => ''
|
||||
}));
|
||||
|
||||
vi.mocked(existsSync).mockReturnValue(false);
|
||||
|
||||
expect(() => TelemetryConfigManager.getInstance()).not.toThrow();
|
||||
});
|
||||
|
||||
it('should generate valid user ID when crypto.randomBytes fails', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(false);
|
||||
|
||||
// Mock crypto to fail
|
||||
vi.doMock('crypto', () => ({
|
||||
randomBytes: () => {
|
||||
throw new Error('Crypto not available');
|
||||
}
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
const config = manager.loadConfig();
|
||||
|
||||
expect(config.userId).toBeDefined();
|
||||
expect(config.userId).toMatch(/^[a-f0-9]{16}$/);
|
||||
});
|
||||
|
||||
it('should handle concurrent access to config file', () => {
|
||||
let readCount = 0;
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockImplementation(() => {
|
||||
readCount++;
|
||||
if (readCount === 1) {
|
||||
return JSON.stringify({
|
||||
enabled: false,
|
||||
userId: 'test-id-1'
|
||||
});
|
||||
}
|
||||
return JSON.stringify({
|
||||
enabled: true,
|
||||
userId: 'test-id-2'
|
||||
});
|
||||
});
|
||||
|
||||
const manager1 = TelemetryConfigManager.getInstance();
|
||||
const manager2 = TelemetryConfigManager.getInstance();
|
||||
|
||||
// Should be same instance due to singleton pattern
|
||||
expect(manager1).toBe(manager2);
|
||||
});
|
||||
|
||||
it('should handle environment variable overrides', () => {
|
||||
const originalEnv = process.env.N8N_MCP_TELEMETRY_DISABLED;
|
||||
|
||||
// Test with environment variable set to disable telemetry
|
||||
process.env.N8N_MCP_TELEMETRY_DISABLED = 'true';
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: true,
|
||||
userId: 'test-id'
|
||||
}));
|
||||
|
||||
(TelemetryConfigManager as any).instance = null;
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
|
||||
expect(manager.isEnabled()).toBe(false);
|
||||
|
||||
// Test with environment variable set to enable telemetry
|
||||
process.env.N8N_MCP_TELEMETRY_DISABLED = 'false';
|
||||
(TelemetryConfigManager as any).instance = null;
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: true,
|
||||
userId: 'test-id'
|
||||
}));
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
|
||||
expect(manager.isEnabled()).toBe(true);
|
||||
|
||||
// Restore original environment
|
||||
process.env.N8N_MCP_TELEMETRY_DISABLED = originalEnv;
|
||||
});
|
||||
|
||||
it('should handle invalid JSON in config file gracefully', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue('{ invalid json syntax');
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
const config = manager.loadConfig();
|
||||
|
||||
expect(config.enabled).toBe(false); // Default to disabled on corrupt config
|
||||
expect(config.userId).toMatch(/^[a-f0-9]{16}$/); // Should generate new user ID
|
||||
});
|
||||
|
||||
it('should handle config file with partial structure', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: true
|
||||
// Missing userId and firstRun
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
const config = manager.loadConfig();
|
||||
|
||||
expect(config.enabled).toBe(true);
|
||||
expect(config.userId).toMatch(/^[a-f0-9]{16}$/);
|
||||
// firstRun might not be defined if config is partial and loaded from disk
|
||||
// The implementation only adds firstRun on first creation
|
||||
});
|
||||
|
||||
it('should handle config file with invalid data types', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: 'not-a-boolean',
|
||||
userId: 12345, // Not a string
|
||||
firstRun: null
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
const config = manager.loadConfig();
|
||||
|
||||
// The config manager loads the data as-is, so we get the original types
|
||||
// The validation happens during usage, not loading
|
||||
expect(config.enabled).toBe('not-a-boolean');
|
||||
expect(config.userId).toBe(12345);
|
||||
});
|
||||
|
||||
it('should handle very large config files', () => {
|
||||
const largeConfig = {
|
||||
enabled: true,
|
||||
userId: 'test-id',
|
||||
firstRun: '2024-01-01T00:00:00Z',
|
||||
extraData: 'x'.repeat(1000000) // 1MB of data
|
||||
};
|
||||
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify(largeConfig));
|
||||
|
||||
expect(() => TelemetryConfigManager.getInstance()).not.toThrow();
|
||||
});
|
||||
|
||||
it('should handle config directory creation race conditions', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(false);
|
||||
let mkdirCallCount = 0;
|
||||
vi.mocked(mkdirSync).mockImplementation(() => {
|
||||
mkdirCallCount++;
|
||||
if (mkdirCallCount === 1) {
|
||||
throw new Error('EEXIST: file already exists');
|
||||
}
|
||||
return undefined;
|
||||
});
|
||||
|
||||
expect(() => TelemetryConfigManager.getInstance()).not.toThrow();
|
||||
});
|
||||
|
||||
it('should handle file system permission changes', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: false,
|
||||
userId: 'test-id'
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
|
||||
// Simulate permission denied on subsequent write
|
||||
vi.mocked(writeFileSync).mockImplementationOnce(() => {
|
||||
throw new Error('EACCES: permission denied');
|
||||
});
|
||||
|
||||
expect(() => manager.enable()).not.toThrow();
|
||||
});
|
||||
|
||||
it('should handle system clock changes affecting timestamps', () => {
|
||||
const futureDate = new Date(Date.now() + 365 * 24 * 60 * 60 * 1000); // 1 year in future
|
||||
const pastDate = new Date(Date.now() - 365 * 24 * 60 * 60 * 1000); // 1 year in past
|
||||
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: true,
|
||||
userId: 'test-id',
|
||||
firstRun: futureDate.toISOString()
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
const config = manager.loadConfig();
|
||||
|
||||
expect(config.firstRun).toBeDefined();
|
||||
expect(new Date(config.firstRun as string).getTime()).toBeGreaterThan(0);
|
||||
});
|
||||
|
||||
it('should handle config updates during runtime', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: false,
|
||||
userId: 'test-id'
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
expect(manager.isEnabled()).toBe(false);
|
||||
|
||||
// Simulate external config change by clearing cache first
|
||||
(manager as any).config = null;
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: true,
|
||||
userId: 'test-id'
|
||||
}));
|
||||
|
||||
// Now calling loadConfig should pick up changes
|
||||
const newConfig = manager.loadConfig();
|
||||
expect(newConfig.enabled).toBe(true);
|
||||
expect(manager.isEnabled()).toBe(true);
|
||||
});
|
||||
|
||||
it('should handle multiple rapid enable/disable calls', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: false,
|
||||
userId: 'test-id'
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
|
||||
// Rapidly toggle state
|
||||
for (let i = 0; i < 100; i++) {
|
||||
if (i % 2 === 0) {
|
||||
manager.enable();
|
||||
} else {
|
||||
manager.disable();
|
||||
}
|
||||
}
|
||||
|
||||
// Should not crash and maintain consistent state
|
||||
expect(typeof manager.isEnabled()).toBe('boolean');
|
||||
});
|
||||
|
||||
it('should handle user ID collision (extremely unlikely)', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(false);
|
||||
|
||||
// Mock crypto to always return same bytes
|
||||
const mockBytes = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);
|
||||
vi.doMock('crypto', () => ({
|
||||
randomBytes: () => mockBytes
|
||||
}));
|
||||
|
||||
(TelemetryConfigManager as any).instance = null;
|
||||
const manager1 = TelemetryConfigManager.getInstance();
|
||||
const userId1 = manager1.getUserId();
|
||||
|
||||
(TelemetryConfigManager as any).instance = null;
|
||||
const manager2 = TelemetryConfigManager.getInstance();
|
||||
const userId2 = manager2.getUserId();
|
||||
|
||||
// Should generate same ID from same random bytes
|
||||
expect(userId1).toBe(userId2);
|
||||
expect(userId1).toMatch(/^[a-f0-9]{16}$/);
|
||||
});
|
||||
|
||||
it('should handle status generation with missing fields', () => {
|
||||
vi.mocked(existsSync).mockReturnValue(true);
|
||||
vi.mocked(readFileSync).mockReturnValue(JSON.stringify({
|
||||
enabled: true
|
||||
// Missing userId and firstRun
|
||||
}));
|
||||
|
||||
manager = TelemetryConfigManager.getInstance();
|
||||
const status = manager.getStatus();
|
||||
|
||||
expect(status).toContain('ENABLED');
|
||||
expect(status).toBeDefined();
|
||||
expect(typeof status).toBe('string');
|
||||
});
|
||||
});
|
||||
});
|
||||
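The environment-override test above exercises a simple precedence rule: when N8N_MCP_TELEMETRY_DISABLED is set to 'true', it wins over the on-disk `enabled` flag; otherwise the config file decides. A minimal sketch of that rule (the helper name `resolveEnabled` is illustrative, not the project's API):

```typescript
// Hypothetical helper showing the precedence the tests assert:
// explicit env opt-out overrides the config file; otherwise config.enabled applies.
function resolveEnabled(
  configEnabled: boolean,
  env: Record<string, string | undefined>
): boolean {
  if (env.N8N_MCP_TELEMETRY_DISABLED === 'true') return false; // opt-out wins
  return configEnabled; // fall back to the config file
}
```

Both test cases are consistent with this shape: env 'true' forces `isEnabled()` to false even with `enabled: true` on disk, and env 'false' leaves the config value in effect.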
670  tests/unit/telemetry/workflow-sanitizer.test.ts  Normal file
@@ -0,0 +1,670 @@
import { describe, it, expect } from 'vitest';
import { WorkflowSanitizer } from '../../../src/telemetry/workflow-sanitizer';

describe('WorkflowSanitizer', () => {
  describe('sanitizeWorkflow', () => {
    it('should remove API keys from parameters', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'HTTP Request',
            type: 'n8n-nodes-base.httpRequest',
            position: [100, 100],
            parameters: {
              url: 'https://api.example.com',
              apiKey: 'sk-1234567890abcdef1234567890abcdef',
              headers: {
                'Authorization': 'Bearer sk-1234567890abcdef1234567890abcdef'
              }
            }
          }
        ],
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      expect(sanitized.nodes[0].parameters.apiKey).toBe('[REDACTED]');
      expect(sanitized.nodes[0].parameters.headers.Authorization).toBe('[REDACTED]');
    });

    it('should sanitize webhook URLs but keep structure', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            position: [100, 100],
            parameters: {
              path: 'my-webhook',
              webhookUrl: 'https://n8n.example.com/webhook/abc-def-ghi',
              method: 'POST'
            }
          }
        ],
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      expect(sanitized.nodes[0].parameters.webhookUrl).toBe('[REDACTED]');
      expect(sanitized.nodes[0].parameters.method).toBe('POST'); // Method should remain
      expect(sanitized.nodes[0].parameters.path).toBe('my-webhook'); // Path should remain
    });

    it('should remove credentials entirely', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'Slack',
            type: 'n8n-nodes-base.slack',
            position: [100, 100],
            parameters: {
              channel: 'general',
              text: 'Hello World'
            },
            credentials: {
              slackApi: {
                id: 'cred-123',
                name: 'My Slack'
              }
            }
          }
        ],
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      expect(sanitized.nodes[0].credentials).toBeUndefined();
      expect(sanitized.nodes[0].parameters.channel).toBe('general'); // Channel should remain
      expect(sanitized.nodes[0].parameters.text).toBe('Hello World'); // Text should remain
    });

    it('should sanitize URLs in parameters', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'HTTP Request',
            type: 'n8n-nodes-base.httpRequest',
            position: [100, 100],
            parameters: {
              url: 'https://api.example.com/endpoint',
              endpoint: 'https://another.example.com/api',
              baseUrl: 'https://base.example.com'
            }
          }
        ],
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      expect(sanitized.nodes[0].parameters.url).toBe('[REDACTED]');
      expect(sanitized.nodes[0].parameters.endpoint).toBe('[REDACTED]');
      expect(sanitized.nodes[0].parameters.baseUrl).toBe('[REDACTED]');
    });

    it('should calculate workflow metrics correctly', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            position: [100, 100],
            parameters: {}
          },
          {
            id: '2',
            name: 'HTTP Request',
            type: 'n8n-nodes-base.httpRequest',
            position: [200, 100],
            parameters: {}
          },
          {
            id: '3',
            name: 'Slack',
            type: 'n8n-nodes-base.slack',
            position: [300, 100],
            parameters: {}
          }
        ],
        connections: {
          '1': {
            main: [[{ node: '2', type: 'main', index: 0 }]]
          },
          '2': {
            main: [[{ node: '3', type: 'main', index: 0 }]]
          }
        }
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      expect(sanitized.nodeCount).toBe(3);
      expect(sanitized.nodeTypes).toContain('n8n-nodes-base.webhook');
      expect(sanitized.nodeTypes).toContain('n8n-nodes-base.httpRequest');
      expect(sanitized.nodeTypes).toContain('n8n-nodes-base.slack');
      expect(sanitized.hasTrigger).toBe(true);
      expect(sanitized.hasWebhook).toBe(true);
      expect(sanitized.complexity).toBe('simple');
    });

    it('should calculate complexity based on node count', () => {
      const createWorkflow = (nodeCount: number) => ({
        nodes: Array.from({ length: nodeCount }, (_, i) => ({
          id: String(i),
          name: `Node ${i}`,
          type: 'n8n-nodes-base.function',
          position: [i * 100, 100],
          parameters: {}
        })),
        connections: {}
      });

      const simple = WorkflowSanitizer.sanitizeWorkflow(createWorkflow(5));
      expect(simple.complexity).toBe('simple');

      const medium = WorkflowSanitizer.sanitizeWorkflow(createWorkflow(15));
      expect(medium.complexity).toBe('medium');

      const complex = WorkflowSanitizer.sanitizeWorkflow(createWorkflow(25));
      expect(complex.complexity).toBe('complex');
    });

    it('should generate consistent workflow hash', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'Webhook',
            type: 'n8n-nodes-base.webhook',
            position: [100, 100],
            parameters: { path: 'test' }
          }
        ],
        connections: {}
      };

      const hash1 = WorkflowSanitizer.generateWorkflowHash(workflow);
      const hash2 = WorkflowSanitizer.generateWorkflowHash(workflow);

      expect(hash1).toBe(hash2);
      expect(hash1).toMatch(/^[a-f0-9]{16}$/);
    });

    it('should sanitize nested objects in parameters', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'Complex Node',
            type: 'n8n-nodes-base.httpRequest',
            position: [100, 100],
            parameters: {
              options: {
                headers: {
                  'X-API-Key': 'secret-key-1234567890abcdef',
                  'Content-Type': 'application/json'
                },
                body: {
                  data: 'some data',
                  token: 'another-secret-token-xyz123'
                }
              }
            }
          }
        ],
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      expect(sanitized.nodes[0].parameters.options.headers['X-API-Key']).toBe('[REDACTED]');
      expect(sanitized.nodes[0].parameters.options.headers['Content-Type']).toBe('application/json');
      expect(sanitized.nodes[0].parameters.options.body.data).toBe('some data');
      expect(sanitized.nodes[0].parameters.options.body.token).toBe('[REDACTED]');
    });

    it('should preserve connections structure', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'Node 1',
            type: 'n8n-nodes-base.start',
            position: [100, 100],
            parameters: {}
          },
          {
            id: '2',
            name: 'Node 2',
            type: 'n8n-nodes-base.function',
            position: [200, 100],
            parameters: {}
          }
        ],
        connections: {
          '1': {
            main: [[{ node: '2', type: 'main', index: 0 }]],
            error: [[{ node: '2', type: 'error', index: 0 }]]
          }
        }
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      expect(sanitized.connections).toEqual({
        '1': {
          main: [[{ node: '2', type: 'main', index: 0 }]],
          error: [[{ node: '2', type: 'error', index: 0 }]]
        }
      });
    });

    it('should remove sensitive workflow metadata', () => {
      const workflow = {
        id: 'workflow-123',
        name: 'My Workflow',
        nodes: [],
        connections: {},
        settings: {
          errorWorkflow: 'error-workflow-id',
          timezone: 'America/New_York'
        },
        staticData: { some: 'data' },
        pinData: { node1: 'pinned' },
        credentials: { slack: 'cred-123' },
        sharedWorkflows: ['user-456'],
        ownedBy: 'user-123',
        createdBy: 'user-123',
        updatedBy: 'user-456'
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      // Verify that sensitive workflow-level properties are not in the sanitized output
      // The sanitized workflow should only have specific fields as defined in SanitizedWorkflow interface
      expect(sanitized.nodes).toEqual([]);
      expect(sanitized.connections).toEqual({});
      expect(sanitized.nodeCount).toBe(0);
      expect(sanitized.nodeTypes).toEqual([]);

      // Verify these fields don't exist in the sanitized output
      const sanitizedAsAny = sanitized as any;
      expect(sanitizedAsAny.settings).toBeUndefined();
      expect(sanitizedAsAny.staticData).toBeUndefined();
      expect(sanitizedAsAny.pinData).toBeUndefined();
      expect(sanitizedAsAny.credentials).toBeUndefined();
      expect(sanitizedAsAny.sharedWorkflows).toBeUndefined();
      expect(sanitizedAsAny.ownedBy).toBeUndefined();
      expect(sanitizedAsAny.createdBy).toBeUndefined();
      expect(sanitizedAsAny.updatedBy).toBeUndefined();
    });
  });

  describe('edge cases and error handling', () => {
    it('should handle null or undefined workflow', () => {
      // The actual implementation will throw because JSON.parse(JSON.stringify(null)) is valid but creates issues
      expect(() => WorkflowSanitizer.sanitizeWorkflow(null as any)).toThrow();
      expect(() => WorkflowSanitizer.sanitizeWorkflow(undefined as any)).toThrow();
    });

    it('should handle workflow without nodes', () => {
      const workflow = {
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      expect(sanitized.nodeCount).toBe(0);
      expect(sanitized.nodeTypes).toEqual([]);
      expect(sanitized.nodes).toEqual([]);
      expect(sanitized.hasTrigger).toBe(false);
      expect(sanitized.hasWebhook).toBe(false);
    });

    it('should handle workflow without connections', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'Test Node',
            type: 'n8n-nodes-base.function',
            position: [100, 100],
            parameters: {}
          }
        ]
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      expect(sanitized.connections).toEqual({});
      expect(sanitized.nodeCount).toBe(1);
    });

    it('should handle malformed nodes array', () => {
      const workflow = {
        nodes: [
          {
            id: '2',
            name: 'Valid Node',
            type: 'n8n-nodes-base.function',
            position: [100, 100],
            parameters: {}
          }
        ],
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      // Should handle workflow gracefully
      expect(sanitized.nodeCount).toBe(1);
      expect(sanitized.nodes.length).toBe(1);
    });

    it('should handle deeply nested objects in parameters', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'Deep Node',
            type: 'n8n-nodes-base.httpRequest',
            position: [100, 100],
            parameters: {
              level1: {
                level2: {
                  level3: {
                    level4: {
                      level5: {
                        secret: 'deep-secret-key-1234567890abcdef',
                        safe: 'safe-value'
                      }
                    }
                  }
                }
              }
            }
          }
        ],
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      expect(sanitized.nodes[0].parameters.level1.level2.level3.level4.level5.secret).toBe('[REDACTED]');
      expect(sanitized.nodes[0].parameters.level1.level2.level3.level4.level5.safe).toBe('safe-value');
    });

    it('should handle circular references gracefully', () => {
      const workflow: any = {
        nodes: [
          {
            id: '1',
            name: 'Circular Node',
            type: 'n8n-nodes-base.function',
            position: [100, 100],
            parameters: {}
          }
        ],
        connections: {}
      };

      // Create circular reference
      workflow.nodes[0].parameters.selfRef = workflow.nodes[0];

      // JSON.stringify throws on circular references, so this should throw
      expect(() => WorkflowSanitizer.sanitizeWorkflow(workflow)).toThrow();
    });

    it('should handle extremely large workflows', () => {
      const largeWorkflow = {
        nodes: Array.from({ length: 1000 }, (_, i) => ({
          id: String(i),
          name: `Node ${i}`,
          type: 'n8n-nodes-base.function',
          position: [i * 10, 100],
          parameters: {
            code: `// Node ${i} code here`.repeat(100) // Large parameter
          }
        })),
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(largeWorkflow);

      expect(sanitized.nodeCount).toBe(1000);
      expect(sanitized.complexity).toBe('complex');
    });

    it('should handle various sensitive data patterns', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'Sensitive Node',
            type: 'n8n-nodes-base.httpRequest',
            position: [100, 100],
            parameters: {
              // Different patterns of sensitive data
              api_key: 'sk-1234567890abcdef1234567890abcdef',
              accessToken: 'ghp_abcdefghijklmnopqrstuvwxyz123456',
              secret_token: 'secret-123-abc-def',
              authKey: 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9',
              clientSecret: 'abc123def456ghi789',
              webhookUrl: 'https://hooks.example.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX',
              databaseUrl: 'postgres://user:password@localhost:5432/db',
              connectionString: 'Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;',
              // Safe values that should remain
              timeout: 5000,
              method: 'POST',
              retries: 3,
              name: 'My API Call'
            }
          }
        ],
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      const params = sanitized.nodes[0].parameters;
      expect(params.api_key).toBe('[REDACTED]');
      expect(params.accessToken).toBe('[REDACTED]');
      expect(params.secret_token).toBe('[REDACTED]');
      expect(params.authKey).toBe('[REDACTED]');
      expect(params.clientSecret).toBe('[REDACTED]');
      expect(params.webhookUrl).toBe('[REDACTED]');
      expect(params.databaseUrl).toBe('[REDACTED]');
      expect(params.connectionString).toBe('[REDACTED]');

      // Safe values should remain
      expect(params.timeout).toBe(5000);
      expect(params.method).toBe('POST');
      expect(params.retries).toBe(3);
      expect(params.name).toBe('My API Call');
    });

    it('should handle arrays in parameters', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'Array Node',
            type: 'n8n-nodes-base.httpRequest',
            position: [100, 100],
            parameters: {
              headers: [
                { name: 'Authorization', value: 'Bearer secret-token-123456789' },
                { name: 'Content-Type', value: 'application/json' },
                { name: 'X-API-Key', value: 'api-key-abcdefghijklmnopqrstuvwxyz' }
              ],
              methods: ['GET', 'POST']
            }
          }
        ],
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);

      const headers = sanitized.nodes[0].parameters.headers;
      expect(headers[0].value).toBe('[REDACTED]'); // Authorization
      expect(headers[1].value).toBe('application/json'); // Content-Type (safe)
      expect(headers[2].value).toBe('[REDACTED]'); // X-API-Key
      expect(sanitized.nodes[0].parameters.methods).toEqual(['GET', 'POST']); // Array should remain
    });

    it('should handle mixed data types in parameters', () => {
      const workflow = {
        nodes: [
          {
            id: '1',
            name: 'Mixed Node',
            type: 'n8n-nodes-base.function',
            position: [100, 100],
            parameters: {
              numberValue: 42,
              booleanValue: true,
              stringValue: 'safe string',
              nullValue: null,
              undefinedValue: undefined,
              dateValue: new Date('2024-01-01'),
              arrayValue: [1, 2, 3],
              nestedObject: {
                secret: 'secret-key-12345678',
                safe: 'safe-value'
              }
            }
          }
        ],
        connections: {}
      };

      const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);
const params = sanitized.nodes[0].parameters;
|
||||
expect(params.numberValue).toBe(42);
|
||||
expect(params.booleanValue).toBe(true);
|
||||
expect(params.stringValue).toBe('safe string');
|
||||
expect(params.nullValue).toBeNull();
|
||||
expect(params.undefinedValue).toBeUndefined();
|
||||
expect(params.arrayValue).toEqual([1, 2, 3]);
|
||||
expect(params.nestedObject.secret).toBe('[REDACTED]');
|
||||
expect(params.nestedObject.safe).toBe('safe-value');
|
||||
});
|
||||
|
||||
it('should handle missing node properties gracefully', () => {
|
||||
const workflow = {
|
||||
nodes: [
|
||||
{ id: '3', name: 'Complete', type: 'n8n-nodes-base.function' } // Missing position but has required fields
|
||||
],
|
||||
connections: {}
|
||||
};
|
||||
|
||||
const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);
|
||||
|
||||
expect(sanitized.nodes).toBeDefined();
|
||||
expect(sanitized.nodeCount).toBe(1);
|
||||
});
|
||||
|
||||
it('should handle complex connection structures', () => {
|
||||
const workflow = {
|
||||
nodes: [
|
||||
{ id: '1', name: 'Start', type: 'n8n-nodes-base.start', position: [0, 0], parameters: {} },
|
||||
{ id: '2', name: 'Branch', type: 'n8n-nodes-base.if', position: [100, 0], parameters: {} },
|
||||
{ id: '3', name: 'Path A', type: 'n8n-nodes-base.function', position: [200, 0], parameters: {} },
|
||||
{ id: '4', name: 'Path B', type: 'n8n-nodes-base.function', position: [200, 100], parameters: {} },
|
||||
{ id: '5', name: 'Merge', type: 'n8n-nodes-base.merge', position: [300, 50], parameters: {} }
|
||||
],
|
||||
connections: {
|
||||
'1': {
|
||||
main: [[{ node: '2', type: 'main', index: 0 }]]
|
||||
},
|
||||
'2': {
|
||||
main: [
|
||||
[{ node: '3', type: 'main', index: 0 }],
|
||||
[{ node: '4', type: 'main', index: 0 }]
|
||||
]
|
||||
},
|
||||
'3': {
|
||||
main: [[{ node: '5', type: 'main', index: 0 }]]
|
||||
},
|
||||
'4': {
|
||||
main: [[{ node: '5', type: 'main', index: 1 }]]
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);
|
||||
|
||||
expect(sanitized.connections).toEqual(workflow.connections);
|
||||
expect(sanitized.nodeCount).toBe(5);
|
||||
expect(sanitized.complexity).toBe('simple'); // 5 nodes = simple
|
||||
});
|
||||
|
||||
it('should generate different hashes for different workflows', () => {
|
||||
const workflow1 = {
|
||||
nodes: [{ id: '1', name: 'Node1', type: 'type1', position: [0, 0], parameters: {} }],
|
||||
connections: {}
|
||||
};
|
||||
|
||||
const workflow2 = {
|
||||
nodes: [{ id: '1', name: 'Node2', type: 'type2', position: [0, 0], parameters: {} }],
|
||||
connections: {}
|
||||
};
|
||||
|
||||
const hash1 = WorkflowSanitizer.generateWorkflowHash(workflow1);
|
||||
const hash2 = WorkflowSanitizer.generateWorkflowHash(workflow2);
|
||||
|
||||
expect(hash1).not.toBe(hash2);
|
||||
expect(hash1).toMatch(/^[a-f0-9]{16}$/);
|
||||
expect(hash2).toMatch(/^[a-f0-9]{16}$/);
|
||||
});
|
||||
|
||||
it('should handle workflow with only trigger nodes', () => {
|
||||
const workflow = {
|
||||
nodes: [
|
||||
{ id: '1', name: 'Cron', type: 'n8n-nodes-base.cron', position: [0, 0], parameters: {} },
|
||||
{ id: '2', name: 'Webhook', type: 'n8n-nodes-base.webhook', position: [100, 0], parameters: {} }
|
||||
],
|
||||
connections: {}
|
||||
};
|
||||
|
||||
const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);
|
||||
|
||||
expect(sanitized.hasTrigger).toBe(true);
|
||||
expect(sanitized.hasWebhook).toBe(true);
|
||||
expect(sanitized.nodeTypes).toContain('n8n-nodes-base.cron');
|
||||
expect(sanitized.nodeTypes).toContain('n8n-nodes-base.webhook');
|
||||
});
|
||||
|
||||
it('should handle workflow with special characters in node names and types', () => {
|
||||
const workflow = {
|
||||
nodes: [
|
||||
{
|
||||
id: '1',
|
||||
name: 'Node with émojis 🚀 and specíal chars',
|
||||
type: 'n8n-nodes-base.function',
|
||||
position: [0, 0],
|
||||
parameters: {
|
||||
message: 'Test with émojis 🎉 and URLs https://example.com'
|
||||
}
|
||||
}
|
||||
],
|
||||
connections: {}
|
||||
};
|
||||
|
||||
const sanitized = WorkflowSanitizer.sanitizeWorkflow(workflow);
|
||||
|
||||
expect(sanitized.nodeCount).toBe(1);
|
||||
expect(sanitized.nodes[0].name).toBe('Node with émojis 🚀 and specíal chars');
|
||||
});
|
||||
});
|
||||
});
132	verify-telemetry-fix.js	Normal file
@@ -0,0 +1,132 @@
#!/usr/bin/env node

/**
 * Verification script to test that telemetry permissions are fixed
 * Run this AFTER applying the GRANT permissions fix
 */

const { createClient } = require('@supabase/supabase-js');
const crypto = require('crypto');

const TELEMETRY_BACKEND = {
  URL: 'https://ydyufsohxdfpopqbubwk.supabase.co',
  ANON_KEY: 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6InlkeXVmc29oeGRmcG9wcWJ1YndrIiwicm9sZSI6ImFub24iLCJpYXQiOjE3NTg3OTYyMDAsImV4cCI6MjA3NDM3MjIwMH0.xESphg6h5ozaDsm4Vla3QnDJGc6Nc_cpfoqTHRynkCk'
};

async function verifyTelemetryFix() {
  console.log('🔍 VERIFYING TELEMETRY PERMISSIONS FIX');
  console.log('====================================\n');

  const supabase = createClient(TELEMETRY_BACKEND.URL, TELEMETRY_BACKEND.ANON_KEY, {
    auth: {
      persistSession: false,
      autoRefreshToken: false,
    }
  });

  const testUserId = 'verify-' + crypto.randomBytes(4).toString('hex');

  // Test 1: Event insert
  console.log('📝 Test 1: Event insert');
  try {
    const { data, error } = await supabase
      .from('telemetry_events')
      .insert([{
        user_id: testUserId,
        event: 'verification_test',
        properties: { fixed: true }
      }]);

    if (error) {
      console.error('❌ Event insert failed:', error.message);
      return false;
    } else {
      console.log('✅ Event insert successful');
    }
  } catch (e) {
    console.error('❌ Event insert exception:', e.message);
    return false;
  }

  // Test 2: Workflow insert
  console.log('📝 Test 2: Workflow insert');
  try {
    const { data, error } = await supabase
      .from('telemetry_workflows')
      .insert([{
        user_id: testUserId,
        workflow_hash: 'verify-' + crypto.randomBytes(4).toString('hex'),
        node_count: 2,
        node_types: ['n8n-nodes-base.webhook', 'n8n-nodes-base.set'],
        has_trigger: true,
        has_webhook: true,
        complexity: 'simple',
        sanitized_workflow: {
          nodes: [{
            id: 'test-node',
            type: 'n8n-nodes-base.webhook',
            position: [100, 100],
            parameters: {}
          }],
          connections: {}
        }
      }]);

    if (error) {
      console.error('❌ Workflow insert failed:', error.message);
      return false;
    } else {
      console.log('✅ Workflow insert successful');
    }
  } catch (e) {
    console.error('❌ Workflow insert exception:', e.message);
    return false;
  }

  // Test 3: Upsert operation (like real telemetry)
  console.log('📝 Test 3: Upsert operation');
  try {
    const workflowHash = 'upsert-verify-' + crypto.randomBytes(4).toString('hex');

    const { data, error } = await supabase
      .from('telemetry_workflows')
      .upsert([{
        user_id: testUserId,
        workflow_hash: workflowHash,
        node_count: 3,
        node_types: ['n8n-nodes-base.webhook', 'n8n-nodes-base.set', 'n8n-nodes-base.if'],
        has_trigger: true,
        has_webhook: true,
        complexity: 'medium',
        sanitized_workflow: {
          nodes: [],
          connections: {}
        }
      }], {
        onConflict: 'workflow_hash',
        ignoreDuplicates: true,
      });

    if (error) {
      console.error('❌ Upsert failed:', error.message);
      return false;
    } else {
      console.log('✅ Upsert successful');
    }
  } catch (e) {
    console.error('❌ Upsert exception:', e.message);
    return false;
  }

  console.log('\n🎉 All tests passed! Telemetry permissions are fixed.');
  console.log('👍 Workflow telemetry should now work in the actual application.');

  return true;
}

async function main() {
  const success = await verifyTelemetryFix();
  process.exit(success ? 0 : 1);
}

main().catch(console.error);
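One path this script does not exercise is the opt-out: per the changelog, telemetry is zero-configuration but can be disabled via the N8N_MCP_TELEMETRY_DISABLED environment variable. A minimal sketch of such a guard — the exact set of accepted values here is an assumption, not taken from the source:

```javascript
// Hypothetical opt-out guard; the real implementation may accept
// different values for N8N_MCP_TELEMETRY_DISABLED.
function telemetryDisabled(env = process.env) {
  const value = String(env.N8N_MCP_TELEMETRY_DISABLED || '').toLowerCase();
  return value === 'true' || value === '1';
}

// Callers would bail out before creating the Supabase client:
// if (telemetryDisabled()) return;
```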