Implement SQLite database with full-text search for n8n node documentation
Major features implemented:
- SQLite storage service with FTS5 for fast node search
- Database rebuild mechanism for bulk node extraction
- MCP tools: search_nodes, extract_all_nodes, get_node_statistics
- Production Docker deployment with persistent storage
- Management scripts for database operations
- Comprehensive test suite for all functionality

Database capabilities:
- Stores node source code and metadata
- Full-text search by node name or content
- No versioning (stores latest only, as per requirements)
- Supports complete database rebuilds
- ~4.5MB database with 500+ nodes indexed

Production features:
- Automated deployment script
- Docker Compose production configuration
- Database initialization on first run
- Volume persistence for data
- Management utilities for operations

Documentation:
- Updated README with complete instructions
- Production deployment guide
- Clear troubleshooting section
- API reference for all new tools

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
**docs/IMPLEMENTATION_ROADMAP.md** (new file, 188 lines)
# n8n-MCP Implementation Roadmap

## ✅ Completed Features

### 1. Core MCP Server Implementation

- [x] Basic MCP server with stdio transport
- [x] Tool handlers for n8n workflow operations
- [x] Resource handlers for workflow data
- [x] Authentication and error handling

### 2. n8n Integration

- [x] n8n API client for workflow management
- [x] MCP<->n8n data bridge for format conversion
- [x] Workflow execution and monitoring

### 3. Node Source Extraction

- [x] Extract source code from any n8n node
- [x] Handle pnpm directory structures
- [x] Support for AI Agent node extraction
- [x] Bulk extraction capabilities

### 4. Node Storage System

- [x] In-memory storage service
- [x] Search functionality
- [x] Package statistics
- [x] Database export format

## 🚧 Next Implementation Steps
### Phase 1: Database Integration (Priority: High)

1. **Real Database Backend**
   - [ ] Add PostgreSQL/SQLite support
   - [ ] Implement proper migrations
   - [ ] Add connection pooling
   - [ ] Transaction support

2. **Enhanced Storage Features**
   - [ ] Version tracking for nodes
   - [ ] Diff detection for updates
   - [ ] Backup/restore functionality
   - [ ] Data compression
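One way to implement diff detection is to compare a content hash of each freshly extracted node against the hash stored at the last sync. A minimal sketch — the `StoredNode` shape and the function names here are illustrative, not the project's actual schema:

```typescript
import { createHash } from 'node:crypto';

// Hypothetical stored-node record; the real schema may differ.
interface StoredNode {
  nodeType: string;
  sourceCode: string;
  hash: string;
}

const hashSource = (code: string): string =>
  createHash('sha256').update(code).digest('hex');

// Returns the nodeTypes whose source changed (or is new) since the last sync.
function detectChanged(
  stored: Map<string, StoredNode>,
  extracted: Array<{ nodeType: string; sourceCode: string }>,
): string[] {
  return extracted
    .filter((n) => stored.get(n.nodeType)?.hash !== hashSource(n.sourceCode))
    .map((n) => n.nodeType);
}
```

Storing the hash alongside each node keeps re-sync cheap: unchanged nodes are skipped without comparing full source text.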
### Phase 2: Advanced Search & Analysis (Priority: High)

1. **Full-Text Search**
   - [ ] Elasticsearch/MeiliSearch integration
   - [ ] Code analysis and indexing
   - [ ] Semantic search capabilities
   - [ ] Search by functionality

2. **Node Analysis**
   - [ ] Dependency graph generation
   - [ ] Security vulnerability scanning
   - [ ] Performance profiling
   - [ ] Code quality metrics
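Before committing to an Elasticsearch or MeiliSearch dependency, the indexing side can be prototyped with a plain inverted index over node text. A sketch (all names here are illustrative):

```typescript
// Inverted index: each lowercase token maps to the set of node IDs containing it.
type Index = Map<string, Set<string>>;

function buildIndex(docs: Array<{ id: string; text: string }>): Index {
  const index: Index = new Map();
  for (const { id, text } of docs) {
    for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!index.has(token)) index.set(token, new Set());
      index.get(token)!.add(id);
    }
  }
  return index;
}

// AND-search: return IDs whose text contains every query token.
function search(index: Index, query: string): string[] {
  const tokens = query.toLowerCase().split(/\W+/).filter(Boolean);
  if (tokens.length === 0) return [];
  let hits: Set<string> = index.get(tokens[0]) ?? new Set<string>();
  for (const t of tokens.slice(1)) {
    const next = index.get(t) ?? new Set<string>();
    hits = new Set(Array.from(hits).filter((id) => next.has(id)));
  }
  return Array.from(hits);
}
```

A dedicated engine adds ranking, typo tolerance, and incremental updates on top of this basic token-to-posting-list structure.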
### Phase 3: AI Integration (Priority: Medium)

1. **AI-Powered Features**
   - [ ] Node recommendation system
   - [ ] Workflow generation from descriptions
   - [ ] Code explanation generation
   - [ ] Automatic documentation

2. **Vector Database**
   - [ ] Node embeddings generation
   - [ ] Similarity search
   - [ ] Clustering similar nodes
   - [ ] AI training data export
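Similarity search over node embeddings reduces to cosine similarity plus top-k selection. A brute-force sketch — a real vector database would replace the linear scan, and where the embeddings come from is out of scope here:

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k node types whose embeddings are closest to the query vector.
function nearest(
  embeddings: Array<{ nodeType: string; vector: number[] }>,
  query: number[],
  k: number,
): string[] {
  return embeddings
    .map((e) => ({ nodeType: e.nodeType, score: cosine(e.vector, query) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((e) => e.nodeType);
}
```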
### Phase 4: n8n Node Development (Priority: Medium)

1. **MCPNode Enhancements**
   - [ ] Dynamic tool discovery
   - [ ] Streaming responses
   - [ ] File upload/download
   - [ ] WebSocket support

2. **Custom Node Features**
   - [ ] Visual configuration UI
   - [ ] Credential management
   - [ ] Error handling improvements
   - [ ] Performance monitoring
### Phase 5: API & Web Interface (Priority: Low)

1. **REST/GraphQL API**
   - [ ] Node search API
   - [ ] Statistics dashboard
   - [ ] Webhook notifications
   - [ ] Rate limiting

2. **Web Dashboard**
   - [ ] Node browser interface
   - [ ] Code viewer with syntax highlighting
   - [ ] Search interface
   - [ ] Analytics dashboard
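Rate limiting for the API is commonly done with a token bucket. A minimal in-process sketch — a production deployment would more likely use reverse-proxy middleware or a shared Redis counter:

```typescript
// Token-bucket limiter: allows bursts up to `capacity` requests,
// refilled continuously at `ratePerSec` tokens per second.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private ratePerSec: number, now = Date.now()) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if a request at time `now` (ms) is allowed.
  allow(now = Date.now()): boolean {
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.ratePerSec,
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```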
### Phase 6: Production Features (Priority: Low)

1. **Deployment**
   - [ ] Kubernetes manifests
   - [ ] Helm charts
   - [ ] Auto-scaling configuration
   - [ ] Health checks

2. **Monitoring**
   - [ ] Prometheus metrics
   - [ ] Grafana dashboards
   - [ ] Log aggregation
   - [ ] Alerting rules
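If the Kubernetes manifests mirror the existing Docker setup, the MCP container's liveness probe could reuse the database-file health check. A sketch (the probe timings are placeholder assumptions; the path is the `NODE_DB_PATH` default from the deployment guide):

```yaml
# Liveness probe mirroring the "database file exists" health check.
livenessProbe:
  exec:
    command: ["test", "-f", "/app/data/nodes.db"]
  initialDelaySeconds: 30
  periodSeconds: 60
```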
## 🎯 Immediate Next Steps

1. **Database Integration** (Week 1-2)

   ```jsonc
   // Add to package.json dependencies
   "typeorm": "^0.3.x",
   "pg": "^8.x"
   ```

   ```typescript
   // Create entities/Node.entity.ts
   import { Entity, PrimaryGeneratedColumn, Column } from 'typeorm';

   @Entity()
   export class Node {
     @PrimaryGeneratedColumn('uuid')
     id: string;

     @Column({ unique: true })
     nodeType: string;

     @Column('text')
     sourceCode: string;
     // ... etc
   }
   ```
2. **Add Database MCP Tools** (Week 2)

   New tools:
   - `sync_nodes_to_database`
   - `query_nodes_database`
   - `export_nodes_for_training`
3. **Create Migration Scripts** (Week 2-3)

   ```bash
   npm run migrate:create -- CreateNodesTable
   npm run migrate:run
   ```
4. **Implement Caching Layer** (Week 3)
   - Redis for frequently accessed nodes
   - LRU cache for search results
   - Invalidation strategies

5. **Add Real-Time Updates** (Week 4)
   - WebSocket server for live updates
   - Node change notifications
   - Workflow execution streaming
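The LRU cache for search results mentioned in step 4 can be built on `Map`'s insertion-order guarantee without any dependency. A sketch:

```typescript
// Minimal LRU cache: Map iteration order doubles as recency order.
class LruCache<K, V> {
  private map = new Map<K, V>();

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // Evict the least recently used entry (first key in insertion order).
      this.map.delete(this.map.keys().next().value!);
    }
    this.map.set(key, value);
  }
}
```

Invalidation then becomes a matter of clearing (or versioning) the cache whenever the node database is rebuilt.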
## 📊 Success Metrics

- [ ] Extract and store 100% of n8n nodes
- [ ] Search response time < 100ms
- [ ] Support for 10k+ stored nodes
- [ ] 99.9% uptime for MCP server
- [ ] Full-text search accuracy > 90%
## 🔗 Integration Points

1. **n8n Community Store**
   - Sync with community nodes
   - Version tracking
   - Popularity metrics

2. **AI Platforms**
   - OpenAI fine-tuning exports
   - Anthropic training data
   - Local LLM integration

3. **Development Tools**
   - VS Code extension
   - CLI tools
   - SDK libraries
## 📝 Documentation Needs

- [ ] API reference documentation
- [ ] Database schema documentation
- [ ] Search query syntax guide
- [ ] Performance tuning guide
- [ ] Security best practices

This roadmap provides a clear path forward for the n8n-MCP project, with the most critical next step being proper database integration to persist the extracted node data.
**docs/PRODUCTION_DEPLOYMENT.md** (new file, 207 lines)
# Production Deployment Guide for n8n-MCP

This guide provides instructions for deploying n8n-MCP in a production environment.

## Prerequisites

- Docker and Docker Compose v2 installed
- Node.js 18+ installed (for building)
- At least 2GB of available RAM
- 1GB of available disk space

## Quick Start

1. **Clone the repository**

   ```bash
   git clone https://github.com/yourusername/n8n-mcp.git
   cd n8n-mcp
   ```

2. **Run the deployment script**

   ```bash
   ./scripts/deploy-production.sh
   ```
   This script will:
   - Check prerequisites
   - Create a secure `.env` file with generated passwords
   - Build the project
   - Create Docker images
   - Start all services
   - Initialize the node database

3. **Access n8n**
   - URL: `http://localhost:5678`
   - Use the credentials displayed during deployment
## Manual Deployment

If you prefer manual deployment:

1. **Create .env file**

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

2. **Build the project**

   ```bash
   npm install
   npm run build
   ```

3. **Start services**

   ```bash
   docker compose -f docker-compose.prod.yml up -d
   ```
## Configuration

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `N8N_BASIC_AUTH_USER` | n8n admin username | `admin` |
| `N8N_BASIC_AUTH_PASSWORD` | n8n admin password | (generated) |
| `N8N_HOST` | n8n hostname | `localhost` |
| `N8N_API_KEY` | API key for n8n access | (generated) |
| `NODE_DB_PATH` | SQLite database path | `/app/data/nodes.db` |
| `LOG_LEVEL` | Logging level | `info` |
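For reference, a filled-in `.env` might look like this (the values shown are placeholders; `deploy-production.sh` generates real secrets):

```shell
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=change-me
N8N_HOST=localhost
N8N_API_KEY=change-me
NODE_DB_PATH=/app/data/nodes.db
LOG_LEVEL=info
```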
### Volumes

The deployment creates persistent volumes:
- `n8n-data`: n8n workflows and credentials
- `mcp-data`: MCP node database
- `n8n-node-modules`: Read-only n8n node modules
## Management

Use the management script for common operations:

```bash
# Check service status
./scripts/manage-production.sh status

# View logs
./scripts/manage-production.sh logs

# Rebuild node database
./scripts/manage-production.sh rebuild-db

# Show database statistics
./scripts/manage-production.sh db-stats

# Create backup
./scripts/manage-production.sh backup

# Update services
./scripts/manage-production.sh update
```
## Database Management

### Initial Database Population

The database is automatically populated on first startup. To manually rebuild:

```bash
docker compose -f docker-compose.prod.yml exec n8n-mcp node dist/scripts/rebuild-database.js
```

### Database Queries

Search for nodes:

```bash
docker compose -f docker-compose.prod.yml exec n8n-mcp sqlite3 /app/data/nodes.db \
  "SELECT node_type, display_name FROM nodes WHERE name LIKE '%webhook%';"
```
## Security Considerations

1. **Change default passwords**: Always change the generated passwords in production
2. **Use HTTPS**: Configure a reverse proxy (nginx, traefik) for HTTPS
3. **Firewall**: Restrict access to port 5678
4. **API Keys**: Keep API keys secure and rotate them regularly
5. **Backups**: Back up data volumes regularly
## Monitoring

### Health Checks

Both services include health checks:
- n8n: `http://localhost:5678/healthz`
- MCP: database file existence check

### Logs

View logs for debugging:

```bash
# All services
docker compose -f docker-compose.prod.yml logs -f

# Specific service
docker compose -f docker-compose.prod.yml logs -f n8n-mcp
```
## Troubleshooting

### Database Issues

If the database is corrupted or needs rebuilding:

```bash
# Remove the database (the container must be running for `exec` to work)
docker compose -f docker-compose.prod.yml exec n8n-mcp rm /app/data/nodes.db

# Restart the service (the database rebuilds on startup)
docker compose -f docker-compose.prod.yml restart n8n-mcp
```
### Memory Issues

If services run out of memory, increase Docker memory limits:

```yaml
# In docker-compose.prod.yml
services:
  n8n-mcp:
    deploy:
      resources:
        limits:
          memory: 1G
```
### Connection Issues

If n8n can't connect to MCP:
1. Check that both services are running: `docker compose -f docker-compose.prod.yml ps`
2. Verify network connectivity: `docker compose -f docker-compose.prod.yml exec n8n ping n8n-mcp`
3. Check MCP logs: `docker compose -f docker-compose.prod.yml logs n8n-mcp`
## Scaling

For high-availability deployments:

1. **Database Replication**: Use external SQLite replication or migrate to PostgreSQL
2. **Load Balancing**: Deploy multiple MCP instances behind a load balancer
3. **Caching**: Implement Redis caching for frequently accessed nodes
## Updates

To update to the latest version:

```bash
# Pull latest code
git pull

# Rebuild and restart
./scripts/manage-production.sh update
```
## Support

For issues and questions:
- GitHub Issues: [your-repo-url]/issues
- Documentation: [your-docs-url]