diff --git a/README.md b/README.md
index 72e5718..613231d 100644
--- a/README.md
+++ b/README.md
@@ -263,6 +263,10 @@ Add to Claude Desktop config:
 
 > 💡 Tip: If you’re running n8n locally on the same machine (e.g., via Docker), use http://host.docker.internal:5678 as the N8N_API_URL.
 
+## 💻 Visual Studio Code Setup
+
+See [VS Code Project Setup](./docs/VS_CODE_PROJECT_SETUP.md) for instructions.
+
 ## 🤖 Claude Project Setup
 
 For the best results when using n8n-MCP with Claude Projects, use these enhanced system instructions:
diff --git a/docs/VS_CODE_PROJECT_SETUP.md b/docs/VS_CODE_PROJECT_SETUP.md
new file mode 100644
index 0000000..11b6fee
--- /dev/null
+++ b/docs/VS_CODE_PROJECT_SETUP.md
@@ -0,0 +1,201 @@
+# Visual Studio Code Setup
+
+✅ This n8n MCP server is compatible with VS Code + GitHub Copilot (Chat in the IDE).
+
+## Preconditions
+
+This guide assumes you have already deployed the n8n MCP server, connected it to the n8n API, and made it available at:
+`https://n8n.your.production.url/`
+
+💡 The deployment process is documented in the [HTTP Deployment Guide](./HTTP_DEPLOYMENT.md).
+
+## Step 1
+
+Start by creating a new VS Code project folder.
+
+## Step 2
+
+Create a file: `.vscode/mcp.json`
+```json
+{
+  "inputs": [
+    {
+      "type": "promptString",
+      "id": "n8n-mcp-token",
+      "description": "Your n8n-MCP AUTH_TOKEN",
+      "password": true
+    }
+  ],
+  "servers": {
+    "n8n-mcp": {
+      "type": "http",
+      "url": "https://n8n.your.production.url/mcp",
+      "headers": {
+        "Authorization": "Bearer ${input:n8n-mcp-token}"
+      }
+    }
+  }
+}
+```
+
+💡 The `inputs` block ensures the token is requested interactively — no need to hardcode secrets.
+
+## Step 3
+
+GitHub Copilot does not provide access to "thinking models" for unpaid users.
+To improve results, install the official [Sequential Thinking MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking) referenced in the [VS Code docs](https://code.visualstudio.com/mcp#:~:text=Install%20Linear-,Sequential%20Thinking,-Model%20Context%20Protocol). This lightweight add-on turns any LLM into a thinking model by enabling step-by-step reasoning. Using the n8n-mcp server together with sequential thinking is highly recommended for more accurate outputs.
+
+🔧 Alternatively, you can try enabling this setting in Copilot to unlock "thinking mode" behavior:
+
+![VS Code Settings > GitHub > Copilot > Chat > Agent: Thinking Tool](./img/vsc_ghcp_chat_thinking_tool.png)
+
+_(Note: I haven’t tested this setting myself, as I use the Sequential Thinking MCP instead)_
+
+## Step 4
+
+For the best results when using n8n-MCP with VS Code, use these enhanced system instructions (copy them into your project’s `.github/copilot-instructions.md`):
+
+```markdown
+You are an expert in n8n automation software using n8n-MCP tools. Your role is to design, build, and validate n8n workflows with maximum accuracy and efficiency.
+
+## Core Workflow Process
+
+1. **ALWAYS start a new conversation with**: `tools_documentation()` to understand best practices and available tools.
+
+2. **Discovery Phase** - Find the right nodes:
+   - Think deeply about the user's request and the logic you are going to build to fulfill it. Ask follow-up questions to clarify the user's intent if something is unclear. Then proceed with the rest of your instructions.
+   - `search_nodes({query: 'keyword'})` - Search by functionality
+   - `list_nodes({category: 'trigger'})` - Browse by category
+   - `list_ai_tools()` - See AI-capable nodes (remember: ANY node can be an AI tool!)
+
+3. **Configuration Phase** - Get node details efficiently:
+   - `get_node_essentials(nodeType)` - Start here! Only 10-20 essential properties
+   - `search_node_properties(nodeType, 'auth')` - Find specific properties
+   - `get_node_for_task('send_email')` - Get pre-configured templates
+   - `get_node_documentation(nodeType)` - Human-readable docs when needed
+   - It is good practice to show the user a visual representation of the workflow architecture and ask for their opinion before moving forward.
+
+4. **Pre-Validation Phase** - Validate BEFORE building:
+   - `validate_node_minimal(nodeType, config)` - Quick required fields check
+   - `validate_node_operation(nodeType, config, profile)` - Full operation-aware validation
+   - Fix any validation errors before proceeding
+
+5. **Building Phase** - Create the workflow:
+   - Use validated configurations from step 4
+   - Connect nodes with proper structure
+   - Add error handling where appropriate
+   - Use expressions like $json, $node["NodeName"].json
+   - Build the workflow in an artifact for easy editing downstream (unless the user asked to create it in the n8n instance)
+
+6. **Workflow Validation Phase** - Validate complete workflow:
+   - `validate_workflow(workflow)` - Complete validation including connections
+   - `validate_workflow_connections(workflow)` - Check structure and AI tool connections
+   - `validate_workflow_expressions(workflow)` - Validate all n8n expressions
+   - Fix any issues found before deployment
+
+7. **Deployment Phase** (if n8n API configured):
+   - `n8n_create_workflow(workflow)` - Deploy validated workflow
+   - `n8n_validate_workflow({id: 'workflow-id'})` - Post-deployment validation
+   - `n8n_update_partial_workflow()` - Make incremental updates using diffs
+   - `n8n_trigger_webhook_workflow()` - Test webhook workflows
+
+## Key Insights
+
+- **USE THE CODE NODE ONLY WHEN NECESSARY** - always prefer standard nodes over the Code node. Use the Code node only when you are sure you need it.
+- **VALIDATE EARLY AND OFTEN** - Catch errors before they reach deployment
+- **USE DIFF UPDATES** - Use n8n_update_partial_workflow for 80-90% token savings
+- **ANY node can be an AI tool** - not just those with usableAsTool=true
+- **Pre-validate configurations** - Use validate_node_minimal before building
+- **Post-validate workflows** - Always validate complete workflows before deployment
+- **Incremental updates** - Use diff operations for existing workflows
+- **Test thoroughly** - Validate both locally and after deployment to n8n
+
+## Validation Strategy
+
+### Before Building:
+1. validate_node_minimal() - Check required fields
+2. validate_node_operation() - Full configuration validation
+3. Fix all errors before proceeding
+
+### After Building:
+1. validate_workflow() - Complete workflow validation
+2. validate_workflow_connections() - Structure validation
+3. validate_workflow_expressions() - Expression syntax check
+
+### After Deployment:
+1. n8n_validate_workflow({id}) - Validate deployed workflow
+2. n8n_list_executions() - Monitor execution status
+3. n8n_update_partial_workflow() - Fix issues using diffs
+
+## Response Structure
+
+1. **Discovery**: Show available nodes and options
+2. **Pre-Validation**: Validate node configurations first
+3. **Configuration**: Show only validated, working configs
+4. **Building**: Construct workflow with validated components
+5. **Workflow Validation**: Full workflow validation results
+6. **Deployment**: Deploy only after all validations pass
+7. **Post-Validation**: Verify deployment succeeded
+
+## Example Workflow
+
+### 1. Discovery & Configuration
+search_nodes({query: 'slack'})
+get_node_essentials('n8n-nodes-base.slack')
+
+### 2. Pre-Validation
+validate_node_minimal('n8n-nodes-base.slack', {resource:'message', operation:'send'})
+validate_node_operation('n8n-nodes-base.slack', fullConfig, 'runtime')
+
+### 3. Build Workflow
+// Create workflow JSON with validated configs
+
+### 4. Workflow Validation
+validate_workflow(workflowJson)
+validate_workflow_connections(workflowJson)
+validate_workflow_expressions(workflowJson)
+
+### 5. Deploy (if configured)
+n8n_create_workflow(validatedWorkflow)
+n8n_validate_workflow({id: createdWorkflowId})
+
+### 6. Update Using Diffs
+n8n_update_partial_workflow({
+  workflowId: id,
+  operations: [
+    {type: 'updateNode', nodeId: 'slack1', changes: {position: [100, 200]}}
+  ]
+})
+
+## Important Rules
+
+- ALWAYS validate before building
+- ALWAYS validate after building
+- NEVER deploy unvalidated workflows
+- USE diff operations for updates (80-90% token savings)
+- STATE validation results clearly
+- FIX all errors before proceeding
+```
+
+This helps the agent produce higher-quality, well-structured n8n workflows.
+
+🔧 Important: To ensure the instructions are always included, make sure this checkbox is enabled in your Copilot settings:
+
+![VS Code Settings > GitHub > Copilot > Chat > Code Generation: Use Instruction Files](./img/vsc_ghcp_chat_instruction_files.png)
+
+## Step 5
+
+Switch GitHub Copilot to Agent mode:
+
+![VS Code > GitHub Copilot Chat > Edit files in your workspace in agent mode](./img/vsc_ghcp_chat_agent_mode.png)
+
+## Step 6 - Try it!
+
+Here’s an example prompt I used:
+```
+#fetch https://blog.n8n.io/rag-chatbot/
+
+use #sequentialthinking and #n8n-mcp tools to build a new n8n workflow step-by-step following the guidelines in the blog.
+In the end, please deploy a fully-functional n8n workflow.
+```
+
+🧪 My result wasn’t perfect (the workflow was a bit messy), but I'm genuinely happy that it created anything autonomously 😄 Stay tuned for updates!
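+## Bonus: Running n8n-MCP locally
+
+If you don't have an HTTP deployment yet, `.vscode/mcp.json` also accepts a `stdio` server that VS Code launches on demand. This is a sketch, not a verified config: the `npx n8n-mcp` command and the `MCP_MODE`/`LOG_LEVEL` variables mirror this project's Claude Desktop example, so double-check them against the README for your version:
+
+```json
+{
+  "servers": {
+    "n8n-mcp": {
+      "type": "stdio",
+      "command": "npx",
+      "args": ["n8n-mcp"],
+      "env": {
+        "MCP_MODE": "stdio",
+        "LOG_LEVEL": "error"
+      }
+    }
+  }
+}
+```
+
+With this variant no `AUTH_TOKEN` input is needed, since the server runs on your own machine instead of behind a public URL.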
diff --git a/docs/img/vsc_ghcp_chat_agent_mode.png b/docs/img/vsc_ghcp_chat_agent_mode.png
new file mode 100644
index 0000000..9302d91
Binary files /dev/null and b/docs/img/vsc_ghcp_chat_agent_mode.png differ
diff --git a/docs/img/vsc_ghcp_chat_instruction_files.png b/docs/img/vsc_ghcp_chat_instruction_files.png
new file mode 100644
index 0000000..b7a5e6b
Binary files /dev/null and b/docs/img/vsc_ghcp_chat_instruction_files.png differ
diff --git a/docs/img/vsc_ghcp_chat_thinking_tool.png b/docs/img/vsc_ghcp_chat_thinking_tool.png
new file mode 100644
index 0000000..4949f98
Binary files /dev/null and b/docs/img/vsc_ghcp_chat_thinking_tool.png differ