Cleaned up all skills to remove research/telemetry context that was used during design but is not needed at runtime when AI agents use the skills.

## Changes Made

### Pattern 1: Research Framing Removed
- "From analysis of X workflows/events" → Removed
- "From telemetry analysis:" → Replaced with operational context
- "Based on X real workflows" → Simplified to general statements

### Pattern 2: Popularity Metrics Removed
- "**Popularity**: Second most common (892 templates)" → Removed entirely
- "813 searches", "456 templates", etc. → Removed

### Pattern 3: Frequency Percentages Converted
- "**Frequency**: 45% of errors" → "Most common error"
- "**Frequency**: 28%" → "Second most common"
- "**Frequency**: 12%" → "Common error"
- Percentages in tables → Priority levels (Highest/High/Medium/Low)

### Pattern 4: Operational Guidance Kept
- ✅ Success rates (91.7%) - helps tool selection
- ✅ Average times (18s, 56s) - sets expectations
- ✅ Relative priority (most common, typical) - guides decisions
- ✅ Iteration counts (2-3 cycles) - manages expectations

## Files Modified (19 files across 4 skills)

**Skill #2: MCP Tools Expert (5 files)**
- Removed telemetry occurrence counts
- Kept success rates and average times

**Skill #3: Workflow Patterns (7 files)**
- Removed all popularity metrics from pattern files
- Removed "From analysis of 31,917 workflows"
- Removed template counts

**Skill #4: Validation Expert (4 files)**
- Converted frequency % to priority levels
- Removed "From analysis of 19,113 errors"
- Removed telemetry loop counts (kept iteration guidance)

**Skill #5: Node Configuration (3 files)**
- Removed workflow update counts
- Removed essentials call counts
- Kept success rates and timing guidance

## Result

Skills now provide clean, focused runtime guidance without research justification. Content is more actionable for AI agents using the skills. All technical guidance, examples, patterns, and operational metrics preserved.
Only removed: research methodology, data source attribution, and statistical justification for design decisions.

🤖 Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
# n8n Validation Expert

Expert guidance for interpreting and fixing n8n validation errors.

## Overview

**Skill Name**: n8n Validation Expert
**Priority**: Medium
**Purpose**: Interpret validation errors and guide systematic fixing through the validation loop
## The Problem This Solves

Validation errors are common:
- Validation often requires iteration (79% lead to feedback loops)
- 7,841 validate → fix cycles (avg 23s thinking + 58s fixing)
- 2-3 iterations on average to achieve a valid configuration

**Key insight**: Validation is an iterative process, not a one-shot fix!
## What This Skill Teaches

### Core Concepts

1. **Error Severity Levels**
   - Errors (must fix) - Block execution
   - Warnings (should fix) - Don't block but indicate issues
   - Suggestions (optional) - Nice-to-have improvements

2. **The Validation Loop**
   - Configure → Validate → Read errors → Fix → Validate again
   - Average 2-3 iterations to success
   - 23 seconds thinking + 58 seconds fixing per cycle

3. **Validation Profiles**
   - `minimal` - Quick checks, most permissive
   - `runtime` - Recommended for most use cases
   - `ai-friendly` - Reduces false positives for AI workflows
   - `strict` - Maximum safety, many warnings

4. **Auto-Sanitization System**
   - Automatically fixes operator structure issues
   - Runs on every workflow save
   - Fixes binary/unary operator problems
   - Adds IF/Switch metadata

5. **False Positives**
   - Not all warnings need fixing
   - 40% of warnings are acceptable in context
   - Use the `ai-friendly` profile to reduce them by 60%
   - Document accepted warnings
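The last point has no example elsewhere in this README, so here is a minimal sketch of what a documented accepted warning might look like. The field names are illustrative assumptions; the actual documentation template lives in FALSE_POSITIVES.md:

```javascript
// Hypothetical record for an accepted warning; the shape is illustrative,
// not the template from FALSE_POSITIVES.md.
const acceptedWarning = {
  node: "Slack",
  warningType: "best_practice",
  message: "Consider adding error handling",
  reason: "Dev-only workflow; failures are visible in the editor",
  reviewedOn: "2025-10-20"
};
```

Keeping a record like this alongside the workflow makes it clear to the next reviewer that the warning was seen and deliberately accepted, not missed.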
## File Structure

```
n8n-validation-expert/
├── SKILL.md (690 lines)
│   Core validation concepts and workflow
│   - Validation philosophy
│   - Error severity levels
│   - The validation loop pattern
│   - Validation profiles
│   - Common error types
│   - Auto-sanitization system
│   - Workflow validation
│   - Recovery strategies
│   - Best practices
│
├── ERROR_CATALOG.md (865 lines)
│   Complete error reference with examples
│   - 9 error types with real examples
│   - missing_required (45% of errors)
│   - invalid_value (28%)
│   - type_mismatch (12%)
│   - invalid_expression (8%)
│   - invalid_reference (5%)
│   - operator_structure (2%, auto-fixed)
│   - Recovery patterns
│   - Summary with frequencies
│
├── FALSE_POSITIVES.md (669 lines)
│   When warnings are acceptable
│   - Philosophy of warning acceptance
│   - 6 common false positive types
│   - When acceptable vs when to fix
│   - Validation profile strategies
│   - Decision framework
│   - Documentation template
│   - Known n8n issues (#304, #306, #338)
│
└── README.md (this file)
    Skill metadata and statistics
```

Total: ~2,224 lines across 4 files
## Error Distribution
Based on 19,113 validation errors:
| Error Type | Frequency | Auto-Fix | Severity |
|---|---|---|---|
| missing_required | 45% | ❌ | Error |
| invalid_value | 28% | ❌ | Error |
| type_mismatch | 12% | ❌ | Error |
| invalid_expression | 8% | ❌ | Error |
| invalid_reference | 5% | ❌ | Error |
| operator_structure | 2% | ✅ | Warning |
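As a quick reference, the table above can be folded into a first-response lookup. This is a sketch: the error type strings come from ERROR_CATALOG.md, but the guidance strings here are illustrative summaries, not actual tool output:

```javascript
// Illustrative first recovery step per error type (types from ERROR_CATALOG.md;
// the guidance text is a summary sketch, not n8n-mcp output).
function recoveryHint(errorType) {
  switch (errorType) {
    case "missing_required":
      return "Call get_node_essentials to see required fields, then add them.";
    case "invalid_value":
      return "Check the allowed values for this field in the node documentation.";
    case "type_mismatch":
      return "Convert the value to the expected type (string/number/boolean).";
    case "invalid_expression":
      return "Fix the expression syntax (see the n8n Expression Syntax skill).";
    case "invalid_reference":
      return "Ensure the referenced node exists and its name is spelled correctly.";
    case "operator_structure":
      return "Leave it alone: auto-sanitization fixes this on save.";
    default:
      return "Read the error message; validation errors include fix guidance.";
  }
}
```

Note that `operator_structure` is the one type you should not fix by hand, since the auto-sanitization system handles it.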
## Validation Loop Statistics
From 7,841 validate → fix cycles:
- Average thinking time: 23 seconds
- Average fix time: 58 seconds
- Total cycle time: 81 seconds average
- Iterations to success: 2-3 average
- Success rate after 3 iterations: 94%
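These statistics suggest budgeting for a bounded loop rather than a single attempt. A minimal sketch, assuming the caller supplies `validate` and `fix` callbacks (these names and shapes are illustrative, not part of the n8n-mcp API):

```javascript
// Bounded validation loop sketch. `validate` returns { valid, errors } and
// `fix` returns an updated config; both are caller-supplied assumptions here.
function validationLoop(config, validate, fix, maxIterations = 3) {
  for (let i = 1; i <= maxIterations; i++) {
    const result = validate(config);
    if (result.valid) {
      return { config, iterations: i, valid: true };
    }
    // Error messages include fix guidance - pass them to the fixer
    config = fix(config, result.errors);
  }
  return { config, iterations: maxIterations, valid: false };
}
```

With the numbers above, three iterations covers roughly 94% of cases; a workflow still failing after that is worth escalating to the recovery patterns in ERROR_CATALOG.md rather than looping further.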
## Key Insights

### 1. Validation is Iterative

Don't expect to get it right on the first try. The data shows 2-3 iterations is normal!

### 2. False Positives Exist

~40% of warnings are accepted in production workflows. Learn to recognize them.

### 3. Auto-Sanitization Works

Operator structure issues (2% of errors) are auto-fixed. Don't manually fix these!

### 4. Profile Matters

- `ai-friendly` reduces false positives by 60%
- `runtime` is the sweet spot for most use cases
- `strict` has value pre-production but is noisy
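One way to encode this insight is a small helper that picks a profile from context. The heuristic below is an assumption distilled from the guidance above, not an official n8n-mcp recommendation:

```javascript
// Sketch: choose a validation profile from workflow context.
// The heuristic is an illustrative assumption, not an official rule.
function chooseProfile({ aiWorkflow = false, preProduction = false } = {}) {
  if (aiWorkflow) return "ai-friendly"; // fewer false positives on AI nodes
  if (preProduction) return "strict";   // maximum safety before deploying
  return "runtime";                     // the sweet spot for most use cases
}
```

The `minimal` profile is deliberately left out of this sketch; it is most useful for quick exploratory checks rather than as a default.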
### 5. Error Messages Help

Validation errors include fix guidance - read them carefully!
## Usage Examples

### Example 1: Basic Validation Loop

```javascript
// Iteration 1
let config = {
  resource: "channel",
  operation: "create"
};
const result1 = validate_node_operation({
  nodeType: "nodes-base.slack",
  config,
  profile: "runtime"
});
// → Error: Missing "name"

// Iteration 2
config.name = "general";
const result2 = validate_node_operation({...});
// → Valid! ✅
```
### Example 2: Handling False Positives

```javascript
// Run validation
const result = validate_node_operation({
  nodeType: "nodes-base.slack",
  config,
  profile: "runtime"
});

// Fix errors (must fix)
if (!result.valid) {
  result.errors.forEach(error => {
    console.log(`MUST FIX: ${error.message}`);
  });
}

// Review warnings (context-dependent)
result.warnings.forEach(warning => {
  if (warning.type === 'best_practice' && isDevWorkflow) {
    console.log(`ACCEPTABLE: ${warning.message}`);
  } else {
    console.log(`SHOULD FIX: ${warning.message}`);
  }
});
```
### Example 3: Using Auto-Fix

```javascript
// Check what can be auto-fixed
const preview = n8n_autofix_workflow({
  id: "workflow-id",
  applyFixes: false // Preview mode
});
console.log(`Can auto-fix: ${preview.fixCount} issues`);

// Apply fixes
if (preview.fixCount > 0) {
  n8n_autofix_workflow({
    id: "workflow-id",
    applyFixes: true
  });
}
```
## When This Skill Activates

**Trigger phrases:**
- "validation error"
- "validation failing"
- "what does this error mean"
- "false positive"
- "validation loop"
- "operator structure"
- "validation profile"
**Common scenarios:**
- Encountering validation errors
- Stuck in validation feedback loops
- Wondering if warnings need fixing
- Choosing the right validation profile
- Understanding auto-sanitization
## Integration with Other Skills

### Works With:

- **n8n MCP Tools Expert** - How to use validation tools correctly
- **n8n Expression Syntax** - Fix invalid_expression errors
- **n8n Node Configuration** - Understand required fields
- **n8n Workflow Patterns** - Validate pattern implementations

### Complementary:

- Use MCP Tools Expert to call validation tools
- Use Expression Syntax to fix expression errors
- Use Node Configuration to understand dependencies
- Use Workflow Patterns to validate structure
## Testing

**Evaluations**: 4 test scenarios

- `eval-001-missing-required-field.json`
  - Tests error interpretation
  - Guides to get_node_essentials
  - References ERROR_CATALOG.md

- `eval-002-false-positive.json`
  - Tests warning vs error distinction
  - Explains false positives
  - References FALSE_POSITIVES.md
  - Suggests ai-friendly profile

- `eval-003-auto-sanitization.json`
  - Tests auto-sanitization understanding
  - Explains operator structure fixes
  - Advises trusting auto-fix

- `eval-004-validation-loop.json`
  - Tests iterative validation process
  - Explains 2-3 iteration pattern
  - Provides systematic approach
## Success Metrics

**Before this skill:**
- Users confused by validation errors
- Multiple failed attempts to fix
- Frustration with "validation loops"
- Fixing issues that auto-fix handles
- Fixing all warnings unnecessarily
**After this skill:**
- Systematic error resolution
- Understanding of iteration process
- Recognition of false positives
- Trust in auto-sanitization
- Context-aware warning handling
- 94% success within 3 iterations
## Related Documentation

- **n8n-mcp MCP Server**: Provides validation tools
- **n8n Validation API**: validate_node_operation, validate_workflow, n8n_autofix_workflow
- **n8n Issues**: #304 (IF metadata), #306 (Switch branches), #338 (credentials)
## Version History

- **v1.0** (2025-10-20): Initial implementation
  - SKILL.md with core concepts
  - ERROR_CATALOG.md with 9 error types
  - FALSE_POSITIVES.md with 6 false positive patterns
  - 4 evaluation scenarios
## Author
Conceived by Romuald Członkowski - www.aiadvisors.pl/en
Part of the n8n-skills meta-skill collection.