mirror of
https://github.com/czlonkowski/n8n-skills.git
synced 2026-03-17 16:03:09 +00:00
Implements comprehensive Python Code node guidance with critical focus on "NO external libraries" limitation. ## Skill #7 - n8n Code Python **Critical Message**: Use JavaScript for 95% of use cases. Python in n8n has NO external libraries (no requests, pandas, numpy). ### Files Created **Core Skill Files (6 files, 4,205 lines total)**: 1. **SKILL.md** (748 lines) - When to use Python vs JavaScript (95% JavaScript recommendation) - Critical limitation: NO external libraries - Mode selection (All Items vs Each Item) - Data access overview (_input, _json, _node) - Return format requirements - Standard library overview 2. **DATA_ACCESS.md** (702 lines) - _input.all() - Process all items - _input.first() - Get first item - _input.item - Current item (Each Item mode only) - _node["Name"] - Reference other nodes - Webhook body structure (data under ["body"]) - Pattern selection guide - Python vs JavaScript comparison 3. **STANDARD_LIBRARY.md** (974 lines) - Complete reference for available modules - json - JSON parsing and generation - datetime - Date/time operations - re - Regular expressions - base64 - Encoding/decoding - hashlib - Hashing (MD5, SHA256) - urllib.parse - URL operations - math, random, statistics - What's NOT available (requests, pandas, numpy, etc.) - Workarounds for missing libraries 4. **COMMON_PATTERNS.md** (794 lines) - 10 production-tested Python patterns - Multi-source data aggregation - Regex-based filtering - Markdown to structured data - JSON object comparison - CRM data transformation - Release notes processing - Array transformation - Dictionary lookup - Top N filtering - String aggregation - Python vs JavaScript pattern comparison 5. **ERROR_PATTERNS.md** (601 lines) - Top 5 Python-specific errors with solutions - Error #1: ModuleNotFoundError (THE critical Python error) - Error #2: Empty code / missing return - Error #3: KeyError (use .get() instead) - Error #4: IndexError (check bounds first) - Error #5: Incorrect return format - Error prevention checklist - Quick fix reference table - Testing patterns 6. **README.md** (386 lines) - Skill metadata and activation triggers - "JavaScript First" recommendation prominent - What this skill teaches - File structure overview - Integration with other skills - Success metrics checklist - Quick reference guide - Common use cases - Limitations and workarounds - Best practices **Evaluations (5 scenarios)**: 1. **eval-001-module-import-error.json** - Tests understanding of external library limitation - Scenario: ModuleNotFoundError with requests - Expected: Recommend JavaScript or HTTP Request node 2. **eval-002-dictionary-keyerror.json** - Tests safe dictionary access with .get() - Scenario: KeyError when accessing missing field - Expected: Use .get() with default values 3. **eval-003-webhook-body-gotcha.json** - Tests webhook data under ["body"] understanding - Scenario: KeyError when accessing webhook data directly - Expected: Access via data.get("body", {}) 4. **eval-004-return-format-error.json** - Tests proper return format requirement - Scenario: Returning plain dict instead of array - Expected: Return [{"json": {...}}] 5. **eval-005-standard-library-usage.json** - Tests knowledge of available modules - Scenario: What modules for JSON, hashing, dates, regex - Expected: json, hashlib, datetime, re (standard library only) ### Key Features **Critical Limitations Emphasized**: - NO external libraries (no requests, pandas, numpy) - JavaScript recommended for 95% of use cases - Only standard library available - ModuleNotFoundError is #1 Python error **Python-Specific Syntax**: - Underscore prefix: _input, _json, _node (vs $ in JavaScript) - Dictionary access: _json["body"]["field"] (vs dot notation) - Safe access: .get() method with defaults **Complete Standard Library Coverage**: - 15+ modules documented with examples - json, datetime, re, base64, hashlib, urllib.parse - math, random, statistics, collections - Clear list of what's NOT available - Workarounds for missing functionality **Production Patterns**: - 10 tested patterns adapted from JavaScript - Python-specific implementations - List comprehensions and dictionary operations - Standard library usage examples **Error Prevention Focus**: - Top 5 errors cover majority of failures - ModuleNotFoundError prominently featured - Safe dictionary access (.get()) - Proper return format emphasized - Error prevention checklist ### Integration Works seamlessly with: - **n8n Code JavaScript**: Compare approaches, know when to use which - **n8n Expression Syntax**: Different from {{}} expressions - **n8n MCP Tools Expert**: Validate Code node configurations - **n8n Workflow Patterns**: Code nodes in larger workflows - **n8n Node Configuration**: Configure mode and connections ### Statistics - **6 skill files**: 4,205 lines total - **5 evaluations**: Cover critical Python scenarios - **10 patterns**: Production-tested Python code - **15+ modules**: Standard library coverage - **5 top errors**: Prevention and solutions ### Design Principles 1. **JavaScript First**: 95% recommendation throughout 2. **Critical Limitation**: NO external libraries emphasized everywhere 3. **Safe Patterns**: .get() for dicts, bounds checking for lists 4. **Proper Format**: [{"json": {...}}] return format 5. **Standard Library**: Complete reference with examples 6. **Error Prevention**: Top 5 errors with solutions ### Recommendation **Use JavaScript Code node for 95% of use cases.** Use Python only when: - Complex Python-specific logic required - Python standard library features needed - Team more comfortable with Python than JavaScript For HTTP requests, date operations, and most transformations → Use JavaScript. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com> Conceived by Romuald Członkowski - https://www.aiadvisors.pl/en
795 lines
19 KiB
Markdown
795 lines
19 KiB
Markdown
# Common Patterns - Python Code Node
|
|
|
|
Production-tested Python patterns for n8n Code nodes.
|
|
|
|
---
|
|
|
|
## ⚠️ Important: JavaScript First
|
|
|
|
**Use JavaScript for 95% of use cases.**
|
|
|
|
Python in n8n has **NO external libraries** (no requests, pandas, numpy).
|
|
|
|
Only use Python when:
|
|
- You have complex Python-specific logic
|
|
- You need Python's standard library features
|
|
- You're more comfortable with Python than JavaScript
|
|
|
|
For most workflows, **JavaScript is the better choice**.
|
|
|
|
---
|
|
|
|
## Pattern Overview
|
|
|
|
These 10 patterns cover common n8n Code node scenarios using Python:
|
|
|
|
1. **Multi-Source Data Aggregation** - Combine data from multiple nodes
|
|
2. **Regex-Based Filtering** - Filter items using pattern matching
|
|
3. **Markdown to Structured Data** - Parse markdown into structured format
|
|
4. **JSON Object Comparison** - Compare two JSON objects for changes
|
|
5. **CRM Data Transformation** - Transform CRM data to standard format
|
|
6. **Release Notes Processing** - Parse and categorize release notes
|
|
7. **Array Transformation** - Reshape arrays and extract fields
|
|
8. **Dictionary Lookup** - Create and use lookup dictionaries
|
|
9. **Top N Filtering** - Get top items by score/value
|
|
10. **String Aggregation** - Aggregate strings with formatting
|
|
|
|
---
|
|
|
|
## Pattern 1: Multi-Source Data Aggregation
|
|
|
|
**Use case**: Combine data from multiple sources (APIs, webhooks, databases).
|
|
|
|
**Scenario**: Aggregate news articles from multiple sources.
|
|
|
|
### Implementation
|
|
|
|
```python
|
|
from datetime import datetime
|
|
|
|
all_items = _input.all()
|
|
processed_articles = []
|
|
|
|
for item in all_items:
|
|
source_name = item["json"].get("name", "Unknown")
|
|
source_data = item["json"]
|
|
|
|
# Process Hacker News source
|
|
if source_name == "Hacker News" and source_data.get("hits"):
|
|
for hit in source_data["hits"]:
|
|
processed_articles.append({
|
|
"title": hit.get("title", "No title"),
|
|
"url": hit.get("url", ""),
|
|
"summary": hit.get("story_text") or "No summary",
|
|
"source": "Hacker News",
|
|
"score": hit.get("points", 0),
|
|
"fetched_at": datetime.now().isoformat()
|
|
})
|
|
|
|
# Process Reddit source
|
|
elif source_name == "Reddit" and source_data.get("data"):
|
|
for post in source_data["data"].get("children", []):
|
|
post_data = post.get("data", {})
|
|
processed_articles.append({
|
|
"title": post_data.get("title", "No title"),
|
|
"url": post_data.get("url", ""),
|
|
"summary": post_data.get("selftext", "")[:200],
|
|
"source": "Reddit",
|
|
"score": post_data.get("score", 0),
|
|
"fetched_at": datetime.now().isoformat()
|
|
})
|
|
|
|
# Sort by score descending
|
|
processed_articles.sort(key=lambda x: x["score"], reverse=True)
|
|
|
|
# Return as n8n items
|
|
return [{"json": article} for article in processed_articles]
|
|
```
|
|
|
|
### Key Techniques
|
|
|
|
- Process multiple data sources in one loop
|
|
- Normalize different data structures
|
|
- Use datetime for timestamps
|
|
- Sort by criteria
|
|
- Return properly formatted items
|
|
|
|
---
|
|
|
|
## Pattern 2: Regex-Based Filtering
|
|
|
|
**Use case**: Filter items based on pattern matching in text fields.
|
|
|
|
**Scenario**: Filter support tickets by priority keywords.
|
|
|
|
### Implementation
|
|
|
|
```python
|
|
import re
|
|
|
|
all_items = _input.all()
|
|
priority_tickets = []
|
|
|
|
# High priority keywords pattern
|
|
high_priority_pattern = re.compile(
|
|
r'\b(urgent|critical|emergency|asap|down|outage|broken)\b',
|
|
re.IGNORECASE
|
|
)
|
|
|
|
for item in all_items:
|
|
ticket = item["json"]
|
|
|
|
# Check subject and description
|
|
subject = ticket.get("subject", "")
|
|
description = ticket.get("description", "")
|
|
combined_text = f"{subject} {description}"
|
|
|
|
# Find matches
|
|
matches = high_priority_pattern.findall(combined_text)
|
|
|
|
if matches:
|
|
priority_tickets.append({
|
|
"json": {
|
|
**ticket,
|
|
"priority": "high",
|
|
"matched_keywords": list(set(matches)),
|
|
"keyword_count": len(matches)
|
|
}
|
|
})
|
|
else:
|
|
priority_tickets.append({
|
|
"json": {
|
|
**ticket,
|
|
"priority": "normal",
|
|
"matched_keywords": [],
|
|
"keyword_count": 0
|
|
}
|
|
})
|
|
|
|
# Sort by keyword count (most urgent first)
|
|
priority_tickets.sort(key=lambda x: x["json"]["keyword_count"], reverse=True)
|
|
|
|
return priority_tickets
|
|
```
|
|
|
|
### Key Techniques
|
|
|
|
- Use re.compile() for reusable patterns
|
|
- re.IGNORECASE for case-insensitive matching
|
|
- Combine multiple text fields for searching
|
|
- Extract and deduplicate matches
|
|
- Sort by priority indicators
|
|
|
|
---
|
|
|
|
## Pattern 3: Markdown to Structured Data
|
|
|
|
**Use case**: Parse markdown text into structured data.
|
|
|
|
**Scenario**: Extract tasks from markdown checklist.
|
|
|
|
### Implementation
|
|
|
|
```python
|
|
import re
|
|
|
|
markdown_text = _input.first()["json"]["body"].get("markdown", "")
|
|
|
|
# Parse markdown checklist
|
|
tasks = []
|
|
lines = markdown_text.split("\n")
|
|
|
|
for line in lines:
|
|
# Match: - [ ] Task or - [x] Task
|
|
match = re.match(r'^\s*-\s*\[([ x])\]\s*(.+)$', line, re.IGNORECASE)
|
|
|
|
if match:
|
|
checked = match.group(1).lower() == 'x'
|
|
task_text = match.group(2).strip()
|
|
|
|
# Extract priority if present (e.g., [P1], [HIGH])
|
|
priority_match = re.search(r'\[(P\d|HIGH|MEDIUM|LOW)\]', task_text, re.IGNORECASE)
|
|
priority = priority_match.group(1).upper() if priority_match else "NORMAL"
|
|
|
|
# Remove priority tag from text
|
|
clean_text = re.sub(r'\[(P\d|HIGH|MEDIUM|LOW)\]', '', task_text, flags=re.IGNORECASE).strip()
|
|
|
|
tasks.append({
|
|
"text": clean_text,
|
|
"completed": checked,
|
|
"priority": priority,
|
|
"original_line": line.strip()
|
|
})
|
|
|
|
return [{
|
|
"json": {
|
|
"tasks": tasks,
|
|
"total": len(tasks),
|
|
"completed": sum(1 for t in tasks if t["completed"]),
|
|
"pending": sum(1 for t in tasks if not t["completed"])
|
|
}
|
|
}]
|
|
```
|
|
|
|
### Key Techniques
|
|
|
|
- Line-by-line parsing
|
|
- Multiple regex patterns for extraction
|
|
- Extract metadata from text
|
|
- Calculate summary statistics
|
|
- Return structured data
|
|
|
|
---
|
|
|
|
## Pattern 4: JSON Object Comparison
|
|
|
|
**Use case**: Compare two JSON objects to find differences.
|
|
|
|
**Scenario**: Compare old and new user profile data.
|
|
|
|
### Implementation
|
|
|
|
```python
|
|
import json
|
|
|
|
all_items = _input.all()
|
|
|
|
# Assume first item is old data, second is new data
|
|
old_data = all_items[0]["json"] if len(all_items) > 0 else {}
|
|
new_data = all_items[1]["json"] if len(all_items) > 1 else {}
|
|
|
|
changes = {
|
|
"added": {},
|
|
"removed": {},
|
|
"modified": {},
|
|
"unchanged": {}
|
|
}
|
|
|
|
# Find all unique keys
|
|
all_keys = set(old_data.keys()) | set(new_data.keys())
|
|
|
|
for key in all_keys:
|
|
old_value = old_data.get(key)
|
|
new_value = new_data.get(key)
|
|
|
|
if key not in old_data:
|
|
# Added field
|
|
changes["added"][key] = new_value
|
|
elif key not in new_data:
|
|
# Removed field
|
|
changes["removed"][key] = old_value
|
|
elif old_value != new_value:
|
|
# Modified field
|
|
changes["modified"][key] = {
|
|
"old": old_value,
|
|
"new": new_value
|
|
}
|
|
else:
|
|
# Unchanged field
|
|
changes["unchanged"][key] = old_value
|
|
|
|
return [{
|
|
"json": {
|
|
"changes": changes,
|
|
"summary": {
|
|
"added_count": len(changes["added"]),
|
|
"removed_count": len(changes["removed"]),
|
|
"modified_count": len(changes["modified"]),
|
|
"unchanged_count": len(changes["unchanged"]),
|
|
"has_changes": len(changes["added"]) > 0 or len(changes["removed"]) > 0 or len(changes["modified"]) > 0
|
|
}
|
|
}
|
|
}]
|
|
```
|
|
|
|
### Key Techniques
|
|
|
|
- Set operations for key comparison
|
|
- Dictionary .get() for safe access
|
|
- Categorize changes by type
|
|
- Create summary statistics
|
|
- Return detailed comparison
|
|
|
|
---
|
|
|
|
## Pattern 5: CRM Data Transformation
|
|
|
|
**Use case**: Transform CRM data to standard format.
|
|
|
|
**Scenario**: Normalize data from different CRM systems.
|
|
|
|
### Implementation
|
|
|
|
```python
|
|
from datetime import datetime
|
|
import re
|
|
|
|
all_items = _input.all()
|
|
normalized_contacts = []
|
|
|
|
for item in all_items:
|
|
raw_contact = item["json"]
|
|
source = raw_contact.get("source", "unknown")
|
|
|
|
# Normalize email
|
|
email = raw_contact.get("email", "").lower().strip()
|
|
|
|
# Normalize phone (remove non-digits)
|
|
phone_raw = raw_contact.get("phone", "")
|
|
phone = re.sub(r'\D', '', phone_raw)
|
|
|
|
# Parse name
|
|
if "full_name" in raw_contact:
|
|
name_parts = raw_contact["full_name"].split(" ", 1)
|
|
first_name = name_parts[0] if len(name_parts) > 0 else ""
|
|
last_name = name_parts[1] if len(name_parts) > 1 else ""
|
|
else:
|
|
first_name = raw_contact.get("first_name", "")
|
|
last_name = raw_contact.get("last_name", "")
|
|
|
|
# Normalize status
|
|
status_raw = raw_contact.get("status", "").lower()
|
|
status = "active" if status_raw in ["active", "enabled", "true", "1"] else "inactive"
|
|
|
|
# Create normalized contact
|
|
normalized_contacts.append({
|
|
"json": {
|
|
"id": raw_contact.get("id", ""),
|
|
"first_name": first_name.strip(),
|
|
"last_name": last_name.strip(),
|
|
"full_name": f"{first_name} {last_name}".strip(),
|
|
"email": email,
|
|
"phone": phone,
|
|
"status": status,
|
|
"source": source,
|
|
"normalized_at": datetime.now().isoformat(),
|
|
"original_data": raw_contact
|
|
}
|
|
})
|
|
|
|
return normalized_contacts
|
|
```
|
|
|
|
### Key Techniques
|
|
|
|
- Multiple field name variations handling
|
|
- String cleaning and normalization
|
|
- Regex for phone number cleaning
|
|
- Name parsing logic
|
|
- Status normalization
|
|
- Preserve original data
|
|
|
|
---
|
|
|
|
## Pattern 6: Release Notes Processing
|
|
|
|
**Use case**: Parse release notes and categorize changes.
|
|
|
|
**Scenario**: Extract features, fixes, and breaking changes from release notes.
|
|
|
|
### Implementation
|
|
|
|
```python
|
|
import re
|
|
|
|
release_notes = _input.first()["json"]["body"].get("notes", "")
|
|
|
|
categories = {
|
|
"features": [],
|
|
"fixes": [],
|
|
"breaking": [],
|
|
"other": []
|
|
}
|
|
|
|
# Split into lines
|
|
lines = release_notes.split("\n")
|
|
|
|
for line in lines:
|
|
line = line.strip()
|
|
|
|
# Skip empty lines and headers
|
|
if not line or line.startswith("#"):
|
|
continue
|
|
|
|
# Remove bullet points
|
|
clean_line = re.sub(r'^[\*\-\+]\s*', '', line)
|
|
|
|
# Categorize
|
|
if re.search(r'\b(feature|add|new)\b', clean_line, re.IGNORECASE):
|
|
categories["features"].append(clean_line)
|
|
elif re.search(r'\b(fix|bug|patch|resolve)\b', clean_line, re.IGNORECASE):
|
|
categories["fixes"].append(clean_line)
|
|
elif re.search(r'\b(breaking|deprecated|remove)\b', clean_line, re.IGNORECASE):
|
|
categories["breaking"].append(clean_line)
|
|
else:
|
|
categories["other"].append(clean_line)
|
|
|
|
return [{
|
|
"json": {
|
|
"categories": categories,
|
|
"summary": {
|
|
"features": len(categories["features"]),
|
|
"fixes": len(categories["fixes"]),
|
|
"breaking": len(categories["breaking"]),
|
|
"other": len(categories["other"]),
|
|
"total": sum(len(v) for v in categories.values())
|
|
}
|
|
}
|
|
}]
|
|
```
|
|
|
|
### Key Techniques
|
|
|
|
- Line-by-line parsing
|
|
- Pattern-based categorization
|
|
- Bullet point removal
|
|
- Skip headers and empty lines
|
|
- Summary statistics
|
|
|
|
---
|
|
|
|
## Pattern 7: Array Transformation
|
|
|
|
**Use case**: Reshape arrays and extract specific fields.
|
|
|
|
**Scenario**: Transform user data array to extract specific fields.
|
|
|
|
### Implementation
|
|
|
|
```python
|
|
all_items = _input.all()
|
|
|
|
# Extract and transform
|
|
transformed = []
|
|
|
|
for item in all_items:
|
|
user = item["json"]
|
|
|
|
# Extract nested fields
|
|
profile = user.get("profile", {})
|
|
settings = user.get("settings", {})
|
|
|
|
transformed.append({
|
|
"json": {
|
|
"user_id": user.get("id"),
|
|
"email": user.get("email"),
|
|
"name": profile.get("name", "Unknown"),
|
|
"avatar": profile.get("avatar_url"),
|
|
"bio": profile.get("bio", "")[:100], # Truncate to 100 chars
|
|
"notifications_enabled": settings.get("notifications", True),
|
|
"theme": settings.get("theme", "light"),
|
|
"created_at": user.get("created_at"),
|
|
"last_login": user.get("last_login_at")
|
|
}
|
|
})
|
|
|
|
return transformed
|
|
```
|
|
|
|
### Key Techniques
|
|
|
|
- Field extraction from nested objects
|
|
- Default values with .get()
|
|
- String truncation
|
|
- Flattening nested structures
|
|
|
|
---
|
|
|
|
## Pattern 8: Dictionary Lookup
|
|
|
|
**Use case**: Create lookup dictionary for fast data access.
|
|
|
|
**Scenario**: Look up user details by ID.
|
|
|
|
### Implementation
|
|
|
|
```python
|
|
all_items = _input.all()
|
|
|
|
# Build lookup dictionary
|
|
users_by_id = {}
|
|
|
|
for item in all_items:
|
|
user = item["json"]
|
|
user_id = user.get("id")
|
|
|
|
if user_id:
|
|
users_by_id[user_id] = {
|
|
"name": user.get("name"),
|
|
"email": user.get("email"),
|
|
"status": user.get("status")
|
|
}
|
|
|
|
# Example: Look up specific users
|
|
lookup_ids = [1, 3, 5]
|
|
looked_up = []
|
|
|
|
for user_id in lookup_ids:
|
|
if user_id in users_by_id:
|
|
looked_up.append({
|
|
"json": {
|
|
"id": user_id,
|
|
**users_by_id[user_id],
|
|
"found": True
|
|
}
|
|
})
|
|
else:
|
|
looked_up.append({
|
|
"json": {
|
|
"id": user_id,
|
|
"found": False
|
|
}
|
|
})
|
|
|
|
return looked_up
|
|
```
|
|
|
|
### Key Techniques
|
|
|
|
- Dictionary comprehension alternative
|
|
- O(1) lookup time
|
|
- Handle missing keys gracefully
|
|
- Preserve lookup order
|
|
|
|
---
|
|
|
|
## Pattern 9: Top N Filtering
|
|
|
|
**Use case**: Get top items by score or value.
|
|
|
|
**Scenario**: Get top 10 products by sales.
|
|
|
|
### Implementation
|
|
|
|
```python
|
|
all_items = _input.all()
|
|
|
|
# Extract products with sales
|
|
products = []
|
|
|
|
for item in all_items:
|
|
product = item["json"]
|
|
products.append({
|
|
"id": product.get("id"),
|
|
"name": product.get("name"),
|
|
"sales": product.get("sales", 0),
|
|
"revenue": product.get("revenue", 0.0),
|
|
"category": product.get("category")
|
|
})
|
|
|
|
# Sort by sales descending
|
|
products.sort(key=lambda p: p["sales"], reverse=True)
|
|
|
|
# Get top 10
|
|
top_10 = products[:10]
|
|
|
|
return [
|
|
{
|
|
"json": {
|
|
**product,
|
|
"rank": index + 1
|
|
}
|
|
}
|
|
for index, product in enumerate(top_10)
|
|
]
|
|
```
|
|
|
|
### Key Techniques
|
|
|
|
- List sorting with custom key
|
|
- Slicing for top N
|
|
- Add ranking information
|
|
- Enumerate for index
|
|
|
|
---
|
|
|
|
## Pattern 10: String Aggregation
|
|
|
|
**Use case**: Aggregate strings with formatting.
|
|
|
|
**Scenario**: Create summary text from multiple items.
|
|
|
|
### Implementation
|
|
|
|
```python
|
|
all_items = _input.all()
|
|
|
|
# Collect messages
|
|
messages = []
|
|
|
|
for item in all_items:
|
|
data = item["json"]
|
|
|
|
user = data.get("user", "Unknown")
|
|
message = data.get("message", "")
|
|
timestamp = data.get("timestamp", "")
|
|
|
|
# Format each message
|
|
formatted = f"[{timestamp}] {user}: {message}"
|
|
messages.append(formatted)
|
|
|
|
# Join with newlines
|
|
summary = "\n".join(messages)
|
|
|
|
# Create statistics
|
|
total_length = sum(len(msg) for msg in messages)
|
|
average_length = total_length / len(messages) if messages else 0
|
|
|
|
return [{
|
|
"json": {
|
|
"summary": summary,
|
|
"message_count": len(messages),
|
|
"total_characters": total_length,
|
|
"average_length": round(average_length, 2)
|
|
}
|
|
}]
|
|
```
|
|
|
|
### Key Techniques
|
|
|
|
- String formatting with f-strings
|
|
- Join lists with separator
|
|
- Calculate string statistics
|
|
- Handle empty lists
|
|
|
|
---
|
|
|
|
## Pattern Comparison: Python vs JavaScript
|
|
|
|
### Data Access
|
|
|
|
```python
|
|
# Python
|
|
all_items = _input.all()
|
|
first_item = _input.first()
|
|
current = _input.item
|
|
webhook_data = _json["body"]
|
|
|
|
# JavaScript
|
|
const allItems = $input.all();
|
|
const firstItem = $input.first();
|
|
const current = $input.item;
|
|
const webhookData = $json.body;
|
|
```
|
|
|
|
### Dictionary/Object Access
|
|
|
|
```python
|
|
# Python - Dictionary key access
|
|
name = user["name"] # May raise KeyError
|
|
name = user.get("name", "?") # Safe with default
|
|
|
|
# JavaScript - Object property access
|
|
const name = user.name; // May be undefined
|
|
const name = user.name || "?"; // Safe with default
|
|
```
|
|
|
|
### Array Operations
|
|
|
|
```python
|
|
# Python - List comprehension
|
|
filtered = [item for item in items if item["active"]]
|
|
|
|
# JavaScript - Array methods
|
|
const filtered = items.filter(item => item.active);
|
|
```
|
|
|
|
### Sorting
|
|
|
|
```python
|
|
# Python
|
|
items.sort(key=lambda x: x["score"], reverse=True)
|
|
|
|
# JavaScript
|
|
items.sort((a, b) => b.score - a.score);
|
|
```
|
|
|
|
---
|
|
|
|
## Best Practices
|
|
|
|
### 1. Use .get() for Safe Access
|
|
|
|
```python
|
|
# ✅ SAFE: Use .get() with defaults
|
|
name = user.get("name", "Unknown")
|
|
email = user.get("email", "no-email@example.com")
|
|
|
|
# ❌ RISKY: Direct key access
|
|
name = user["name"] # KeyError if missing!
|
|
```
|
|
|
|
### 2. Handle Empty Lists
|
|
|
|
```python
|
|
# ✅ SAFE: Check before processing
|
|
items = _input.all()
|
|
if items:
|
|
first = items[0]
|
|
else:
|
|
return [{"json": {"error": "No items"}}]
|
|
|
|
# ❌ RISKY: Assume items exist
|
|
first = items[0] # IndexError if empty!
|
|
```
|
|
|
|
### 3. Use List Comprehensions
|
|
|
|
```python
|
|
# ✅ PYTHONIC: List comprehension
|
|
active = [item for item in items if item["json"].get("active")]
|
|
|
|
# ❌ VERBOSE: Traditional loop
|
|
active = []
|
|
for item in items:
|
|
if item["json"].get("active"):
|
|
active.append(item)
|
|
```
|
|
|
|
### 4. Return Proper Format
|
|
|
|
```python
|
|
# ✅ CORRECT: Array of objects with "json" key
|
|
return [{"json": {"field": "value"}}]
|
|
|
|
# ❌ WRONG: Just the data
|
|
return {"field": "value"}
|
|
|
|
# ❌ WRONG: Array without "json" wrapper
|
|
return [{"field": "value"}]
|
|
```
|
|
|
|
### 5. Use Standard Library
|
|
|
|
```python
|
|
# ✅ GOOD: Use standard library
|
|
import statistics
|
|
average = statistics.mean(numbers)
|
|
|
|
# ✅ ALSO GOOD: Built-in functions
|
|
average = sum(numbers) / len(numbers) if numbers else 0
|
|
|
|
# ❌ CAN'T DO: External libraries
|
|
import numpy as np # ModuleNotFoundError!
|
|
```
|
|
|
|
---
|
|
|
|
## When to Use Each Pattern
|
|
|
|
| Pattern | When to Use |
|
|
|---------|-------------|
|
|
| Multi-Source Aggregation | Combining data from different nodes/sources |
|
|
| Regex Filtering | Text pattern matching, validation, extraction |
|
|
| Markdown Parsing | Processing formatted text into structured data |
|
|
| JSON Comparison | Detecting changes between objects |
|
|
| CRM Transformation | Normalizing data from different systems |
|
|
| Release Notes | Categorizing text by keywords |
|
|
| Array Transformation | Reshaping data, extracting fields |
|
|
| Dictionary Lookup | Fast ID-based lookups |
|
|
| Top N Filtering | Getting best/worst items by criteria |
|
|
| String Aggregation | Creating formatted text summaries |
|
|
|
|
---
|
|
|
|
## Summary
|
|
|
|
**Key Takeaways**:
|
|
- Use `.get()` for safe dictionary access
|
|
- List comprehensions are pythonic and efficient
|
|
- Handle empty lists/None values
|
|
- Use standard library (json, datetime, re)
|
|
- Return proper n8n format: `[{"json": {...}}]`
|
|
|
|
**Remember**:
|
|
- JavaScript is recommended for 95% of use cases
|
|
- Python has NO external libraries
|
|
- Use n8n nodes for complex operations
|
|
- Code node is for data transformation, not API calls
|
|
|
|
**See Also**:
|
|
- [SKILL.md](SKILL.md) - Python Code overview
|
|
- [DATA_ACCESS.md](DATA_ACCESS.md) - Data access patterns
|
|
- [STANDARD_LIBRARY.md](STANDARD_LIBRARY.md) - Available modules
|
|
- [ERROR_PATTERNS.md](ERROR_PATTERNS.md) - Avoid common mistakes
|