Replace generic placeholder benchmarks with real-world MCP tool performance benchmarks using the production database (525+ nodes).

Changes:
- Delete sample.bench.ts (generic JS benchmarks not relevant to n8n-mcp)
- Add mcp-tools.bench.ts with 8 benchmarks covering 4 critical MCP tools:
  * search_nodes: FTS5 search performance (common/AI queries)
  * get_node_essentials: property filtering performance
  * list_nodes: pagination performance (all nodes/AI tools)
  * validate_node_operation: configuration validation performance
- Clarify that database-queries.bench.ts uses mock data, not production data
- Update the benchmark index to export the new suite

These benchmarks measure what AI assistants actually experience when calling MCP tools, making them the most meaningful performance metric for the system. Target performance: <20ms for search, <10ms for essentials, <15ms for validation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
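For reference, a minimal sketch of what one entry in a suite like mcp-tools.bench.ts could look like using Vitest's `bench` API. The `searchNodes` helper and its parameters here are illustrative assumptions, not the repo's actual tool handler; in the real suite the benchmark would call into the production database rather than a stub.

```typescript
import { bench, describe } from 'vitest';

// Hypothetical stand-in for the real search_nodes MCP tool handler.
// In the actual suite this would query the production SQLite database
// (525+ nodes) through its FTS5 index.
async function searchNodes(params: { query: string; limit: number }): Promise<unknown[]> {
  return []; // placeholder result set
}

describe('search_nodes (FTS5 search)', () => {
  // Target: <20ms per call for search queries
  bench('common query: "http"', async () => {
    await searchNodes({ query: 'http', limit: 20 });
  });

  bench('AI query: "ai agent"', async () => {
    await searchNodes({ query: 'ai agent', limit: 20 });
  });
});
```

A suite in this shape runs with `vitest bench`, which reports throughput and mean latency per entry; comparing those means against the stated targets (<20ms search, <10ms essentials, <15ms validation) is what makes the numbers actionable.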