feat: replace placeholder benchmarks with meaningful MCP tool performance tests

Replace generic placeholder benchmarks with real-world MCP tool performance
benchmarks using production database (525+ nodes).

Changes:
- Delete sample.bench.ts (generic JS benchmarks not relevant to n8n-mcp)
- Add mcp-tools.bench.ts with 8 benchmarks covering 4 critical MCP tools
  (sketched after this list):
  * search_nodes: FTS5 search performance (common/AI queries)
  * get_node_essentials: Property filtering performance
  * list_nodes: Pagination performance (all nodes/AI tools)
  * validate_node_operation: Configuration validation performance
- Clarify database-queries.bench.ts uses mock data, not production data
- Update benchmark index to export new suite
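
A minimal sketch of what one of these suites could look like with Vitest's
bench API (the NodeRepository import, its method name, and the database path
are assumptions for illustration, not taken from this commit):

import { bench, describe } from 'vitest';
// Hypothetical wrapper around the production node database (525+ nodes).
import { NodeRepository } from '../../src/database/node-repository';

describe('search_nodes', () => {
  // Assumed constructor: open the prebuilt SQLite database once, so each
  // bench iteration measures only the FTS5 query itself.
  const repo = new NodeRepository('./data/nodes.db');

  bench('FTS5 search: common query', () => {
    repo.searchNodes('http request');
  });

  bench('FTS5 search: AI query', () => {
    repo.searchNodes('ai agent');
  });
});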

These benchmarks measure what AI assistants actually experience when calling
MCP tools, making them the most meaningful performance metrics for the system.
Target performance: <20ms for search, <10ms for essentials, <15ms for validation.
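
The targets could also be spot-checked outside the bench harness with a plain
timing guard, for instance in a smoke test (threshold values are from this
commit; the repo object uses the same assumed API as the sketch above):

import { performance } from 'node:perf_hooks';

const start = performance.now();
repo.searchNodes('webhook');
const elapsed = performance.now() - start;
if (elapsed > 20) {
  // 20ms is the stated target for search_nodes.
  throw new Error(`search_nodes took ${elapsed.toFixed(1)}ms, target is <20ms`);
}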

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: czlonkowski
Date:   2025-10-08 09:43:33 +02:00
Parent: f33b626179
Commit: cf3c66c0ea

4 changed files with 181 additions and 52 deletions


@@ -1,7 +1,3 @@
 // Export all benchmark suites
-// Note: Some benchmarks are temporarily disabled due to API changes
-// export * from './node-loading.bench';
 export * from './database-queries.bench';
-// export * from './search-operations.bench';
-// export * from './validation-performance.bench';
-// export * from './mcp-tools.bench';
+export * from './mcp-tools.bench';