Performance Benchmark CLI - Benchmark Suites¶
Overview¶
The Performance Benchmark CLI provides two main benchmark suites, each targeting different layers of the Essert.MF application:
- Standard Suite - Infrastructure layer (databases, repositories, services, workflows)
- REST API Suite - REST API endpoints, payloads, concurrent load, scenarios
This document details each suite's purpose, benchmarks, thresholds, and use cases.
Standard Suite (Infrastructure)¶
Tests the infrastructure layer performance: database connectivity, repository operations, domain services, and complex scenarios.
Running the Standard Suite¶
# Run all standard benchmarks
dotnet run -- run
dotnet run -- run --suite standard
# Run with custom iterations
dotnet run -- run --suite standard --iterations 20
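The document does not show how the CLI aggregates its iterations; the usual pattern is a repeat-and-aggregate harness with a few discarded warmup runs. A minimal Python sketch of that idea (the suite itself is a .NET CLI; `run_benchmark` and its shape are illustrative, not the tool's actual internals):

```python
import statistics
import time

def run_benchmark(action, iterations=20, warmup=2):
    """Run `action` repeatedly and aggregate wall-clock timings in ms."""
    for _ in range(warmup):      # warmup runs are discarded (JIT, caches)
        action()
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        action()
        samples.append((time.perf_counter() - start) * 1000.0)
    return {
        "min_ms": min(samples),
        "avg_ms": statistics.mean(samples),
        "max_ms": max(samples),
    }

result = run_benchmark(lambda: sum(range(10_000)), iterations=5)
```

Raising `--iterations` narrows the spread between `min_ms` and `max_ms` at the cost of a longer run.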
Sub-Suites¶
1. Connectivity Suite¶
Purpose: Validate database connection performance across all seven databases.
Threshold: 100ms
Benchmarks:
- ProcessDb connection test
- StatisticsDb connection test
- ChangelogsDb connection test
- EssertDb connection test
- ProductParameterDb connection test
- RobotsDb connection test
- WpcDb connection test

Performance Classification:
- Excellent: < 50ms
- Good: < 100ms
- Warning: < 200ms
- Critical: ≥ 200ms

What It Tests:
- Network latency to database servers
- Connection pool performance
- Database server response time
- SSL/TLS handshake overhead (if enabled)
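The four bands above are 50%, 100%, and 200% of the suite's 100ms threshold, the same proportional scheme the REST API suites state explicitly. A sketch of that mapping (illustrative only; the CLI's actual classifier is not shown in this document):

```python
def classify(duration_ms: float, threshold_ms: float) -> str:
    """Map a measured duration onto the suite's four bands:
    Excellent < 50%, Good < 100%, Warning < 200%, Critical >= 200% of threshold."""
    if duration_ms < 0.5 * threshold_ms:
        return "Excellent"
    if duration_ms < threshold_ms:
        return "Good"
    if duration_ms < 2 * threshold_ms:
        return "Warning"
    return "Critical"

print(classify(23, 100))   # → Excellent
print(classify(150, 100))  # → Warning
```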
Example Output:
Connectivity Suite (7 benchmarks)
✓ ProcessDb Connection : 23ms [Excellent]
✓ StatisticsDb Connection : 28ms [Excellent]
✓ ChangelogsDb Connection : 31ms [Excellent]
✓ EssertDb Connection : 19ms [Excellent]
✓ ProductParameterDb Connection: 26ms [Excellent]
✓ RobotsDb Connection : 22ms [Excellent]
✓ WpcDb Connection : 25ms [Excellent]
When to Use:
- Validate network connectivity to databases
- Baseline database server performance
- Diagnose connection pool issues
- Test after infrastructure changes
2. Repository Suite¶
Purpose: Measure CRUD operations and query performance for all repositories.
Threshold: 500ms
Benchmarks Include:

- Product Operations
  - Create product with CRC calculation
  - Get product by UID
  - Update product name
  - Delete product with versions
  - Query products by name pattern
- Version Operations
  - Create version for product
  - Get version by UID
  - List versions for product
  - Update version metadata
- WPC (Work Piece Carrier) Operations
  - Create WPC
  - Get WPC by UID
  - Search WPC by number
  - Update WPC state
  - Query WPC by position
- Mapping Operations
  - Create parameter mappings
  - Query mappings by version
  - Update mapping values
  - Delete cascading mappings
- Measurement Operations (Statistics)
  - Insert measurement data
  - Query measurements by time range
  - Aggregate measurement statistics
  - Query by serial number

Performance Classification:
- Excellent: < 250ms
- Good: < 500ms
- Warning: < 1000ms
- Critical: ≥ 1000ms

What It Tests:
- Entity Framework query generation
- Database index utilization
- Query plan efficiency
- Transaction overhead
- CRC calculation performance
- Navigation property loading
Example Output:
Repository Suite (45 benchmarks)
✓ Product Create with CRC : 156ms [Excellent]
✓ Product Get by UID : 23ms [Excellent]
✓ Product Update Name : 89ms [Excellent]
✓ Product Query by Pattern : 312ms [Good]
⚠ WPC Search Complex Query : 687ms [Warning]
✓ Version List for Product : 145ms [Excellent]
When to Use:
- Validate repository layer performance
- Identify slow database queries
- Test after Entity Framework changes
- Optimize query performance
- Validate index effectiveness
3. Service Suite¶
Purpose: Test domain service performance, especially CRC calculation and content building.
Threshold: 100ms
Benchmarks:

- CRC Calculation
  - Product CRC calculation
  - Version CRC calculation
  - Setup CRC calculation (TH, SC, RFL, etc.)
  - Mapping CRC calculation
- Content Building
  - Build setup content (serialization)
  - Build mapping content
  - Build version content
  - Cache hit/miss performance
- Cache Performance
  - First access (cold cache)
  - Subsequent access (warm cache)
  - Cache invalidation

Performance Classification:
- Excellent: < 50ms
- Good: < 100ms
- Warning: < 200ms
- Critical: ≥ 200ms

What It Tests:
- CRC algorithm performance
- Serialization/deserialization speed
- Memory allocation patterns
- Cache effectiveness
- String manipulation performance
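The document does not specify which CRC variant or cache the services use; the general pattern is serialize-then-hash with memoization, so warm-cache hits skip both the build and the checksum. A sketch using Python's `zlib.crc32` and `functools.lru_cache` purely as stand-ins for the .NET implementation:

```python
import json
import zlib
from functools import lru_cache

@lru_cache(maxsize=256)            # warm-cache hits return without recomputing
def content_crc(content_json: str) -> int:
    """Serialize-then-hash: CRC-32 over the UTF-8 bytes of the built content."""
    return zlib.crc32(content_json.encode("utf-8"))

product = {"uid": "P-001", "name": "Demo", "versions": 3}
payload = json.dumps(product, sort_keys=True)  # stable key order → stable CRC

cold = content_crc(payload)    # cache miss: computes the CRC
warm = content_crc(payload)    # cache hit: served from the memo
assert cold == warm
```

Note the `sort_keys=True`: a content CRC is only meaningful if serialization is deterministic.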
Example Output:
Service Suite (12 benchmarks)
✓ Product CRC Calculation : 12ms [Excellent]
✓ Version CRC Calculation : 18ms [Excellent]
✓ Setup Content Build (TH) : 45ms [Excellent]
✓ Cache Hit (Warm) : 3ms [Excellent]
⚠ Cache Miss (Cold) : 156ms [Warning]
✓ Mapping Content Serialization: 67ms [Good]
When to Use:
- Optimize CRC calculation logic
- Test caching strategies
- Validate serialization performance
- Baseline after algorithm changes
4. Scenario Suite¶
Purpose: Test realistic multi-step workflows and complex operations.
Threshold: 2000ms
Scenarios Include:

- Complete Product Creation
  - Create product
  - Add 3 versions
  - Create parameter mappings for each version
  - Calculate CRCs
  - Validate relationships
- Version Update Workflow
  - Query existing product
  - Create new version
  - Copy mappings from previous version
  - Update version metadata
  - Recalculate CRCs
- WPC Lifecycle
  - Create WPC
  - Update position multiple times
  - Query position history
  - Update state transitions
  - Delete WPC
- Measurement Collection
  - Insert batch measurements (100 records)
  - Query recent measurements
  - Calculate statistics
  - Aggregate by time periods
- Cross-Database Transaction
  - Create product (ProductParameterDb)
  - Log change (ChangelogsDb)
  - Update statistics (StatisticsDb)
  - Commit transaction
- Search and Filter
  - Complex product search with filters
  - Version lookup by criteria
  - WPC search with position filters
  - Measurement queries with aggregations

Performance Classification:
- Excellent: < 1000ms
- Good: < 2000ms
- Warning: < 4000ms
- Critical: ≥ 4000ms

What It Tests:
- End-to-end workflow performance
- Cross-database transaction overhead
- Unit of Work pattern efficiency
- Complex query performance
- Cascading operation overhead
- Data consistency checks
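Multi-step scenarios are most useful when timed per step as well as in total, so the slowest step can be reported. A sketch of that harness (step names mirror the Complete Product Creation scenario above; the step bodies are stubs, not the CLI's actual operations):

```python
import time

def run_scenario(steps):
    """Execute (name, fn) steps in order; return per-step timings and total ms."""
    timings = {}
    for name, fn in steps:
        start = time.perf_counter()
        fn()
        timings[name] = (time.perf_counter() - start) * 1000.0
    return timings, sum(timings.values())

steps = [
    ("create product",  lambda: time.sleep(0.001)),
    ("add versions",    lambda: time.sleep(0.001)),
    ("create mappings", lambda: time.sleep(0.001)),
    ("calculate CRCs",  lambda: time.sleep(0.001)),
]
per_step, total_ms = run_scenario(steps)
slowest = max(per_step, key=per_step.get)   # the step to optimize first
```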
Example Output:
Scenario Suite (8 benchmarks)
✓ Complete Product Creation : 876ms [Excellent]
✓ Version Update Workflow : 1245ms [Good]
✓ WPC Lifecycle : 1567ms [Good]
⚠ Batch Measurement Insert : 2845ms [Warning]
✓ Cross-Database Transaction : 1892ms [Good]
✓ Complex Product Search : 1123ms [Good]
When to Use:
- Validate end-to-end performance
- Test after major refactoring
- Identify workflow bottlenecks
- Baseline production-like operations
REST API Suite¶
Tests the REST API layer performance: endpoint latency, payload sizes, concurrent load handling, and end-to-end API scenarios.
Running the REST API Suite¶
# Run all REST API benchmarks
dotnet run -- run --suite restapi
# Run with custom iterations
dotnet run -- run --suite restapi --iterations 10
Sub-Suites¶
1. API Endpoints Suite¶
Purpose: Measure individual REST API endpoint performance.
Threshold: 20-200ms (varies by endpoint complexity)
Endpoint Categories:
Products Endpoints (Simple: 50ms, Complex: 200ms)
- GET /api/v1/products - List all products
- GET /api/v1/products/{uid} - Get product by UID
- GET /api/v1/products/search?name={pattern} - Search products
- GET /api/v1/products/{uid}/versions - List versions
- GET /api/v1/versions/{uid} - Get version details
Robots Endpoints (Simple: 50ms, Complex: 150ms)
- GET /api/v1/robots - List all robots
- GET /api/v1/robots/{uid} - Get robot by UID
- GET /api/v1/robots/{uid}/positions - Get position history
Statistics Endpoints (Simple: 100ms, Complex: 500ms)
- GET /api/v1/measurements - List recent measurements
- GET /api/v1/measurements/{uid} - Get measurement by UID
- GET /api/v1/measurements/stats?from={date}&to={date} - Aggregate stats
- GET /api/v1/measurements/serial/{sn} - Query by serial number
WPC Endpoints (Simple: 50ms, Complex: 200ms)
- GET /api/v1/wpc - List all WPCs
- GET /api/v1/wpc/{uid} - Get WPC by UID
- GET /api/v1/wpc/search?number={pattern} - Search WPCs
- GET /api/v1/wpc/{uid}/position - Get current position
Parameters Endpoints (Simple: 50ms, Complex: 300ms)
- GET /api/v1/parameters/th - Get TH parameters
- GET /api/v1/parameters/sc - Get SC parameters
- GET /api/v1/parameters/rfl - Get RFL parameters
- GET /api/v1/mappings/{versionUid} - Get parameter mappings
System Endpoints (Simple: 20ms)
- GET /api/v1/health - Health check
- GET /api/v1/health/ready - Readiness check
- GET /api/v1/health/live - Liveness check
Performance Classification (varies by endpoint):
- Excellent: < 50% of threshold
- Good: < 100% of threshold
- Warning: < 200% of threshold
- Critical: ≥ 200% of threshold

What It Tests:
- HTTP request/response overhead
- API middleware performance
- Controller action execution
- Model binding and validation
- Serialization performance
- Database query latency
- Cache hit/miss patterns
Example Output:
API Endpoints Suite (55 benchmarks)
✓ GET /api/v1/products : 18ms [Excellent]
✓ GET /api/v1/products/{uid} : 23ms [Excellent]
✓ GET /api/v1/products/search : 87ms [Good]
⚠ GET /api/v1/measurements/stats : 412ms [Warning]
✓ GET /api/v1/health : 5ms [Excellent]
When to Use:
- Validate API endpoint performance
- Identify slow endpoints
- Test after API changes
- Optimize serialization
- Validate caching effectiveness
2. API Payload Suite¶
Purpose: Measure API performance across different response payload sizes.
Threshold: Varies by payload size
Payload Categories:

- Small Payloads (< 5KB)
  - Threshold: 50ms
  - Examples: Single product, health check, robot status
  - Classification: Excellent < 25ms, Good < 50ms
- Medium Payloads (5-50KB)
  - Threshold: 100ms
  - Examples: Product list (10 items), version with mappings
  - Classification: Excellent < 50ms, Good < 100ms
- Large Payloads (50-500KB)
  - Threshold: 200ms
  - Examples: Full product catalog, measurement batch
  - Classification: Excellent < 100ms, Good < 200ms
- Extra Large Payloads (500KB+)
  - Threshold: 500ms
  - Examples: Historical statistics, full WPC catalog
  - Classification: Excellent < 250ms, Good < 500ms

What It Tests:
- Serialization performance at scale
- Network bandwidth utilization
- Memory allocation patterns
- Garbage collection pressure
- JSON serialization efficiency
- Response compression effectiveness
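Bucketing a response by serialized size, and checking how much compression recovers, can be sketched with Python's `json` and `gzip` as stand-ins for the API's serializer and response compression (the bucket boundaries are the ones listed above; the catalog data is invented for illustration):

```python
import gzip
import json

def payload_bucket(size_bytes: int) -> str:
    """Bucket a payload by the size bands used in this suite."""
    kb = size_bytes / 1024
    if kb < 5:
        return "Small"
    if kb < 50:
        return "Medium"
    if kb < 500:
        return "Large"
    return "XLarge"

# Hypothetical catalog response: 500 small product records.
catalog = [{"uid": f"P-{i:04d}", "name": f"Product {i}"} for i in range(500)]
raw = json.dumps(catalog).encode("utf-8")
compressed = gzip.compress(raw)

print(payload_bucket(len(raw)))
print(f"compression ratio: {len(raw) / len(compressed):.1f}x")
```

Repetitive JSON (many records with identical keys) compresses very well, which is why enabling response compression matters most for the Large and XLarge buckets.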
Example Output:
API Payload Suite (13 benchmarks)
✓ Small: Health Check (0.2KB) : 5ms [Excellent]
✓ Small: Single Product (3.5KB) : 22ms [Excellent]
✓ Medium: Product List (25KB) : 67ms [Excellent]
✓ Large: Full Catalog (250KB) : 189ms [Good]
⚠ XLarge: Historical Stats (850KB) : 567ms [Warning]
Payload Size Analysis:
Small (< 5KB) : Avg 15ms, 8 endpoints
Medium (5-50KB) : Avg 78ms, 12 endpoints
Large (50-500KB) : Avg 205ms, 3 endpoints
XLarge (500KB+) : Avg 523ms, 2 endpoints
When to Use:
- Optimize large response payloads
- Test pagination effectiveness
- Validate compression settings
- Identify memory issues
3. Concurrent Load Suite¶
Purpose: Test API performance under concurrent user load.
Threshold: Varies by concurrency level
Load Test Scenarios:

- Light Load (10 concurrent users)
  - Threshold: 100ms average response time
  - Tests: Normal operation under light load
- Medium Load (50 concurrent users)
  - Threshold: 250ms average response time
  - Tests: Typical production load
- Heavy Load (100 concurrent users)
  - Threshold: 500ms average response time
  - Tests: Peak production load
- Throughput Tests
  - Requests per second (RPS)
  - Concurrent connections
  - Connection pool saturation
  - Database connection limits

What It Tests:
- Thread pool efficiency
- Database connection pooling
- Lock contention
- Resource exhaustion
- Memory pressure under load
- Garbage collection performance
- Request queuing behavior
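The measurement itself is simple: fire N requests from a pool of concurrent workers, then derive average latency and RPS from the per-request timings and total elapsed time. A sketch with `concurrent.futures` (the worker here is a stub; in the real suite it would be an HTTP call to the API):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed_request() -> float:
    """Stub standing in for one HTTP request; returns latency in ms."""
    start = time.perf_counter()
    time.sleep(0.005)                      # simulated 5ms server response
    return (time.perf_counter() - start) * 1000.0

def load_test(concurrency: int, total_requests: int) -> dict:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: timed_request(),
                                  range(total_requests)))
    elapsed = time.perf_counter() - start
    return {
        "avg_ms": sum(latencies) / len(latencies),
        "rps": total_requests / elapsed,
    }

stats = load_test(concurrency=10, total_requests=50)
```

Sweeping `concurrency` while watching `rps` is how the saturation point in the example output below is found: RPS climbs with concurrency until queuing makes latency grow faster than throughput.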
Example Output:
Concurrent Load Suite (10 benchmarks)
Light Load (10 concurrent users):
✓ Products API : 45ms avg, 220 RPS [Excellent]
✓ WPC API : 52ms avg, 190 RPS [Excellent]
Medium Load (50 concurrent users):
✓ Products API : 123ms avg, 405 RPS [Good]
⚠ Statistics API : 287ms avg, 174 RPS [Warning]
Heavy Load (100 concurrent users):
⚠ Products API : 456ms avg, 219 RPS [Warning]
✗ Statistics API : 1234ms avg, 81 RPS [Critical]
Throughput Analysis:
Max Throughput : 450 RPS (50 concurrent users)
Optimal Concurrency: 40-60 users
Saturation Point : 80+ users (response time >500ms)
When to Use:
- Capacity planning
- Load testing before production
- Identify concurrency issues
- Optimize thread pool settings
- Test connection pool configuration
4. API Scenario Suite¶
Purpose: Test end-to-end API workflows and realistic user scenarios.
Threshold: 500-3000ms (varies by scenario complexity)
Scenarios:

- Product CRUD Workflow
  - POST /api/v1/products (create)
  - GET /api/v1/products/{uid} (read)
  - PUT /api/v1/products/{uid} (update)
  - GET /api/v1/products/{uid} (verify)
  - DELETE /api/v1/products/{uid} (delete)
- Version Management Workflow
  - Create product
  - Create version 1
  - Create version 2
  - List versions
  - Update version metadata
  - Delete product (cascade)
- Search and Filter Workflow
  - Search products by name pattern
  - Filter by date range
  - Sort results
  - Paginate through results
- WPC Tracking Workflow
  - Create WPC
  - Update position (5 times)
  - Query position history
  - Update state
  - Search by position
- Measurement Collection Workflow
  - Insert batch measurements (POST)
  - Query measurements by serial number
  - Calculate statistics
  - Export results
- Complex Query Workflow
  - Multi-table join queries
  - Aggregation with grouping
  - Filtering with multiple criteria
  - Sorting and pagination

Performance Classification:
- Excellent: < 50% of threshold
- Good: < 100% of threshold
- Warning: < 200% of threshold
- Critical: ≥ 200% of threshold

What It Tests:
- Multi-request workflows
- State management between requests
- Transaction handling
- Error recovery
- Validation performance
- Business logic execution
Example Output:
API Scenario Suite (11 benchmarks)
✓ Product CRUD Workflow : 456ms [Excellent]
✓ Version Management Workflow : 1234ms [Good]
✓ Search and Filter Workflow : 789ms [Good]
⚠ WPC Tracking Workflow : 1876ms [Warning]
✓ Measurement Collection : 2345ms [Good]
⚠ Complex Query Workflow : 3456ms [Warning]
Workflow Analysis:
Average Steps per Workflow: 5.2
Average Workflow Duration : 1542ms
Slowest Step : Complex Query (2.3s)
When to Use:
- Validate end-to-end API functionality
- Test realistic user workflows
- Identify transaction bottlenecks
- Optimize multi-step operations
Choosing the Right Suite¶
Use Standard Suite When:¶
- Testing infrastructure changes (database, repositories)
- Optimizing query performance
- Validating Entity Framework configurations
- Testing after database schema changes
- Baseline testing without API overhead
Use REST API Suite When:¶
- Testing API layer changes
- Validating endpoint performance
- Testing under concurrent load
- Optimizing serialization
- Preparing for production deployment
Use Both Suites When:¶
- Comprehensive performance testing
- Creating production baselines
- Performance regression testing
- Capacity planning
- Release validation
Performance Optimization Tips¶
Standard Suite Optimization¶
- Connectivity Issues
  - Check network latency
  - Optimize connection pool settings
  - Enable connection pooling
  - Consider connection string parameters
- Repository Performance
  - Add database indexes
  - Optimize Entity Framework queries
  - Use .AsNoTracking() for read-only queries
  - Implement query result caching
  - Review N+1 query problems
- Service Performance
  - Optimize CRC algorithms
  - Implement caching strategies
  - Reduce memory allocations
  - Use value types where possible
- Scenario Performance
  - Batch operations where possible
  - Optimize transaction scope
  - Reduce database round-trips
  - Implement parallel operations
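"Batch operations where possible" and "Reduce database round-trips" amount to the same change: chunk many small writes into a few larger ones. A minimal language-neutral sketch in Python (`insert_batch` is a hypothetical stand-in for whatever bulk-insert API the repository exposes):

```python
def chunked(items, batch_size):
    """Yield fixed-size batches so N inserts become ceil(N/batch_size) round-trips."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def insert_batch(batch) -> int:
    """Hypothetical stand-in for a bulk insert; one round-trip per call."""
    return len(batch)

# The Measurement Collection scenario inserts 100 records.
measurements = [{"serial": f"SN-{i}", "value": i * 0.1} for i in range(100)]
round_trips = sum(1 for batch in chunked(measurements, 25) if insert_batch(batch))
print(round_trips)  # → 4 round-trips instead of 100
```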
REST API Suite Optimization¶
- Endpoint Performance
  - Implement response caching
  - Optimize serialization settings
  - Add ETag support
  - Compress responses
  - Implement pagination
- Payload Optimization
  - Implement field filtering
  - Use sparse fieldsets
  - Enable response compression
  - Consider GraphQL for complex queries
  - Implement lazy loading
- Concurrent Load
  - Tune thread pool settings
  - Optimize connection pooling
  - Implement request throttling
  - Add caching layers (Redis, memory cache)
  - Scale horizontally
- Scenario Performance
  - Reduce workflow steps
  - Implement batch operations
  - Use async/await properly
  - Optimize validation logic
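"Use async/await properly" usually means awaiting independent calls concurrently instead of one after another. A sketch with Python's asyncio standing in for the .NET async model (`fetch` is a stub; in a real client it would be an awaited HTTP call):

```python
import asyncio
import time

async def fetch(endpoint: str) -> str:
    """Stub for an async HTTP call taking ~10ms."""
    await asyncio.sleep(0.01)
    return endpoint

async def main():
    endpoints = ["/api/v1/products", "/api/v1/robots", "/api/v1/wpc"]
    start = time.perf_counter()
    # gather() awaits all three concurrently; awaiting them one by one
    # would take roughly the sum of their latencies instead.
    results = await asyncio.gather(*(fetch(e) for e in endpoints))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
```

The same reasoning applies to workflow steps that do not depend on each other's output.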
Related Documentation¶
- Main README - Overview and quick start
- Configuration Guide - Configure suite thresholds
- Command Reference - Run specific suites
- Report Formats - Analyze suite results