Performance Benchmark CLI - Benchmark Suites

Overview

The Performance Benchmark CLI provides two main benchmark suites, each targeting a different layer of the Essert.MF application:

  1. Standard Suite - Infrastructure layer (databases, repositories, services, workflows)
  2. REST API Suite - REST API endpoints, payloads, concurrent load, scenarios

This document details each suite's purpose, benchmarks, thresholds, and use cases.

Standard Suite (Infrastructure)

Tests the infrastructure layer performance: database connectivity, repository operations, domain services, and complex scenarios.

Running the Standard Suite

# Run all standard benchmarks
dotnet run -- run
dotnet run -- run --suite standard

# Run with custom iterations
dotnet run -- run --suite standard --iterations 20
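
Under the hood, --iterations simply repeats each benchmark and aggregates the timings. The measurement loop can be sketched in Python (illustrative only — the CLI itself is a .NET tool, and the `benchmark` helper is a hypothetical name):

```python
import time
import statistics

def benchmark(fn, iterations=10):
    """Time fn() repeatedly; report min/avg/max in milliseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return {
        "min_ms": min(samples),
        "avg_ms": statistics.mean(samples),
        "max_ms": max(samples),
    }

# Example: time a trivial operation over 5 iterations
stats = benchmark(lambda: sum(range(1000)), iterations=5)
```

More iterations smooth out jitter from GC pauses and connection-pool warm-up, which is why the examples above raise the count for longer runs.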

Sub-Suites

1. Connectivity Suite

Purpose: Validate database connection performance across all seven databases.

Threshold: 100ms

Benchmarks:

  • ProcessDb connection test
  • StatisticsDb connection test
  • ChangelogsDb connection test
  • EssertDb connection test
  • ProductParameterDb connection test
  • RobotsDb connection test
  • WpcDb connection test

Performance Classification:

  • Excellent: < 50ms
  • Good: < 100ms
  • Warning: < 200ms
  • Critical: ≥ 200ms
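
These fixed cut-offs map directly onto a small classification function; a minimal Python sketch (the function name is illustrative, not taken from the CLI source):

```python
def classify(duration_ms: float) -> str:
    """Map a measured duration to the connectivity suite's performance class."""
    if duration_ms < 50:
        return "Excellent"
    if duration_ms < 100:
        return "Good"
    if duration_ms < 200:
        return "Warning"
    return "Critical"
```

For example, the 23ms ProcessDb connection in the output below classifies as Excellent, while anything at or above the 200ms threshold is Critical.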

What It Tests:

  • Network latency to database servers
  • Connection pool performance
  • Database server response time
  • SSL/TLS handshake overhead (if enabled)

Example Output:

Connectivity Suite (7 benchmarks)
  ✓ ProcessDb Connection         : 23ms [Excellent]
  ✓ StatisticsDb Connection      : 28ms [Excellent]
  ✓ ChangelogsDb Connection      : 31ms [Excellent]
  ✓ EssertDb Connection          : 19ms [Excellent]
  ✓ ProductParameterDb Connection: 26ms [Excellent]
  ✓ RobotsDb Connection          : 22ms [Excellent]
  ✓ WpcDb Connection             : 25ms [Excellent]

When to Use:

  • Validate network connectivity to databases
  • Baseline database server performance
  • Diagnose connection pool issues
  • Test after infrastructure changes


2. Repository Suite

Purpose: Measure CRUD operations and query performance for all repositories.

Threshold: 500ms

Benchmarks Include:

  • Product Operations
    • Create product with CRC calculation
    • Get product by UID
    • Update product name
    • Delete product with versions
    • Query products by name pattern

  • Version Operations
    • Create version for product
    • Get version by UID
    • List versions for product
    • Update version metadata

  • WPC (Work Piece Carrier) Operations
    • Create WPC
    • Get WPC by UID
    • Search WPC by number
    • Update WPC state
    • Query WPC by position

  • Mapping Operations
    • Create parameter mappings
    • Query mappings by version
    • Update mapping values
    • Delete cascading mappings

  • Measurement Operations (Statistics)
    • Insert measurement data
    • Query measurements by time range
    • Aggregate measurement statistics
    • Query by serial number

Performance Classification:

  • Excellent: < 250ms
  • Good: < 500ms
  • Warning: < 1000ms
  • Critical: ≥ 1000ms

What It Tests:

  • Entity Framework query generation
  • Database index utilization
  • Query plan efficiency
  • Transaction overhead
  • CRC calculation performance
  • Navigation property loading

Example Output:

Repository Suite (45 benchmarks)
  ✓ Product Create with CRC      : 156ms [Excellent]
  ✓ Product Get by UID           : 23ms  [Excellent]
  ✓ Product Update Name          : 89ms  [Excellent]
  ✓ Product Query by Pattern     : 312ms [Good]
  ⚠ WPC Search Complex Query     : 687ms [Warning]
  ✓ Version List for Product     : 145ms [Excellent]

When to Use:

  • Validate repository layer performance
  • Identify slow database queries
  • Test after Entity Framework changes
  • Optimize query performance
  • Validate index effectiveness


3. Service Suite

Purpose: Test domain service performance, especially CRC calculation and content building.

Threshold: 100ms

Benchmarks:

  • CRC Calculation
    • Product CRC calculation
    • Version CRC calculation
    • Setup CRC calculation (TH, SC, RFL, etc.)
    • Mapping CRC calculation

  • Content Building
    • Build setup content (serialization)
    • Build mapping content
    • Build version content
    • Cache hit/miss performance

  • Cache Performance
    • First access (cold cache)
    • Subsequent access (warm cache)
    • Cache invalidation

Performance Classification:

  • Excellent: < 50ms
  • Good: < 100ms
  • Warning: < 200ms
  • Critical: ≥ 200ms

What It Tests:

  • CRC algorithm performance
  • Serialization/deserialization speed
  • Memory allocation patterns
  • Cache effectiveness
  • String manipulation performance

Example Output:

Service Suite (12 benchmarks)
  ✓ Product CRC Calculation      : 12ms [Excellent]
  ✓ Version CRC Calculation      : 18ms [Excellent]
  ✓ Setup Content Build (TH)     : 45ms [Excellent]
  ✓ Cache Hit (Warm)             : 3ms  [Excellent]
  ⚠ Cache Miss (Cold)            : 156ms [Warning]
  ✓ Mapping Content Serialization: 67ms [Good]

When to Use:

  • Optimize CRC calculation logic
  • Test caching strategies
  • Validate serialization performance
  • Baseline after algorithm changes


4. Scenario Suite

Purpose: Test realistic multi-step workflows and complex operations.

Threshold: 2000ms

Scenarios Include:

  1. Complete Product Creation
     • Create product
     • Add 3 versions
     • Create parameter mappings for each version
     • Calculate CRCs
     • Validate relationships

  2. Version Update Workflow
     • Query existing product
     • Create new version
     • Copy mappings from previous version
     • Update version metadata
     • Recalculate CRCs

  3. WPC Lifecycle
     • Create WPC
     • Update position multiple times
     • Query position history
     • Update state transitions
     • Delete WPC

  4. Measurement Collection
     • Insert batch measurements (100 records)
     • Query recent measurements
     • Calculate statistics
     • Aggregate by time periods

  5. Cross-Database Transaction
     • Create product (ProductParameterDb)
     • Log change (ChangelogsDb)
     • Update statistics (StatisticsDb)
     • Commit transaction

  6. Search and Filter
     • Complex product search with filters
     • Version lookup by criteria
     • WPC search with position filters
     • Measurement queries with aggregations
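
A scenario benchmark is essentially a timed sequence of steps with a per-step breakdown; sketched in Python (step names are illustrative, and the real workflows run against the databases rather than no-op callables):

```python
import time

def run_scenario(steps):
    """Execute named workflow steps in order, timing each in milliseconds."""
    timings = {}
    for name, fn in steps:
        start = time.perf_counter()
        fn()
        timings[name] = (time.perf_counter() - start) * 1000.0
    return timings, sum(timings.values())

timings, total_ms = run_scenario([
    ("create product", lambda: None),
    ("add versions", lambda: None),
    ("calculate CRCs", lambda: None),
])
```

The per-step breakdown is what makes scenario results actionable: a slow total can be traced to the single step that dominates it.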

Performance Classification:

  • Excellent: < 1000ms
  • Good: < 2000ms
  • Warning: < 4000ms
  • Critical: ≥ 4000ms

What It Tests:

  • End-to-end workflow performance
  • Cross-database transaction overhead
  • Unit of Work pattern efficiency
  • Complex query performance
  • Cascading operation overhead
  • Data consistency checks

Example Output:

Scenario Suite (8 benchmarks)
  ✓ Complete Product Creation    : 876ms  [Excellent]
  ✓ Version Update Workflow      : 1245ms [Good]
  ✓ WPC Lifecycle                : 1567ms [Good]
  ⚠ Batch Measurement Insert     : 2845ms [Warning]
  ✓ Cross-Database Transaction   : 1892ms [Good]
  ✓ Complex Product Search       : 1123ms [Good]

When to Use:

  • Validate end-to-end performance
  • Test after major refactoring
  • Identify workflow bottlenecks
  • Baseline production-like operations


REST API Suite

Tests the REST API layer performance: endpoint latency, payload sizes, concurrent load handling, and end-to-end API scenarios.

Running the REST API Suite

# Run all REST API benchmarks
dotnet run -- run --suite restapi

# Run with custom iterations
dotnet run -- run --suite restapi --iterations 10

Sub-Suites

1. API Endpoints Suite

Purpose: Measure individual REST API endpoint performance.

Threshold: 20-500ms (varies by endpoint complexity)

Endpoint Categories:

Products Endpoints (Simple: 50ms, Complex: 200ms)

  • GET /api/v1/products - List all products
  • GET /api/v1/products/{uid} - Get product by UID
  • GET /api/v1/products/search?name={pattern} - Search products
  • GET /api/v1/products/{uid}/versions - List versions
  • GET /api/v1/versions/{uid} - Get version details

Robots Endpoints (Simple: 50ms, Complex: 150ms)

  • GET /api/v1/robots - List all robots
  • GET /api/v1/robots/{uid} - Get robot by UID
  • GET /api/v1/robots/{uid}/positions - Get position history

Statistics Endpoints (Simple: 100ms, Complex: 500ms)

  • GET /api/v1/measurements - List recent measurements
  • GET /api/v1/measurements/{uid} - Get measurement by UID
  • GET /api/v1/measurements/stats?from={date}&to={date} - Aggregate stats
  • GET /api/v1/measurements/serial/{sn} - Query by serial number

WPC Endpoints (Simple: 50ms, Complex: 200ms)

  • GET /api/v1/wpc - List all WPCs
  • GET /api/v1/wpc/{uid} - Get WPC by UID
  • GET /api/v1/wpc/search?number={pattern} - Search WPCs
  • GET /api/v1/wpc/{uid}/position - Get current position

Parameters Endpoints (Simple: 50ms, Complex: 300ms)

  • GET /api/v1/parameters/th - Get TH parameters
  • GET /api/v1/parameters/sc - Get SC parameters
  • GET /api/v1/parameters/rfl - Get RFL parameters
  • GET /api/v1/mappings/{versionUid} - Get parameter mappings

System Endpoints (Simple: 20ms)

  • GET /api/v1/health - Health check
  • GET /api/v1/health/ready - Readiness check
  • GET /api/v1/health/live - Liveness check

Performance Classification (varies by endpoint):

  • Excellent: < 50% of threshold
  • Good: < 100% of threshold
  • Warning: < 200% of threshold
  • Critical: ≥ 200% of threshold
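
Because each endpoint has its own threshold, classification here is relative rather than absolute; a minimal Python sketch (function name is illustrative):

```python
def classify_relative(duration_ms: float, threshold_ms: float) -> str:
    """Classify a duration as a fraction of the endpoint's own threshold."""
    ratio = duration_ms / threshold_ms
    if ratio < 0.5:
        return "Excellent"
    if ratio < 1.0:
        return "Good"
    if ratio < 2.0:
        return "Warning"
    return "Critical"
```

So a 40ms response is Good against a 50ms health-check-style threshold, but Excellent against a 200ms complex-query threshold.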

What It Tests:

  • HTTP request/response overhead
  • API middleware performance
  • Controller action execution
  • Model binding and validation
  • Serialization performance
  • Database query latency
  • Cache hit/miss patterns

Example Output:

API Endpoints Suite (55 benchmarks)
  ✓ GET /api/v1/products              : 18ms [Excellent]
  ✓ GET /api/v1/products/{uid}        : 23ms [Excellent]
  ✓ GET /api/v1/products/search       : 87ms [Good]
  ⚠ GET /api/v1/measurements/stats    : 412ms [Warning]
  ✓ GET /api/v1/health                : 5ms  [Excellent]

Latency Analysis:

Latency Percentiles:
  P50 (Median) : 45ms
  P95          : 156ms
  P99          : 287ms
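
The P50/P95/P99 figures are percentiles over the collected latency samples; the nearest-rank method can be sketched in Python (the sample data is illustrative):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample with at least p% at or below it."""
    ordered = sorted(samples)
    rank = max(math.ceil(p / 100 * len(ordered)), 1)
    return ordered[rank - 1]

latencies = list(range(1, 101))  # 1..100 ms of fake samples
p50, p95, p99 = (percentile(latencies, p) for p in (50, 95, 99))
```

Tail percentiles matter because an acceptable P50 can hide a P99 that breaches the endpoint threshold for 1% of requests.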

When to Use:

  • Validate API endpoint performance
  • Identify slow endpoints
  • Test after API changes
  • Optimize serialization
  • Validate caching effectiveness


2. API Payload Suite

Purpose: Measure API performance across different response payload sizes.

Threshold: Varies by payload size

Payload Categories:

  1. Small Payloads (< 5KB)
     • Threshold: 50ms
     • Examples: Single product, health check, robot status
     • Classification: Excellent < 25ms, Good < 50ms

  2. Medium Payloads (5-50KB)
     • Threshold: 100ms
     • Examples: Product list (10 items), version with mappings
     • Classification: Excellent < 50ms, Good < 100ms

  3. Large Payloads (50-500KB)
     • Threshold: 200ms
     • Examples: Full product catalog, measurement batch
     • Classification: Excellent < 100ms, Good < 200ms

  4. Extra Large Payloads (500KB+)
     • Threshold: 500ms
     • Examples: Historical statistics, full WPC catalog
     • Classification: Excellent < 250ms, Good < 500ms
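
The size buckets above translate to a simple lookup; a Python sketch (function name is illustrative):

```python
def payload_category(size_kb: float) -> str:
    """Bucket a response payload by its size in kilobytes."""
    if size_kb < 5:
        return "Small"
    if size_kb < 50:
        return "Medium"
    if size_kb < 500:
        return "Large"
    return "XLarge"
```

The bucket then selects the threshold against which the response time is classified.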

What It Tests:

  • Serialization performance at scale
  • Network bandwidth utilization
  • Memory allocation patterns
  • Garbage collection pressure
  • JSON serialization efficiency
  • Response compression effectiveness

Example Output:

API Payload Suite (13 benchmarks)
  ✓ Small: Health Check (0.2KB)         : 5ms   [Excellent]
  ✓ Small: Single Product (3.5KB)       : 22ms  [Excellent]
  ✓ Medium: Product List (25KB)         : 67ms  [Excellent]
  ✓ Large: Full Catalog (250KB)         : 189ms [Good]
  ⚠ XLarge: Historical Stats (850KB)    : 567ms [Warning]

Payload Size Analysis:
  Small (< 5KB)      : Avg 15ms, 8 endpoints
  Medium (5-50KB)    : Avg 78ms, 12 endpoints
  Large (50-500KB)   : Avg 205ms, 3 endpoints
  XLarge (500KB+)    : Avg 523ms, 2 endpoints

When to Use:

  • Optimize large response payloads
  • Test pagination effectiveness
  • Validate compression settings
  • Identify memory issues


3. Concurrent Load Suite

Purpose: Test API performance under concurrent user load.

Threshold: Varies by concurrency level

Load Test Scenarios:

  1. Light Load (10 concurrent users)
     • Threshold: 100ms average response time
     • Tests: Normal operation under light load

  2. Medium Load (50 concurrent users)
     • Threshold: 250ms average response time
     • Tests: Typical production load

  3. Heavy Load (100 concurrent users)
     • Threshold: 500ms average response time
     • Tests: Peak production load

  4. Throughput Tests
     • Requests per second (RPS)
     • Concurrent connections
     • Connection pool saturation
     • Database connection limits
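
A concurrent load step can be approximated with a thread pool: N workers issue requests simultaneously while the harness records per-request latency and overall RPS. A Python sketch (the request callable is a stand-in for a real HTTP call; names are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed_call(fn):
    """Time one call in milliseconds."""
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) * 1000.0

def load_test(fn, users=10, requests_per_user=5):
    """Issue requests from `users` concurrent workers; report avg latency and RPS."""
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(timed_call, [fn] * (users * requests_per_user)))
    wall_s = time.perf_counter() - wall_start
    return {"avg_ms": sum(latencies) / len(latencies),
            "requests": len(latencies),
            "rps": len(latencies) / wall_s}

result = load_test(lambda: time.sleep(0.001), users=10, requests_per_user=2)
```

Raising `users` past the saturation point makes average latency climb while RPS stalls or falls, which is exactly the pattern the throughput analysis below surfaces.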

What It Tests:

  • Thread pool efficiency
  • Database connection pooling
  • Lock contention
  • Resource exhaustion
  • Memory pressure under load
  • Garbage collection performance
  • Request queuing behavior

Example Output:

Concurrent Load Suite (10 benchmarks)

Light Load (10 concurrent users):
  ✓ Products API     : 45ms avg, 220 RPS [Excellent]
  ✓ WPC API          : 52ms avg, 190 RPS [Excellent]

Medium Load (50 concurrent users):
  ✓ Products API     : 123ms avg, 405 RPS [Good]
  ⚠ Statistics API   : 287ms avg, 174 RPS [Warning]

Heavy Load (100 concurrent users):
  ⚠ Products API     : 456ms avg, 219 RPS [Warning]
  ✗ Statistics API   : 1234ms avg, 81 RPS [Critical]

Throughput Analysis:
  Max Throughput     : 450 RPS (50 concurrent users)
  Optimal Concurrency: 40-60 users
  Saturation Point   : 80+ users (response time >500ms)

When to Use:

  • Capacity planning
  • Load testing before production
  • Identify concurrency issues
  • Optimize thread pool settings
  • Test connection pool configuration


4. API Scenario Suite

Purpose: Test end-to-end API workflows and realistic user scenarios.

Threshold: 500-3000ms (varies by scenario complexity)

Scenarios:

  1. Product CRUD Workflow
     • POST /api/v1/products (create)
     • GET /api/v1/products/{uid} (read)
     • PUT /api/v1/products/{uid} (update)
     • GET /api/v1/products/{uid} (verify)
     • DELETE /api/v1/products/{uid} (delete)

  2. Version Management Workflow
     • Create product
     • Create version 1
     • Create version 2
     • List versions
     • Update version metadata
     • Delete product (cascade)

  3. Search and Filter Workflow
     • Search products by name pattern
     • Filter by date range
     • Sort results
     • Paginate through results

  4. WPC Tracking Workflow
     • Create WPC
     • Update position (5 times)
     • Query position history
     • Update state
     • Search by position

  5. Measurement Collection Workflow
     • Insert batch measurements (POST)
     • Query measurements by serial number
     • Calculate statistics
     • Export results

  6. Complex Query Workflow
     • Multi-table join queries
     • Aggregation with grouping
     • Filtering with multiple criteria
     • Sorting and pagination
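
The Product CRUD workflow is a fixed request sequence; it can be sketched with an injected `send` callable so any HTTP client can be plugged in (the `send` signature and the fake client are assumptions of the sketch, not the CLI's API; paths follow the endpoint list above):

```python
def product_crud_workflow(send, payload):
    """Run create -> read -> update -> verify -> delete against the products API."""
    uid = send("POST", "/api/v1/products", payload)["uid"]
    send("GET", f"/api/v1/products/{uid}")
    send("PUT", f"/api/v1/products/{uid}", {**payload, "name": "updated"})
    send("GET", f"/api/v1/products/{uid}")
    send("DELETE", f"/api/v1/products/{uid}")
    return uid

# A fake client records the request sequence for inspection:
calls = []
def fake_send(method, path, body=None):
    calls.append((method, path))
    return {"uid": "p-1"}

product_crud_workflow(fake_send, {"name": "demo"})
```

Injecting the client this way also makes the workflow itself testable without a running server.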

Performance Classification:

  • Excellent: < 50% of threshold
  • Good: < 100% of threshold
  • Warning: < 200% of threshold
  • Critical: ≥ 200% of threshold

What It Tests:

  • Multi-request workflows
  • State management between requests
  • Transaction handling
  • Error recovery
  • Validation performance
  • Business logic execution

Example Output:

API Scenario Suite (11 benchmarks)
  ✓ Product CRUD Workflow       : 456ms  [Excellent]
  ✓ Version Management Workflow : 1234ms [Good]
  ✓ Search and Filter Workflow  : 789ms  [Good]
  ⚠ WPC Tracking Workflow       : 1876ms [Warning]
  ✓ Measurement Collection      : 2345ms [Good]
  ⚠ Complex Query Workflow      : 3456ms [Warning]

Workflow Analysis:
  Average Steps per Workflow: 5.2
  Average Workflow Duration : 1542ms
  Slowest Step              : Complex Query (2.3s)

When to Use:

  • Validate end-to-end API functionality
  • Test realistic user workflows
  • Identify transaction bottlenecks
  • Optimize multi-step operations


Choosing the Right Suite

Use Standard Suite When:

  • Testing infrastructure changes (database, repositories)
  • Optimizing query performance
  • Validating Entity Framework configurations
  • Testing after database schema changes
  • Baseline testing without API overhead

Use REST API Suite When:

  • Testing API layer changes
  • Validating endpoint performance
  • Testing under concurrent load
  • Optimizing serialization
  • Preparing for production deployment

Use Both Suites When:

  • Comprehensive performance testing
  • Creating production baselines
  • Performance regression testing
  • Capacity planning
  • Release validation

Performance Optimization Tips

Standard Suite Optimization

  1. Connectivity Issues
     • Check network latency
     • Optimize connection pool settings
     • Enable connection pooling
     • Consider connection string parameters

  2. Repository Performance
     • Add database indexes
     • Optimize Entity Framework queries
     • Use .AsNoTracking() for read-only queries
     • Implement query result caching
     • Review N+1 query problems

  3. Service Performance
     • Optimize CRC algorithms
     • Implement caching strategies
     • Reduce memory allocations
     • Use value types where possible

  4. Scenario Performance
     • Batch operations where possible
     • Optimize transaction scope
     • Reduce database round-trips
     • Implement parallel operations
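
The N+1 problem called out above is easiest to see by counting round-trips: loading children per parent issues one query per parent, while a batched load needs two queries total. A Python sketch with a fake query counter (table and function names are illustrative):

```python
query_count = 0

def query(sql):
    """Stand-in for a database round-trip; just counts executions."""
    global query_count
    query_count += 1
    return []

def load_n_plus_one(product_ids):
    query("SELECT * FROM products")                             # 1 query
    for pid in product_ids:                                     # +1 per product
        query(f"SELECT * FROM versions WHERE product = {pid}")

def load_batched(product_ids):
    query("SELECT * FROM products")                             # 1 query
    query("SELECT * FROM versions WHERE product IN (...)")      # 1 query for all

load_n_plus_one([1, 2, 3])
n_plus_one_queries = query_count
query_count = 0
load_batched([1, 2, 3])
batched_queries = query_count
```

With 3 products the per-parent version costs 4 round-trips against 2 for the batch; at catalog scale the gap is what turns a Repository Suite result from Excellent to Warning.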

REST API Suite Optimization

  1. Endpoint Performance
     • Implement response caching
     • Optimize serialization settings
     • Add ETag support
     • Compress responses
     • Implement pagination

  2. Payload Optimization
     • Implement field filtering
     • Use sparse fieldsets
     • Enable response compression
     • Consider GraphQL for complex queries
     • Implement lazy loading

  3. Concurrent Load
     • Tune thread pool settings
     • Optimize connection pooling
     • Implement request throttling
     • Add caching layers (Redis, memory cache)
     • Scale horizontally

  4. Scenario Performance
     • Reduce workflow steps
     • Implement batch operations
     • Use async/await properly
     • Optimize validation logic