# Performance Benchmark CLI Documentation

## Overview
The Performance Benchmark CLI (PerformanceBenchmarkCli) is a comprehensive command-line tool for running, analyzing, and reporting on performance benchmarks for the Essert.MF system. It provides automated performance monitoring capabilities with support for baseline management, regression detection, and multiple report formats.
## Key Features

- **Comprehensive Benchmark Suites**
  - Standard infrastructure benchmarks (database connectivity, repositories, services, scenarios)
  - REST API benchmarks (endpoints, payloads, concurrent load, scenarios)
- **Multiple Report Formats**
  - JSON (machine-readable results)
  - HTML (interactive web reports with charts)
  - PDF (printable performance reports)
  - Markdown (documentation-friendly format)
  - Console (terminal output)
- **Baseline Management**
  - Save baseline results for comparison
  - Detect performance regressions automatically
  - Track performance trends over time
- **Machine Context Tracking**
  - CPU, memory, and disk information
  - Database latency measurements
  - OS and runtime version tracking
- **CI/CD Integration**
  - Exit codes for automated testing
  - Regression detection with configurable thresholds
  - Automated report generation
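The baseline comparison above boils down to a relative-change check against a percentage threshold. A minimal sketch (illustrative Python, not the CLI's .NET implementation; the flat name-to-milliseconds layout is hypothetical):

```python
def detect_regressions(baseline, current, threshold_pct=10.0):
    """Flag benchmarks whose mean time grew more than threshold_pct
    percent relative to the baseline. Both inputs map benchmark
    name -> mean duration in milliseconds."""
    regressions = {}
    for name, base_ms in baseline.items():
        cur_ms = current.get(name)
        if cur_ms is None:
            continue  # benchmark removed or renamed; nothing to compare
        change_pct = (cur_ms - base_ms) / base_ms * 100.0
        if change_pct > threshold_pct:
            regressions[name] = round(change_pct, 1)
    return regressions

baseline = {"GetProcess": 42.0, "SaveRobot": 120.0}
current = {"GetProcess": 40.5, "SaveRobot": 150.0}
print(detect_regressions(baseline, current))  # {'SaveRobot': 25.0}
```

Only growth beyond the threshold counts as a regression; improvements and small fluctuations pass silently, which is what makes the check usable as a CI gate.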
## Quick Start

### Prerequisites

- .NET 9.0 SDK or later
- Access to the Essert.MF database servers (configured in appsettings.json)
- Essert.MF solution built and ready
### Basic Usage

```bash
# Navigate to the CLI directory
cd PerformanceBenchmarkCli

# Run all standard benchmarks (database infrastructure)
dotnet run -- run

# Run REST API benchmarks
dotnet run -- run --suite restapi

# Run and save as baseline
dotnet run -- run --baseline

# Compare with baseline
dotnet run -- compare --current ./Results/latest.json

# Generate reports from existing results
dotnet run -- report --input ./Results/latest.json --format html,pdf

# List all available results
dotnet run -- list

# Show current baseline
dotnet run -- baseline show
```
### First Time Setup

1. Configure Database Connections

   Edit appsettings.json and update the connection strings:
   ```json
   {
     "ConnectionStrings": {
       "ProcessDb": "Server=YOUR_SERVER;Database=db_process;User=YOUR_USER;Password=YOUR_PASSWORD;",
       "StatisticsDb": "Server=YOUR_SERVER;Database=db_statistics;User=YOUR_USER;Password=YOUR_PASSWORD;",
       "ChangelogsDb": "Server=YOUR_SERVER;Database=db_changelogs;User=YOUR_USER;Password=YOUR_PASSWORD;",
       "EssertDb": "Server=YOUR_SERVER;Database=db_essert;User=YOUR_USER;Password=YOUR_PASSWORD;",
       "ProductParameterDb": "Server=YOUR_SERVER;Database=db_productparameter;User=YOUR_USER;Password=YOUR_PASSWORD;",
       "RobotsDb": "Server=YOUR_SERVER;Database=db_robots;User=YOUR_USER;Password=YOUR_PASSWORD;",
       "WpcDb": "Server=YOUR_SERVER;Database=db_wpc;User=YOUR_USER;Password=YOUR_PASSWORD;"
     }
   }
   ```
2. Run Initial Benchmarks: `dotnet run -- run`
3. Set Baseline: `dotnet run -- baseline set --file <results-file>`
4. Generate Reports: `dotnet run -- report --input ./Results/latest.json --format html,pdf`
## Available Commands

The CLI provides the following commands:

- `run` - Execute performance benchmarks
- `compare` - Compare two benchmark results
- `baseline` - Manage baseline files (set, show, clear)
- `list` - List available result files
- `report` - Generate reports from existing results

For detailed command documentation, see COMMANDS.md.
## Benchmark Suites

### Standard Suite (Infrastructure)

Tests database infrastructure performance:

- Connectivity - Database connection tests (threshold: 100ms)
- Repository - CRUD operations and queries (threshold: 500ms)
- Service - CRC calculation and caching (threshold: 100ms)
- Scenario - Complex workflows (threshold: 2000ms)
### REST API Suite

Tests REST API endpoint performance:

- Endpoints - 55 GET endpoint benchmarks (threshold: 20-200ms)
- Payloads - 13 payload size benchmarks (Small/Medium/Large/XLarge)
- Concurrent Load - 10 load tests (10/50/100 concurrent users)
- Scenarios - 11 end-to-end workflow benchmarks
For detailed suite documentation, see BENCHMARK_SUITES.md.
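Endpoint and load benchmarks of this kind are only as meaningful as the statistics they aggregate; a single slow outlier can distort a mean while leaving the p95 honest. A minimal sketch of that aggregation step (illustrative Python, not the CLI's implementation):

```python
import statistics

def summarize(samples_ms):
    """Reduce raw per-request timings (ms) into the summary
    statistics a threshold check would run against."""
    ordered = sorted(samples_ms)
    # Nearest-rank p95 over the sorted samples (one common convention).
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return {
        "mean": statistics.fmean(ordered),
        "median": statistics.median(ordered),
        "p95": p95,
        "max": ordered[-1],
    }

# One slow outlier among otherwise fast requests:
stats = summarize([12.1, 14.8, 13.0, 55.2, 12.9, 13.4, 14.1, 13.7, 12.5, 13.2])
print(stats["p95"], stats["max"])  # 14.8 55.2
```

Here the p95 (14.8ms) stays near the typical latency while the max captures the outlier, which is why percentile-based thresholds are generally preferred over worst-case ones for API benchmarks.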
## Configuration
The CLI is configured via appsettings.json. Key configuration sections:
- `Benchmark.Output` - Results directory, retention policy
- `Benchmark.Execution` - Default iterations, warmup, memory diagnostics
- `Benchmark.Suites` - Suite-specific thresholds and settings
- `Benchmark.Reports` - Report format configurations
- `Benchmark.Analysis` - Regression detection settings
- `ConnectionStrings` - Database connection strings
For complete configuration documentation, see CONFIGURATION.md.
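Pieced together from the sections listed above, a trimmed appsettings.json might look like the following. `DefaultIterations` and `IncludeMemoryDiagnostics` are named elsewhere in this document; the remaining key names inside each section are assumptions, so check CONFIGURATION.md for the authoritative schema.

```json
{
  "Benchmark": {
    "Output": { "ResultsDirectory": "./Results", "RetentionDays": 90 },
    "Execution": { "DefaultIterations": 10, "IncludeMemoryDiagnostics": true },
    "Analysis": { "RegressionThresholdPercent": 10 }
  }
}
```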
## Report Formats
The CLI supports multiple report formats for different use cases:
- JSON - Machine-readable, ideal for CI/CD and programmatic analysis
- HTML - Interactive web reports with charts and detailed metrics
- PDF - Printable reports for documentation and presentations
- Markdown - Documentation-friendly format for GitHub/wiki
- Console - Rich terminal output for interactive sessions
For detailed report documentation, see REPORTS.md.
## CI/CD Integration
The CLI is designed for automated performance monitoring in CI/CD pipelines.
### Exit Codes

- `0` - Success, no regressions detected
- `1` - Error (configuration, database connection, etc.)
- `2` - Performance regression detected (exceeds threshold)
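In a pipeline wrapper script, these exit codes map naturally onto distinct verdicts, so a regression can be reported differently from an infrastructure error. A sketch of that mapping (illustrative Python; not part of the CLI):

```python
def ci_verdict(exit_code):
    """Translate the CLI's documented exit codes into a CI action."""
    if exit_code == 0:
        return "pass"             # no regressions detected
    if exit_code == 2:
        return "fail-regression"  # regression exceeded threshold
    return "fail-error"           # configuration, database, or other error

print(ci_verdict(0), ci_verdict(2), ci_verdict(1))  # pass fail-regression fail-error
```

Distinguishing code 2 from code 1 lets a pipeline, for example, post a performance report on a regression but simply retry on a transient database error.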
GitHub Actions Example¶
- name: Run Performance Benchmarks
run: |
cd PerformanceBenchmarkCli
dotnet run -- run --suite restapi --format markdown
- name: Compare with Baseline
run: |
cd PerformanceBenchmarkCli
dotnet run -- compare --current ./Results/latest.json --threshold 10
Azure DevOps Example¶
- task: DotNetCoreCLI@2
displayName: 'Run Performance Benchmarks'
inputs:
command: 'run'
projects: 'PerformanceBenchmarkCli/*.csproj'
arguments: '-- run --suite restapi'
- task: DotNetCoreCLI@2
displayName: 'Compare with Baseline'
inputs:
command: 'run'
projects: 'PerformanceBenchmarkCli/*.csproj'
arguments: '-- compare --current ./Results/latest.json --threshold 10'
continueOnError: false
## When to Run Benchmarks
Run performance benchmarks in these scenarios:
- Before merging performance-critical changes
- After infrastructure updates (database, network, hardware)
- During capacity planning to understand system limits
- When investigating performance regressions
- As part of release validation to ensure quality
- Scheduled runs (nightly/weekly) for trend analysis
## Results Directory Structure

```text
./Results/
├── baseline.json                  # Current baseline
├── MACHINE-20250118-143022.json   # Timestamped results
├── MACHINE-20250118-143022.html   # HTML report
├── MACHINE-20250118-143022.pdf    # PDF report
├── MACHINE-20250118-143022.md     # Markdown report
└── archive/                       # Old results (>90 days)
```
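Given the MACHINE-YYYYMMDD-HHMMSS naming shown above, a retention sweep only needs to parse the timestamp out of each filename; unstamped files such as baseline.json are left alone. A sketch (illustrative Python; the CLI's actual retention policy is driven by `Benchmark.Output`):

```python
import re
from datetime import datetime, timedelta

# Matches the MACHINE-YYYYMMDD-HHMMSS.<ext> names shown above.
STAMP = re.compile(r"-(\d{8}-\d{6})\.(?:json|html|pdf|md)$")

def is_archivable(filename, now, max_age_days=90):
    """True if a timestamped result file is older than max_age_days."""
    match = STAMP.search(filename)
    if not match:
        return False  # baseline.json and other unstamped files stay put
    stamp = datetime.strptime(match.group(1), "%Y%m%d-%H%M%S")
    return now - stamp > timedelta(days=max_age_days)

now = datetime(2025, 6, 1)
print(is_archivable("MACHINE-20250118-143022.json", now))  # True
print(is_archivable("baseline.json", now))                 # False
```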
## Performance Thresholds
Default performance thresholds by benchmark type:
| Suite | Threshold | Status Classification |
|---|---|---|
| Connectivity | 100ms | Excellent: <50ms, Good: <100ms, Warning: <200ms, Critical: ≥200ms |
| Repository | 500ms | Excellent: <250ms, Good: <500ms, Warning: <1000ms, Critical: ≥1000ms |
| Service | 100ms | Excellent: <50ms, Good: <100ms, Warning: <200ms, Critical: ≥200ms |
| Scenario | 2000ms | Excellent: <1000ms, Good: <2000ms, Warning: <4000ms, Critical: ≥4000ms |
| REST API (simple) | 50ms | Excellent: <20ms, Good: <50ms, Warning: <100ms, Critical: ≥100ms |
| REST API (complex) | 200ms | Excellent: <100ms, Good: <200ms, Warning: <500ms, Critical: ≥500ms |
Thresholds can be customized in appsettings.json.
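The Standard-suite rows of the table all follow the same pattern: Excellent below half the threshold, Good below it, Warning below double it, Critical at or above double. That maps onto a small classification helper (illustrative Python; note the REST API rows use slightly different boundaries, and the real thresholds come from appsettings.json):

```python
def classify(mean_ms, threshold_ms):
    """Classify a result against its suite threshold:
    Excellent < t/2, Good < t, Warning < 2t, Critical >= 2t."""
    if mean_ms < threshold_ms / 2:
        return "Excellent"
    if mean_ms < threshold_ms:
        return "Good"
    if mean_ms < threshold_ms * 2:
        return "Warning"
    return "Critical"

print(classify(42, 100))     # Excellent (Connectivity: <50ms)
print(classify(900, 500))    # Warning   (Repository: <1000ms)
print(classify(4500, 2000))  # Critical  (Scenario:   >=4000ms)
```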
## Troubleshooting

### Common Issues

**"No baseline file found"**

- Run `dotnet run -- baseline set --file <results-file>` to create a baseline
**"Database connection failed"**

- Verify connection strings in appsettings.json
- Ensure the database server is accessible
- Check firewall rules and network connectivity
**"Out of memory during benchmarks"**

- Reduce iterations in configuration: `Benchmark.Execution.DefaultIterations`
- Disable memory diagnostics: `Benchmark.Execution.IncludeMemoryDiagnostics: false`

**"PDF generation failed"**

- Ensure the QuestPDF license has been accepted (first run)
- Check disk space for large reports
### Verbose Logging

Verbose logging for troubleshooting can be enabled by raising the log level in appsettings.json.
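Assuming the standard Microsoft.Extensions.Logging conventions for a .NET application (a hedged sketch; the category names this CLI actually uses may differ), the relevant section looks like:

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Microsoft": "Warning"
    }
  }
}
```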
## Related Documentation

- [Command Reference](COMMANDS.md) - Detailed command documentation
- [Configuration Guide](CONFIGURATION.md) - Complete configuration reference
- [Benchmark Suites](BENCHMARK_SUITES.md) - Available benchmark suites
- [Report Formats](REPORTS.md) - Report format documentation
## Support
For issues or questions:
1. Check the troubleshooting section above
2. Review the detailed documentation in this directory
3. Check the main project documentation in /docs
4. Contact the development team
## Version

- **Version:** 1.0.0
- **Last Updated:** 2025-01-18
- **Compatibility:** .NET 9.0, Essert.MF v1.0+