Performance Benchmark CLI - Command Reference
Overview
The Performance Benchmark CLI provides a comprehensive set of commands for running, comparing, and reporting on performance benchmarks. This document details all available commands with examples and use cases.
Command Syntax
dotnet run -- <command> [options]
Or, if built as a standalone executable:
PerfBench <command> [options]
Global Options
These options work with all commands:
| Option | Description |
|---|---|
| `--config <file>` | Path to configuration file (default: `appsettings.json`) |
| `--output <dir>` | Output directory for results (default: `./Results`) |
| `--verbose` | Enable verbose logging and detailed error messages |
| `--quiet` | Minimal console output |
| `--no-color` | Disable colored console output |
| `--help`, `-h` | Show help information |
| `--version`, `-v` | Show version information |
Examples:
# Use custom configuration file
dotnet run -- run --config appsettings.Production.json
# Save results to custom directory
dotnet run -- run --output /path/to/results
# Enable verbose logging
dotnet run -- run --verbose
# Minimal output for CI/CD
dotnet run -- run --quiet
Commands
run
Execute performance benchmarks.
Syntax:
dotnet run -- run [options]
Options:
| Option | Description |
|---|---|
| `--suite <name>` | Benchmark suite to run (`standard`, `restapi`) |
| `--iterations <n>` | Number of iterations per benchmark |
| `--warmup <n>` | Number of warmup iterations |
| `--baseline` | Save results as baseline for future comparisons |
| `--format <formats>` | Output formats (comma-separated: `json`, `html`, `pdf`, `markdown`, `console`) |
Examples:
# Run standard (infrastructure) benchmarks
dotnet run -- run
dotnet run -- run --suite standard
# Run REST API benchmarks
dotnet run -- run --suite restapi
# Custom iterations
dotnet run -- run --iterations 20 --warmup 5
# Save as baseline
dotnet run -- run --baseline
# Generate specific report formats
dotnet run -- run --format html,pdf,markdown
# Quick test run
dotnet run -- run --iterations 3 --warmup 1 --format console
Exit Codes:
- `0` - Success
- `1` - Error (configuration, database connection, etc.)
Output:
PerfBench
Performance Benchmark CLI for Essert.MF
Version 1.0.0
Running standard benchmarks...
[Progress bars and status indicators]
┌─────────────┬───────┬────────────────┐
│ Category │ Count │ Status │
├─────────────┼───────┼────────────────┤
│ Connectivity│ 7 │ 7E 0G 0W 0C │
│ Service │ 12 │ 10E 2G 0W 0C │
│ Repository │ 45 │ 40E 5G 0W 0C │
│ Scenario │ 8 │ 8E 0G 0W 0C │
└─────────────┴───────┴────────────────┘
Summary:
Total Benchmarks: 72
✓ Excellent: 65
✓ Good: 7
⚠ Warning: 0
✗ Critical: 0
✓ Results exported to: ./Results/MACHINE-20250118-143022.json
Use Cases:
- Daily development: `dotnet run -- run --iterations 5 --format console`
- Pre-commit check: `dotnet run -- run --suite restapi --baseline`
- Production baseline: `dotnet run -- run --iterations 50 --baseline --format html,pdf`
- CI/CD validation: `dotnet run -- run --quiet --format markdown`
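The exported file name shown in the `run` output encodes the machine name and a timestamp (`MACHINE-YYYYMMDD-HHMMSS.json`). A minimal POSIX shell sketch for splitting such names when scripting, assuming that naming pattern holds (the helpers themselves are hypothetical, not part of the CLI):

```shell
# Split a result file name of the form MACHINE-20250118-143022.json.
# Assumes the MACHINE-YYYYMMDD-HHMMSS.json pattern shown above; machine
# names may themselves contain hyphens (e.g. DEV-MACHINE-01).
result_machine() {
  base=$(basename "$1" .json)
  # Strip the trailing -YYYYMMDD-HHMMSS suffix, keeping the machine name
  echo "${base%-*-*}"
}

result_timestamp() {
  base=$(basename "$1" .json)
  # The last two hyphen-separated fields are the date and time
  echo "$base" | awk -F- '{print $(NF-1) "-" $NF}'
}
```

For example, `result_machine ./Results/DEV-MACHINE-01-20250118-143022.json` yields `DEV-MACHINE-01`.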
compare
Compare two benchmark results to detect performance regressions.
Syntax:
dotnet run -- compare --current <file> [options]
Options:
| Option | Description |
|---|---|
| `--baseline <file>` | Baseline result file (default: auto-detect from `Results/baseline.json`) |
| `--current <file>` | Current result file to compare (required) |
| `--threshold <percent>` | Regression threshold percentage (default: from config) |
| `--format <formats>` | Output formats for comparison report |
Examples:
# Compare with baseline (auto-detected)
dotnet run -- compare --current ./Results/latest.json
# Specify baseline explicitly
dotnet run -- compare --baseline ./Results/baseline.json --current ./Results/latest.json
# Custom regression threshold (5% instead of default 10%)
dotnet run -- compare --current ./Results/latest.json --threshold 5
# Generate comparison reports
dotnet run -- compare --current ./Results/latest.json --format html,markdown
Exit Codes:
- `0` - Success, no regressions
- `1` - Error (file not found, invalid data, etc.)
- `2` - Performance regression detected
Output:
Comparing benchmark results...
Comparison Summary:
┌───────────┬────────────────────┬────────────────────┐
│ │ Baseline │ Current │
├───────────┼────────────────────┼────────────────────┤
│ Machine │ DEV-MACHINE-01 │ DEV-MACHINE-01 │
│ Timestamp │ 2025-01-17 14:30 │ 2025-01-18 09:15 │
│ CPU │ Intel i7 (3.6GHz) │ Intel i7 (3.6GHz) │
│ Memory │ 16,384 MB │ 16,384 MB │
└───────────┴────────────────────┴────────────────────┘
┌─────────┬───────┬────────────┐
│ Status │ Count │ Percentage │
├─────────┼───────┼────────────┤
│ ✓ Faster│ 45 │ 62.5% │
│ ➡ Same │ 25 │ 34.7% │
│ ⚠ Slower│ 2 │ 2.8% │
└─────────┴───────┴────────────┘
Average Change: -3.2% ⬇
Key Findings:
• Database connection time improved by 15%
• Repository queries 5% faster on average
• WPC search endpoint 12% slower
Recommendations:
• Investigate WPC search performance regression
• Review recent changes to WPC query logic
Regression Detection:
If regressions exceed the threshold:
⚠ Performance Regressions Detected (2):
┌──────────────────────────┬──────────┬──────────┬────────┐
│ Benchmark │ Baseline │ Current │ Change │
├──────────────────────────┼──────────┼──────────┼────────┤
│ WPC Search Complex Query │ 245ms │ 312ms │ +27.3% │
│ Product Version Lookup │ 89ms │ 106ms │ +19.1% │
└──────────────────────────┴──────────┴──────────┴────────┘
Exit code: 2 (regression detected)
Use Cases:
- Pre-merge validation: `dotnet run -- compare --current ./Results/latest.json`
- CI/CD gating: `dotnet run -- compare --current ./Results/pr-123.json --threshold 10`
- Performance investigation: `dotnet run -- compare --baseline ./Results/good.json --current ./Results/slow.json --format html`
baseline
Manage baseline result files for regression detection.
Syntax:
dotnet run -- baseline <set|show|clear> [options]
Subcommands:
baseline set
Set a result file as the baseline.
Example:
dotnet run -- baseline set --file ./Results/latest.json
Output:
baseline show
Display information about the current baseline.
Output:
Current Baseline:
┌──────────────────┬────────────────────────────┐
│ Property │ Value │
├──────────────────┼────────────────────────────┤
│ File │ baseline.json │
│ Machine │ DEV-MACHINE-01 │
│ Timestamp │ 2025-01-18 14:30:22 │
│ Total Benchmarks │ 72 │
│ Excellent │ 65 │
│ Good │ 7 │
│ Warning │ 0 │
│ Critical │ 0 │
└──────────────────┴────────────────────────────┘
baseline clear
Remove the current baseline file.
Output:
Use Cases:
- Initial setup: `dotnet run -- run --baseline` (run and set baseline in one command)
- Update baseline: `dotnet run -- baseline set --file ./Results/latest.json`
- Check baseline: `dotnet run -- baseline show`
- Reset baseline: `dotnet run -- baseline clear`
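The update-baseline step can be gated on the `compare` exit codes documented earlier, so a regressed run never becomes the new baseline. A sketch (the `promote_baseline` helper and the `PERFBENCH` override variable are hypothetical; the commands are the ones shown above):

```shell
# Promote the latest run to baseline only when compare reported no regression.
# PERFBENCH is a hypothetical hook for substituting the CLI invocation when
# dry-running; it defaults to the `dotnet run --` form used in this document.
promote_baseline() {
  # $1: exit code from `dotnet run -- compare --current ./Results/latest.json`
  if [ "$1" -eq 0 ]; then
    ${PERFBENCH:-dotnet run --} baseline set --file ./Results/latest.json
    echo "baseline updated"
  else
    echo "baseline unchanged (compare exit code $1)" >&2
    return 1
  fi
}

# Typical use after a run:
#   dotnet run -- compare --current ./Results/latest.json
#   promote_baseline $?
```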
list
List all available benchmark result files.
Syntax:
dotnet run -- list [options]
Options:
| Option | Description |
|---|---|
| `--sort <field>` | Sort by: `timestamp`, `machine`, `status` (future feature) |
| `--filter <pattern>` | Filter by machine name or date pattern (future feature) |
Examples:
# List all results
dotnet run -- list
# Future: Filter by machine
dotnet run -- list --filter "DEV-MACHINE-01"
# Future: Sort by timestamp
dotnet run -- list --sort timestamp
Output:
Available Benchmark Results:
Found 12 result file(s):
┌────────────────┬─────────────────────┬────────────┬────────────┬──────────────────────────────┐
│ Machine │ Timestamp │ Benchmarks │ Status │ File │
├────────────────┼─────────────────────┼────────────┼────────────┼──────────────────────────────┤
│ DEV-MACHINE-01 │ 2025-01-18 14:30:22 │ 72 │ 65E 7G │ DEV-MACHINE-01-20250118... │
│ DEV-MACHINE-01 │ 2025-01-17 09:15:33 │ 72 │ 60E 10G 2W │ DEV-MACHINE-01-20250117... │
│ CI-RUNNER-03 │ 2025-01-17 03:00:15 │ 72 │ 70E 2G │ CI-RUNNER-03-20250117... │
└────────────────┴─────────────────────┴────────────┴────────────┴──────────────────────────────┘
Status Legend:
- E = Excellent
- G = Good
- W = Warning
- C = Critical
Use Cases:
- Find recent results: `dotnet run -- list`
- Select baseline: Review the list, then `dotnet run -- baseline set --file <selected-file>`
- Compare runs: Find two result files for comparison
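Until the `--sort` flag lands, scripts can pick the newest result file by modification time. A sketch (the `latest_result` helper is hypothetical, not part of the CLI):

```shell
# Return the most recently modified result file in a directory.
# Relies only on `ls -t` (newest first). Fragile if file names contain
# whitespace, which the MACHINE-YYYYMMDD-HHMMSS.json names avoid.
latest_result() {
  ls -1t "${1:-./Results}"/*.json 2>/dev/null | head -n 1
}

# Typical use:
#   dotnet run -- compare --current "$(latest_result)"
```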
report
Generate reports from existing benchmark result files.
Syntax:
dotnet run -- report --input <file> [options]
Options:
| Option | Description |
|---|---|
| `--input <file>` | Input result file (required) |
| `--format <formats>` | Output formats (comma-separated: `html`, `pdf`, `markdown`, `console`) |
| `--output <dir>` | Output directory for generated reports |
Examples:
# Generate HTML and PDF reports
dotnet run -- report --input ./Results/latest.json --format html,pdf
# Generate markdown for documentation
dotnet run -- report --input ./Results/baseline.json --format markdown
# Display to console
dotnet run -- report --input ./Results/latest.json --format console
# Custom output directory
dotnet run -- report --input ./Results/latest.json --format html --output /path/to/reports
Output:
Generating reports...
✓ Generated HTML report: MACHINE-20250118-143022.html
✓ Generated PDF report: MACHINE-20250118-143022.pdf
✓ Generated Markdown report: MACHINE-20250118-143022.md
✓ Successfully generated 3 report(s)
Use Cases:
- Post-run reporting: After the `run` command, generate additional report formats
- Documentation: `dotnet run -- report --input baseline.json --format markdown` for wiki/docs
- Presentations: `dotnet run -- report --input latest.json --format pdf` for meetings
- Re-generate reports: Generate reports from old results without re-running benchmarks
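Re-generating reports for every stored result can be scripted as a loop over the Results directory. A sketch (the `regen_html_reports` helper and the `PERFBENCH` override variable are hypothetical; `PERFBENCH` defaults to the invocation used above and exists so the loop can be dry-run):

```shell
# Regenerate HTML reports for every stored result file.
PERFBENCH="${PERFBENCH:-dotnet run --}"

regen_html_reports() {
  for result in "${1:-./Results}"/*.json; do
    [ -e "$result" ] || continue  # glob matched no files; skip
    $PERFBENCH report --input "$result" --format html
  done
}
```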
Common Workflows
Daily Development Workflow
# Quick test run
dotnet run -- run --iterations 5 --format console
# If results look good, update baseline
dotnet run -- baseline set --file ./Results/latest.json
Pre-Commit Workflow
# Run benchmarks
dotnet run -- run --suite restapi
# Compare with baseline
dotnet run -- compare --current ./Results/latest.json
# If no regressions (exit code 0), commit changes
CI/CD Workflow
# Run benchmarks (quiet mode)
dotnet run -- run --quiet --format json,markdown
# Compare with baseline (fail build if regression)
dotnet run -- compare --current ./Results/latest.json --threshold 10
# Exit code 2 will fail the build
# Generate reports for PR comment
dotnet run -- report --input ./Results/latest.json --format markdown
Production Baseline Workflow
# Run comprehensive benchmarks
dotnet run -- run --iterations 50 --warmup 10
# Generate all reports
dotnet run -- report --input ./Results/latest.json --format html,pdf,markdown
# Set as production baseline
dotnet run -- baseline set --file ./Results/latest.json
Investigation Workflow
# List available results
dotnet run -- list
# Compare two specific runs
dotnet run -- compare \
--baseline ./Results/good-run.json \
--current ./Results/slow-run.json \
--format html
# Open HTML report to investigate differences
Exit Codes
All commands return exit codes for automation:
| Exit Code | Meaning | Commands |
|---|---|---|
| `0` | Success | All commands |
| `1` | Error (config, database, file not found) | All commands |
| `2` | Performance regression detected | `compare` |
CI/CD Usage:
# Bash script example
dotnet run -- compare --current ./Results/latest.json
EXIT_CODE=$?
if [ $EXIT_CODE -eq 2 ]; then
echo "Performance regression detected!"
exit 1
elif [ $EXIT_CODE -ne 0 ]; then
echo "Error running comparison"
exit 1
fi
echo "Performance check passed"
Tips and Best Practices
Iteration Count Selection
- Quick test: 3-5 iterations (~1-2 minutes)
- Development: 10 iterations (~5 minutes)
- CI/CD: 5-10 iterations (~3-5 minutes)
- Baseline: 20-50 iterations (~15-30 minutes)
Report Format Selection
- Development: `console` (fast feedback)
- CI/CD: `json`, `markdown` (automation, PR comments)
- Documentation: `markdown`, `html` (wiki, docs)
- Presentations: `pdf`, `html` (meetings, stakeholders)
Baseline Management
- Set baseline after significant improvements
- Update baseline when infrastructure changes (hardware, database)
- Keep old baselines for historical comparisons
- Use descriptive names when copying baseline files
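The last two tips above can be combined: before replacing `baseline.json`, copy it to a dated name for historical comparisons. A sketch (the `archive_baseline` helper is hypothetical; the file location follows this document's defaults):

```shell
# Copy the current baseline to a dated file name before it is replaced,
# e.g. baseline.json -> baseline-20250118.json.
archive_baseline() {
  src="${1:-./Results/baseline.json}"
  cp "$src" "${src%.json}-$(date +%Y%m%d).json"
}
```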
Performance Investigation
- List results: `dotnet run -- list`
- Compare runs: `dotnet run -- compare --baseline good.json --current slow.json`
- Generate HTML report: `dotnet run -- report --input comparison.json --format html`
- Review detailed metrics in the HTML report
Troubleshooting
"No baseline file found"
# Solution: Create a baseline first
dotnet run -- run --baseline
# Or promote an existing result file
dotnet run -- baseline set --file ./Results/latest.json
"Database connection failed"
# Solution: Check the connection settings in appsettings.json
# (or the file passed via --config), then re-run with detailed errors
dotnet run -- run --verbose
"Result file not found"
# Solution: List available files
dotnet run -- list
# Then use correct file path
dotnet run -- compare --current ./Results/CORRECT-FILE.json
"Permission denied" writing reports
# Solution: Specify writable output directory
dotnet run -- report --input latest.json --output /tmp/reports
Related Documentation
- Main README - Overview and quick start
- Configuration Guide - Detailed configuration options
- Benchmark Suites - Available benchmark suites
- Report Formats - Report format documentation