This folder contains micro benchmarks that test the performance of the PowerShell engine.
- A good suite of benchmarks: something that measures only the thing we are interested in and produces accurate, stable, and repeatable results.
- A set of machines with the same configurations.
- Automation for regression detection.
- This project has `InternalsVisibleTo` access to `System.Management.Automation`, so benchmarks can target internal APIs to measure narrowly scoped scenarios, such as the time the compiler takes to compile an AST into a delegate.
- This project uses a `ProjectReference` to the other PowerShell assemblies, which makes it easy to run benchmarks against changes made in the codebase. To run benchmarks with a specific version of PowerShell, replace the `ProjectReference` with a `PackageReference` to the `Microsoft.PowerShell.SDK` NuGet package of the corresponding version.
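For illustration, the swap might look like the following in the benchmark project's `.csproj` file. This is a sketch: the relative path in the `ProjectReference` is hypothetical, so use the path already present in the project file, and pick the `Version` of the release you want to measure.

```xml
<!-- Default: benchmark the locally built PowerShell assemblies. -->
<ItemGroup>
  <!-- Hypothetical relative path; keep the one already in the project file. -->
  <ProjectReference Include="..\..\..\src\System.Management.Automation\System.Management.Automation.csproj" />
</ItemGroup>

<!-- Alternative: benchmark a released version instead, e.g. 7.1.3. -->
<ItemGroup>
  <PackageReference Include="Microsoft.PowerShell.SDK" Version="7.1.3" />
</ItemGroup>
```

Note that the two references are mutually exclusive: remove the `ProjectReference` when adding the `PackageReference`, otherwise the same assemblies would be supplied twice.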
You can run the benchmarks directly using `dotnet run` in this directory:

- To run the benchmarks in interactive mode, where you will be asked which benchmark(s) to run:

  ```sh
  dotnet run -c Release -f net6.0
  ```

- To list all available benchmarks:

  ```sh
  dotnet run -c Release -f net6.0 --list [flat/tree]
  ```

- To filter the benchmarks using a glob pattern applied to `namespace.typeName.methodName`:

  ```sh
  dotnet run -c Release -f net6.0 --filter *script* --list flat
  ```

- To profile the benchmarked code and produce an ETW trace file:

  ```sh
  dotnet run -c Release -f net6.0 --filter *script* --profiler ETW
  ```
You can also use the function `Start-Benchmarking` from the module `perf.psm1` to run the benchmarks:

```
Start-Benchmarking [-TargetFramework <string>] [-List <string>] [-Filter <string[]>] [-Artifacts <string>] [-KeepFiles] [<CommonParameters>]

Start-Benchmarking [-TargetPSVersion <string>] [-Filter <string[]>] [-Artifacts <string>] [-KeepFiles] [<CommonParameters>]

Start-Benchmarking -Runtime <string[]> [-Filter <string[]>] [-Artifacts <string>] [-KeepFiles] [<CommonParameters>]
```

Run `Get-Help Start-Benchmarking -Full` to see the description of each parameter.
We use the tool `ResultsComparer` to compare benchmark results.
See the README.md for `ResultsComparer` for more details.
The module `perf.psm1` also provides `Compare-BenchmarkResult`, which wraps `ResultsComparer`.
Here is an example of using it:
```powershell
## Run benchmarks targeting the current code base
PS:1> Start-Benchmarking -Filter *script* -Artifacts C:\arena\tmp\BenchmarkDotNet.Artifacts\current\

## Run benchmarks targeting the 7.1.3 version of the PS package
PS:2> Start-Benchmarking -Filter *script* -Artifacts C:\arena\tmp\BenchmarkDotNet.Artifacts\7.1.3 -TargetPSVersion 7.1.3

## Compare the results using a 1% threshold
PS:3> Compare-BenchmarkResult -BaseResultPath C:\arena\tmp\BenchmarkDotNet.Artifacts\7.1.3\ -DiffResultPath C:\arena\tmp\BenchmarkDotNet.Artifacts\current\ -Threshold 1%

summary:
better: 4, geomean: 1.057
total diff: 4

No Slower results for the provided threshold = 1% and noise filter = 0.3ns.

| Faster                                                                           | base/diff | Base Median (ns) | Diff Median (ns) | Modality |
| -------------------------------------------------------------------------------- | ---------:| ----------------:| ----------------:| -------- |
| Engine.Scripting.InvokeMethod(Script: "$fs=New-Object -ComObject scripting.files | 1.07      | 50635.77         | 47116.42         |          |
| Engine.Scripting.InvokeMethod(Script: "$sh=New-Object -ComObject Shell.Applicati | 1.07      | 1063085.23       | 991602.08        |          |
| Engine.Scripting.InvokeMethod(Script: "'String'.GetType()")                      | 1.06      | 1329.93          | 1252.51          |          |
| Engine.Scripting.InvokeMethod(Script: "[System.IO.Path]::HasExtension('')")      | 1.02      | 1322.04          | 1297.72          |          |
```