Module Core_bench.Bench
Core_bench is a micro-benchmarking library for OCaml that can measure the execution costs of operations taking from 1ns to about 100ms. It measures the execution costs of such short-lived computations precisely, while accounting for delayed GC costs and noise introduced by other activity on the system.
The easiest way to get started is using an example:
open! Core
open Core_bench
let () =
  Random.self_init ();
  let x = Random.float 10.0 in
  let y = Random.float 10.0 in
  Command.run (Bench.make_command [
    Bench.Test.create ~name:"Float add" (fun () ->
      ignore (x +. y));
    Bench.Test.create ~name:"Float mul" (fun () ->
      ignore (x *. y));
    Bench.Test.create ~name:"Float div" (fun () ->
      ignore (x /. y));
  ])

When compiled this gives you an executable:
$ ./z.exe -ascii
Estimated testing time 30s (3 benchmarks x 10s). Change using -quota SECS.
  Name         Time/Run   mWd/Run   Percentage
 ----------- ---------- --------- ------------
  Float add      2.50ns     2.00w       41.70%
  Float mul      2.55ns     2.00w       42.52%
  Float div      5.99ns     2.00w      100.00%

If any of the functions resulted in allocation on the major heap (mjWd) or promotions (Prom), the corresponding columns would be displayed automatically. Columns that do not have significant values are not displayed by default. The most common options one would want to change are the `-q` flag, which controls the time quota for testing, and the flags that enable or disable specific columns. For example:
$ ./z.exe -ascii -q 1 cycles
Estimated testing time 3s (3 benchmarks x 1s). Change using -quota SECS.
  Name         Time/Run   Cycls/Run   mWd/Run   Percentage
 ----------- ---------- ----------- --------- ------------
  Float add      2.50ns       8.49c     2.00w       41.78%
  Float mul      2.77ns       9.40c     2.00w       46.29%
  Float div      5.99ns      20.31c     2.00w      100.00%

If you drop the `-ascii` flag, the output table uses extended ASCII characters. These display well on most modern terminals, but not in ocamldoc.
The simplest benchmark specification is just a unit -> unit thunk and a name:
Bench.Test.create ~name:"Float add" (fun () -> ignore (x +. y))

One can also create indexed benchmarks, which can be helpful in understanding non-linearities in the execution profiles of functions. For example:
open! Core
open Core_bench
let () =
  Command.run (Bench.make_command [
    Bench.Test.create_indexed
      ~name:"Array.create"
      ~args:[1; 10; 100; 200; 300; 400]
      (fun len ->
        Staged.stage (fun () -> ignore (Array.create ~len 0)));
  ])

This produces:
$ ./z.exe -ascii -q 3
Estimated testing time 18s (6 benchmarks x 3s). Change using -quota SECS.
  Name                  Time/Run   mWd/Run   mjWd/Run   Percentage
 ------------------ ------------ --------- ---------- ------------
  Array.create:1         27.23ns     2.00w                   1.08%
  Array.create:10        38.79ns    11.00w                   1.53%
  Array.create:100      124.05ns   101.00w                   4.91%
  Array.create:200      188.13ns   201.00w                   7.44%
  Array.create:300    1_887.20ns   301.00w                  74.64%
  Array.create:400    2_528.43ns   401.00w                 100.00%

Executables produced using Bench.make_command are self-documenting (use the `-?` flag). The documentation in the executable also closely corresponds to the functionality exposed through the .mli and is a great way to interactively explore what the various options do.
- see the Core_bench wiki: https://github.com/janestreet/core_bench/wiki
module Test : sig ... end
Test.ts are benchmarked by calls to bench.
module Variable : sig ... end
Variable.ts represent variables that can be used as predictors or the responder when specifying a regression.
module Quota : sig ... end
A quota can be specified as an amount of wall time, or a number of times to run the function.
module Run_config : sig ... end
Run_config.t specifies how a benchmark should be run.
module Display_config : sig ... end
Display_config.t specifies how the output tables should be formatted.
module Analysis_config : sig ... end
Each Analysis_config.t specifies a regression run by Core_bench. This module also provides several typical regressions that one might want to run.
module Analysis_result : Core_bench__.Analysis_result_intf.Analysis_result
Results of a benchmark analysis, including all the regressions.
module Measurement : sig ... end
A Measurement.t represents the result of measuring execution of a Test.t. It is used as input for subsequent analysis.
val make_command : Test.t list -> Core.Command.t
make_command tests is the easiest way to generate a command-line program that runs a list of benchmarks. Here tests : Test.t list are the benchmarks that should be run. This returns a Command.t which provides a command-line interface for running the benchmarks. See the notes above for an example.
val bench : ?run_config:Run_config.t -> ?analysis_configs:Analysis_config.t list -> ?display_config:Display_config.t -> ?save_to_file:(Measurement.t -> string) -> ?libname:string -> Test.t list -> unit
bench tests will run, analyze and display the specified tests. Use this when one needs more control over the execution parameters than what is exposed through make_command. bench can also save the measurements of each test to the filename returned by save_to_file.
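When the defaults are acceptable, bench can be called directly, bypassing the command-line interface. Below is a minimal sketch that passes only the required test list; the optional ~run_config, ~analysis_configs and ~display_config arguments can be supplied to override the defaults:

open! Core
open Core_bench

let () =
  Random.self_init ();
  let x = Random.float 10.0 in
  let y = Random.float 10.0 in
  (* Run, analyze and display with default settings. *)
  Bench.bench [
    Bench.Test.create ~name:"Float add" (fun () -> ignore (x +. y));
  ]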
val measure : ?run_config:Run_config.t -> Test.t list -> Measurement.t list
measure is a fragment of the functionality of bench. measure tests will run the specified tests and return the resulting measurements.
val analyze : ?analysis_configs:Analysis_config.t list -> Measurement.t -> Analysis_result.t Core.Or_error.t
analyze is a fragment of the functionality of bench. analyze ~analysis_configs m will analyze the measurement m using the specified regressions.
val display : ?libname:string -> ?display_config:Display_config.t -> Analysis_result.t list -> unit
display is a fragment of the functionality of bench. display results will display a tabular summary of results on the terminal.
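As a rough sketch of how these three fragments compose (this mirrors what bench does when all optional arguments are left at their defaults):

open! Core
open Core_bench

let () =
  Random.self_init ();
  let x = Random.float 10.0 in
  let y = Random.float 10.0 in
  let tests =
    [ Bench.Test.create ~name:"Float add" (fun () -> ignore (x +. y))
    ; Bench.Test.create ~name:"Float mul" (fun () -> ignore (x *. y))
    ]
  in
  (* Measure each test, run the default regressions on each measurement,
     and display a table of the results. *)
  let measurements = Bench.measure tests in
  let results =
    List.map measurements ~f:(fun m -> Or_error.ok_exn (Bench.analyze m))
  in
  Bench.display results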
val make_command_ext : summary:string -> ((Analysis_config.t list * Display_config.t * [ `From_file of string list | `Run of (Measurement.t -> string) option * Run_config.t ]) -> unit) Core.Command.Param.t -> Core.Command.t
make_command_ext is useful for creating Command.ts that have command-line flags in addition to those provided by make_command.
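A hedged sketch of how make_command_ext might be used. The -label flag and its behavior are purely illustrative, and the `From_file case (re-analyzing saved measurement files) is not handled here; Bench parses its usual flags into the tuple that the callback receives:

open! Core
open Core_bench

let command =
  Bench.make_command_ext
    ~summary:"benchmarks with an extra -label flag"
    (Command.Param.(
       map
         (flag "-label"
            (optional_with_default "bench" string)
            ~doc:"LABEL label to print before running")
         ~f:(fun label (analysis_configs, display_config, mode) ->
           (* Bench's own command-line flags are parsed into this tuple. *)
           printf "Running: %s\n%!" label;
           match mode with
           | `Run (save_to_file, run_config) ->
             Bench.bench
               ~run_config ~analysis_configs ~display_config ?save_to_file
               [ Bench.Test.create ~name:"Float add" (fun () ->
                   ignore (1.0 +. 2.0)) ]
           | `From_file _files ->
             (* Re-analyzing saved measurement files is not covered in this sketch. *)
             eprintf "loading saved measurements is not handled here\n%!")))

let () = Command.run command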