A calibrator contains a snapshot of machine-specific information that is used to
convert between TSC values and clock time. This information needs to be calibrated
periodically such that it stays updated w.r.t. changes in the CPU's time-stamp-counter
frequency, which can vary depending on load, heat, etc. (Also see the comment in the .ml file.)
Calibration at intervals of 0.1, 1, or 2 seconds produces errors (measured as the
difference between Time.now and the time reported here) on the order of 1-2us.
Given the precision of 52-bit float mantissa values, this is very close to the least
error one can have on these values. Calibration once per 10 seconds produces errors of
+/-4us, calibration once per minute produces errors of +/-15us, and calibration once
every 3 minutes produces errors of +/-30us. (It is worth remarking that the error has a
positive bias of 1us -- i.e. the error dances around the 1us mark, rather than around
0. It is unclear where this bias is introduced, though it probably does not matter for
most applications.)
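The error figures above can be reproduced by the kind of measurement described: take
the difference between Time.now and the TSC-derived time. The following is a minimal
sketch, assuming this module is Core's Time_stamp_counter (the now and to_time
functions); exact signatures may differ between library versions:

```ocaml
(* Sketch: measure calibration error as the difference between
   Time.now and the time derived from the TSC. Assumes Core's
   Time_stamp_counter API; signatures may vary by version. *)
open Core

let measure_error () =
  let tsc = Time_stamp_counter.now () in
  let wall = Time.now () in
  (* Convert the TSC reading using the module-internal calibrator. *)
  let reported = Time_stamp_counter.to_time tsc in
  Time.diff wall reported

let () =
  printf "error: %s\n" (Time.Span.to_string (measure_error ()))
```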
This module maintains an instance of t internal to the module. The internal instance
of t can be updated via calls to calibrate (), i.e. without specifying the t
parameter. In all the functions below that take an optional Calibrator.t argument,
the internal instance is used when no calibrator is explicitly specified.
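As a sketch of using the internal instance (again assuming Core's Time_stamp_counter;
signatures may vary by version):

```ocaml
open Core

let () =
  (* Refresh the module-internal calibrator; no t is passed. *)
  Time_stamp_counter.Calibrator.calibrate ();
  (* Conversions that omit the optional calibrator argument use the
     freshly calibrated internal instance. *)
  let tsc = Time_stamp_counter.now () in
  ignore (Time_stamp_counter.to_time tsc : Time.t)
```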
create () creates an uninitialized calibrator instance. Creating a calibrator takes
about 3ms. One needs a recently calibrated Calibrator.t and the TSC value from the
same machine to meaningfully convert the TSC value to a Time.t.
calibrate ~t updates t by measuring the current value of the TSC and Time.now.
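Putting create and calibrate together with an explicit calibrator, the usage might
look like the following sketch. It assumes Core's Time_stamp_counter; the
~calibrator parameter name on to_time is an assumption based on the
optional-Calibrator.t convention described above:

```ocaml
open Core

let () =
  (* Create an uninitialized calibrator (~3ms), then calibrate it. *)
  let t = Time_stamp_counter.Calibrator.create () in
  Time_stamp_counter.Calibrator.calibrate ~t ();
  (* A recently calibrated t plus a TSC value from the same machine
     is what makes the conversion to Time.t meaningful. *)
  let tsc = Time_stamp_counter.now () in
  ignore (Time_stamp_counter.to_time ~calibrator:t tsc : Time.t)
```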