module Scheduler : Scheduler

type t = Raw_scheduler.t

val t : unit -> t
    t () returns the async scheduler.  If the scheduler hasn't been created yet, this
    will create it and acquire the async lock.

val go : ?raise_unhandled_exn:bool -> unit -> Core.Std.never_returns
    go ?raise_unhandled_exn () passes control to Async, at which point Async starts
    running handlers, one by one without interruption, until there are no more handlers to
    run.  When Async is out of handlers it blocks until the outside world schedules more
    of them.  Because of this, Async programs do not exit until shutdown is called.
    go () calls handle_signal Sys.sigpipe, which causes the SIGPIPE signal to be
    ignored.  Low-level syscalls (e.g. write) still raise EPIPE.
    If any async job raises an unhandled exception that is not handled by any monitor,
    async execution ceases.  Then, by default, async pretty prints the exception, and
    exits with status 1.  If you don't want this, pass ~raise_unhandled_exn:true, which
    will cause the unhandled exception to be raised to the caller of go ().
    raise_unhandled_exn defaults to false.
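
    For example, a minimal sketch of a program that sets up some async work and then
    hands control to the scheduler:

      open Core.Std
      open Async.Std

      let () =
        (* Schedule some work first... *)
        upon (after (sec 1.)) (fun () -> printf "tick\n"; shutdown 0);
        (* ...then pass control to Async; [go] never returns. *)
        never_returns (Scheduler.go ())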

val go_main : ?raise_unhandled_exn:bool ->
       ?file_descr_watcher:Import.Config.File_descr_watcher.t ->
       ?max_num_open_file_descrs:int ->
       ?max_num_threads:int -> main:(unit -> unit) -> unit -> Core.Std.never_returns
    go_main is like go, except that one supplies a main function that will be run to
    initialize the async computation, and go_main will fail if any async has been used
    prior to go_main being called.  Moreover, it allows configuring more of the
    scheduler's static options: raise_unhandled_exn defaults to false, and the other
    optional arguments default to the values in Config.
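
    For example, a sketch in which all async setup happens inside main, as go_main
    requires; the ~max_num_threads value here is arbitrary:

      open Core.Std
      open Async.Std

      let () =
        never_returns
          (Scheduler.go_main ~max_num_threads:16
             ~main:(fun () ->
               (* Nothing async may have run before this point. *)
               upon (after (sec 1.)) (fun () -> shutdown 0))
             ())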

type 'a with_options = ?monitor:Import.Monitor.t -> ?priority:Import.Priority.t -> 'a

val within_context : Import.Execution_context.t -> (unit -> 'a) -> ('a, unit) Core.Std.Result.t
    within_context context f runs f () right now with the specified execution
    context.  If f raises, then the exception is sent to the monitor of context, and
    Error () is returned.

val within' : ((unit -> 'a Import.Deferred.t) -> 'a Import.Deferred.t) with_options
    within' f ~monitor ~priority runs f () right now, with the monitor and priority
    set as specified.  They will be reset to their original values when f returns.
    If f raises, then the result of within' will never become determined, but the
    exception will end up in the specified monitor.

val within : ((unit -> unit) -> unit) with_options
    within is like within', but doesn't require the thunk to return a deferred.

val within_v : ((unit -> 'a) -> 'a option) with_options
    within_v is like within, but allows a value to be returned by f.
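
    For example, a sketch of routing a thunk's exceptions to a fresh monitor, and of
    capturing a thunk's value with within_v:

      open Core.Std
      open Async.Std

      let () =
        let monitor = Monitor.create () in
        Scheduler.within ~monitor (fun () ->
          (* Exceptions raised here go to [monitor], not the current monitor. *)
          printf "running under the new monitor\n");
        match Scheduler.within_v (fun () -> 1 + 1) with
        | Some n -> printf "within_v computed %d\n" n
        | None -> () (* the thunk raised *)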

val with_local : 'a Core.Std.Univ_map.Key.t -> 'a option -> f:(unit -> 'b) -> 'b
    with_local key value ~f runs f right now with the binding key = value.  All
    calls to find_local key in f and computations started from f will return
    value.

val find_local : 'a Core.Std.Univ_map.Key.t -> 'a option
    find_local key returns the value associated with key in the current execution
    context.
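
    For example, a sketch of threading a per-request id through jobs via the execution
    context.  The Univ_map.Key.create call is an assumption about your Core version's
    signature (a name followed by a sexp converter); adjust as needed:

      open Core.Std
      open Async.Std

      (* Assumed Univ_map.Key.create signature: a name, then a sexp converter. *)
      let request_id_key : int Univ_map.Key.t =
        Univ_map.Key.create "request-id" Int.sexp_of_t

      let handle_request id =
        Scheduler.with_local request_id_key (Some id) ~f:(fun () ->
          (* Jobs started from here see the binding. *)
          upon (after (sec 0.)) (fun () ->
            match Scheduler.find_local request_id_key with
            | Some id -> printf "handling request %d\n" id
            | None -> ()))

      let () = handle_request 42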

val schedule' : ((unit -> 'a Import.Deferred.t) -> 'a Import.Deferred.t) with_options
    Just like within', but instead of running the thunk right now, schedule' adds it
    to the async queue to be run with other async jobs.

val schedule : ((unit -> unit) -> unit) with_options
    Just like schedule', but doesn't require the thunk to return a deferred.
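
    For example, a sketch of enqueueing work rather than running it immediately:

      open Core.Std
      open Async.Std

      let () =
        let result = Scheduler.schedule' (fun () -> return (2 + 2)) in
        upon result (fun n -> printf "scheduled job returned %d\n" n);
        Scheduler.schedule (fun () -> printf "plain scheduled job ran\n")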

val preserve_execution_context : ('a -> unit) -> ('a -> unit) Core.Std.Staged.t
    preserve_execution_context f saves the current execution context and returns a
    function g such that g a runs f a in the saved execution context.  g a becomes
    determined when f a becomes determined.

val preserve_execution_context' : ('a -> 'b Import.Deferred.t) ->
       ('a -> 'b Import.Deferred.t) Core.Std.Staged.t
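
    For example, a sketch of wrapping a callback so that, when other code invokes it
    later, it still runs in the execution context that was current here:

      open Core.Std
      open Async.Std

      let () =
        let log =
          Staged.unstage
            (Scheduler.preserve_execution_context (fun msg -> printf "log: %s\n" msg))
        in
        (* Wherever [log] is eventually called, it runs in the saved context. *)
        log "this message is logged in the saved execution context"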

val cycle_start : unit -> Core.Std.Time.t
    cycle_start () returns the result of Time.now () called at the beginning of the
    current cycle.

val cycle_times : unit -> Core.Std.Time.Span.t Import.Stream.t
    cycle_times () returns a stream that will have one element for each cycle that
    Async runs, with the amount of time that the cycle took (as determined by calls to
    Time.now at the beginning and end of the cycle).
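
    For example, a sketch that logs any cycle longer than 100ms, roughly what
    report_long_cycle_times (below) does for you:

      open Core.Std
      open Async.Std

      let () =
        Stream.iter (Scheduler.cycle_times ()) ~f:(fun span ->
          if Time.Span.(>) span (sec 0.1)
          then eprintf "slow async cycle: %s\n" (Time.Span.to_string span))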

val report_long_cycle_times : ?cutoff:Core.Std.Time.Span.t -> unit -> unit
    report_long_cycle_times ?cutoff () installs a handler that prints a warning to
    stderr whenever an async cycle takes longer than cutoff, whose default is 1s.

val cycle_count : unit -> int
    cycle_count () returns the total number of async cycles that have happened.

val force_current_cycle_to_end : unit -> unit
    force_current_cycle_to_end () causes no more normal-priority jobs to run in the
    current cycle, and for the end-of-cycle jobs (i.e. writes) to run, and then for the
    cycle to end.

val is_running : unit -> bool
    is_running () returns true if the scheduler has been started.

val set_max_num_jobs_per_priority_per_cycle : int -> unit
    set_max_num_jobs_per_priority_per_cycle int sets the maximum number of jobs that
    will be done at each priority within each async cycle.  The default is 500.

val set_max_inter_cycle_timeout : Core.Std.Time.Span.t -> unit
    set_max_inter_cycle_timeout span sets the maximum amount of time the scheduler will
    remain blocked (on epoll or select) between cycles.

val set_check_invariants : bool -> unit
    set_check_invariants do_check sets whether async should check invariants of its
    internal data structures.  set_check_invariants true can substantially slow down
    your program.

val set_detect_invalid_access_from_thread : bool -> unit
    set_detect_invalid_access_from_thread do_check sets whether async routines should
    check whether they are being accessed from some thread other than the one currently
    holding the async lock; such access is not allowed and can lead to very confusing
    behavior.

val set_record_backtraces : bool -> unit
    set_record_backtraces do_record sets whether async should keep in the execution
    context the history of stack backtraces (obtained via Backtrace.get) that led to
    the current job.  If an async job has an unhandled exception, this backtrace
    history will be recorded in the exception.  In particular the history will appear
    in an unhandled exception that reaches the main monitor.  This can have a
    substantial performance impact, both in running time and space usage.
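
    For example, a sketch of turning on these debugging aids before starting the
    scheduler; since they slow the program down, enable them only when hunting a bug:

      open Core.Std
      open Async.Std

      let () =
        Scheduler.set_check_invariants true;
        Scheduler.set_detect_invalid_access_from_thread true;
        Scheduler.set_record_backtraces true;
        Scheduler.report_long_cycle_times ~cutoff:(sec 0.05) ();
        never_returns (Scheduler.go ())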

type 'b folder = { folder : ... }

val fold_fields : init:'b -> 'b folder -> 'b
    fold_fields ~init folder folds folder over each field in the scheduler.  The
    fields themselves are not exposed -- folder must be a polymorphic function that
    can work on any field.  So, it's only useful for generic operations, e.g. getting
    the size of each field.

val is_ready_to_initialize : unit -> bool

val reset_in_forked_process : unit -> unit
    If a process that has already created, but not started, the async scheduler would
    like to fork, and would like the child to have a clean async, i.e. not inherit any
    of the async work that was done in the parent, it can call reset_in_forked_process
    at the start of execution in the child process.  After that, the child can do
    async stuff and then start the async scheduler.
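
    For example, a sketch of forking and giving the child a clean async.  It assumes
    Core.Std.Unix.fork returns the usual `In_the_child / `In_the_parent variant:

      open Core.Std
      open Async.Std

      let () =
        (* Assumption: Core.Std.Unix.fork returns [ `In_the_child | `In_the_parent of Pid.t ]. *)
        match Core.Std.Unix.fork () with
        | `In_the_parent _child_pid ->
          (* The parent carries on; it may start the scheduler later. *)
          ()
        | `In_the_child ->
          Scheduler.reset_in_forked_process ();
          (* The child now has a clean async and can start the scheduler. *)
          upon (after (sec 1.)) (fun () -> shutdown 0);
          never_returns (Scheduler.go ())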

val add_busy_poller : (unit -> [ `Continue_polling | `Stop_polling of 'a ]) -> 'a Import.Deferred.t
    Async supports "busy polling", which runs a thread that busy loops running
    user-supplied polling functions.  The busy-loop thread is distinct from async's
    scheduler thread.
    Busy polling is useful for a situation like a shared-memory ringbuffer being used
    for IPC: one can poll the ringbuffer with a busy poller, and then, when data is
    detected, fill some ivar that causes async code to handle the data.
    add_busy_poller poll adds poll to the busy loop.  poll will be called
    add_busy_poller poll adds poll to the busy loop.  poll will be called
    continuously, once per iteration of the busy loop, until it returns `Stop_polling a
    at which point the result of add_busy_poller will become determined.  poll will
    hold the async lock while running, so it is fine to do ordinary async operations,
    e.g. fill an ivar.  The busy loop will run an ordinary async cycle if any of the
    pollers add jobs.
    poll will run in the monitor in effect when add_busy_poller was called; exceptions
    raised by poll will be sent asynchronously to that monitor.  If poll raises, it
    will still be run on subsequent iterations of the busy loop.
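
    For example, a sketch in which ring_buffer_read stands in for a real non-blocking
    read from some shared-memory ringbuffer (it is hypothetical, stubbed out here so
    the sketch is self-contained):

      open Core.Std
      open Async.Std

      (* Hypothetical stand-in for a non-blocking ringbuffer read. *)
      let ring_buffer_read () : string option = None

      let data_ready : string Ivar.t = Ivar.create ()

      let poller_stopped : unit Deferred.t =
        Scheduler.add_busy_poller (fun () ->
          match ring_buffer_read () with
          | None -> `Continue_polling
          | Some msg -> Ivar.fill data_ready msg; `Stop_polling ())

      let () =
        upon (Ivar.read data_ready) (fun msg -> printf "got: %s\n" msg);
        upon poller_stopped (fun () -> printf "poller stopped\n")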

val handle_thread_pool_stuck : (Core.Std.Time.Span.t -> unit) -> unit
    handle_thread_pool_stuck f causes f to run whenever async detects its thread pool
    is stuck (i.e. hasn't completed a job for over a second and has no available threads).
    Async checks every second.  By default, if the thread pool has been stuck for less
    than 60s, async will eprintf a message; if it has been stuck for more than 60s,
    async will send an exception
    to the main monitor, which will abort the program unless there is a custom handler for
    the main monitor.
    Calling handle_thread_pool_stuck replaces whatever behavior was previously there.
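
    For example, a sketch that replaces the default behavior with a single custom log
    line; stuck_for is how long the pool has been stuck:

      open Core.Std
      open Async.Std

      let () =
        Scheduler.handle_thread_pool_stuck (fun stuck_for ->
          eprintf "async thread pool stuck for %s\n" (Time.Span.to_string stuck_for))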

val sexp_of_t : t -> Sexplib.Sexp.t