io_stats is the overall IO statistics for all writers.
stdout and stderr are writers for file descriptors 1 and 2. They are lazy because
we don't want to create them in all programs that happen to link with Async.

When either stdout or stderr is created, they both are created. Furthermore, if
they point to the same inode, then they will be the same writer to Fd.stdout. This
can be confusing, because fd (force stderr) will be Fd.stdout, not Fd.stderr.
And subsequent modifications of Fd.stderr will have no effect on Writer.stderr.

Unfortunately, the sharing is necessary because Async uses OS threads to do write() syscalls using the writer buffer. When calling a program that redirects stdout and stderr to the same file, as in:

foo.exe >/tmp/z.file 2>&1

if Writer.stdout and Writer.stderr weren't the same writer, then they could have
threads simultaneously writing to the same file, which could easily cause data loss.
create ?buf_len ?syscall ?buffer_age_limit fd creates a new writer. The file
descriptor fd should not be in use for writing by anything else.
By default, a write system call occurs at the end of a cycle in which bytes were written. One can supply ~syscall:(`Periodic span) to get better performance. This batches writes together, doing the write system call periodically according to the supplied span.
A writer can asynchronously fail if the underlying write syscall returns an error, e.g. EBADF, EPIPE, ECONNRESET, ....
buffer_age_limit specifies how backed up you can get before raising an exception.
The default is `Unlimited for files, and 2 minutes for other kinds of file
descriptors. You can supply `Unlimited to turn off buffer-age checks.
raise_when_consumer_leaves specifies whether the writer should raise an exception
when the consumer receiving bytes from the writer leaves, i.e. in Unix, the write
syscall returns EPIPE or ECONNRESET. If
not raise_when_consumer_leaves, then the
writer will silently drop all writes after the consumer leaves, and the writer will
eventually fail with a writer-buffer-older-than error if the application remains open
long enough.
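As a minimal sketch of the options above (assuming the core and async libraries; the path and the 50 ms span are illustrative), one can create a writer that batches its write() calls:

```ocaml
(* Create a writer over a freshly opened file descriptor, batching write()
   syscalls every 50ms and disabling buffer-age checks. *)
open Core
open Async

let main () =
  Unix.openfile "/tmp/demo.log" ~mode:[ `Creat; `Wronly; `Trunc ]
  >>= fun fd ->
  let w =
    Writer.create
      ~syscall:(`Periodic (Time.Span.of_ms 50.))
      ~buffer_age_limit:`Unlimited
      fd
  in
  Writer.write_line w "hello";
  (* close waits for the buffer to be flushed before closing the fd. *)
  Writer.close w

let () = Thread_safe.block_on_async_exn main
```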
set_raise_when_consumer_leaves t bool sets the raise_when_consumer_leaves flag of
t, which determines how t responds to a write system call raising EPIPE and
ECONNRESET (see create).
with_file ~file f opens file for writing, creates a writer t, and runs f t to
obtain a deferred d. When d becomes determined, the writer is closed. When the
close completes, the result of with_file becomes determined with the value of d.

There is no need to call Writer.flushed to ensure that with_file waits for the
writer to be flushed before closing it. Writer.close will already wait for the
flush.
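A minimal usage sketch, assuming the core and async libraries (the path is illustrative); the writer is flushed and closed automatically once the deferred returned by f is determined:

```ocaml
open Core
open Async

(* with_file opens the file, hands the writer to [f], and closes (and
   therefore flushes) the writer once [f]'s result is determined. *)
let write_greeting () =
  Writer.with_file "/tmp/greeting.txt" ~f:(fun w ->
    Writer.write_line w "hello, world";
    return ())

let () = Thread_safe.block_on_async_exn write_greeting
```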
write_gen t a writes a to writer t, with length specifying the number of bytes
needed and blit_to_bigstring blitting a directly into the t's buffer. If one
has a type that has length and blit_to_bigstring functions, like:

module A : sig
  type t
  val length : t -> int
  val blit_to_bigstring : (t, Bigstring.t) Blit.blit
end

then one can use write_gen to implement a custom analog of Writer.write, like:

module Write_a : sig
  val write : ?pos:int -> ?len:int -> A.t -> Writer.t -> unit
end = struct
  let write ?pos ?len a writer =
    Writer.write_gen
      ~length:A.length
      ~blit_to_bigstring:A.blit_to_bigstring
      ?pos ?len writer a
end

If it is difficult to write only part of a value, one can choose to not support
?pos and ?len:

module Write_a : sig
  val write : A.t -> Writer.t -> unit
end = struct
  let write a writer =
    Writer.write_gen
      ~length:A.length
      ~blit_to_bigstring:A.blit_to_bigstring
      writer a
end
write ?pos ?len t s adds a job to the writer's queue of pending writes. The
contents of the string are copied to an internal buffer before
write returns, so
clients can do whatever they want with
s after that.
write_char t c writes the character c.
newline t is
write_char t '\n'
write_line t s is
write t s; newline t.
write_byte t i writes one 8-bit integer (as the single character with that code).
The given integer is taken modulo 256.
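The small write functions above can be contrasted in one sketch (assuming core and async; the path is illustrative). Note in particular the modulo-256 behavior of write_byte:

```ocaml
open Core
open Async

let main () =
  Writer.with_file "/tmp/small_writes.txt" ~f:(fun w ->
    Writer.write w "abc";      (* copies "abc" into the internal buffer *)
    Writer.newline w;          (* same as Writer.write_char w '\n' *)
    Writer.write_line w "def"; (* same as write w "def"; newline w *)
    Writer.write_byte w 321;   (* 321 mod 256 = 65, i.e. the byte 'A' *)
    return ())

let () = Thread_safe.block_on_async_exn main
```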
write_sexp t sexp writes to t the string representation of sexp, possibly
followed by a terminating character as per terminate_with. With
~terminate_with:Newline, the terminating character is a newline. With
~terminate_with:Space_if_needed, if a space is needed to ensure that the sexp reader
knows that it has reached the end of the sexp, then the terminating character will be
a space; otherwise, no terminating character is added. A terminating space is needed
if the string representation doesn't end in ')' or '"'.
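A sketch of the terminator logic (assuming core and async; the path and sexps are illustrative): "(a b)" is self-delimiting, while a bare atom like "foo" is not.

```ocaml
open Core
open Async

let main () =
  Writer.with_file "/tmp/sexps.txt" ~f:(fun w ->
    (* "(a b)" ends in ')', so Space_if_needed adds nothing. *)
    Writer.write_sexp ~terminate_with:Space_if_needed w
      (Sexp.of_string "(a b)");
    (* "foo" does not end in ')' or '"', so a space is appended. *)
    Writer.write_sexp ~terminate_with:Space_if_needed w
      (Sexp.of_string "foo");
    Writer.write_sexp ~terminate_with:Newline w (Sexp.of_string "(c d)");
    return ())

let () = Thread_safe.block_on_async_exn main
```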
write_bin_prot writes out a value using its bin_prot sizer/writer pair. The format
is the "size-prefixed binary protocol", in which the length of the data is written
before the data itself. This is the format that Reader.read_bin_prot reads.
Writes out a value using its bin_prot writer. Unlike write_bin_prot, this doesn't
prefix the output with the size of the bin_prot blob. size is the expected size.
This function will raise if the bin_prot writer writes an amount other than size
bytes.
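A round-trip sketch for the size-prefixed protocol (assuming core, async, and ppx_bin_prot for [@@deriving bin_io]; the type and path are illustrative):

```ocaml
open Core
open Async

type msg = { id : int; body : string } [@@deriving bin_io]

let round_trip () =
  let file = "/tmp/msg.bin" in
  Writer.with_file file ~f:(fun w ->
    (* Size-prefixed: the length is written before the payload. *)
    Writer.write_bin_prot w bin_writer_msg { id = 1; body = "hi" };
    return ())
  >>= fun () ->
  (* Reader.read_bin_prot understands the same size-prefixed format. *)
  Reader.with_file file ~f:(fun r -> Reader.read_bin_prot r bin_reader_msg)
  >>| function
  | `Ok m -> m
  | `Eof -> failwith "unexpected EOF"

let m = Thread_safe.block_on_async_exn round_trip
```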
Serializes data using marshal and writes it to the writer.
Unlike the write_ functions, all functions starting with schedule_ require
flushing or closing of the writer after returning before it is safe to modify the
bigstrings which were directly or indirectly passed to these functions. The reason is
that these bigstrings will be read from directly when writing; their contents is not
copied to internal buffers.
This is important if users need to send the same large data string to a huge number of clients simultaneously (e.g. on a cluster), because these functions then avoid needlessly exhausting memory by sharing the data.
schedule_iobuf_consume is like
schedule_iobuf_peek, and additionally advances the
iobuf beyond the portion that has been written. Until the result is determined, it is
not safe to assume whether the iobuf has been advanced yet or not.
schedule_iovecs t iovecs is like schedule_iovec, but takes a whole queue of
I/O-vectors as argument. The queue is guaranteed to be empty when this function
returns and can be modified. It is not safe to change the bigstrings underlying the
I/O-vectors until the writer has been successfully flushed or closed after this
operation.
flushed t returns a deferred that will become determined when all prior writes
complete (i.e. the
write() system call returns). If a prior write fails, then the
deferred will never become determined.
It is OK to call
flushed t after
t has been closed.
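A short sketch of waiting for a flush (assuming core and async; the path is illustrative):

```ocaml
open Core
open Async

let main () =
  Writer.with_file "/tmp/flush_demo.txt" ~f:(fun w ->
    Writer.write w "important bytes";
    (* Becomes determined once the write() syscall for the bytes above
       has returned, i.e. the bytes have been handed to the kernel. *)
    Writer.flushed w)

let () = Thread_safe.block_on_async_exn main
```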
send t s writes a string to the channel that can be read back using Reader.recv.
close ?force_close t waits for the writer to be flushed, and then calls Unix.close
on the underlying file descriptor. force_close causes the Unix.close to happen
even if the flush hangs. By default, force_close is Deferred.never () for files
and after (sec 5.) for other types of file descriptors (e.g. sockets). If the close
is forced, data in the writer's buffer may not be written to the file descriptor. You
can check this by calling bytes_to_write after close finishes.

close will raise an exception if the Unix.close on the underlying file descriptor
fails.
It is required to call
close on a writer in order to close the underlying file
descriptor. Not doing so will cause a file descriptor leak. It also will cause a
space leak, because until the writer is closed, it is held on to in order to flush the
writer on shutdown.
It is an error to call other operations on t after close t has been called, except
that calls of close subsequent to the original call to close will return the same
deferred as the original call.
close_started t becomes determined as soon as
close is called.
close_finished t becomes determined after t's underlying file descriptor has been
closed, i.e. it is the same as the result of close. close_finished differs from
close in that it does not have the side effect of initiating a close.

is_closed t returns true iff close t has been called.
is_open t is
not (is_closed t)
with_close t ~f runs f (), and closes t after f finishes or raises.
bytes_to_write t returns how many bytes have been requested to write but have not
yet been written.
with_file_atomic ?temp_file ?perm ?fsync file ~f creates a writer to a temp file,
feeds that writer to
f, and when the result of
f becomes determined, atomically
moves (i.e. uses
Unix.rename) the temp file to file. If file currently exists, it will be
replaced, even if it is read only. The temp file will be file (or
temp_file if supplied) suffixed by a unique random sequence of six characters. The
temp file may need to be removed in case of a crash so it may be prudent to choose a
temp file that can be easily found by cleanup tools.
If fsync is true, the temp file will be flushed to disk before it takes the place
of the target file, thus guaranteeing that the target file will always be in a sound
state, even after a machine crash. Since synchronization is extremely slow, this is
not the default. Think carefully about the event of machine crashes and whether you
may need this option!
We intend for
with_file_atomic to preserve the behavior of the
open system call, so if file does not exist, we will apply the umask to perm. If
file does exist, perm will default to the file's current permissions rather than 0o666.
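A sketch of an atomic write (assuming core and async; the path and contents are illustrative). Readers of the target file see either the old contents or the new contents, never a partial write:

```ocaml
open Core
open Async

let main () =
  (* The writer targets a temp file; on success it is renamed over the
     target.  ~fsync:true additionally flushes to disk before the rename. *)
  Writer.with_file_atomic ~fsync:true "/tmp/config.txt" ~f:(fun w ->
    Writer.write_line w "key = value";
    return ())

let () = Thread_safe.block_on_async_exn main
```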
save is a special case of
with_file_atomic that atomically writes the given
string to the specified file.
save_sexp is a special case of
with_file_atomic that atomically writes the
given sexp to the specified file.
save_sexp t sexp writes sexp to t, followed by a newline. To read a file
produced using save_sexp, one would typically use Reader.load_sexp, which deals
with the additional whitespace and works nicely with converting the sexp to a
value.
save_sexps works similarly to
save_sexp, but saves a sequence of sexps instead,
separated by newlines. There is a corresponding Reader.load_sexps for reading back
in.
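A round-trip sketch for save_sexp (assuming core, async, and ppx_sexp_conv for [@@deriving sexp]; the type and path are illustrative):

```ocaml
open Core
open Async

type point = { x : int; y : int } [@@deriving sexp]

let round_trip () =
  let file = "/tmp/point.sexp" in
  (* Atomically writes the sexp followed by a newline. *)
  Writer.save_sexp file (sexp_of_point { x = 1; y = 2 })
  >>= fun () ->
  (* load_sexp_exn deals with the trailing whitespace and conversion. *)
  Reader.load_sexp_exn file point_of_sexp

let p = Thread_safe.block_on_async_exn round_trip
```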
transfer' t pipe_r f repeatedly reads values from pipe_r and feeds them to f,
which should in turn write them to t. It provides pushback to pipe_r by not
reading when t cannot keep up with the data being pushed in.
By default, each read from
pipe_r reads all the values in
pipe_r. One can supply
max_num_values_per_read to limit the number of values per read.
transfer' stops and the result becomes determined when pipe_r reaches its EOF, when
t is closed, or when t's consumer leaves. In the latter two cases, transfer'
closes pipe_r.

transfer' waits on Pipe.flushed on pipe_r's writer to ensure that the bytes have
been flushed to t before returning. It also waits on Pipe.upstream_flushed at
shutdown.
transfer t pipe_r f is equivalent to:
transfer' t pipe_r (fun q -> Queue.iter q ~f; Deferred.unit)
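A sketch of pumping a pipe into a writer with transfer (assuming core and async; the path and values are illustrative):

```ocaml
open Core
open Async

let main () =
  let pipe_r, pipe_w = Pipe.create () in
  Pipe.write_without_pushback pipe_w "one";
  Pipe.write_without_pushback pipe_w "two";
  Pipe.close pipe_w;
  Writer.with_file "/tmp/transferred.txt" ~f:(fun t ->
    (* Writes each value as a line; stops when pipe_r reaches EOF. *)
    Writer.transfer t pipe_r (fun s -> Writer.write_line t s))

let () = Thread_safe.block_on_async_exn main
```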
of_pipe info pipe_w returns a writer t such that data written to t will appear
on pipe_w. If either t or pipe_w are closed, the other is closed as well.
of_pipe is implemented by attaching
t to the write-end of a Unix pipe, and
shuttling bytes from the read-end of the Unix pipe to pipe_w.
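A sketch of a writer whose bytes land on a string pipe instead of a file descriptor (assuming core and async; the Info text is illustrative, and the `Closed_and_flushed_downstream part of the return value is an assumption about the current of_pipe signature):

```ocaml
open Core
open Async

let main () =
  let pipe_r, pipe_w = Pipe.create () in
  Writer.of_pipe (Info.of_string "demo writer") pipe_w
  >>= fun (w, `Closed_and_flushed_downstream _) ->
  Writer.write w "via a pipe";
  (* Closing the writer closes pipe_w as well. *)
  Writer.close w
  >>= fun () ->
  (* The bytes may arrive chunked, so concatenate what was read. *)
  Pipe.read_all pipe_r
  >>| fun chunks -> String.concat (Queue.to_list chunks)

let received = Thread_safe.block_on_async_exn main
```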
behave_nicely_in_pipeline ~writers () causes the program to silently exit with status
zero if any of the consumers of writers go away. It also sets the buffer age to
unlimited, in case there is a human (e.g. using less) on the other side of the
pipeline.
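A sketch of a filter-style program that exits cleanly when its consumer (e.g. head or less) goes away (assuming core and async):

```ocaml
open Core
open Async

let main () =
  (* Opt in to pipeline-friendly behavior for stdout and stderr:
     exit 0 silently if the consumer leaves, and disable buffer-age
     checks in case a human is paging slowly on the other side. *)
  Writer.behave_nicely_in_pipeline
    ~writers:[ force Writer.stdout; force Writer.stderr ] ();
  let stdout = force Writer.stdout in
  Writer.write_line stdout "lots of output ...";
  Writer.flushed stdout

let () = Thread_safe.block_on_async_exn main
```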