How stdbuf works

stdbuf(1) is a little command line utility that changes how a program buffers its standard streams (stdin, stdout, and stderr). It sometimes doesn’t work as expected, so I became curious about what was going on under the hood.

A quick stdbuf primer

I mostly use stdbuf to flip a program’s stdout stream to line buffered so I can see its output more quickly. By default, stdout typically uses a fixed-size block buffer (often 8192 bytes), but switches to line buffering when it’s connected to a TTY (meaning the buffer is flushed whenever a newline character is written). That’s the default behaviour you get from glibc. And it’s not a bad default: it gives you quick feedback when running a program interactively, but avoids unnecessary syscalls otherwise.

However, there are times when stdout buffering can be a nuisance. Consider the following program that prints “tick” once a second for 3 seconds:

// tick.c
#include <stdio.h>
#include <unistd.h>

int main() {
    for (int i = 0; i < 3; i++) {
        printf("tick\n");
        sleep(1);
    }
    return 0;
}

If you redirect its output to a file and look at the contents of that file every second, you can see that the file remains empty until the program terminates:

$ gcc -o tick tick.c
$ ./tick > out & bash -c 'for i in {1..4}; do wc -l out; sleep 1; done'
[1] 3758
0 out
0 out
0 out
3 out

That’s because the output is getting caught in the default stdout buffer set up by libc. But if you run the program with stdbuf -oL (o for the stdout stream, and L for line buffered), the output is visible right away after each line is printed:

$ stdbuf -oL ./tick > out & bash -c 'for i in {1..4}; do wc -l out; sleep 1; done'
[1] 3768
1 out
2 out
3 out
3 out

But how does stdbuf make another program change its behaviour?

Given that stdout is often automatically set to line buffered mode when using a TTY, I had assumed that stdbuf would trick the program it runs into thinking its output was a TTY. It turns out that’s not the case.

So what does it do? In short, it uses LD_PRELOAD to inject a sneaky shared library into the program that changes the buffering mode of the stream when the program starts up. Here’s a quick walk through the code in stdbuf that makes that happen.

The buffering mode is set in a special environment variable

After parsing the command line options, stdbuf starts by setting an environment variable for each stream whose buffering mode you want to change. So, if you’d asked for stdout to be line buffered, it’d set _STDBUF_O=L (O for stdout, and L for line buffered). Here’s the relevant code (source):

if (*stdbuf[i].optarg == 'L')
  ret = asprintf (&var, "%s%c=L", "_STDBUF_",
                  toupper (stdbuf[i].optc));
else
  ret = asprintf (&var, "%s%c=%" PRIuMAX, "_STDBUF_",
                  toupper (stdbuf[i].optc),
                  (uintmax_t) stdbuf[i].size);
if (ret < 0)
  xalloc_die ();

if (putenv (var) != 0)
  error (EXIT_CANCELED, errno,
         _("failed to update the environment with %s"),
         quote (var));

The environment variable’s name is _STDBUF_ followed by the first letter of the stream name, and the value is L for line buffering or a number of bytes for block buffering. In case asprintf looks unfamiliar, it’s just a variation of sprintf that automatically allocates memory for you. putenv, as the name suggests, adds that variable to the environment. Later on, when the program you passed to the stdbuf command is executed, it will inherit these environment variables, which is how it knows which buffering mode to activate.

LD_PRELOAD is set to inject a shared library

Next, stdbuf sets another environment variable: LD_PRELOAD. It’s set to the path of a shared library called libstdbuf.so, which comes bundled with stdbuf.

LD_PRELOAD tells the Linux dynamic linker to load a given shared library when a program is run. So it’s basically a way of injecting a dynamic library into a given program. (You can do this on macOS too, but using DYLD_INSERT_LIBRARIES instead of LD_PRELOAD.) Here’s where stdbuf does that (source):

if (old_libs)
  ret = asprintf (&LD_PRELOAD, "LD_PRELOAD=%s:%s", old_libs, libstdbuf);
else
  ret = asprintf (&LD_PRELOAD, "LD_PRELOAD=%s", libstdbuf);

if (ret < 0)
xalloc_die ();

free (libstdbuf);

ret = putenv (LD_PRELOAD);
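To see the LD_PRELOAD mechanism in isolation, here’s a toy preload library of my own (not part of coreutils). Its constructor runs before the host program’s main, which is exactly the hook stdbuf relies on:

```c
/* hello_preload.c — a toy LD_PRELOAD library.
   Build: gcc -shared -fPIC -o hello_preload.so hello_preload.c
   Run:   LD_PRELOAD=./hello_preload.so ls */
#include <stdio.h>

int preload_constructor_ran = 0;

/* The dynamic linker calls this before the host program's main(). */
__attribute__ ((constructor))
static void on_load(void) {
    preload_constructor_ran = 1;
    fprintf(stderr, "injected library loaded before main\n");
}
```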

The program is run by calling exec(3)

With those environment variables set, stdbuf runs the program that was provided as a command line argument with execvp(3) (source):

execvp (*argv, argv);

int exit_status = (errno == ENOENT ? EXIT_ENOENT : EXIT_CANNOT_INVOKE);
error (0, errno, _("failed to run command %s"), quote (argv[0]));
exit (exit_status);

libstdbuf.so wakes up and sets the buffering mode

Now that the stdbuf process has been replaced by the executable you asked stdbuf to run, the shared library is loaded. We can verify that with strace(1):

$ strace -e execve,openat stdbuf -oL ./tick 2>&1 | grep -E 'execve|libstdbuf'
execve("/usr/bin/stdbuf", ["stdbuf", "-oL", "./tick"], 0xffffffd714c0 /* 23 vars */) = 0
execve("./tick", ["./tick"], 0xaaab07cd7260 /* 25 vars */) = 0
openat(AT_FDCWD, "/usr/libexec/coreutils/libstdbuf.so", O_RDONLY|O_CLOEXEC) = 3

libstdbuf.so contains a constructor function that’s run when the library is loaded. It reads those _STDBUF_{E,I,O} environment variables to figure out which modes to set. The __attribute__ ((constructor)) bit is the GCC-specific syntax for declaring a dynamic library constructor function (source):

__attribute__ ((constructor)) static void
stdbuf (void)
{
  char *e_mode = getenv ("_STDBUF_E");
  char *i_mode = getenv ("_STDBUF_I");
  char *o_mode = getenv ("_STDBUF_O");
  if (e_mode) /* Do first so can write errors to stderr */
    apply_mode (stderr, e_mode);
  if (i_mode)
    apply_mode (stdin, i_mode);
  if (o_mode)
    apply_mode (stdout, o_mode);
}

Then it sets the mode on the stream with a call to setvbuf(3) (source):

if (setvbuf (stream, buf, setvbuf_mode, size) != 0)
  {
    fprintf (stderr, _("could not set buffering of %s to mode %s\n"),
             fileno_to_name (fileno (stream)), mode);
    free (buf);
  }

And that’s it! All the stdbuf(1) program really does is set some environment variables and then call execvp(3). So we can actually get the same behaviour by setting those environment variables ourselves:

$ LD_PRELOAD=/usr/libexec/coreutils/libstdbuf.so \
> _STDBUF_O=1 \
> ./tick > out & bash -c 'for i in {1..4}; do wc -l out; sleep 1; done'

[1] 10362
0 out
1 out
2 out
3 out

Why this doesn’t always work

This trick of calling setvbuf(3) in a shared library constructor works well in some cases, but there are several situations that it can’t handle:

  1. If the program sets the buffering mode itself by calling setvbuf(3) or setbuf(3), all the hard work libstdbuf.so did will be undone.
  2. If the program buffers output via some other mechanism (e.g. by allocating its own buffer and only calling write(2) when it’s full), this approach won’t work at all.
  3. If the program doesn’t use libc for standard IO, calling setvbuf(3) won’t have any effect. For instance, Go doesn’t use libc for IO, so stdbuf won’t affect most Go programs.

If you hit one of those cases and don’t mind installing a separate package, there’s another utility called unbuffer that takes a different approach. It creates a pty and attaches it to the program’s stdout to trick the program into thinking it’s running interactively, which typically flips stdout into line-buffered mode automatically.


I couldn’t wrap this little adventure up without including the absolute best part of the stdbuf codebase: the command line option parsing (source):

switch (c)
/* Old McDonald had a farm ei... */
case 'e':
case 'i':
case 'o':