Vim vs. Emacs: The Working Directory

Vim and Emacs have different internal models for the current working directory, and these models shape each editor’s overall workflow. They determine how files are opened, how shell commands are executed, and how the build system is invoked. These effects even reach outside the editor to influence the overall structure of the project being edited.

In the traditional unix model, which was eventually adopted everywhere else, each process has a particular working directory tracked by the operating system. When a process makes a request to the operating system using a relative path — a path that doesn’t begin with a slash — the operating system uses the process’ working directory to convert the path into an absolute path. When a process forks, its child starts in the same directory. A process can change its working directory at any time using chdir(2), though most programs never need to do it. The most obvious way this system call is exposed to regular users is through the shell’s built-in cd command.
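
For a concrete picture, here’s a minimal sketch of this interface in C (error checking omitted; scratch.txt is just a placeholder name):

#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    char buf[4096];
    printf("%s\n", getcwd(buf, sizeof(buf)));  /* inherited from the parent process */
    chdir("/tmp");                             /* same effect as the shell's cd builtin */
    fopen("scratch.txt", "w");                 /* relative path, resolved as /tmp/scratch.txt */
    printf("%s\n", getcwd(buf, sizeof(buf)));  /* now prints /tmp */
}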

Vim’s spiritual heritage is obviously rooted in vi, one of the classic unix text editors, and the most elaborate text editor standardized by POSIX. Like vi, Vim closely follows the unix model for working directories. At any given time Vim has exactly one working directory. Shell commands that are run within Vim will start in Vim’s working directory. Like a shell, the cd ex command changes and queries Vim’s working directory.

Emacs eschews this model: instead, each buffer has its own working directory, tracked in a buffer-local variable, default-directory. Emacs internally simulates working directories for its buffers much like an operating system, resolving relative paths itself, giving credence to the idea that Emacs is an operating system (“lacking only a decent editor”). Perhaps this model comes from ye olde lisp machines?

In contrast, Emacs’ M-x cd command manipulates the buffer-local variable and has no effect on the Emacs process’ working directory. In fact, Emacs completely hides its operating system working directory from Emacs Lisp. This can cause some trouble if that hidden working directory happens to be sitting on a filesystem you’d like to unmount.
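
A rough sketch of that behavior from Emacs Lisp (illustrative, not the actual implementation):

;; M-x cd is the `cd' command, which sets the buffer-local variable
;; for the current buffer and nothing else:
(cd "/tmp")
default-directory          ; => "/tmp/"
;; Other buffers keep their own default-directory, and the hidden
;; OS-level working directory of the Emacs process is unchanged.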

Vim can be configured to simulate Emacs’ model with its autochdir option. When set, Vim will literally chdir(2) each time the user changes buffers, switches windows, etc. To the user, this feels just like Emacs’ model, but it’s only a convenience layered on top: underneath, Vim still has the single working directory of the unix model.

Single instance editors

For most of my Emacs career, I’ve stuck to running a single, long-lived Emacs instance no matter how many different tasks I’m touching simultaneously. I start the Emacs daemon shortly after logging in, and it continues running until I log out — typically only when the machine is shut down. It’s common to have multiple Emacs windows (frames) for different tasks, but they’re all bound to the same daemon process.

While with care it’s possible to have a complex, rich Emacs configuration that doesn’t significantly impact Emacs’ startup time, the general consensus is that Emacs is slow to start. But since it has a really solid daemon, this doesn’t matter: hardcore Emacs users only ever start Emacs occasionally. The rest of the time they’re launching emacsclient and connecting to the daemon. Outside of system administration, it’s the most natural way to use Emacs.
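
In practice that workflow looks something like this (main.c stands in for whatever you happen to be editing):

$ emacs --daemon         # start the long-lived daemon once, after login
$ emacsclient -c main.c  # open a new frame connected to that daemon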

The case isn’t so clear for Vim. Vim is so fast that many users fire it up on demand and exit when they’ve finished the immediate task. At the other end of the spectrum, others advocate using a single instance of Vim like running a single Emacs daemon. In my initial dive into Vim, I tried the single-instance, Emacs way of doing things. I set autochdir out of necessity and pretended each buffer had its own working directory.

At least for me, this isn’t the right way to use Vim, and it all comes down to working directories. I want Vim to be anchored at the project root with one Vim instance per project. Everything is smoother when it happens in the context of the project’s root directory, from opening files, to running shell commands (ctags in particular), to invoking the build system. With autochdir, these actions are difficult to do correctly, particularly the last two.

Invoking the build

I suspect Emacs’ per-buffer working directory model has, in a Sapir-Whorf sort of way, been responsible for leading developers towards poorly-designed, recursive Makefiles. Without a global concept of working directory, it’s inconvenient to invoke the build system (M-x compile) in some particular grandparent directory that is the root of the project. If each directory has its own Makefile, it usually makes sense to invoke make in the same directory as the file being edited.

Over the years I’ve been reinventing the same solution to this problem, and it wasn’t until I spent time with Vim and its alternate working directory model that I truly understood the problem. Emacs itself has long had a solution lurking deep in its bowels, unseen by daylight: dominating files. The function I’m talking about is locate-dominating-file:

(locate-dominating-file FILE NAME)

Look up the directory hierarchy from FILE for a directory containing NAME. Stop at the first parent directory containing a file NAME, and return the directory. Return nil if not found. Instead of a string, NAME can also be a predicate taking one argument (a directory) and returning a non-nil value if that directory is the one for which we’re looking.

The trouble with invoking the build system at the project root is that Emacs doesn’t really have a concept of a project root. It doesn’t know where it is or how to find it. The vi model inherited by Vim is to leave the working directory at the project root. While Vim can simulate Emacs’ working directory model, Emacs cannot (currently) simulate Vim’s model.

Instead, by identifying a file name unique to the project’s root (i.e. a “dominating” file) such as Makefile or build.xml, locate-dominating-file can discover the project root. All that’s left is wrapping M-x compile so that default-directory is temporarily adjusted to the project’s root.

That looks very roughly like this (and needs more work):

(defun my-compile ()
  (interactive)
  ;; Fall back to the current directory when no Makefile is found,
  ;; since binding default-directory to nil would break compile.
  (let ((default-directory (or (locate-dominating-file "." "Makefile")
                               default-directory)))
    (compile "make")))

It’s a pattern I’ve used again and again and again, working against the same old friction. By running one Vim instance per project at the project’s root, I get the correct behavior for free.

A Tutorial on Portable Makefiles

In my first decade writing Makefiles, I developed the bad habit of liberally using GNU Make’s extensions. I didn’t know the line between GNU Make and the portable features guaranteed by POSIX. Usually it didn’t matter much, but it would become an annoyance when building on non-Linux systems, such as on the various BSDs. I’d have to specifically install GNU Make, then remember to invoke it (i.e. as gmake) instead of the system’s make.

I’ve since become familiar and comfortable with make’s official specification, and I’ve spent the last year writing strictly portable Makefiles. Not only are my builds now portable across all unix-like systems, my Makefiles are cleaner and more robust. Many of the common make extensions — conditionals in particular — lead to fragile, complicated Makefiles and are best avoided anyway. It’s important to be able to trust your build system to do its job correctly.

This tutorial should be suitable for make beginners who have never written their own Makefiles before, as well as experienced developers who want to learn how to write portable Makefiles. Regardless, in order to understand the examples you must be familiar with the usual steps for building programs on the command line (compiler, linker, object files, etc.). I’m not going to suggest any fancy tricks nor provide any sort of standard starting template. Makefiles should be dead simple when the project is small, and grow in a predictable, clean fashion alongside the project.

I’m not going to cover every feature. You’ll need to read the specification for yourself to learn it all. This tutorial will go over the important features as well as the common conventions. It’s important to follow established conventions so that people using your Makefiles will know what to expect and how to accomplish the basic tasks.

If you’re running Debian, or a Debian derivative such as Ubuntu, the bmake and freebsd-buildutils packages will provide the bmake and fmake programs respectively. These alternative make implementations are very useful for testing your Makefiles’ portability, should you accidentally make use of a GNU Make feature. It’s not perfect since each implements some of the same extensions as GNU Make, but it will catch some common mistakes.
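
For example, once one of those packages is installed, just invoke the alternative implementation in place of make and see whether your Makefile still builds:

$ bmake
$ fmake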

What’s in a Makefile?

I am free, no matter what rules surround me. If I find them tolerable, I tolerate them; if I find them too obnoxious, I break them. I am free because I know that I alone am morally responsible for everything I do. ―Robert A. Heinlein

At make’s core are one or more dependency trees, constructed from rules. Each vertex in the tree is called a target. The final products of the build (executable, document, etc.) are the tree roots. A Makefile specifies the dependency trees and supplies the shell commands to produce a target from its prerequisites.

Picture a small game built from three object files, each compiled from a corresponding C source file, all linked together into one binary. The “.c” files are source files written by hand, not generated by commands, so they have no prerequisites. The syntax for specifying one or more edges in this dependency tree is simple:

target [target...]: [prerequisite...]

While technically multiple targets can be specified in a single rule, this is unusual. Typically each target is specified in its own rule. To specify this dependency tree:

game: graphics.o physics.o input.o
graphics.o: graphics.c
physics.o: physics.c
input.o: input.c

The order of these rules doesn’t matter. The entire Makefile is parsed before any actions are taken, so the tree’s vertices and edges can be specified in any order. There’s one exception: the first non-special target in a Makefile is the default target. This target is selected implicitly when make is invoked without choosing a target. It should be something sensible, so that a user can blindly run make and get a useful result.

A target can be specified more than once. Any new prerequisites are appended to the previously-given prerequisites. For example, this Makefile is identical to the previous, though it’s typically not written this way:

game: graphics.o
game: physics.o
game: input.o
graphics.o: graphics.c
physics.o: physics.c
input.o: input.c

There are a handful of special targets that change the behavior of make itself. All have uppercase names and start with a period. Names fitting this pattern are reserved for use by make. According to the standard, in order to get reliable POSIX behavior, the first non-comment line of the Makefile must be .POSIX. Since this is a special target, it’s not a candidate for the default target, so game will remain the default target:

.POSIX:
game: graphics.o physics.o input.o
graphics.o: graphics.c
physics.o: physics.c
input.o: input.c

In practice, even a simple program will have header files, and sources that include a header file should also have an edge on the dependency tree for it. If the header file changes, targets that include it should also be rebuilt.

.POSIX:
game: graphics.o physics.o input.o
graphics.o: graphics.c graphics.h
physics.o: physics.c physics.h
input.o: input.c input.h graphics.h physics.h

Adding commands to rules

We’ve constructed a dependency tree, but we still haven’t told make how to actually build any targets from their prerequisites. The rules also need to specify the shell commands that produce a target from its prerequisites.

If you were to create the source files in the example and invoke make, you’d find that it actually does know how to build the object files. This is because make comes preconfigured with certain inference rules, a topic covered later. For now, we’ll add the .SUFFIXES special target to the top, erasing all the built-in inference rules.

Commands immediately follow the target/prerequisite line in a rule. Each command line must start with a tab character. This can be awkward if your text editor isn’t configured for it, and it will be awkward if you try to copy the examples from this page.

Each line is run in its own shell, so be mindful of using commands like cd, which won’t affect later lines.
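
For example (a sketch with a hypothetical subdir directory and arbitrary target names), the first rule doesn’t behave the way it reads, while the second does:

broken:
    cd subdir
    ls              # runs in the original directory, not subdir

works:
    cd subdir && ls   # both commands share a single shell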

The simplest thing to do is literally specify the same commands you’d type at the shell:

.POSIX:
.SUFFIXES:
game: graphics.o physics.o input.o
    cc -o game graphics.o physics.o input.o
graphics.o: graphics.c graphics.h
    cc -c graphics.c
physics.o: physics.c physics.h
    cc -c physics.c
input.o: input.c input.h graphics.h physics.h
    cc -c input.c

Invoking make and choosing targets

I tried to walk into Target, but I missed. ―Mitch Hedberg

When invoking make, it accepts zero or more targets from the dependency tree, and it will build these targets — i.e. run the commands in the target’s rule — if the target is out-of-date. A target is out-of-date if it is older than any of its prerequisites.

# build the "game" binary (default target)
$ make

# build just the object files
$ make graphics.o physics.o input.o

This effect cascades up the dependency tree and causes further targets to be rebuilt until all of the requested targets are up-to-date. There’s a lot of room for parallelism since different branches of the tree can be updated independently. It’s common for make implementations to support parallel builds with the -j option. This is non-standard, but it’s a fantastic feature that doesn’t require anything special in the Makefile to work correctly.

Similar to parallel builds is make’s -k (“keep going”) option, which is standard. This tells make not to stop on the first error, and to continue updating targets that are unaffected by the error. This is nice for fully populating Vim’s quickfix list or Emacs’ compilation buffer.
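
For example (the exact -j syntax varies a little between implementations):

# non-standard but widely supported: run up to four jobs in parallel
$ make -j4

# standard: report errors but keep building unaffected targets
$ make -k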

It’s common to have multiple targets that should be built by default. If the first rule selects the default target, how do we solve the problem of needing multiple default targets? The convention is to use phony targets. These are called “phony” because there is no corresponding file, and so phony targets are never up-to-date. It’s convention for a phony “all” target to be the default target.

I’ll make game a prerequisite of a new “all” target. More real targets can be added as prerequisites of “all” as the project grows. Users of this Makefile will also expect make all to build the entire project.

Another common phony target is “clean” which removes all of the built files. Users will expect make clean to delete all generated files.

.POSIX:
.SUFFIXES:
all: game
game: graphics.o physics.o input.o
    cc -o game graphics.o physics.o input.o
graphics.o: graphics.c graphics.h
    cc -c graphics.c
physics.o: physics.c physics.h
    cc -c physics.c
input.o: input.c input.h graphics.h physics.h
    cc -c input.c
clean:
    rm -f game graphics.o physics.o input.o

Customize the build with macros

So far the Makefile hardcodes cc as the compiler, and doesn’t use any compiler flags (warnings, optimization, hardening, etc.). The user should be able to easily control all these things, but right now they’d have to edit the entire Makefile to do so. Perhaps the user has both gcc and clang installed, and wants to choose one or the other without changing which is installed as cc.

To solve this, make has macros that expand into strings when referenced. The convention is to use the macro CC for the C compiler, CFLAGS for flags passed to the C compiler, LDFLAGS for flags passed to the C compiler when linking, and LDLIBS for library flags used when linking. The Makefile should supply defaults as needed.

A macro is expanded with $(...). It’s valid (and normal) to reference a macro that hasn’t been defined, which will be an empty string. This will be the case with LDFLAGS below.

Macro values can contain other macros, which will be expanded recursively each time the macro is expanded. Some make implementations allow the name of the macro being expanded to itself be a macro, which is Turing complete, but this behavior is non-standard.

.POSIX:
.SUFFIXES:
CC     = cc
CFLAGS = -W -O
LDLIBS = -lm

all: game
game: graphics.o physics.o input.o
    $(CC) $(LDFLAGS) -o game graphics.o physics.o input.o $(LDLIBS)
graphics.o: graphics.c graphics.h
    $(CC) -c $(CFLAGS) graphics.c
physics.o: physics.c physics.h
    $(CC) -c $(CFLAGS) physics.c
input.o: input.c input.h graphics.h physics.h
    $(CC) -c $(CFLAGS) input.c
clean:
    rm -f game graphics.o physics.o input.o

Macros are overridden by macro definitions given as command line arguments in the form name=value. This allows the user to select their own build configuration. This is one of make’s most powerful and under-appreciated features.

$ make CC=clang CFLAGS='-O3 -march=native'

If the user doesn’t want to specify these macros on every invocation, they can (cautiously) use make’s -e flag to set overriding macro definitions from the environment.

$ export CC=clang
$ export CFLAGS=-O3
$ make -e all

Some make implementations have other special kinds of macro assignment operators beyond simple assignment (=). These are unnecessary, so don’t worry about them.

Inference rules so that you can stop repeating yourself

The road itself tells us far more than signs do. ―Tom Vanderbilt, Traffic: Why We Drive the Way We Do

There’s repetition across the three different object files. Wouldn’t it be nice if there were a way to communicate this pattern? Fortunately there is, in the form of inference rules. An inference rule says that a target with a certain extension, built from a prerequisite with another certain extension, is produced a certain way. This will make more sense with an example.

In an inference rule, the target indicates the extensions. The $< macro expands to the prerequisite, which is essential to making inference rules work generically. Unfortunately this macro is not available in target rules, as much as that would be useful.

For example, here’s an inference rule that teaches make how to build an object file from a C source file. This particular rule is one that is pre-defined by make, so you’ll never need to write this one yourself. I’ll include it for completeness.

.c.o:
    $(CC) $(CFLAGS) -c $<

These extensions must be added to .SUFFIXES before they will work. With that, the commands for the rules about object files can be omitted.

.POSIX:
.SUFFIXES:
CC     = cc
CFLAGS = -W -O
LDLIBS = -lm

all: game
game: graphics.o physics.o input.o
    $(CC) $(LDFLAGS) -o game graphics.o physics.o input.o $(LDLIBS)
graphics.o: graphics.c graphics.h
physics.o: physics.c physics.h
input.o: input.c input.h graphics.h physics.h
clean:
    rm -f game graphics.o physics.o input.o

.SUFFIXES: .c .o
.c.o:
    $(CC) $(CFLAGS) -c $<

The first empty .SUFFIXES clears the suffix list. The second one adds .c and .o to the now-empty suffix list.

Other target conventions

Conventions are, indeed, all that shield us from the shivering void, though often they do so but poorly and desperately. ―Robert Aickman

Users usually expect an “install” target that installs the built program, libraries, man pages, etc. By convention this target should use the PREFIX and DESTDIR macros.

The PREFIX macro should default to /usr/local, and since it’s a macro the user can override it to install elsewhere, such as in their home directory. The user should override it for both building and installing, since the prefix may need to be built into the binary (e.g. -DPREFIX=$(PREFIX)).

The DESTDIR macro is used for staged builds, so that the program gets installed under a fake root directory for the sake of packaging. Unlike PREFIX, it will not actually be run from this directory.

.POSIX:
CC     = cc
CFLAGS = -W -O
LDLIBS = -lm
PREFIX = /usr/local

all: game
install: game
    mkdir -p $(DESTDIR)$(PREFIX)/bin
    mkdir -p $(DESTDIR)$(PREFIX)/share/man/man1
    cp -f game $(DESTDIR)$(PREFIX)/bin
    gzip < game.1 > $(DESTDIR)$(PREFIX)/share/man/man1/game.1.gz
game: graphics.o physics.o input.o
    $(CC) $(LDFLAGS) -o game graphics.o physics.o input.o $(LDLIBS)
graphics.o: graphics.c graphics.h
physics.o: physics.c physics.h
input.o: input.c input.h graphics.h physics.h
clean:
    rm -f game graphics.o physics.o input.o

For example, a user installing under their home directory:

$ make PREFIX=$HOME/.local install

You may also want to provide an “uninstall” phony target that does the opposite.
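
A minimal “uninstall” might look like this sketch, simply mirroring the install rule above:

uninstall:
    rm -f $(DESTDIR)$(PREFIX)/bin/game
    rm -f $(DESTDIR)$(PREFIX)/share/man/man1/game.1.gz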

Other common targets are “mostlyclean” (like “clean” but don’t delete some slow-to-build targets), “distclean” (delete even more than “clean”), “test” or “check” (run the test suite), and “dist” (create a package).

Complexity and growing pains

One of make’s big weak points is scaling up as a project grows in size.

Recursive Makefiles

As your growing project is broken into subdirectories, you may be tempted to put a Makefile in each subdirectory and invoke them recursively.

Don’t use recursive Makefiles. It breaks the dependency tree across separate instances of make and typically results in a fragile build. There’s nothing good about it. Have one Makefile at the root of your project and invoke make there. You may have to teach your text editor how to do this.

When talking about files in subdirectories, just include the subdirectory in the name. Everything will work the same as far as make is concerned, including inference rules.

src/graphics.o: src/graphics.c
src/physics.o: src/physics.c
src/input.o: src/input.c

Out-of-source builds

Keeping your object files separate from your source files is a nice idea. When it comes to make, there’s good news and bad news.

The good news is that make can do this. You can pick whatever file names you like for targets and prerequisites.

obj/input.o: src/input.c

The bad news is that inference rules are not compatible with out-of-source builds. You’ll need to repeat the same commands for each rule as if inference rules didn’t exist. This is tedious for large projects, so you may want to have some sort of “configure” script, even if hand-written, to generate all this for you. This is essentially what CMake is all about. That, plus dependency management.
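
For example, a sketch with hypothetical src/ and obj/ directories; every object file needs its recipe spelled out, and note that combining -c with -o is technically unspecified by POSIX, though it works everywhere in practice:

obj/input.o: src/input.c src/input.h src/graphics.h src/physics.h
    $(CC) $(CFLAGS) -c -o obj/input.o src/input.c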

Dependency management

Another problem with scaling up is tracking the project’s ever-changing dependencies across all the source files. Missing a dependency means the build may not be correct unless you make clean first.

If you go the route of using a script to generate the tedious parts of the Makefile, both GCC and Clang have a nice feature for generating all the Makefile dependencies for you (-MM, -MT), at least for C and C++. There are lots of tutorials for doing this dependency generation on the fly as part of the build, but it’s fragile and slow. Much better to do it all up front and “bake” the dependencies into the Makefile so that make can do its job properly. If the dependencies change, rebuild your Makefile.

For example, here’s what it looks like invoking gcc’s dependency generator against the imaginary input.c for an out-of-source build:

$ gcc $CFLAGS -MM -MT '$(BUILD)/input.o' input.c
$(BUILD)/input.o: input.c input.h graphics.h physics.h

Notice that the output is already in make’s rule format.

Unfortunately this feature strips the leading paths from the target, so, in practice, using it is always more complicated than it should be (e.g. it requires the use of -MT).

Microsoft’s Nmake

Microsoft has an implementation of make called Nmake, which comes with Visual Studio. It’s nearly a POSIX-compatible make, but necessarily breaks from the standard in some places. Their cl.exe compiler uses .obj as the object file extension and .exe for binaries, both of which differ from the unix world, so it has different built-in inference rules. Windows also lacks a Bourne shell and the standard unix tools, so all of the commands will necessarily be different.

There’s no equivalent of rm -f on Windows, so good luck writing a proper “clean” target. No, del /f isn’t the same.

So while it’s close to POSIX make, it’s not practical to write a Makefile that will simultaneously work properly with both POSIX make and Nmake. These need to be separate Makefiles.

May your Makefiles be portable

It’s nice to have reliable, portable Makefiles that just work anywhere. Code to the standards and you don’t need feature tests or other sorts of special treatment.

Introducing the Pokerware Secure Passphrase Generator

I recently developed Pokerware, an offline passphrase generator that operates in the same spirit as Diceware. The primary difference is that it uses a shuffled deck of playing cards as its entropy source rather than dice. Draw some cards and use them to select a uniformly random word from a list. Unless you’re some sort of tabletop gaming nerd, a deck of cards is more readily available than five 6-sided dice, which would typically need to be borrowed from the Monopoly board collecting dust on the shelf, then rolled two at a time.

There are various flavors of two different word lists, “formal” and “slang,” both described below and available in the Pokerware repository.

Hardware random number generators are difficult to verify and may not actually be as random as they promise, either intentionally or unintentionally. For the particularly paranoid, Diceware and Pokerware are an easily verifiable alternative for generating secure passphrases for cryptographic purposes. At any time, a deck of 52 playing cards is in one of 52! possible arrangements. That’s more than 225 bits of entropy. If you give your deck a thorough shuffle, it will be in an arrangement that has never been seen before and will never be seen again. Pokerware draws on some of these bits to generate passphrases.

The Pokerware list has 5,304 words (12.4 bits per word), compared to Diceware’s 7,776 words (12.9 bits per word). My goal was to invent a card-drawing scheme that would uniformly select from a list in roughly the same ballpark as Diceware’s. Much smaller and you’d have to memorize more words for the same passphrase strength. Much larger and the words on the list would be more difficult to memorize, since the list would contain longer and less frequently used words. Diceware strikes a nice balance at five dice.
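
Those per-word figures are just the base-2 logarithm of the list size: log2(5,304) ≈ 12.4 and log2(7,776) ≈ 12.9.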

One important difference for me is that I like my Pokerware word lists a lot more than the two official Diceware lists. My lists only have simple, easy-to-remember words (for American English speakers, at least), without any numbers or other short non-words. Pokerware has two official lists, “formal” and “slang,” since my early testers couldn’t agree on which was better. Rather than make a difficult decision, I took the usual route of making no decision at all.

The “formal” list is derived in part from Google’s Ngram Viewer, with my own additional filters and tweaking. It’s called “formal” because the ngrams come from formal publications and represent more formal kinds of speech.

The “slang” list is derived from every reddit comment between December 2005 and May 2017, tamed by the same additional filters. I have this data on hand, so I may as well put it to use. I figured more casually-used words would be easier to remember. Due to my extra filtering, there’s actually a lot of overlap between these lists, so the differences aren’t too significant.

If you have your own word list, perhaps in a different language, you can use the Makefile in the repository to build your own Pokerware lookup table, both plain text and PDF. The PDF is generated using Groff macros.

Passphrase generation instructions

  1. Thoroughly shuffle the deck.

  2. Draw two cards. Sort them by value, then suit. Suits are in alphabetical order: Clubs, Diamonds, Hearts, Spades.

  3. Draw additional cards until you get a card that doesn’t match the face value of either of your initial two cards. Observe its suit.

  4. Using your two cards and observed suit, look up a word in the table.

  5. Place all cards back in the deck, shuffle, and repeat from step 2 until you have the desired number of words. Each word is worth 12.4 bits of entropy.

A word of warning about step 4: If you use software to do the word list lookup, beware that it might save your search/command history — and therefore your passphrase — to a file. For example, the less pager will store search history in ~/.lesshst. It’s easy to prevent that one:

$ LESSHISTFILE=- less pokerware-slang.txt

Example word generation

Suppose in step 2 you draw King of Hearts (KH/K♥) and Queen of Clubs (QC/Q♣).

In step 3 you first draw King of Diamonds (KD/K♦), discarding it because it matches the face value of one of your cards from step 2.

Next you draw Four of Spades (4S/4♠), taking spades as your extra suit.

In order, this gives you Queen of Clubs, King of Hearts, and Spades: QCKHS or Q♣K♥♠. This corresponds to “wizard” in the formal word list and would be the first word in your passphrase.

A deck of cards as an office tool

I now have an excuse to keep a deck of cards out on my desk at work. I’ve been using Diceware — or something approximating it, since I’m not so paranoid about hardware RNGs — for passwords for over 8 years now. From now on I’ll deal new passwords from an in-reach deck of cards, though typically I need to tweak the results to meet outdated character-composition requirements.

Integer Overflow into Information Disclosure

Last week I was discussing CVE-2017-7529 with my intern. Specially crafted input to Nginx causes an integer overflow which has the potential to leak sensitive information. But how could an integer overflow be abused to trick a program into leaking information? To answer this question, I put together the simplest practical example I could imagine.

This small C program converts a vector image from a custom format (described below) into a Netpbm image, a conveniently simple format. The program defensively and carefully parses its input, but still makes a subtle, fatal mistake. This mistake not only leads to sensitive information disclosure, but, with a more sophisticated attack, could be used to execute arbitrary code.

After getting the hang of the interface for the program, I encourage you to take some time to work out an exploit yourself. Regardless, I’ll reveal a functioning exploit and explain how it works.

A new vector format

The input format is line-oriented and very similar to Netpbm itself. The first line is the header, starting with the magic number V2 (ASCII) followed by the image dimensions. The target output format is Netpbm’s “P2” (text gray scale) format, so the “V2” parallels it. The file must end with a newline.

V2 <width> <height>

What follows is drawing commands, one per line. For example, the s command sets the value of a particular pixel.

s <x> <y> <00–ff>

Since it’s not important for the demonstration, this is the only command I implemented. It’s easy to imagine additional commands to draw lines, circles, Bezier curves, etc.

Here’s an example (example.txt) that draws a single white point in the middle of the image:

V2 256 256
s 127 127 ff

The rendering tool reads from standard input and writes to standard output:

$ render < example.txt > example.pgm

Here’s what it looks like rendered:

However, you will notice that when you run the rendering tool, it prompts you for a username and password. This is silly, of course, but it’s an excuse to get “sensitive” information into memory. It will accept any username/password combination where the username and password don’t match each other. The key is this: it’s possible to craft a valid image that leaks the entered password.

Tour of the implementation

Without spoiling anything yet, let’s look at how this program works. The first thing to notice is that I’m using a custom “obstack” allocator instead of malloc() and free(). Real-world allocators have some defenses against this particular vulnerability. Plus a specific exploit would have to target a specific libc. By using my own allocator, the exploit will mostly be portable, making for a better and easier demonstration.

The allocator interface should be pretty self-explanatory, except for two details. This is an obstack allocator, so freeing an object also frees every object allocated after it. Also, it doesn’t call malloc() in the background. At initialization you give it a buffer from which to allocate all memory.

struct mstack {
    char *top;    /* next free byte */
    char *max;    /* end of the backing buffer */
    char buf[];   /* memory handed out by mstack_alloc() */
};

struct mstack *mstack_init(void *, size_t);
void          *mstack_alloc(struct mstack *, size_t);
void           mstack_free(struct mstack *, void *);

There are no vulnerabilities in these functions (I hope!). It’s just here for predictability.

Next here’s the “authentication” function. It reads a username and password combination from /dev/tty. It’s only an excuse to get a flag in memory for this capture-the-flag game. The username and password must be less than 32 characters each.

int
authenticate(struct mstack *m)
{
    FILE *tty = fopen("/dev/tty", "r+");
    if (!tty) {
        perror("/dev/tty");
        return 0;
    }

    char *user = mstack_alloc(m, 32);
    if (!user) {
        fclose(tty);
        return 0;
    }
    fputs("User: ", tty);
    fflush(tty);
    if (!fgets(user, 32, tty))
        user[0] = 0;

    char *pass = mstack_alloc(m, 32);
    int result = 0;
    if (pass) {
        fputs("Password: ", tty);
        fflush(tty);
        if (fgets(pass, 32, tty))
            result = strcmp(user, pass) != 0;
    }

    fclose(tty);
    mstack_free(m, user);
    return result;
}

Next here’s a little version of calloc() for the custom allocator. Hmm, I wonder why this is called “naive” …

void *
naive_calloc(struct mstack *m, unsigned long nmemb, unsigned long size)
{
    void *p = mstack_alloc(m, nmemb * size);
    if (p)
        memset(p, 0, nmemb * size);
    return p;
}

Next up is a paranoid wrapper for strtoul() that defensively checks its inputs. If it’s out of range of an unsigned long, it bails out. If there’s trailing garbage, it bails out. If there’s no number at all, it bails out. If you make prolonged eye contact, it bails out.

unsigned long
safe_strtoul(char *nptr, char **endptr, int base)
{
    errno = 0;
    unsigned long n = strtoul(nptr, endptr, base);
    if (errno) {
        perror(nptr);
        exit(EXIT_FAILURE);
    } else if (nptr == *endptr) {
        fprintf(stderr, "Expected an integer\n");
        exit(EXIT_FAILURE);
    } else if (!isspace(**endptr)) {
        fprintf(stderr, "Invalid character '%c'\n", **endptr);
        exit(EXIT_FAILURE);
    }
    return n;
}

The main() function parses the header using this wrapper and allocates some zeroed memory:

    unsigned long width = safe_strtoul(p, &p, 10);
    unsigned long height = safe_strtoul(p, &p, 10);
    unsigned char *pixels = naive_calloc(m, width, height);
    if (!pixels) {
        fputs("Not enough memory\n", stderr);
        exit(EXIT_FAILURE);
    }

Then there’s a command processing loop, also using safe_strtoul(). It carefully checks bounds against width and height. Finally it writes the image out in Netpbm P2 (.pgm) format.

    printf("P2\n%ld %ld 255\n", width, height);
    for (unsigned long y = 0; y < height; y++) {
        for (unsigned long x = 0; x < width; x++)
            printf("%d ", pixels[y * width + x]);
        putchar('\n');
    }

The vulnerability is in something I’ve shown above. Can you find it?

Exploiting the renderer

Did you find it? If you’re on a platform with 64-bit long, here’s your exploit:

V2 16 1152921504606846977

And here’s an exploit for 32-bit long:

V2 16 268435457

Here’s how it looks in action. The most obvious result is that the program crashes:

$ echo V2 16 1152921504606846977 | ./mstack > capture.txt
User: coolguy
Password: mysecret
Segmentation fault

Here are the initial contents of capture.txt:

P2
16 1152921504606846977 255
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
109 121 115 101 99 114 101 116 10 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Where did those junk numbers come from in the image data? Plug them into an ASCII table and you’ll get “mysecret”. Despite allocating the image with naive_calloc(), the password has found its way into the image! How could this be?

What happened is that width * height overflows an unsigned long. (Well, technically speaking, unsigned integers are defined not to overflow in C, wrapping around instead, but it’s really the same thing.) In naive_calloc(), the overflow results in a value of 16, so it only allocates and clears 16 bytes. The requested allocation “succeeds” despite far exceeding the available memory. The caller has been given a lot less memory than expected, and the memory believed to have been allocated contains a password.
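
Concretely, the 64-bit exploit height is 2^60 + 1, so width * height is 16 * (2^60 + 1) = 2^64 + 16, which wraps around to just 16 in a 64-bit unsigned long. The 32-bit exploit uses 2^28 + 1 for the same effect.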

The final part that writes the output never computes width * height as a single product, so there is no overflow to check for there. It just uses nested loops, marching along with the original, impossible image dimensions.

How do we fix this? Add an overflow check at the beginning of the naive_calloc() function (making it no longer naive). This is what the real calloc() does.

    /* reject the request if nmemb * size would overflow */
    if (nmemb && size > -1UL / nmemb)
        return 0;

The frightening takeaway is that this check is very easy to forget. It’s a subtle bug with potentially disastrous consequences.

In practice, this sort of program wouldn’t have sensitive data resident in memory. Instead an attacker would target the program’s stack with those s commands — specifically the return pointers — and perform a ROP attack against the application. With the exploit header above and a platform where long is the same size as size_t, the program will behave as if all available memory has been allocated to the image, so the s command could be used to poke custom values anywhere in memory. This is a much more complicated exploit, and it has to contend with ASLR and a random stack gap, but it’s feasible.

Rolling Shutter Simulation in C

The most recent Smarter Every Day (#172) explains a phenomenon that results from rolling shutter. You’ve likely seen this effect in some of your own digital photographs. When a CMOS digital camera captures a picture, it reads one row of the sensor at a time. If the subject of the picture is a fast-moving object (relative to the camera), then the subject will change significantly while the image is being captured, giving strange, unreal results:

In the Smarter Every Day video, Destin illustrates the effect by simulating rolling shutter using a short video clip. In each frame of the video, a few additional rows are locked in place, showing the effect in slow motion, making it easier to understand.

At the end of the video he thanks a friend for figuring out how to get After Effects to simulate rolling shutter. After thinking about this for a moment, I figured I could easily accomplish this myself with just a bit of C, without any libraries. The video above this paragraph is the result.

I previously described a technique to edit and manipulate video without any formal video editing tools. A unix pipeline is sufficient for doing minor video editing, especially without sound. The program at the front of the pipe decodes the video into a raw, uncompressed format, such as YUV4MPEG or PPM. The tools in the middle losslessly manipulate this data to achieve the desired effect (watermark, scaling, etc.). Finally, the tool at the end encodes the video into a standard format.

$ decode video.mp4 | xform-a | xform-b | encode out.mp4

For the “decode” program I’ll be using ffmpeg now that it’s back in the Debian repositories. You can throw a video in virtually any format at it and it will write PPM frames to standard output. For the encoder I’ll be using the x264 command line program, though ffmpeg could handle this part as well. Without any filters in the middle, this example will just re-encode a video:

$ ffmpeg -i input.mp4 -f image2pipe -vcodec ppm pipe:1 | \
    x264 -o output.mp4 /dev/stdin

The filter tools in the middle only need to read and write in the raw image format. They’re a little bit like shaders, and they’re easy to write. In this case, I’ll write a C program that simulates rolling shutter. The filter could be written in any language that can read and write binary data from standard input to standard output.

Update: It appears that input PPM streams are a rather recent feature of libavformat (a.k.a lavf, used by x264). Support for PPM input first appeared in libavformat 3.1 (released June 26th, 2016). If you’re using an older version of libavformat, you’ll need to stick ppmtoy4m in front of x264 in the processing pipeline.

$ ffmpeg -i input.mp4 -f image2pipe -vcodec ppm pipe:1 | \
    ppmtoy4m | \
    x264 -o output.mp4 /dev/stdin

Video filtering in C

In the past, my go-to for raw video data has been loose PPM frames and YUV4MPEG streams (via ppmtoy4m). Fortunately, over the years a lot of tools have gained the ability to manipulate streams of PPM images, which is a much more convenient format. Despite being raw video data, YUV4MPEG is still a fairly complex format with lots of options and annoying colorspace concerns. PPM is simple RGB without complications. The header is just text:

P6
<width> <height>
<maxdepth>
<width * height * 3 binary RGB data>

The maximum depth is virtually always 255. A smaller value reduces the image’s dynamic range without reducing the size. A larger value involves byte-order (endianness) issues. For video frame data, the file will typically look like:

P6
1920 1080
255
<frame RGB>

Unfortunately the format is actually a little more flexible than this. Except for the new line (LF, 0x0A) after the maximum depth, the whitespace is arbitrary and comments starting with # are permitted. Since the tools I’m using won’t produce comments, I’m going to ignore that detail. I’ll also assume the maximum depth is always 255.

Here’s the structure I used to represent a PPM image, just one frame of video. I’m using a flexible array member to pack the data at the end of the structure.

struct frame {
    size_t width;
    size_t height;
    unsigned char data[];
};

Next a function to allocate a frame:

static struct frame *
frame_create(size_t width, size_t height)
{
    struct frame *f = malloc(sizeof(*f) + width * height * 3);
    f->width = width;
    f->height = height;
    return f;
}

We’ll need a way to write the frames we’ve created.

static void
frame_write(struct frame *f)
{
    printf("P6\n%zu %zu\n255\n", f->width, f->height);
    fwrite(f->data, f->width * f->height, 3, stdout);
}

Finally, a function to read a frame, reusing an existing buffer if possible. The most complex part of the whole program is just parsing the PPM header. The %*c in the scanf() specifically consumes the line feed immediately following the maximum depth.

static struct frame *
frame_read(struct frame *f)
{
    size_t width, height;
    if (scanf("P6 %zu%zu%*d%*c", &width, &height) < 2) {
        free(f);
        return 0;
    }
    if (!f || f->width != width || f->height != height) {
        free(f);
        f = frame_create(width, height);
    }
    fread(f->data, width * height, 3, stdin);
    return f;
}

Since this program will only be part of a pipeline, I’m not worried about checking the results of fwrite() and fread(). The process will be killed by the shell if something goes wrong with the pipes. However, if we’re out of video data and get an EOF, scanf() will fail, indicating the EOF, which is normal and can be handled cleanly.

An identity filter

That’s all the infrastructure we need to build an identity filter that passes frames through unchanged:

int main(void)
{
    struct frame *frame = 0;
    while ((frame = frame_read(frame)))
        frame_write(frame);
}

Processing a frame is just a matter of adding some stuff to the body of the while loop.
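
For example, a sketch of a trivial filter that inverts every frame’s colors, dropped into that same loop:

    while ((frame = frame_read(frame))) {
        size_t nbytes = frame->width * frame->height * 3;  /* 3 bytes per pixel */
        for (size_t i = 0; i < nbytes; i++)
            frame->data[i] = 255 - frame->data[i];         /* invert each channel */
        frame_write(frame);
    }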

A rolling shutter filter

For the rolling shutter filter, in addition to the input frame we need an image to hold the result of the rolling shutter. Each input frame will be copied into the rolling shutter frame, but a little less will be copied from each frame, locking a little bit more of the image in place.

int
main(void)
{
    int shutter_step = 3;
    size_t shutter = 0;
    struct frame *f = frame_read(0);
    struct frame *out = frame_create(f->width, f->height);
    while (shutter < f->height && (f = frame_read(f))) {
        size_t offset = shutter * f->width * 3;
        size_t length = f->height * f->width * 3 - offset;
        memcpy(out->data + offset, f->data + offset, length);
        frame_write(out);
        shutter += shutter_step;
    }
    free(out);
    free(f);
}

The shutter_step controls how many rows are captured per frame of video. Generally capturing one row per frame is too slow for the simulation. For a 1080p video, that’s 1,080 frames for the entire simulation: 18 seconds at 60 FPS or 36 seconds at 30 FPS. If this program were to accept command line arguments, controlling the shutter rate would be one of the options.

Putting it all together:

$ ffmpeg -i input.mp4 -f image2pipe -vcodec ppm pipe:1 | \
    ./rolling-shutter | \
    x264 -o output.mp4 /dev/stdin

Here are some of the results for different shutter rates: 1, 3, 5, 8, 10, and 15 rows per frame. Feel free to right-click and “View Video” to see the full resolution video.

Source and original input

This post contains the full source in parts, but here it is all together:

Here’s the original video, filmed by my wife using her Nikon D5500, in case you want to try it for yourself:

It took much longer to figure out the string-pulling contraption to slowly spin the fan at a constant rate than it took to write the C filter program.

On Hacker News, morecoffee shared a video of the second order effect, where the rolling shutter speed changes over time.

A deeper analysis of rolling shutter: Playing detective with rolling shutter photos.

Stack Clashing for Fun and Profit

Stack clashing has been in the news lately due to some recently discovered vulnerabilities, along with proof-of-concept exploits. As the announcement itself notes, this is not a new issue, though this appears to be the first time it’s been given this particular name. I do know of one “good” use of stack clashing, where it’s used for something productive rather than as part of an attack. In this article I’ll explain how it works.

You can find the complete code for this article here, ready to run:

But first, what is a stack clash? Here’s a rough picture of the typical way process memory is laid out. The stack starts at a high memory address and grows downwards. Code and static data sit at low memory, with a brk pointer growing upward to make small allocations. In the middle is the heap, where large allocations and memory mappings take place.

Below the stack is a slim guard page that divides the stack and the region of memory reserved for the heap. Reading or writing to that memory will trap, causing the program to crash or some special action to be taken. The goal is to prevent the stack from growing into the heap, which could cause all sorts of trouble, like security issues.

The problem is that this thin guard page isn’t enough. It’s possible to put a large allocation on the stack, never read or write to it, and completely skip over the guard page, such that the heap and stack overlap without detection.

Once this happens, writes into the heap will change memory on the stack and vice versa. If an attacker can cause the program to make such a large allocation on the stack, then legitimate writes into memory on the heap can manipulate local variables or return pointers, changing the program’s control flow. This can bypass buffer overflow protections, such as stack canaries.

Binary trees and coroutines

Now, I’m going to abruptly change topics to discuss binary search trees. We’ll get back to stack clash in a bit. Suppose we have a binary tree which we would like to iterate depth-first. For this demonstration, here’s the C interface to the binary tree.

struct tree {
    struct tree *left;
    struct tree *right;
    char *key;
    char *value;
};

void  tree_insert(struct tree **, char *k, char *v);
char *tree_find(struct tree *, char *k);
void  tree_visit(struct tree *, void (*f)(char *, char *));
void  tree_destroy(struct tree *);

An empty tree is the NULL pointer, hence the double-pointer for insert. In the demonstration it’s an unbalanced search tree, but this could very well be a balanced search tree with the addition of another field on the structure.

For the traversal, first visit the root node, then traverse its left tree, and finally traverse its right tree. It makes for a simple, recursive definition — the sort of thing you’d teach a beginner. Here’s a definition that accepts a callback, which the caller will use to visit each key/value in the tree. This really is as simple as it gets.

void
tree_visit(struct tree *t, void (*f)(char *, char *))
{
    if (t) {
        f(t->key, t->value);
        tree_visit(t->left, f);
        tree_visit(t->right, f);
    }
}

Unfortunately this isn’t so convenient for the caller, who has to split off a callback function that lacks context, then hand over control to the traversal function.

void
printer(char *k, char *v)
{
    printf("%s = %s\n", k, v);
}

void
print_tree(struct tree *tree)
{
    tree_visit(tree, printer);
}

Usually it’s much nicer for the caller if instead it’s provided an iterator, which the caller can invoke at will. Here’s an interface for it, just two functions.

struct tree_it *tree_iterator(struct tree *);
int             tree_next(struct tree_it *, char **k, char **v);

The first constructs an iterator object, and the second one visits a key/value pair each time it’s called. It returns 0 when traversal is complete, automatically freeing any resources associated with the iterator.

The caller now looks like this:

    char *k, *v;
    struct tree_it *it = tree_iterator(tree);
    while (tree_next(it, &k, &v))
        printf("%s = %s\n", k, v);

Notice I haven’t defined struct tree_it. That’s because I’ve got four different implementations, each taking a different approach. The last one will use stack clashing.

Manual State Tracking

With just the standard facilities provided by C, there’s some manual bookkeeping that has to take place in order to convert the recursive definition into an iterator. Depth-first traversal is a stack-oriented process, and with recursion the stack is implicit in the call stack. As an iterator, the traversal stack needs to be managed explicitly. The iterator needs to keep track of the path it took so that it can backtrack, which means keeping track of parent nodes as well as which branch was taken.

Here’s my little implementation, which, to keep things simple, has a hard depth limit of 32. Its structure definition includes a stack of node pointers, and 2 bits of information per visited node, stored across a 64-bit integer.

struct tree_it {
    struct tree *stack[32];
    unsigned long long state;
    int nstack;
};

struct tree_it *
tree_iterator(struct tree *t)
{
    struct tree_it *it = malloc(sizeof(*it));
    it->stack[0] = t;
    it->state = 0;
    it->nstack = 1;
    return it;
}

The 2 bits track three different states for each visited node:

  1. Visit the current node
  2. Traverse the left tree
  3. Traverse the right tree

It works out to the following. Don’t worry too much about trying to understand how this works. My point is to demonstrate that converting the recursive definition into an iterator complicates the implementation.

int
tree_next(struct tree_it *it, char **k, char **v)
{
    while (it->nstack) {
        int shift = (it->nstack - 1) * 2;
        int state = 3u & (it->state >> shift);
        struct tree *t = it->stack[it->nstack - 1];
        it->state += 1ull << shift;
        switch (state) {
            case 0:
                *k = t->key;
                *v = t->value;
                if (t->left) {
                    it->stack[it->nstack++] = t->left;
                    it->state &= ~(3ull << (shift + 2));
                }
                return 1;
            case 1:
                if (t->right) {
                    it->stack[it->nstack++] = t->right;
                    it->state &= ~(3ull << (shift + 2));
                }
                break;
            case 2:
                it->nstack--;
                break;
        }
    }
    free(it);
    return 0;
}

Wouldn’t it be nice to keep both the recursive definition while also getting an iterator? There’s an exact solution to that: coroutines.

Coroutines

C doesn’t come with coroutines, but there are a number of libraries available. We can also build our own coroutines. One way to do that is with user contexts (<ucontext.h>) provided by the X/Open System Interfaces Extension (XSI), an extension to POSIX. This set of functions allows programs to create their own call stacks and switch between them. That’s the key ingredient for coroutines. Caveat: these functions aren’t widely available, and probably shouldn’t be used in new code.

Here’s my iterator structure definition.

#define _XOPEN_SOURCE 600
#include <ucontext.h>

struct tree_it {
    char *k;
    char *v;
    ucontext_t coroutine;
    ucontext_t yield;
};

It needs one context for the original stack and one context for the iterator’s stack. Each time the iterator is invoked, the program will switch to the other stack, find the next value, then switch back. This process is called yielding. Values are passed between contexts using the k (key) and v (value) fields on the iterator.

Before I get into initialization, here’s the actual traversal coroutine. It’s nearly the same as the original recursive definition except for the swapcontext(). This is the yield, pausing execution and sending control back to the caller. The current context is saved in the first argument, and the second argument becomes the current context.

static void
coroutine(struct tree *t, struct tree_it *it)
{
    if (t) {
        it->k = t->key;
        it->v = t->value;
        swapcontext(&it->coroutine, &it->yield);
        coroutine(t->left, it);
        coroutine(t->right, it);
    }
}

While the actual traversal is simple again, initialization is more complicated. The first problem is that there’s no way to pass pointer arguments to the coroutine. Technically only int arguments are permitted. (All the online tutorials get this wrong.) To work around this problem, I smuggle the arguments in as global variables. This would cause problems should two different threads try to create iterators at the same time, even on different trees.

static struct tree *tree_arg;
static struct tree_it *tree_it_arg;

static void
coroutine_init(void)
{
    coroutine(tree_arg, tree_it_arg);
}

The stack has to be allocated manually, which I do with a call to malloc(). Nothing fancy is needed, though this means the new stack won’t have a guard page. For the stack size, I use the suggested value of SIGSTKSZ. The makecontext() function is what creates the new context from scratch, but the new context must first be initialized with getcontext(), even though that particular snapshot won’t actually be used.

struct tree_it *
tree_iterator(struct tree *t)
{
    struct tree_it *it = malloc(sizeof(*it));
    it->coroutine.uc_stack.ss_sp = malloc(SIGSTKSZ);
    it->coroutine.uc_stack.ss_size = SIGSTKSZ;
    it->coroutine.uc_link = &it->yield;
    getcontext(&it->coroutine);
    makecontext(&it->coroutine, coroutine_init, 0);
    tree_arg = t;
    tree_it_arg = it;
    return it;
}

Notice I gave it a function pointer, a lot like I’m starting a new thread. This is no coincidence. There’s a lot of similarity between coroutines and multiple threads, as you’ll soon see.

Finally the iterator function itself. Since NULL isn’t a valid key, it initializes the key to NULL before yielding to the iterator context. If the iterator has no more nodes to visit, it doesn’t set the key, which can be detected when control returns.

int
tree_next(struct tree_it *it, char **k, char **v)
{
    it->k = 0;
    swapcontext(&it->yield, &it->coroutine);
    if (it->k) {
        *k = it->k;
        *v = it->v;
        return 1;
    } else {
        free(it->coroutine.uc_stack.ss_sp);
        free(it);
        return 0;
    }
}

That’s all it takes to create and operate a coroutine in C, provided you’re on a system with these XSI extensions.

Semaphores

Instead of a coroutine, we could just use actual threads and a couple of semaphores to synchronize them. This is a heavy implementation and also probably shouldn’t be used in practice, but at least it’s fully portable.

Here’s the structure definition:

struct tree_it {
    struct tree *t;
    char *k;
    char *v;
    sem_t visitor;
    sem_t main;
    pthread_t thread;
};

The main thread will wait on one semaphore and the iterator thread will wait on the other. This should sound very familiar.

The actual traversal function looks the same, but with sem_post() and sem_wait() as the yield.

static void
visit(struct tree *t, struct tree_it *it)
{
    if (t) {
        it->k = t->key;
        it->v = t->value;
        sem_post(&it->main);
        sem_wait(&it->visitor);
        visit(t->left, it);
        visit(t->right, it);
    }
}

Again there’s a separate function to initialize the iterator context, this time doubling as the thread’s entry point.

static void *
thread_entrance(void *arg)
{
    struct tree_it *it = arg;
    sem_wait(&it->visitor);
    visit(it->t, it);
    sem_post(&it->main);
    return 0;
}

Creating the iterator only requires initializing the semaphores and creating the thread:

struct tree_it *
tree_iterator(struct tree *t)
{
    struct tree_it *it = malloc(sizeof(*it));
    it->t = t;
    sem_init(&it->visitor, 0, 0);
    sem_init(&it->main, 0, 0);
    pthread_create(&it->thread, 0, thread_entrance, it);
    return it;
}

The iterator function looks just like the coroutine version.

int
tree_next(struct tree_it *it, char **k, char **v)
{
    it->k = 0;
    sem_post(&it->visitor);
    sem_wait(&it->main);
    if (it->k) {
        *k = it->k;
        *v = it->v;
        return 1;
    } else {
        pthread_join(it->thread, 0);
        sem_destroy(&it->main);
        sem_destroy(&it->visitor);
        free(it);
        return 0;
    }
}

Overall, this is almost identical to the coroutine version.

Coroutines using stack clashing

Finally I can tie this back into the topic at hand. Without either XSI extensions or Pthreads, we can (usually) create coroutines by abusing setjmp() and longjmp(). Technically this violates two of C’s rules and relies on undefined behavior, but it generally works. This is not my own invention, and it dates back to at least 2010.

From the very beginning, C has provided a crude “exception” mechanism that allows the stack to be abruptly unwound back to a previous state. It’s a sort of non-local goto. Call setjmp() to capture an opaque jmp_buf object to be used in the future. This function returns 0 the first time. Hand that jmp_buf to longjmp() later, even in a different function, and setjmp() will return again, this time with a non-zero value.
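As a quick illustration (this example isn’t from the tree iterator, just a minimal sketch of the mechanism):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void
unwind(void)
{
    longjmp(env, 42); /* never returns; setjmp() returns 42 instead */
}

int
main(void)
{
    if (setjmp(env) == 0) {
        puts("first return from setjmp()");
        unwind(); /* control never comes back here */
    }
    puts("second return from setjmp(), via longjmp()");
    return 0;
}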

It’s technically unsuitable for coroutines because the jump is a one-way trip. The unwound stack invalidates any jmp_buf that was created after the target of the jump. In practice, though, these jumps still work, and that’s the first of the two rules I’m breaking.

That’s where stack clashing comes into play. A proper coroutine needs its own stack, but how can we get one with these primitive C utilities? Extend the stack until it overlaps the heap, call setjmp() to capture a coroutine on it, then return. Generally we can get away with using longjmp() to return to this heap-allocated stack.

Here’s my iterator definition for this version. Like the XSI context struct, it has two jmp_buf “contexts.” The stack field holds the iterator’s stack buffer so that it can be freed later, and the gap field will be used to prevent the optimizer from spoiling our plans.

struct tree_it {
    char *k;
    char *v;
    char *stack;
    volatile char *gap;
    jmp_buf coroutine;
    jmp_buf yield;
};

The coroutine looks familiar again. This time the yield is performed with setjmp() and longjmp(), standing in for swapcontext(). Remember that setjmp() returns twice, hence the branch. The longjmp() never returns.

static void
coroutine(struct tree *t, struct tree_it *it)
{
    if (t) {
        it->k = t->key;
        it->v = t->value;
        if (!setjmp(it->coroutine))
            longjmp(it->yield, 1);
        coroutine(t->left, it);
        coroutine(t->right, it);
    }
}

Next is the tricky part: causing the stack clash. First, allocate the new stack with malloc() so that we can get its address. Then use a local variable on the stack to determine how much the stack needs to grow in order to overlap with the allocation. Taking the difference between these pointers is illegal as far as the language is concerned, making this the second rule I’m breaking. I can imagine an implementation where the stack and heap live in two separate kinds of memory, where taking the difference would be meaningless. I don’t have to imagine very hard, because that’s how it actually worked on the 8086 with its segmented memory architecture.

struct tree_it *
tree_iterator(struct tree *t)
{
    struct tree_it *it = malloc(sizeof(*it));
    it->stack = malloc(STACK_SIZE);
    char marker;
    char gap[&marker - it->stack - STACK_SIZE];
    it->gap = gap; // prevent optimization
    if (!setjmp(it->yield))
        coroutine_init(t, it);
    return it;
}

I’m using a variable-length array (VLA) named gap to indirectly control the stack pointer, moving it over the heap. I’m assuming the stack grows downward, since otherwise the sign would be wrong.

The compiler is smart and will notice I’m not actually using gap, and it’s happy to throw it away. In fact, it’s vitally important that I don’t touch it since the guard page, along with a bunch of unmapped memory, is actually somewhere in the middle of that array. I only want the array for its side effect, but that side effect isn’t officially supported, which means the optimizer doesn’t need to consider it in its decisions. To inhibit the optimizer, I store the array’s address where someone might potentially look at it, meaning the array has to exist.
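One piece not shown here is coroutine_init() for this version (the earlier one belonged to the XSI variant). It has two jobs: capture the coroutine’s starting context and hand control straight back so that tree_iterator() can return without producing a value, and later, when the traversal finishes, perform one final jump in place of what uc_link did before. A minimal sketch (my reconstruction, not necessarily the original) might look like this:

static void
coroutine_init(struct tree *t, struct tree_it *it)
{
    /* Capture the coroutine's starting point on the new stack, then
     * immediately yield so tree_iterator() can return without
     * producing a value. */
    if (!setjmp(it->coroutine))
        longjmp(it->yield, 1);
    /* The first tree_next() resumes here and runs the traversal. */
    coroutine(t, it);
    /* Traversal finished with it->k still NULL, so one last jump back
     * tells the caller there's nothing left. Returning normally isn't
     * an option: the frame that called this function disappeared when
     * tree_iterator() returned. */
    longjmp(it->yield, 1);
}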

Finally, the iterator function looks just like the other two.

int
tree_next(struct tree_it *it, char **k, char **v)
{
    it->k = 0;
    if (!setjmp(it->yield))
        longjmp(it->coroutine, 1);
    if (it->k) {
        *k = it->k;
        *v = it->v;
        return 1;
    } else {
        free(it->stack);
        free(it);
        return 0;
    }
}

And that’s it: a nasty hack using a stack clash to create a context for a setjmp()+longjmp() coroutine.

Building and Installing Software in $HOME

For more than 5 years now I’ve kept a private “root” filesystem within my home directory under $HOME/.local/. Within are the standard /usr directories, such as bin/, include/, lib/, etc., containing my own software, libraries, and man pages. These are first-class citizens, indistinguishable from the system-installed programs and libraries. With one exception (setuid programs), none of this requires root privileges.

Installing software in $HOME serves two important purposes, both of which are indispensable to me on a regular basis.

The first is coping with systems where I don’t have root access and therefore can’t install packages through the system’s package manager myself. Building and installing the software in my home directory, without involvement from the system administrator, neatly works around this issue. As a software developer, it’s already perfectly normal for me to build and run custom software, and this is just an extension of that behavior.

In the most desperate situation, all I need from the sysadmin is a decent C compiler and at least a minimal POSIX environment. I can bootstrap anything I might need, both libraries and programs, including a better C compiler along the way. This is one major strength of open source software.

I have noticed one alarming trend: Both GCC (since 4.8) and Clang are written in C++, so it’s becoming less and less reasonable to bootstrap a C++ compiler from a C compiler, or even from a C++ compiler that’s more than a few years old. So you may also need your sysadmin to supply a fairly recent C++ compiler if you want to bootstrap an environment that includes C++. I’ve had to avoid some C++ software (such as CMake) for this reason.

In theory this is what /usr/local is all about. It’s typically the location for software not managed by the system’s package manager. However, I think it’s cleaner to put this in $HOME/.local, so long as other system users don’t need it.

For example, I have an installation of every version of Emacs from 24.3 (the oldest version worth supporting) through the latest stable release, each suffixed with its version number, under $HOME/.local. This is useful for quickly running a test suite under different releases.

$ git clone https://github.com/skeeto/elfeed
$ cd elfeed/
$ make EMACS=emacs24.3 clean test
...
$ make EMACS=emacs25.2 clean test
...

Another example is NetHack, which I prefer to play with a couple of custom patches (Menucolors, wchar). The install to $HOME/.local is also captured as a patch.

$ tar xzf nethack-343-src.tar.gz
$ cd nethack-3.4.3/
$ patch -p1 < ~/nh343-menucolor.diff
$ patch -p1 < ~/nh343-wchar.diff
$ patch -p1 < ~/nh343-home-install.diff
$ sh sys/unix/setup.sh
$ make -j$(nproc) install

Normally NetHack wants to be setuid (e.g. run as the “games” user) in order to restrict access to high scores, saves, and bones — saved levels where a player died, to be inserted randomly into other players’ games. This prevents cheating, but requires root to set up. Fortunately, when I install NetHack in my home directory, this isn’t a feature I actually care about, so I can ignore it.

Mutt is in a similar situation, since it wants to install a special setgid program (mutt_dotlock) that synchronizes mailbox access. Any MUA that writes to a shared mbox spool needs something like this.

Everything described below is relevant to basically any modern unix-like system: Linux, BSD, etc. I personally install software in $HOME across a variety of systems and, fortunately, it mostly works the same way everywhere. This is probably in large part due to everyone standardizing around the GCC and GNU binutils interfaces, even if the system compiler is actually LLVM/Clang.

Configuring for $HOME installs

Out of the box, installing things in $HOME/.local won’t do anything useful. You need to set up some environment variables in your shell configuration (e.g. .profile or .bashrc) to tell various programs, such as your shell, about it. The most obvious variable is $PATH:

export PATH=$HOME/.local/bin:$PATH

Notice I put it in the front of the list. This is because I want my home directory programs to override system programs with the same name. For what other reason would I install a program with the same name if not to override the system program?

In the simplest situation this is good enough, but in practice you’ll probably need to set a few more things. If you install libraries in your home directory and expect to use them just as if they were installed on the system, you’ll need to tell the compiler where else to look for those headers and libraries, both for C and C++.

export C_INCLUDE_PATH=$HOME/.local/include
export CPLUS_INCLUDE_PATH=$HOME/.local/include
export LIBRARY_PATH=$HOME/.local/lib

This is like the -I compiler option and the -L linker option, except you won’t need to use them explicitly. Some software uses pkg-config to determine its compiler and linker flags, and your home directory will contain some of the needed information. So set that up too:

export PKG_CONFIG_PATH=$HOME/.local/lib/pkgconfig

Run-time linker

Finally, when you install libraries in your home directory, the run-time dynamic linker will need to know where to find them. There are three ways to deal with this:

  1. The crude, easy way: LD_LIBRARY_PATH.
  2. The elegant, difficult way: ELF runpath.
  3. Screw it, just statically link the bugger. (Not always possible.)

For the crude way, point the run-time linker at your lib/ and you’re done:

export LD_LIBRARY_PATH=$HOME/.local/lib

However, this is like using a shotgun to kill a fly. If you install a library in your home directory that is also installed on the system, and then run a system program, it may be linked against your library rather than the library installed on the system as was originally intended. This could have detrimental effects.

The precision method is to set the ELF “runpath” value. It’s like a per-binary LD_LIBRARY_PATH. The run-time linker uses this path first in its search for libraries, and it will only have an effect on that particular program/library. This also applies to dlopen().

Some build systems set a runpath by default, but usually you need to add it yourself via the linker’s -rpath option in LDFLAGS. It’s used directly like this:

$ gcc -Wl,-rpath=$HOME/.local/lib -o foo bar.o baz.o -lquux

Verify with readelf:

$ readelf -d foo | grep runpath
Library runpath: [/home/username/.local/lib]

ELF supports a special $ORIGIN “variable” set to the binary’s location. This allows the program and associated libraries to be installed anywhere without changes, so long as they keep the same position relative to each other. (Note the single quotes in the next command, which prevent shell interpolation.)

$ gcc -Wl,-rpath='$ORIGIN/../lib' -o foo bar.o baz.o -lquux

There is one situation where runpath won’t work: when you want a system-installed program to find a home directory library with dlopen() — e.g. as an extension to that program. You either need to ensure it uses a relative or absolute path (i.e. the argument to dlopen() contains a slash) or you must use LD_LIBRARY_PATH.
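To make the slash rule concrete, here’s a tiny sketch (libmyplugin.so is a made-up name; on older glibc you’d also link with -ldl):

#include <dlfcn.h>
#include <stdio.h>

int
main(void)
{
    /* No slash: the run-time linker searches runpath, LD_LIBRARY_PATH,
     * and the system directories. A system-installed host program's
     * runpath won't point at ~/.local/lib. */
    void *by_name = dlopen("libmyplugin.so", RTLD_NOW);

    /* Contains a slash: no search at all, the string is used as the
     * path directly. */
    void *by_path = dlopen("/home/username/.local/lib/libmyplugin.so",
                           RTLD_NOW);

    printf("by name: %p, by path: %p\n", by_name, by_path);
    return 0;
}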

Personally, I always use the Worse is Better LD_LIBRARY_PATH shotgun. Occasionally it’s caused some annoying issues, but the vast majority of the time it gets the job done with little fuss. This is just my personal development environment, after all, not a production server.

Manual pages

Another potentially tricky issue is man pages. When a program or library installs a man page in your home directory, it would certainly be nice to access it with man <topic> just like it was installed on the system. Fortunately, Debian and Debian-derived systems, using a mechanism I haven’t yet figured out, discover home directory man pages automatically without any assistance. No configuration needed.

It’s more complicated on other systems, such as the BSDs. You’ll need to set the MANPATH variable to include $HOME/.local/share/man. It’s unset by default and it overrides the system settings, which means you need to manually include the system paths. The manpath program can help with this … if it’s available.

export MANPATH=$HOME/.local/share/man:$(manpath)

I haven’t figured out a portable way to deal with this issue, so I mostly ignore it.

How to install software in $HOME

While I’ve pooh-poohed autoconf in the past, the standard configure script usually makes it trivial to build and install software in $HOME. The key ingredient is the --prefix option:

$ tar xzf name-version.tar.gz
$ cd name-version/
$ ./configure --prefix=$HOME/.local
$ make -j$(nproc)
$ make install

Most of the time it’s that simple! If you’re linking against your own libraries and want to use runpath, it’s a little more complicated:

$ ./configure --prefix=$HOME/.local \
              LDFLAGS="-Wl,-rpath=$HOME/.local/lib"

For CMake, there’s CMAKE_INSTALL_PREFIX:

$ cmake -DCMAKE_INSTALL_PREFIX=$HOME/.local ..

The CMake builds I’ve seen use ELF runpath by default, and no further configuration may be required to make that work. I’m sure that’s not always the case, though.

Some software is just a single, static, standalone binary with everything baked in. It doesn’t need to be given a prefix, and installation is as simple as copying the binary into place. For example, Enchive works like this:

$ git clone https://github.com/skeeto/enchive
$ cd enchive/
$ make
$ cp enchive ~/.local/bin

Some software uses its own unique configuration interface. I can respect that, but it does add some friction for users who now have something additional and non-transferable to learn. I demonstrated a NetHack build above, which has a configuration much more involved than it really should be. Another example is LuaJIT, which uses make variables that must be provided consistently on every invocation:

$ tar xzf LuaJIT-2.0.5.tar.gz
$ cd LuaJIT-2.0.5/
$ make -j$(nproc) PREFIX=$HOME/.local
$ make PREFIX=$HOME/.local install

(You can use the “install” target to both build and install, but I wanted to illustrate the repetition of PREFIX.)

Some libraries aren’t so smart about pkg-config and need some handholding — for example, ncurses. I mention it because it’s required for both Vim and Emacs, among many others, so I’m often building it myself. It doesn’t put its pkg-config files under --prefix on its own, so it needs to be told a second time where to install them:

$ ./configure --prefix=$HOME/.local \
              --enable-pc-files \
              --with-pkg-config-libdir=$PKG_CONFIG_PATH

Another issue is that a whole lot of software has been hardcoded for ncurses 5.x (i.e. ncurses5-config), and it requires hacks/patching to make it behave properly with ncurses 6.x. I’ve avoided ncurses 6.x for this reason.

Learning through experience

I could go on and on like this, discussing the quirks for the various libraries and programs that I use. Over the years I’ve gotten used to many of these issues, committing the solutions to memory. Unfortunately, even within the same version of a piece of software, the quirks can change between major operating system releases, so I’m continuously learning my way around new issues. It’s really given me an appreciation for all the hard work that package maintainers put into customizing and maintaining software builds to fit properly into a larger ecosystem.

Switching to the Mutt Email Client

Note: The way I manage my email wouldn’t really work for most people, so don’t read this as a recommendation. This is just a discussion of how I prefer to use email.

It was almost four years ago I switched from webmail to a customized email configuration based on Notmuch and Emacs. Notmuch served both as a native back-end providing indexing and tagging, and as a front-end written in Emacs Lisp. It dramatically improved my email experience, and I wished I had done it earlier. I’ve really enjoyed having so much direct control over my email.

However, I’m always fiddling with things — fiddling feels a lot more productive than it actually is — and last month I re-invented my email situation, this time switching to a combination of Mutt, Vim, mu, and tmux. The entirety of my email interface now resides inside a terminal, and I’m enjoying it even more. I feel I’ve “leveled up” again in my email habits.

On the server side I also switched from Exim to Postfix and procmail, making the server configuration a whole lot simpler. Including SpamAssassin, it’s just three lines added to the default Debian configuration. It leaves a lot less room for error, and I could rebuild it from scratch with little trouble if there were an emergency. My previous setup required quite a bit of system configuration, such as relying on incron to sort incoming mail, particularly spam, but procmail now does this job more cleanly.

Towards Robustness

Over the years I’ve gotten less patient when it comes to dealing with breaking changes in software, and I’ve gotten more conservative about system stability. Continuously updating my configurations and habits to the latest software changes was an interesting challenge earlier in my career, but today there are much better uses of my time. Debian Stable, my preferred operating system, runs at pretty much the perfect pace for me.

Following these changing preferences, one of the biggest motivations for my recent email change was to make my email setup more robust and stable. Until now, email was tied tightly to Emacs, with a configuration drawing directly from MELPA, pulling in the bleeding edge version of every package I use. Breaking changes arrive at unexpected times, and occasionally the current version of a package temporarily doesn’t work. Usually it’s because the developer pushed a bad commit right before the latest MELPA build, and so the package is broken for a few hours or days. I’ve been guilty of this myself. MELPA Stable is intended to address these issues, but it seems to break more often than normal MELPA. For example, at the time of this writing, Evil is not installable via MELPA Stable due to an unmet dependency.

Tying something as vital as email to this Rube Goldberg machine made me nervous. Access to my email depended on a number of independent systems of various levels of stability to mostly work correctly. My switch to Mutt cut this down to just a couple of very stable systems.

format=flowed

I’ve long believed HTML email is an abomination that should never have been invented. Text is the ideal format for email, and there are a number of specifications to make it work well across different systems. One of those standards is RFC 3676, colloquially named format=flowed, or just f=f.

Messages encoded with f=f allow mail clients to safely reflow the paragraphs to nicely fit the user’s display, whether that display be thinner or wider than the sender’s original message. It’s also completely compatible with mail clients that don’t understand format=flowed, which will display the message as the sender originally wrapped it.

The gist of f=f is that messages can have both “soft” and “hard” line breaks. If a line ends with a space, then it’s a soft line break. The mail client can safely reflow lines separated by a soft line break. Without the trailing space, it’s a hard line break, which prohibits flowing with the next line. The last line of a paragraph ends with a hard line break. It’s also used for text that shouldn’t reflow, such as code samples.

I’ll illustrate using an underscore in place of a space, so that you can see it:

This is a message in the format=flowed style, allowing_
mail clients to flow this message nicely in displays of_
different widths.

> This is an example of a quote block in a message,_
> which is supported by the format=flowed specification.
>> It also supports nested quote blocks, which means_
>> this paragraph won't flow into the previous.

The RFC covers edge cases that require special “space-stuffing” rules, but, when editing a text email in an editor, you only need to think about soft and hard line breaks. In my case, Mutt takes care of the rest of the details.

Unfortunately Emacs lacks decent support for f=f, though I’m sure a minor mode could be written to make it work well. On the other hand, Vim has been playing an increasing role in my day-to-day editing, and it has excellent built-in support for f=f. Since I’m now using Vim to compose all of my email, I get it for free.

First, I tell Mutt that I want to use f=f in my .muttrc:

set text_flowed

Then in Vim, I add the w flag to formatoptions, which tells it to wrap paragraphs using soft line breaks.

set fo+=w

If I want to inspect my f=f formatting, I temporarily enable the list option, which displays a $ for all newlines.

set list

Although few people would notice a difference, I feel a little bad for not using f=f all these years! A few people may have endured some ugly, non-flowing emails from me. My only consolation is that at least it wasn’t HTML.

It’s not all roses, though. When I reply to a message, Mutt doesn’t insert the quoted text as f=f into my reply, so I have to massage it into f=f myself. Also, just as GitHub doesn’t support Markdown in email responses, neither does it support f=f. When I reply to issues by email, GitHub won’t nicely reflow my carefully crafted f=f message, needlessly making email responses an inferior option.

Features unneeded

One reason I didn’t choose this particular email arrangement 4 years ago was that PGP support was one of my prime requirements. Mutt has solid PGP support, but, with a Maildir setup (i.e. not IMAP), I’d have to use the key on the server, which was out of the question. Since I no longer care about PGP, my email requirements are more relaxed.

Over the years I wasn’t making much use of Notmuch’s tagging system. I only used two tags: “unread” and “inbox” (i.e. read, but still needing attention). Otherwise I’d use Notmuch’s powerful search to find what I wanted. I still needed to keep track of the tags I was using, so the Notmuch index, nearly as large as the email messages themselves, became part of my mail backup.

The Maildir format itself supports some flags: passed (P), replied (R), seen (S), trashed (T), draft (D), and flagged (F). These are stored in the message’s filename. In my new configuration, the “seen” flag (inversely) takes the place of Notmuch’s “unread” tag, and the “flagged” flag takes the place of the “inbox” tag. Normally in Mutt you’d use mailboxes — i.e. Maildir subdirectories — for something like this, but I prefer all my mail to sit in one big bucket. Search, don’t sort.
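For example, a read and flagged message might sit under a hypothetical filename like this, where everything before the colon is a unique name chosen at delivery time and the :2,FS suffix encodes the flags (F for flagged, S for seen):

1504224630.12345_0.hostname:2,FS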

Since the two flags are part of the filename, I no longer need to include a tag database (i.e. the entire Notmuch index) in the backup, and my mail backups are much smaller. I could continue to use Notmuch for searching, but I’ve settled on mu instead. When I perform a search, mu writes the results to a temporary Maildir using symbolic links, which I visit with Mutt. The mu index is transient and doesn’t need to be backed up.

Mu also manages my contacts alias list. It can produce a Mutt-style alias file based on the contents of my Maildir:

mu cfind --format=mutt-alias > aliases

It’s been really nice to have all my email sitting around as nothing more than a big pile of files like this. I’ve begun writing little scripts to harvest data from it, too.

Configuration files

As with all my personal configuration files, you can see my .muttrc online. The first few weeks I was tweaking this file hourly, but I’ve now got it basically the way I want.
