Introducing the Pokerware Secure Passphrase Generator

I recently developed Pokerware, an offline passphrase generator that operates in the same spirit as Diceware. The primary difference is that it uses a shuffled deck of playing cards as its entropy source rather than dice. Draw some cards and use them to select a uniformly random word from a list. Unless you’re some sort of tabletop gaming nerd, a deck of cards is more readily available than five 6-sided dice, which would typically need to be borrowed from the Monopoly board collecting dust on the shelf, then rolled two at a time.

The repository offers two different word lists, each in a variety of formats.

Hardware random number generators are difficult to verify and may not actually be as random as they promise, either intentionally or unintentionally. For the particularly paranoid, Diceware and Pokerware are an easily verifiable alternative for generating secure passphrases for cryptographic purposes. At any time, a deck of 52 playing cards is in one of 52! possible arrangements. That’s more than 225 bits of entropy. If you give your deck a thorough shuffle, it will be in an arrangement that has never been seen before and will never be seen again. Pokerware draws on some of these bits to generate passphrases.

The Pokerware list has 5,304 words (12.4 bits per word), compared to Diceware’s 7,776 words (12.9 bits per word). My goal was to invent a card-drawing scheme that would uniformly select from a list in the same ballpark as Diceware’s. Much smaller and you’d have to memorize more words for the same passphrase strength. Much larger and the words on the list would be more difficult to memorize, since the list would contain longer and less frequently used words. Diceware strikes a nice balance at five dice.
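These numbers are easy to verify. A few lines of C (linked with -lm) recompute them:

#include <math.h>
#include <stdio.h>

int
main(void)
{
    /* lgamma(53) = ln(52!), so this is log2(52!) */
    printf("full deck: %.1f bits\n", lgamma(53) / log(2));
    /* entropy per word is log2 of the list length */
    printf("pokerware: %.1f bits/word\n", log2(5304));
    printf("diceware:  %.1f bits/word\n", log2(7776));
    return 0;
}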

One important difference for me is that I like my Pokerware word lists a lot more than the two official Diceware lists. My lists only have simple, easy-to-remember words (for American English speakers, at least), without any numbers or other short non-words. Pokerware has two official lists, “formal” and “slang,” since my early testers couldn’t agree on which was better. Rather than make a difficult decision, I took the usual route of making no decision at all.

The “formal” list is derived in part from Google’s Ngram Viewer, with my own additional filters and tweaking. It’s called “formal” because the ngrams come from formal publications and represent more formal kinds of speech.

The “slang” list is derived from every reddit comment between December 2005 and May 2017, tamed by the same additional filters. I have this data on hand, so I may as well put it to use. I figured more casually-used words would be easier to remember. Due to my extra filtering, there’s actually a lot of overlap between these lists, so the differences aren’t too significant.

If you have your own word list, perhaps in a different language, you can use the Makefile in the repository to build your own Pokerware lookup table, both plain text and PDF. The PDF is generated using Groff macros.

Passphrase generation instructions

  1. Thoroughly shuffle the deck.

  2. Draw two cards. Sort them by value, then suit. Suits are in alphabetical order: Clubs, Diamonds, Hearts, Spades.

  3. Draw additional cards until you get a card that doesn’t match the face value of either of your initial two cards. Observe its suit.

  4. Using your two cards and observed suit, look up a word in the table.

  5. Place all cards back in the deck, shuffle, and repeat from step 2 until you have the desired number of words. Each word is worth 12.4 bits of entropy.
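The table lookup in step 4 is plain arithmetic underneath: the two sorted cards form one of C(52,2) = 1,326 possible pairs, and the extra suit multiplies that by 4, for 1,326 × 4 = 5,304 equally likely outcomes, exactly the size of the word list. Here’s a minimal sketch of one way to compute a word index, numbering cards 0–51 as value (2 through ace) times 4 plus suit. The printed lookup tables may order entries differently, so treat this as illustrative:

/* a and b are card numbers with a < b; suit is 0-3 (C, D, H, S) */
int
word_index(int a, int b, int suit)
{
    int pair = a * 51 - a * (a - 1) / 2 + (b - a - 1);  /* 0-1325 */
    return pair * 4 + suit;                             /* 0-5303 */
}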

A word of warning about step 4: If you use software to do the word list lookup, beware that it might save your search/command history — and therefore your passphrase — to a file. For example, the less pager will store search history in ~/.lesshst. It’s easy to prevent that one:

$ LESSHISTFILE=- less pokerware-slang.txt

Example word generation

Suppose in step 2 you draw King of Hearts (KH/K♥) and Queen of Clubs (QC/Q♣).

In step 3 you first draw King of Diamonds (KD/K♦), discarding it because it matches the face value of one of your cards from step 2.

Next you draw Four of Spades (4S/4♠), taking spades as your extra suit.

In order, this gives you Queen of Clubs, King of Hearts, and Spades: QCKHS or Q♣K♥♠. This corresponds to “wizard” in the formal word list and would be the first word in your passphrase.

A deck of cards as an office tool

I now have an excuse to keep a deck of cards out on my desk at work. I’ve been using Diceware — or something approximating it since I’m not so paranoid about hardware RNGs — for passwords for over 8 years now. From now on I’ll deal new passwords from an in-reach deck of cards. Though typically I need to tweak the results to meet outdated character-composition requirements.

Integer Overflow into Information Disclosure

Last week I was discussing CVE-2017-7529 with my intern. Specially crafted input to Nginx causes an integer overflow which has the potential to leak sensitive information. But how could an integer overflow be abused to trick a program into leaking information? To answer this question, I put together the simplest practical example I could imagine.

This small C program converts a vector image from a custom format (described below) into a Netpbm image, a conveniently simple format. The program defensively and carefully parses its input, but still makes a subtle, fatal mistake. This mistake not only leads to sensitive information disclosure, but, with a more sophisticated attack, could be used to execute arbitrary code.

After getting the hang of the interface for the program, I encourage you to take some time to work out an exploit yourself. Regardless, I’ll reveal a functioning exploit and explain how it works.

A new vector format

The input format is line-oriented and very similar to Netpbm itself. The first line is the header, starting with the magic number V2 (ASCII) followed by the image dimensions. The target output format is Netpbm’s “P2” (text gray scale) format, so the “V2” parallels it. The file must end with a newline.

V2 <width> <height>

What follows are drawing commands, one per line. For example, the s command sets the value of a particular pixel.

s <x> <y> <00–ff>

Since it’s not important for the demonstration, this is the only command I implemented. It’s easy to imagine additional commands to draw lines, circles, Bezier curves, etc.

Here’s an example (example.txt) that draws a single white point in the middle of the image:

V2 256 256
s 127 127 ff

The rendering tool reads from standard input and writes to standard output:

$ render < example.txt > example.pgm

Here’s what it looks like rendered:

However, you will notice that when you run the rendering tool, it prompts you for a username and password. This is silly, of course, but it’s an excuse to get “sensitive” information into memory. It will accept any username/password combination where the username and password don’t match each other. The key is this: It’s possible to craft a valid image that leaks the entered password.

Tour of the implementation

Without spoiling anything yet, let’s look at how this program works. The first thing to notice is that I’m using a custom “obstack” allocator instead of malloc() and free(). Real-world allocators have some defenses against this particular vulnerability. Plus a specific exploit would have to target a specific libc. By using my own allocator, the exploit will mostly be portable, making for a better and easier demonstration.

The allocator interface should be pretty self-explanatory, except for two details. This is an obstack allocator, so freeing an object also frees every object allocated after it. Also, it doesn’t call malloc() in the background. At initialization you give it a buffer from which to allocate all memory.

struct mstack {
    char *top;
    char *max;
    char buf[];
};

struct mstack *mstack_init(void *, size_t);
void          *mstack_alloc(struct mstack *, size_t);
void           mstack_free(struct mstack *, void *);

There are no vulnerabilities in these functions (I hope!). It’s just here for predictability.

Next here’s the “authentication” function. It reads a username and password combination from /dev/tty. It’s only an excuse to get a flag in memory for this capture-the-flag game. The username and password must be less than 32 characters each.

int
authenticate(struct mstack *m)
{
    FILE *tty = fopen("/dev/tty", "r+");
    if (!tty) {
        perror("/dev/tty");
        return 0;
    }

    char *user = mstack_alloc(m, 32);
    if (!user) {
        fclose(tty);
        return 0;
    }
    fputs("User: ", tty);
    fflush(tty);
    if (!fgets(user, 32, tty))
        user[0] = 0;

    char *pass = mstack_alloc(m, 32);
    int result = 0;
    if (pass) {
        fputs("Password: ", tty);
        fflush(tty);
        if (fgets(pass, 32, tty))
            result = strcmp(user, pass) != 0;
    }

    fclose(tty);
    mstack_free(m, user);
    return result;
}

Next here’s a little version of calloc() for the custom allocator. Hmm, I wonder why this is called “naive” …

void *
naive_calloc(struct mstack *m, unsigned long nmemb, unsigned long size)
{
    void *p = mstack_alloc(m, nmemb * size);
    if (p)
        memset(p, 0, nmemb * size);
    return p;
}

Next up is a paranoid wrapper for strtoul() that defensively checks its inputs. If it’s out of range of an unsigned long, it bails out. If there’s trailing garbage, it bails out. If there’s no number at all, it bails out. If you make prolonged eye contact, it bails out.

unsigned long
safe_strtoul(char *nptr, char **endptr, int base)
{
    errno = 0;
    unsigned long n = strtoul(nptr, endptr, base);
    if (errno) {
        perror(nptr);
        exit(EXIT_FAILURE);
    } else if (nptr == *endptr) {
        fprintf(stderr, "Expected an integer\n");
        exit(EXIT_FAILURE);
    } else if (!isspace(**endptr)) {
        fprintf(stderr, "Invalid character '%c'\n", **endptr);
        exit(EXIT_FAILURE);
    }
    return n;
}

The main() function parses the header using this wrapper and allocates some zeroed memory:

    unsigned long width = safe_strtoul(p, &p, 10);
    unsigned long height = safe_strtoul(p, &p, 10);
    unsigned char *pixels = naive_calloc(m, width, height);
    if (!pixels) {
        fputs("Not enough memory\n", stderr);
        exit(EXIT_FAILURE);
    }

Then there’s a command processing loop, also using safe_strtoul(). It carefully checks bounds against width and height. Finally it writes out a Netpbm, P2 (.pgm) format.

    printf("P2\n%ld %ld 255\n", width, height);
    for (unsigned long y = 0; y < height; y++) {
        for (unsigned long x = 0; x < width; x++)
            printf("%d ", pixels[y * width + x]);
        putchar('\n');
    }

The vulnerability is in something I’ve shown above. Can you find it?

Exploiting the renderer

Did you find it? If you’re on a platform with 64-bit long, here’s your exploit:

V2 16 1152921504606846977

And here’s an exploit for 32-bit long:

V2 16 268435457

Here’s how it looks in action. The most obvious result is that the program crashes:

$ echo V2 16 1152921504606846977 | ./mstack > capture.txt
User: coolguy
Password: mysecret
Segmentation fault

Here are the initial contents of capture.txt:

P2
16 1152921504606846977 255
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
109 121 115 101 99 114 101 116 10 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Where did those junk numbers come from in the image data? Plug them into an ASCII table and you’ll get “mysecret”. Despite allocating the image with naive_calloc(), the password has found its way into the image! How could this be?

What happened is that width * height overflows an unsigned long. (Well, technically speaking, unsigned integers are defined not to overflow in C, wrapping around instead, but it’s really the same thing.) In naive_calloc(), the overflow results in a value of 16, so it only allocates and clears 16 bytes. The requested allocation “succeeds” despite far exceeding the available memory. The caller has been given a lot less memory than expected, and the memory believed to have been allocated contains a password.
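The arithmetic is easy to check, assuming a 64-bit unsigned long:

unsigned long width  = 16;
unsigned long height = 1152921504606846977;  /* 2^60 + 1 */
/* 16 * (2^60 + 1) = 2^64 + 16, which wraps to 16 */
unsigned long nbytes = width * height;       /* nbytes == 16 */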

The final part that writes the output doesn’t multiply the integers and doesn’t need to test for overflow. It uses a nested loop instead, continuing along with the original, impossible image size.

How do we fix this? Add an overflow check at the beginning of the naive_calloc() function (making it no longer naive). This is what the real calloc() does.

    if (nmemb && size > -1UL / nmemb)
        return 0;

The frightening takeaway is that this check is very easy to forget. It’s a subtle bug with potentially disastrous consequences.

In practice, this sort of program wouldn’t have sensitive data resident in memory. Instead an attacker would target the program’s stack with those s commands — specifically the return pointers — and perform a ROP attack against the application. With the exploit header above and a platform where long is the same size as size_t, the program will behave as if all available memory has been allocated to the image, so the s command could be used to poke custom values anywhere in memory. This is a much more complicated exploit, and it has to contend with ASLR and a random stack gap, but it’s feasible.

Rolling Shutter Simulation in C

The most recent Smarter Every Day (#172) explains a phenomenon that results from rolling shutter. You’ve likely seen this effect in some of your own digital photographs. When a CMOS digital camera captures a picture, it reads one row of the sensor at a time. If the subject of the picture is a fast-moving object (relative to the camera), then the subject will change significantly while the image is being captured, giving strange, unreal results:

In the Smarter Every Day video, Destin illustrates the effect by simulating rolling shutter using a short video clip. In each frame of the video, a few additional rows are locked in place, showing the effect in slow motion, making it easier to understand.

At the end of the video he thanks a friend for figuring out how to get After Effects to simulate rolling shutter. After thinking about this for a moment, I figured I could easily accomplish this myself with just a bit of C, without any libraries. The video above this paragraph is the result.

I previously described a technique to edit and manipulate video without any formal video editing tools. A unix pipeline is sufficient for doing minor video editing, especially without sound. The program at the front of the pipe decodes the video into a raw, uncompressed format, such as YUV4MPEG or PPM. The tools in the middle losslessly manipulate this data to achieve the desired effect (watermark, scaling, etc.). Finally, the tool at the end encodes the video into a standard format.

$ decode video.mp4 | xform-a | xform-b | encode out.mp4

For the “decode” program I’ll be using ffmpeg now that it’s back in the Debian repositories. You can throw a video in virtually any format at it and it will write PPM frames to standard output. For the encoder I’ll be using the x264 command line program, though ffmpeg could handle this part as well. Without any filters in the middle, this example will just re-encode a video:

$ ffmpeg -i input.mp4 -f image2pipe -vcodec ppm pipe:1 | \
    x264 -o output.mp4 /dev/stdin

The filter tools in the middle only need to read and write in the raw image format. They’re a little bit like shaders, and they’re easy to write. In this case, I’ll write a C program that simulates rolling shutter. The filter could be written in any language that can read and write binary data from standard input to standard output.

Update: It appears that input PPM streams are a rather recent feature of libavformat (a.k.a. lavf, used by x264). Support for PPM input first appeared in libavformat 3.1 (released June 26th, 2016). If you’re using an older version of libavformat, you’ll need to stick ppmtoy4m in front of x264 in the processing pipeline.

$ ffmpeg -i input.mp4 -f image2pipe -vcodec ppm pipe:1 | \
    ppmtoy4m | \
    x264 -o output.mp4 /dev/stdin

Video filtering in C

In the past, my go-to for raw video data has been loose PPM frames and YUV4MPEG streams (via ppmtoy4m). Fortunately, over the years a lot of tools have gained the ability to manipulate streams of PPM images, which is a much more convenient format. Despite being raw video data, YUV4MPEG is still a fairly complex format with lots of options and annoying colorspace concerns. PPM is simple RGB without complications. The header is just text:

P6
<width> <height>
<maxdepth>
<width * height * 3 binary RGB data>

The maximum depth is virtually always 255. A smaller value reduces the image’s dynamic range without reducing the size. A larger value involves byte-order issues (endian). For video frame data, the file will typically look like:

P6
1920 1080
255
<frame RGB>

Unfortunately the format is actually a little more flexible than this. Except for the new line (LF, 0x0A) after the maximum depth, the whitespace is arbitrary and comments starting with # are permitted. Since the tools I’m using won’t produce comments, I’m going to ignore that detail. I’ll also assume the maximum depth is always 255.

Here’s the structure I used to represent a PPM image, just one frame of video. I’m using a flexible array member to pack the data at the end of the structure.

struct frame {
    size_t width;
    size_t height;
    unsigned char data[];
};

Next a function to allocate a frame:

static struct frame *
frame_create(size_t width, size_t height)
{
    struct frame *f = malloc(sizeof(*f) + width * height * 3);
    f->width = width;
    f->height = height;
    return f;
}

We’ll need a way to write the frames we’ve created.

static void
frame_write(struct frame *f)
{
    printf("P6\n%zu %zu\n255\n", f->width, f->height);
    fwrite(f->data, f->width * f->height, 3, stdout);
}

Finally, a function to read a frame, reusing an existing buffer if possible. The most complex part of the whole program is just parsing the PPM header. The %*c in the scanf() specifically consumes the line feed immediately following the maximum depth.

static struct frame *
frame_read(struct frame *f)
{
    size_t width, height;
    if (scanf("P6 %zu%zu%*d%*c", &width, &height) < 2) {
        free(f);
        return 0;
    }
    if (!f || f->width != width || f->height != height) {
        free(f);
        f = frame_create(width, height);
    }
    fread(f->data, width * height, 3, stdin);
    return f;
}

Since this program will only be part of a pipeline, I’m not worried about checking the results of fwrite() and fread(). The process will be killed by the shell if something goes wrong with the pipes. However, if we’re out of video data and get an EOF, scanf() will fail, indicating the EOF, which is normal and can be handled cleanly.

An identity filter

That’s all the infrastructure we need to build an identity filter that passes frames through unchanged:

int main(void)
{
    struct frame *frame = 0;
    while ((frame = frame_read(frame)))
        frame_write(frame);
}

Processing a frame is just a matter of adding some stuff to the body of the while loop.
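For example, here’s a minimal filter, reusing the functions above, that inverts every pixel:

int main(void)
{
    struct frame *frame = 0;
    while ((frame = frame_read(frame))) {
        size_t n = frame->width * frame->height * 3;
        for (size_t i = 0; i < n; i++)
            frame->data[i] = 255 - frame->data[i];  /* invert each channel */
        frame_write(frame);
    }
}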

A rolling shutter filter

For the rolling shutter filter, in addition to the input frame we need an image to hold the result of the rolling shutter. Each input frame will be copied into the rolling shutter frame, but a little less will be copied from each frame, locking a little bit more of the image in place.

int
main(void)
{
    int shutter_step = 3;
    size_t shutter = 0;
    struct frame *f = frame_read(0);
    struct frame *out = frame_create(f->width, f->height);
    while (shutter < f->height && (f = frame_read(f))) {
        size_t offset = shutter * f->width * 3;
        size_t length = f->height * f->width * 3 - offset;
        memcpy(out->data + offset, f->data + offset, length);
        frame_write(out);
        shutter += shutter_step;
    }
    free(out);
    free(f);
}

The shutter_step controls how many rows are captured per frame of video. Generally capturing one row per frame is too slow for the simulation. For a 1080p video, that’s 1,080 frames for the entire simulation: 18 seconds at 60 FPS or 36 seconds at 30 FPS. If this program were to accept command line arguments, controlling the shutter rate would be one of the options.

Putting it all together:

$ ffmpeg -i input.mp4 -f image2pipe -vcodec ppm pipe:1 | \
    ./rolling-shutter | \
    x264 -o output.mp4 /dev/stdin

Here are some of the results for different shutter rates: 1, 3, 5, 8, 10, and 15 rows per frame. Feel free to right-click and “View Video” to see the full resolution video.

Source and original input

This post contains the full source in parts, but here it is all together:

Here’s the original video, filmed by my wife using her Nikon D5500, in case you want to try it for yourself:

It took much longer to figure out the string-pulling contraption to slowly spin the fan at a constant rate than it took to write the C filter program.

On Hacker News, morecoffee shared a video of the second order effect (direct link), where the rolling shutter speed changes over time.

A deeper analysis of rolling shutter: Playing detective with rolling shutter photos.

Stack Clashing for Fun and Profit

Stack clashing has been in the news lately due to some recently discovered vulnerabilities along with proof-of-concept exploits. As the announcement itself notes, this is not a new issue, though this appears to be the first time it’s been given this particular name. I do know of one “good” use of stack clashing, where it’s used for something productive rather than as part of an attack. In this article I’ll explain how it works.

You can find the complete code for this article here, ready to run:

But first, what is a stack clash? Here’s a rough picture of the typical way process memory is laid out. The stack starts at a high memory address and grows downwards. Code and static data sit at low memory, with a brk pointer growing upward to make small allocations. In the middle is the heap, where large allocations and memory mappings take place.

Below the stack is a slim guard page that divides the stack and the region of memory reserved for the heap. Reading or writing to that memory will trap, causing the program to crash or some special action to be taken. The goal is to prevent the stack from growing into the heap, which could cause all sorts of trouble, like security issues.

The problem is that this thin guard page isn’t enough. It’s possible to put a large allocation on the stack, never read or write to it, and completely skip over the guard page, such that the heap and stack overlap without detection.

Once this happens, writes into the heap will change memory on the stack and vice versa. If an attacker can cause the program to make such a large allocation on the stack, then legitimate writes into memory on the heap can manipulate local variables or return pointers, changing the program’s control flow. This can bypass buffer overflow protections, such as stack canaries.

Binary trees and coroutines

Now, I’m going to abruptly change topics to discuss binary search trees. We’ll get back to stack clash in a bit. Suppose we have a binary tree which we would like to iterate depth-first. For this demonstration, here’s the C interface to the binary tree.

struct tree {
    struct tree *left;
    struct tree *right;
    char *key;
    char *value;
};

void  tree_insert(struct tree **, char *k, char *v);
char *tree_find(struct tree *, char *k);
void  tree_visit(struct tree *, void (*f)(char *, char *));
void  tree_destroy(struct tree *);

An empty tree is the NULL pointer, hence the double-pointer for insert. In the demonstration it’s an unbalanced search tree, but this could very well be a balanced search tree with the addition of another field on the structure.
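For example, building and querying a small tree (with made-up keys and values) looks like this:

    struct tree *tree = 0;  /* the empty tree */
    tree_insert(&tree, "apple", "fruit");
    tree_insert(&tree, "carrot", "vegetable");
    char *v = tree_find(tree, "apple");  /* "fruit" */
    tree_destroy(tree);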

For the traversal, first visit the root node, then traverse its left tree, and finally traverse its right tree. It makes for a simple, recursive definition — the sort of thing you’d teach a beginner. Here’s a definition that accepts a callback, which the caller will use to visit each key/value in the tree. This really is as simple as it gets.

void
tree_visit(struct tree *t, void (*f)(char *, char *))
{
    if (t) {
        f(t->key, t->value);
        tree_visit(t->left, f);
        tree_visit(t->right, f);
    }
}

Unfortunately this isn’t so convenient for the caller, who has to split off a callback function that lacks context, then hand over control to the traversal function.

void
printer(char *k, char *v)
{
    printf("%s = %s\n", k, v);
}

void
print_tree(struct tree *tree)
{
    tree_visit(tree, printer);
}

Usually it’s much nicer for the caller if instead it’s provided an iterator, which the caller can invoke at will. Here’s an interface for it, just two functions.

struct tree_it *tree_iterator(struct tree *);
int             tree_next(struct tree_it *, char **k, char **v);

The first constructs an iterator object, and the second one visits a key/value pair each time it’s called. It returns 0 when traversal is complete, automatically freeing any resources associated with the iterator.

The caller now looks like this:

    char *k, *v;
    struct tree_it *it = tree_iterator(tree);
    while (tree_next(it, &k, &v))
        printf("%s = %s\n", k, v);

Notice I haven’t defined struct tree_it. That’s because I’ve got four different implementations, each taking a different approach. The last one will use stack clashing.

Manual State Tracking

With just the standard facilities provided by C, there’s some manual bookkeeping that has to take place in order to convert the recursive definition into an iterator. Depth-first traversal is a stack-oriented process, and with recursion the stack is implicit in the call stack. As an iterator, the traversal stack needs to be managed explicitly. The iterator needs to keep track of the path it took so that it can backtrack, which means keeping track of parent nodes as well as which branch was taken.

Here’s my little implementation, which, to keep things simple, has a hard depth limit of 32. Its structure definition includes a stack of node pointers and 2 bits of information per visited node, stored across a 64-bit integer.

struct tree_it {
    struct tree *stack[32];
    unsigned long long state;
    int nstack;
};

struct tree_it *
tree_iterator(struct tree *t)
{
    struct tree_it *it = malloc(sizeof(*it));
    it->stack[0] = t;
    it->state = 0;
    it->nstack = 1;
    return it;
}

The 2 bits track three different states for each visited node:

  1. Visit the current node
  2. Traverse the left tree
  3. Traverse the right tree

It works out to the following. Don’t worry too much about trying to understand how this works. My point is to demonstrate that converting the recursive definition into an iterator complicates the implementation.

int
tree_next(struct tree_it *it, char **k, char **v)
{
    while (it->nstack) {
        int shift = (it->nstack - 1) * 2;
        int state = 3u & (it->state >> shift);
        struct tree *t = it->stack[it->nstack - 1];
        it->state += 1ull << shift;
        switch (state) {
            case 0:
                *k = t->key;
                *v = t->value;
                if (t->left) {
                    it->stack[it->nstack++] = t->left;
                    it->state &= ~(3ull << (shift + 2));
                }
                return 1;
            case 1:
                if (t->right) {
                    it->stack[it->nstack++] = t->right;
                    it->state &= ~(3ull << (shift + 2));
                }
                break;
            case 2:
                it->nstack--;
                break;
        }
    }
    free(it);
    return 0;
}

Wouldn’t it be nice to keep both the recursive definition while also getting an iterator? There’s an exact solution to that: coroutines.

Coroutines

C doesn’t come with coroutines, but there are a number of libraries available. We can also build our own coroutines. One way to do that is with user contexts (<ucontext.h>) provided by the X/Open System Interfaces Extension (XSI), an extension to POSIX. This set of functions allows programs to create their own call stacks and switch between them. That’s the key ingredient for coroutines. Caveat: These functions aren’t widely available, and probably shouldn’t be used in new code.

Here’s my iterator structure definition.

#define _XOPEN_SOURCE 600
#include <ucontext.h>

struct tree_it {
    char *k;
    char *v;
    ucontext_t coroutine;
    ucontext_t yield;
};

It needs one context for the original stack and one context for the iterator’s stack. Each time the iterator is invoked, the program will switch to the other stack, find the next value, then switch back. This process is called yielding. Values are passed between contexts using the k (key) and v (value) fields on the iterator.

Before I get into initialization, here’s the actual traversal coroutine. It’s nearly the same as the original recursive definition except for the swapcontext(). This is the yield, pausing execution and sending control back to the caller. The current context is saved in the first argument, and the second argument becomes the current context.

static void
coroutine(struct tree *t, struct tree_it *it)
{
    if (t) {
        it->k = t->key;
        it->v = t->value;
        swapcontext(&it->coroutine, &it->yield);
        coroutine(t->left, it);
        coroutine(t->right, it);
    }
}

While the actual traversal is simple again, initialization is more complicated. The first problem is that there’s no way to pass pointer arguments to the coroutine. Technically only int arguments are permitted. (All the online tutorials get this wrong.) To work around this problem, I smuggle the arguments in as global variables. This would cause problems should two different threads try to create iterators at the same time, even on different trees.

static struct tree *tree_arg;
static struct tree_it *tree_it_arg;

static void
coroutine_init(void)
{
    coroutine(tree_arg, tree_it_arg);
}

The stack has to be allocated manually, which I do with a call to malloc(). Nothing fancy is needed, though this means the new stack won’t have a guard page. For the stack size, I use the suggested value of SIGSTKSZ. The makecontext() function is what creates the new context from scratch, but the new context must first be initialized with getcontext(), even though that particular snapshot won’t actually be used.

struct tree_it *
tree_iterator(struct tree *t)
{
    struct tree_it *it = malloc(sizeof(*it));
    it->coroutine.uc_stack.ss_sp = malloc(SIGSTKSZ);
    it->coroutine.uc_stack.ss_size = SIGSTKSZ;
    it->coroutine.uc_link = &it->yield;
    getcontext(&it->coroutine);
    makecontext(&it->coroutine, coroutine_init, 0);
    tree_arg = t;
    tree_it_arg = it;
    return it;
}

Notice I gave it a function pointer, a lot like I’m starting a new thread. This is no coincidence. There’s a lot of similarity between coroutines and multiple threads, as you’ll soon see.

Finally the iterator function itself. Since NULL isn’t a valid key, it initializes the key to NULL before yielding to the iterator context. If the iterator has no more nodes to visit, it doesn’t set the key, which can be detected when control returns.

int
tree_next(struct tree_it *it, char **k, char **v)
{
    it->k = 0;
    swapcontext(&it->yield, &it->coroutine);
    if (it->k) {
        *k = it->k;
        *v = it->v;
        return 1;
    } else {
        free(it->coroutine.uc_stack.ss_sp);
        free(it);
        return 0;
    }
}

That’s all it takes to create and operate a coroutine in C, provided you’re on a system with these XSI extensions.

Semaphores

Instead of a coroutine, we could just use actual threads and a couple of semaphores to synchronize them. This is a heavy implementation and also probably shouldn’t be used in practice, but at least it’s fully portable.

Here’s the structure definition:

struct tree_it {
    struct tree *t;
    char *k;
    char *v;
    sem_t visitor;
    sem_t main;
    pthread_t thread;
};

The main thread will wait on one semaphore and the iterator thread will wait on the other. This should sound very familiar.

The actual traversal function looks the same, but with sem_post() and sem_wait() as the yield.

static void
visit(struct tree *t, struct tree_it *it)
{
    if (t) {
        it->k = t->key;
        it->v = t->value;
        sem_post(&it->main);
        sem_wait(&it->visitor);
        visit(t->left, it);
        visit(t->right, it);
    }
}

There’s a separate function to initialize the iterator context again.

static void *
thread_entrance(void *arg)
{
    struct tree_it *it = arg;
    sem_wait(&it->visitor);
    visit(it->t, it);
    sem_post(&it->main);
    return 0;
}

Creating the iterator only requires initializing the semaphores and creating the thread:

struct tree_it *
tree_iterator(struct tree *t)
{
    struct tree_it *it = malloc(sizeof(*it));
    it->t = t;
    sem_init(&it->visitor, 0, 0);
    sem_init(&it->main, 0, 0);
    pthread_create(&it->thread, 0, thread_entrance, it);
    return it;
}

The iterator function looks just like the coroutine version.

int
tree_next(struct tree_it *it, char **k, char **v)
{
    it->k = 0;
    sem_post(&it->visitor);
    sem_wait(&it->main);
    if (it->k) {
        *k = it->k;
        *v = it->v;
        return 1;
    } else {
        pthread_join(it->thread, 0);
        sem_destroy(&it->main);
        sem_destroy(&it->visitor);
        free(it);
        return 0;
    }
}

Overall, this is almost identical to the coroutine version.

Coroutines using stack clashing

Finally I can tie this back into the topic at hand. Without either XSI extensions or Pthreads, we can (usually) create coroutines by abusing setjmp() and longjmp(). Technically this violates two of C’s rules and relies on undefined behavior, but it generally works. This is not my own invention, and it dates back to at least 2010.

From the very beginning, C has provided a crude “exception” mechanism that allows the stack to be abruptly unwound back to a previous state. It’s a sort of non-local goto. Call setjmp() to capture an opaque jmp_buf object to be used in the future. This function returns 0 the first time. Hand that jmp_buf to longjmp() later, even in a different function, and setjmp() will return again, this time with a non-zero value.
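Here’s the mechanism in miniature:

#include <setjmp.h>

jmp_buf env;

void
jump(void)
{
    longjmp(env, 1);  /* never returns; unwinds back into setjmp() */
}

int
main(void)
{
    if (!setjmp(env))  /* returns 0 on the initial call */
        jump();
    /* setjmp() has now returned a second time, with the value 1 */
}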

It’s technically unsuitable for coroutines because the jump is a one-way trip. The unwound stack invalidates any jmp_buf that was created after the target of the jump. In practice, though, you can still use these jumps, which is one rule being broken.

That’s where stack clashing comes into play. In order for it to be a proper coroutine, it needs to have its own stack. But how can we do that with these primitive C utilities? Extend the stack to overlap the heap, call setjmp() to capture a coroutine on it, then return. Generally we can get away with using longjmp() to return to this heap-allocated stack.

Here’s my iterator definition for this one. Like the XSI context struct, this has two jmp_buf “contexts.” The stack holds the iterator’s stack buffer so that it can be freed, and the gap field will be used to prevent the optimizer from spoiling our plans.

struct tree_it {
    char *k;
    char *v;
    char *stack;
    volatile char *gap;
    jmp_buf coroutine;
    jmp_buf yield;
};

The coroutine looks familiar again. This time the yield is performed with setjmp() and longjmp(), just like swapcontext(). Remember that setjmp() returns twice, hence the branch. The longjmp() never returns.

static void
coroutine(struct tree *t, struct tree_it *it)
{
    if (t) {
        it->k = t->key;
        it->v = t->value;
        if (!setjmp(it->coroutine))
            longjmp(it->yield, 1);
        coroutine(t->left, it);
        coroutine(t->right, it);
    }
}
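The constructor below calls a small trampoline, coroutine_init(), whose definition isn’t shown here. To be consistent with the rest of the code, it presumably captures the coroutine’s context, yields immediately, and yields one final time when traversal completes — something like this sketch:

static void
coroutine_init(struct tree *t, struct tree_it *it)
{
    if (!setjmp(it->coroutine))
        longjmp(it->yield, 1);  /* pause until the first tree_next() */
    coroutine(t, it);
    longjmp(it->yield, 1);      /* final yield: it->k is still NULL */
}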

Next is the tricky part to cause the stack clash. First, allocate the new stack with malloc() so that we can get its address. Then use a local variable on the stack to determine how much the stack needs to grow in order to overlap with the allocation. Taking the difference between these pointers is illegal as far as the language is concerned, making this the second rule I’m breaking. I can imagine an implementation where the stack and heap are in two separate kinds of memory, and it would be meaningless to take the difference. I don’t actually have to imagine very hard, because this is actually how it used to work on the 8086 with its segmented memory architecture.

struct tree_it *
tree_iterator(struct tree *t)
{
    struct tree_it *it = malloc(sizeof(*it));
    it->stack = malloc(STACK_SIZE);
    char marker;
    char gap[&marker - it->stack - STACK_SIZE];
    it->gap = gap; // prevent optimization
    if (!setjmp(it->yield))
        coroutine_init(t, it);
    return it;
}

I’m using a variable-length array (VLA) named gap to indirectly control the stack pointer, moving it over the heap. I’m assuming the stack grows downward, since otherwise the sign would be wrong.

The compiler is smart and will notice I’m not actually using gap, and it’s happy to throw it away. In fact, it’s vitally important that I don’t touch it since the guard page, along with a bunch of unmapped memory, is actually somewhere in the middle of that array. I only want the array for its side effect, but that side effect isn’t officially supported, which means the optimizer doesn’t need to consider it in its decisions. To inhibit the optimizer, I store the array’s address where someone might potentially look at it, meaning the array has to exist.

Finally, the iterator function looks just like the others, again.

int
tree_next(struct tree_it *it, char **k, char **v)
{
    it->k = 0;
    if (!setjmp(it->yield))
        longjmp(it->coroutine, 1);
    if (it->k) {
        *k = it->k;
        *v = it->v;
        return 1;
    } else {
        free(it->stack);
        free(it);
        return 0;
    }
}

And that’s it: a nasty hack using a stack clash to create a context for a setjmp()+longjmp() coroutine.

Building and Installing Software in $HOME

For more than 5 years now I’ve kept a private “root” filesystem within my home directory under $HOME/.local/. Within are the standard /usr directories, such as bin/, include/, lib/, etc., containing my own software, libraries, and man pages. These are first-class citizens, indistinguishable from the system-installed programs and libraries. With one exception (setuid programs), none of this requires root privileges.

Installing software in $HOME serves two important purposes, both of which are indispensable to me on a regular basis.

Sometimes I don’t have root access, which prevents me from installing packaged software through the system’s package manager. Building and installing the software myself in my home directory, without involvement from the system administrator, neatly works around this issue. As a software developer, it’s already perfectly normal for me to build and run custom software, and this is just an extension of that behavior.

In the most desperate situation, all I need from the sysadmin is a decent C compiler and at least a minimal POSIX environment. I can bootstrap anything I might need, both libraries and programs, including a better C compiler along the way. This is one major strength of open source software.

I have noticed one alarming trend: Both GCC (since 4.8) and Clang are written in C++, so it’s becoming less and less reasonable to bootstrap a C++ compiler from a C compiler, or even from a C++ compiler that’s more than a few years old. So you may also need your sysadmin to supply a fairly recent C++ compiler if you want to bootstrap an environment that includes C++. I’ve had to avoid some C++ software (such as CMake) for this reason.

In theory this is what /usr/local is all about. It’s typically the location for software not managed by the system’s package manager. However, I think it’s cleaner to put this in $HOME/.local, so long as other system users don’t need it.

For example, I have an installation of each version of Emacs from 24.3 (the oldest version worth supporting) through the latest stable release, each suffixed with its version number, under $HOME/.local. This is useful for quickly running a test suite under different releases.

$ git clone https://github.com/skeeto/elfeed
$ cd elfeed/
$ make EMACS=emacs24.3 clean test
...
$ make EMACS=emacs25.2 clean test
...

Another example is NetHack, which I prefer to play with a couple of custom patches (Menucolors, wchar). The install to $HOME/.local is also captured as a patch.

$ tar xzf nethack-343-src.tar.gz
$ cd nethack-3.4.3/
$ patch -p1 < ~/nh343-menucolor.diff
$ patch -p1 < ~/nh343-wchar.diff
$ patch -p1 < ~/nh343-home-install.diff
$ sh sys/unix/setup.sh
$ make -j$(nproc) install

Normally NetHack wants to be setuid (e.g. run as the “games” user) in order to restrict access to high scores, saves, and bones — saved levels where a player died, to be inserted randomly into other players’ games. This prevents cheating, but requires root to set up. Fortunately, when I install NetHack in my home directory, this isn’t a feature I actually care about, so I can ignore it.

Mutt is in a similar situation, since it wants to install a special setgid program (mutt_dotlock) that synchronizes mailbox access. All MUAs need something like this.

Everything described below is relevant to basically any modern unix-like system: Linux, BSD, etc. I personally install software in $HOME across a variety of systems and, fortunately, it mostly works the same way everywhere. This is probably in large part due to everyone standardizing around the GCC and GNU binutils interfaces, even if the system compiler is actually LLVM/Clang.

Configuring for $HOME installs

Out of the box, installing things in $HOME/.local won’t do anything useful. You need to set up some environment variables in your shell configuration (i.e. .profile, .bashrc, etc.) to tell various programs, such as your shell, about it. The most obvious variable is $PATH:

export PATH=$HOME/.local/bin:$PATH

Notice I put it in the front of the list. This is because I want my home directory programs to override system programs with the same name. For what other reason would I install a program with the same name if not to override the system program?

In the simplest situation this is good enough, but in practice you’ll probably need to set a few more things. If you install libraries in your home directory and expect to use them just as if they were installed on the system, you’ll need to tell the compiler where else to look for those headers and libraries, both for C and C++.

export C_INCLUDE_PATH=$HOME/.local/include
export CPLUS_INCLUDE_PATH=$HOME/.local/include
export LIBRARY_PATH=$HOME/.local/lib

This is like the -I compiler option and the -L linker option, except you won’t need to use them explicitly. Some software uses pkg-config to determine its compiler and linker flags, and your home directory will contain some of the needed information. So set that up too:

export PKG_CONFIG_PATH=$HOME/.local/lib/pkgconfig
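To check that it’s being picked up, ask pkg-config about a library you’ve installed in your home directory (zlib here is just a hypothetical example); the -I and -L flags it prints should point into $HOME/.local:

$ pkg-config --cflags --libs zlib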

Run-time linker

Finally, when you install libraries in your home directory, the run-time dynamic linker will need to know where to find them. There are three ways to deal with this:

  1. The crude, easy way: LD_LIBRARY_PATH.
  2. The elegant, difficult way: ELF runpath.
  3. Screw it, just statically link the bugger. (Not always possible.)

For the crude way, point the run-time linker at your lib/ and you’re done:

export LD_LIBRARY_PATH=$HOME/.local/lib

However, this is like using a shotgun to kill a fly. If you install a library in your home directory that is also installed on the system, and then run a system program, it may be linked against your library rather than the library installed on the system as was originally intended. This could have detrimental effects.

The precision method is to set the ELF “runpath” value. It’s like a per-binary LD_LIBRARY_PATH. The run-time linker uses this path first in its search for libraries, and it will only have an effect on that particular program/library. This also applies to dlopen().

Some software will configure the runpath by default, but usually you need to configure this yourself with the linker -rpath option in LDFLAGS. It’s used directly like this:

$ gcc -Wl,-rpath=$HOME/.local/lib -o foo bar.o baz.o -lquux

Verify with readelf:

$ readelf -d foo | grep runpath
Library runpath: [/home/username/.local/lib]

ELF supports a special $ORIGIN “variable” set to the binary’s location. This allows the program and associated libraries to be installed anywhere without changes, so long as they have the same relative position to each other. (Note the quotes to prevent shell interpolation.)

$ gcc -Wl,-rpath='$ORIGIN/../lib' -o foo bar.o baz.o -lquux

There is one situation where runpath won’t work: when you want a system-installed program to find a home directory library with dlopen() — e.g. as an extension to that program. You either need to ensure it uses a relative or absolute path (i.e. the argument to dlopen() contains a slash) or you must use LD_LIBRARY_PATH.

Personally, I always use the Worse is Better LD_LIBRARY_PATH shotgun. Occasionally it’s caused some annoying issues, but the vast majority of the time it gets the job done with little fuss. This is just my personal development environment, after all, not a production server.

Manual pages

Another potentially tricky issue is man pages. When a program or library installs a man page in your home directory, it would certainly be nice to access it with man <topic> just like it was installed on the system. Fortunately, Debian and Debian-derived systems, using a mechanism I haven’t yet figured out, discover home directory man pages automatically without any assistance. No configuration needed.

It’s more complicated on other systems, such as the BSDs. You’ll need to set the MANPATH variable to include $HOME/.local/share/man. It’s unset by default and it overrides the system settings, which means you need to manually include the system paths. The manpath program can help with this … if it’s available.

export MANPATH=$HOME/.local/share/man:$(manpath)

I haven’t figured out a portable way to deal with this issue, so I mostly ignore it.

How to install software in $HOME

While I’ve poo-pooed autoconf in the past, the standard configure script usually makes it trivial to build and install software in $HOME. The key ingredient is the --prefix option:

$ tar xzf name-version.tar.gz
$ cd name-version/
$ ./configure --prefix=$HOME/.local
$ make -j$(nproc)
$ make install

Most of the time it’s that simple! If you’re linking against your own libraries and want to use runpath, it’s a little more complicated:

$ ./configure --prefix=$HOME/.local \
              LDFLAGS="-Wl,-rpath=$HOME/.local/lib"

For CMake, there’s CMAKE_INSTALL_PREFIX:

$ cmake -DCMAKE_INSTALL_PREFIX=$HOME/.local ..

The CMake builds I’ve seen use ELF runpath by default, and no further configuration may be required to make that work. I’m sure that’s not always the case, though.

Some software is just a single, static, standalone binary with everything baked in. It doesn’t need to be given a prefix, and installation is as simple as copying the binary into place. For example, Enchive works like this:

$ git clone https://github.com/skeeto/enchive
$ cd enchive/
$ make
$ cp enchive ~/.local/bin

Some software uses its own unique configuration interface. I can respect that, but it does add some friction for users who now have something additional and non-transferable to learn. I demonstrated a NetHack build above, which has a configuration much more involved than it really should be. Another example is LuaJIT, which uses make variables that must be provided consistently on every invocation:

$ tar xzf LuaJIT-2.0.5.tar.gz
$ cd LuaJIT-2.0.5/
$ make -j$(nproc) PREFIX=$HOME/.local
$ make PREFIX=$HOME/.local install

(You can use the “install” target to both build and install, but I wanted to illustrate the repetition of PREFIX.)

Some libraries aren’t so smart about pkg-config and need some handholding — for example, ncurses. I mention it because it’s required for both Vim and Emacs, among many others, so I’m often building it myself. Its configure script won’t install pkg-config files unless asked, and it doesn’t derive their location from --prefix, so it needs to be told a second time where to install things:

$ ./configure --prefix=$HOME/.local \
              --enable-pc-files \
              --with-pkg-config-libdir=$PKG_CONFIG_PATH

Another issue is that a whole lot of software has been hardcoded for ncurses 5.x (i.e. ncurses5-config), and it requires hacks/patching to make it behave properly with ncurses 6.x. I’ve avoided ncurses 6.x for this reason.

Learning through experience

I could go on and on like this, discussing the quirks for the various libraries and programs that I use. Over the years I’ve gotten used to many of these issues, committing the solutions to memory. Unfortunately, even within the same version of a piece of software, the quirks can change between major operating system releases, so I’m continuously learning my way around new issues. It’s really given me an appreciation for all the hard work that package maintainers put into customizing and maintaining software builds to fit properly into a larger ecosystem.

Switching to the Mutt Email Client

Note: The way I manage my email wouldn’t really work for most people, so don’t read this as a recommendation. This is just a discussion of how I prefer to use email.

It was almost four years ago that I switched from webmail to a customized email configuration based on Notmuch and Emacs. Notmuch served both as a native back-end that provided indexing and tagging, and as a front-end written in Emacs Lisp. It dramatically improved my email experience, and I wished I had done it earlier. I’ve really enjoyed having so much direct control over my email.

However, I’m always fiddling with things — fiddling feels a lot more productive than it actually is — and last month I re-invented my email situation, this time switching to a combination of Mutt, Vim, mu, and tmux. The entirety of my email interface now resides inside a terminal, and I’m enjoying it even more. I feel I’ve “leveled up” again in my email habits.

On the server-side I also switched from Exim to Postfix and procmail, making the server configuration a whole lot simpler. Including SpamAssassin, it’s just three lines added to the default Debian configuration. It leaves a lot less room for error, and I could rebuild it from scratch with little trouble if there was an emergency. My previous configuration required quite a bit of system configuration, such as relying on incron to sort incoming mail, particularly spam, but procmail now does this job more cleanly.

Towards Robustness

Over the years I’ve gotten less patient when it comes to dealing with breaking changes in software, and I’ve gotten more conservative about system stability. Continuously updating my configurations and habits to the latest software changes was an interesting challenge earlier in my career, but today there are much better uses of my time. Debian Stable, my preferred operating system, runs at pretty much the perfect pace for me.

Following these changing preferences, one of the biggest motivations for my recent email change was to make my email setup more robust and stable. Until now, email was tied tightly to Emacs, with a configuration drawing directly from MELPA, pulling in the bleeding edge version of every package I use. Breaking changes arrive at unexpected times, and occasionally the current version of a package temporarily doesn’t work. Usually it’s because the developer pushed a bad commit right before the latest MELPA build, and so the package is broken for a few hours or days. I’ve been guilty of this myself. MELPA Stable is intended to address these issues, but it seems to break more often than normal MELPA. For example, at the time of this writing, Evil is not installable via MELPA Stable due to an unmet dependency.

Tying something as vital as email to this Rube Goldberg machine made me nervous. Access to my email depended on a number of independent systems of various levels of stability to mostly work correctly. My switch to Mutt cut this down to just a couple of very stable systems.

format=flowed

I’ve long believed HTML email is an abomination that should never have been invented. Text is the ideal format for email, and there are a number of specifications to make it work well across different systems. One of those standards is RFC 3676, colloquially named format=flowed, or just f=f.

Messages encoded with f=f allow mail clients to safely reflow the paragraphs to nicely fit the user’s display, whether that display be thinner or wider than the sender’s original message. It’s also completely compatible with mail clients that don’t understand format=flowed, which will display the message as the sender originally wrapped it.

The gist of f=f is that messages can have both “soft” and “hard” line breaks. If a line ends with a space, then it’s a soft line break. The mail client can safely reflow lines separated by a soft line break. Without the trailing space, it’s a hard line break, which prohibits flowing with the next line. The last line of a paragraph ends with a hard line break. It’s also used for text that shouldn’t reflow, such as code samples.

I’ll illustrate using an underscore in place of a space, so that you can see it:

This is a message in the format=flowed style, allowing_
mail clients to flow this message nicely in displays of_
different widths.

> This is an example of a quote block in a message,_
> which is supported by the format=flowed specification.
>> It also supports nested quote blocks, which means_
>> this paragraph won't flow into the previous.

The RFC covers edge cases that require special “space-stuffing” rules, but, when editing a text email in an editor, you only need to think about soft and hard line breaks. In my case, Mutt takes care of the rest of the details.

Unfortunately Emacs lacks decent support for f=f, though I’m sure a minor mode could be written to make it work well. On the other hand, Vim has been playing an increasing role in my day-to-day editing, and it has excellent built-in support for f=f. Since I’m now using Vim to compose all of my email, I get it for free.

First, I tell Mutt that I want to use f=f in my .muttrc:

set text_flowed

Then in Vim, I add the w flag to formatoptions, which tells it to wrap paragraphs using soft line breaks.

set fo+=w
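Since I don’t want soft line breaks in every file I edit, I scope the flag to the mail filetype, which Vim assigns to Mutt’s compose buffers:

autocmd FileType mail setlocal formatoptions+=w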

If I want to inspect my f=f formatting, I temporarily enable the list option, which displays a $ for all newlines.

set list

Although few people would notice a difference, I feel a little bad for not using f=f all these years! A few people may have endured some ugly, non-flowing emails from me. My only consolation is that at least it wasn’t HTML.

It’s not all roses, though. When I reply to a message, Mutt doesn’t insert the quoted text as f=f into my reply, so I have to massage it into f=f myself. Also, just as GitHub doesn’t support Markdown in email responses, neither does it support f=f. When I reply to issues by email, GitHub won’t nicely reflow my carefully crafted f=f message, needlessly making email responses an inferior option.

Features unneeded

One reason I didn’t choose this particular email arrangement 4 years ago was that PGP support was one of my prime requirements. Mutt has solid PGP support, but, with a Maildir setup (i.e. not IMAP), I’d have to use the key on the server, which was out of the question. Since I no longer care about PGP, my email requirements are more relaxed.

Over the years I wasn’t making much use of Notmuch’s tagging system. I only used two tags: “unread” and “inbox” (e.g. read, but still needs attention). Otherwise I’d use Notmuch’s powerful search to find what I wanted. I still needed to keep track of the tags I was using, so the Notmuch index, nearly as large as the email messages themselves, became part of my mail backup.

The Maildir format itself supports some flags: passed (P), replied (R), seen (S), trashed (T), draft (D), and flagged (F). These are stored in the message’s filename. In my new configuration, the “seen” tag (inversely) takes the place of Notmuch’s “unread” tag. The “flagged” tag takes the place of the “inbox” tag. Normally in Mutt you’d use mailboxes — i.e. Maildir subdirectories — for something like this, but I prefer all my mail to sit in one big bucket. Search, don’t sort.
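For example, a message that has been seen and flagged might have a name like this (a made-up but representative filename), with the flags trailing the “:2,” marker:

cur/1504643439.M173939P4871.hostname:2,FS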

Since the two flags are part of the filename, I no longer need to include a tag database (i.e. the entire Notmuch index) in the backup, and my mail backups are much smaller. I could continue to use Notmuch for searching, but I’ve settled on mu instead. When I perform a search, mu writes the results to a temporary Maildir using symbolic links, which I visit with Mutt. The mu index is transient and doesn’t need to be backed up.
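A search looks something like this — option names as I recall them from the mu manual, so check your version:

$ mu find --clearlinks --format=links --linksdir=$HOME/.mu-results flag:flagged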

Mu also manages my contacts alias list. It can produce a Mutt-style alias file based on the contents of my Maildir:

mu cfind --format=mutt-alias > aliases

It’s been really nice to have all my email sitting around as nothing more than a big pile of files like this. I’ve begun writing little scripts to harvest data from it, too.
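For example, here’s the sort of throwaway script this enables. It’s a minimal sketch using Python’s standard mailbox module, with the Maildir path as a placeholder, tallying my most frequent correspondents:

from collections import Counter
import mailbox
import os.path

# Open the one big Maildir bucket (placeholder path)
md = mailbox.Maildir(os.path.expanduser('~/Mail'), factory=None)

# Tally the From header of each message
counts = Counter(msg['from'] for msg in md if msg['from'])

for sender, n in counts.most_common(10):
    print(n, sender)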

Configuration files

As with all my personal configuration files, you can see my .muttrc online. For the first few weeks I was tweaking this file hourly, but I’ve now got it basically the way I want it.

Web Scraping into an E-book with BeautifulSoup and Pandoc

I recently learned how to use BeautifulSoup, a Python library for manipulating HTML and XML parse trees, and it’s been a fantastic addition to my virtual toolbelt. In the past when I’ve needed to process raw HTML, I’ve tried nasty hacks with Unix pipes, or routing the content through a web browser so that I could manipulate it via the DOM API. None of that worked very well, but now I finally have BeautifulSoup to fill that gap. It’s got a selector interface and, except for rendering, it’s basically as comfortable with HTML as JavaScript.
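For example, here’s a trivial, made-up illustration of the selector interface:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<div class="c"><p>hello</p></div>', 'html.parser')
print(soup.select('div.c p')[0].get_text())  # prints "hello"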

Today’s problem was that I wanted to read a recommended online book called Interviewing Leather, a story set “in a world where caped heroes fight dastardly villains on an everyday basis.” I say “online book” because the 39,403-word story is distributed as a series of 14 blog posts. I’d rather not read it in a browser on the website, preferring the comfort of e-book form instead. The last time I did this, I manually scraped the entire book into Markdown, spent a couple of weeks editing it for mistakes, and finally sent the Markdown to Pandoc to convert into an e-book.

For this book, I just want a quick-and-dirty scrape in order to shift formats. I’ve never read it and I may not even like it (update: I enjoyed it), so I definitely don’t want to spend much time on the conversion. Despite the fun I’ve been having with typing lately, I’d also prefer to keep all the formatting — italics, etc. — without re-entering it all manually.

Fortunately Pandoc can consume HTML as input, so, in theory, I can feed it the original HTML and preserve all of the original markup. The challenge is that the HTML is spread across 14 pages surrounded by all the expected blog cruft. I need some way to extract the book content from each page, concatenate it together along with chapter headings, and send the result to Pandoc. Enter BeautifulSoup.

First, I need to construct the skeleton HTML document. Rather than code my own HTML, I’m going to build it with BeautifulSoup. I start by creating a completely empty document and adding a doctype to it.

from bs4 import BeautifulSoup, Doctype
import urllib.request  # used later to fetch each chapter

doc = BeautifulSoup('', 'html.parser')
doc.append(Doctype('html'))

Next I create the html root element, then add the head and body elements. I also add a title element. The original content has fancy Unicode markup — left and right quotation marks, em dash, etc. — so it’s important to declare the page as UTF-8, since otherwise these characters are likely to be interpreted incorrectly. It always feels odd declaring the encoding within the content being encoded, but that’s just the way things are.

html = doc.new_tag('html', lang='en-US')
doc.append(html)
head = doc.new_tag('head')
html.append(head)
meta = doc.new_tag('meta', charset='utf-8')
head.append(meta)
title = doc.new_tag('title')
title.string = 'Interviewing Leather'
head.append(title)
body = doc.new_tag('body')
html.append(body)

If I print(doc.prettify()) then I see the skeleton I want:

<!DOCTYPE html>
<html lang="en-US">
 <head>
  <meta charset="utf-8"/>
  <title>
   Interviewing Leather
  </title>
 </head>
 <body>
 </body>
</html>

Next, I assemble a list of the individual blog posts. When I was actually writing the script, I first downloaded them locally with my favorite download tool, curl, and ran the script against local copies. I didn’t want to hit the web server each time I tested. (Note: I’ve truncated these URLs to fit in this article.)

chapters = [
    "https://banter-latte.com/2007/06/26/...",
    "https://banter-latte.com/2007/07/03/...",
    "https://banter-latte.com/2007/07/10/...",
    "https://banter-latte.com/2007/07/17/...",
    "https://banter-latte.com/2007/07/24/...",
    "https://banter-latte.com/2007/07/31/...",
    "https://banter-latte.com/2007/08/07/...",
    "https://banter-latte.com/2007/08/14/...",
    "https://banter-latte.com/2007/08/21/...",
    "https://banter-latte.com/2007/08/28/...",
    "https://banter-latte.com/2007/09/04/...",
    "https://banter-latte.com/2007/09/20/...",
    "https://banter-latte.com/2007/09/25/...",
    "https://banter-latte.com/2007/10/02/..."
]

I visit a few of these pages in my browser to determine which part of the page I want to extract. I want to look closely enough to see what I’m doing, but not so closely that I spoil the story for myself! Right-clicking the content in the browser and selecting “Inspect Element” (Firefox) or “Inspect” (Chrome) pops up a pane for structurally navigating the page. “View Page Source” would work, too, especially since this is static content, but I find the developer pane easier to read. Plus it hides most of the content, revealing only the structure.

The content is contained in a div with the class entry-content. I can use a selector to isolate this element and extract its child p elements. However, it’s not quite so simple. Each chapter starts with a bit of commentary that’s not part of the book, which I don’t want to include in my extract. It’s separated from the real content by an hr element. There’s also a footer below another hr element, likely put there by someone who wasn’t paying attention to the page structure. It’s not exactly a shining example of semantic markup, but it’s regular enough that I can manage.

<body>
  <main class="site-main">
    <div class="entry-body">
      <div class="entry-content">
        <p>A little intro.</p>
        <p>Some more intro.</p>
        <hr/>
        <p>Actual book content.</p>
        <p>More content.</p>
        <hr/>
        <p>Footer navigation junk.</p>
      </div>
    </div>
  </main>
</body>

The next step is visiting each of these pages. I use enumerate since I want the chapter numbers when inserting h1 chapter elements. Pandoc will use these to build the table of contents.

for i, chapter in enumerate(chapters):
    # Construct h1 for the chapter
    header = doc.new_tag('h1')
    header.string = 'Chapter %d' % (i + 1,)
    body.append(header)

Next I grab the page content using urllib and parse it with BeautifulSoup. I’m using a selector to locate the div with the book content.

    # Load chapter content
    with urllib.request.urlopen(chapter) as url:
        page = BeautifulSoup(url, 'html.parser')
    content = page.select('.entry-content')[0]

Finally I iterate over the child elements of the div.entry-content element, keeping a running count of hr elements and extracting content only while exactly one hr element has been seen.

    # Append content between hr elements
    hr_count = 0
    for child in content.children:
        if child.name == 'hr':
            hr_count += 1
        elif child.name == 'p' and hr_count == 1:
            child.attrs = {}
            if child.string == '#':
                body.append(doc.new_tag('hr'))
            else:
                body.append(child)

If it’s a p element, I copy it into the output document, taking a moment to strip away any attributes present on the p tag, since, for some reason, some of these elements have old-fashioned alignment attributes in the original content.

The original content also uses the text “#” by itself in a p to separate sections rather than using the appropriate markup. Despite being semantically incorrect, I’m thankful for this since more hr elements would have complicated matters further. I convert these to the correct markup for the final document.

Finally I pretty print the result:

print(doc.prettify())

Alternatively I could pipe it through tidy.

$ python3 extract.py | tidy -indent -utf8 > output.html

A brief inspection with a browser indicates that everything seems to have come out correctly. I won’t know for sure, though, until I actually read through the whole book. Finally I have Pandoc perform the conversion.

$ pandoc -t epub3 -o output.epub output.html

And that’s it! It’s ready to read offline in my e-book reader of choice. The crude version of my script took around 15–20 minutes to write and test, so I had an e-book conversion in under 30 minutes. That’s about as long as I was willing to spend to get it. Tidying the script up for this article took a lot longer.

I don’t have permission to share the resulting e-book, but I can share my script so that you can generate your own, at least as long as it’s hosted at the same place with the same structure.

The Adversarial Implementation

When coding against a standard, whether it’s a programming language specification or an open API with multiple vendors, a common concern is the conformity of a particular construct to the standard. This cannot be determined simply by experimentation, since a piece of code may work correctly due only to the specifics of a particular implementation. It works today with this implementation, but it may not work tomorrow or with a different implementation. Sometimes an implementation will warn about the use of non-standard behavior, but this isn’t always the case.

When I’m reasoning about whether or not something is allowed, I like to imagine an adversarial implementation. If the standard allows some freedom, this implementation takes an imaginative or unique approach. It chooses non-obvious interpretations with possibly unexpected, but valid, results. This is nearly the opposite of djb’s hypothetical boringcc, though some of the ideas are similar.

Many argue that this is already the case with modern C and C++ optimizing compilers. Compiler writers are already creative with the standard in order to squeeze out more performance, even if it’s at odds with the programmer’s actual intentions. The most prominent example in C and C++ is strict aliasing, where the optimizer is deliberately blinded to certain kinds of aliasing because the standard allows it to be, eliminating some (possibly important) loads. This happens despite the compiler’s ability to trivially prove that two particular objects really do alias.

I want to be clear that I’m not talking about the nasal demons kind of creativity. That’s not a helpful thought experiment. What I mean is this: Can I imagine a conforming implementation that breaks any assumptions made by the code?

In practice, a compiler typically has to bridge multiple specifications: the language standard, the platform ABI, and the operating system interface (process startup, syscalls, etc.). This really ties its hands on how creative it can be with any one of those specifications. Depending on the situation, though, the imaginary adversarial implementation isn’t necessarily running on any particular platform. If our program is expected to have a long life, useful for many years to come, we should avoid making too many assumptions about future computers and imagine an adversarial compiler with few limitations.

C example

Take this bit of C:

printf("%d", sizeof(foo));

The printf function is variadic, and it relies entirely on the format string in order to correctly handle all its arguments. The %d specifier means that its matching argument is of type int. However, the result of the sizeof operator is an integer of type size_t, which differs in signedness and may even be a different size. (The correct specifier for a size_t is %zu.)

Typically this code will work just fine. An int and size_t are generally passed the same way, the actual value probably fits in an int, and two’s complement means the signedness isn’t an issue due to the value being positive. From the printf point of view, it typically can’t detect that the type is wrong, so everything works by chance. In fact, it’s hard to imagine a real situation where this wouldn’t work fine.

However, this is still undefined behavior — a scenario where a creative adversarial implementation can break things. In this case there are a few options for an adversarial implementation:

  1. Arguments of type int and size_t are passed differently, so printf will load the argument from the wrong place.
  2. The implementation doesn’t use two’s complement, so even small positive values have different bit representations.
  3. The type of foo is given so much padding, for arbitrary reasons, that its size doesn’t fit in an int.

What’s interesting about #1 is that this has actually happened. For example, here’s a C source file.

float foo(float x, int y);

float
bar(int y)
{
    return foo(0.0f, y);
}

And in another source file:

float
foo(int x, int y)
{
    (void)x;  // ignore x
    return y * 2.0f;
}

The type of argument x differs between the prototype and the definition, which is undefined behavior. However, since this argument is ignored, this code will still work correctly on many different real-world computers, particularly where float and int arguments are passed the same way (i.e. on the stack).

However, in 2003 the x86-64 CPU arrived with its new System V ABI. Floating point and integer arguments were now passed differently, and the types of preceding arguments mattered when deciding which register to use. Some constructs that worked fine, by chance, prior to 2003 would soon stop working due to what may have seemed like an adversarial implementation years before.

Python example

Let’s look at some Python. This snippet opens a file a million times without closing any handles.

for i in range(1, 1000000):
    f = open("/dev/null", "r")

Assuming you have a /dev/null, this code will run without throwing any exceptions on CPython, the most widely used Python implementation. CPython uses a deterministic reference counting scheme, and each handle is automatically closed as soon as its reference count drops to zero, which happens here the moment f is rebound on the next iteration. It’s like having an invisible f.close() at the end of each iteration.

However, this code is incorrect. The deterministic handle closing is an implementation detail, not part of the specification. The operating system limits the number of files a process can have open at once, and there’s a risk that this resource will run out even though none of those handles are reachable. Imagine an adversarial Python implementation trying to break this code. It could sufficiently delay garbage collection, or even have infinite memory, omitting garbage collection altogether.

Like before, such an implementation eventually did come about: PyPy, a Python implementation written in Python with a JIT compiler. It uses (by default) something closer to mark-and-sweep, not reference counting, and those handles are left open until the next collection.

>>>> for i in range(1, 1000000):
....     f = open("/dev/null", "r")
.... 
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
IOError: [Errno 24] Too many open files: '/dev/null'
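The fix is to close each handle explicitly instead of leaning on the garbage collector. In Python, that means a with block:

for i in range(1, 1000000):
    with open("/dev/null", "r") as f:
        pass  # handle is closed when the block exits, on any implementation

With the explicit close, the snippet completes on CPython and PyPy alike.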

A tool for understanding specifications

This fits right in with a broader method of self-improvement: Occasionally put yourself in the implementor’s shoes. Think about what it would take to correctly implement the code that you write, either as a language or the APIs that you call. On reflection, you may find that some of those things that seem cheap may not be. Your assumptions may be reasonable, but not guaranteed. (Though it may be that “reasonable” is perfectly sufficient for your situation.)

An adversarial implementation is one that challenges an assumption you’ve taken for granted by turning it on its head.
