Web Scraping into an E-book with BeautifulSoup and Pandoc

I recently learned how to use BeautifulSoup, a Python library for manipulating HTML and XML parse trees, and it’s been a fantastic addition to my virtual toolbelt. In the past when I’ve needed to process raw HTML, I’ve tried nasty hacks with Unix pipes, or routing the content through a web browser so that I could manipulate it via the DOM API. None of that worked very well, but now I finally have BeautifulSoup to fill that gap. It’s got a selector interface and, except for rendering, it’s basically as comfortable with HTML as JavaScript.

Today’s problem was that I wanted to read a recommended online book called Interviewing Leather, a story set “in a world where caped heroes fight dastardly villains on an everyday basis.” I say “online book” because the 39,403 word story is distributed as a series of 14 blog posts. I’d rather not read it on the website in a browser, instead preferring it in e-book form where it’s more comfortable. The last time I did this, I manually scraped the entire book into Markdown, spent a couple of weeks editing it for mistakes, and finally sent the Markdown to Pandoc to convert into an e-book.

For this book, I just want a quick-and-dirty scrape in order to shift formats. I’ve never read it and I may not even like it, so I definitely don’t want to spend much time on the conversion. Despite having fun with typing lately, I’d also prefer to keep all the formatting — italics, etc. — without re-entering it all manually.

Fortunately Pandoc can consume HTML as input, so, in theory, I can feed it the original HTML and preserve all of the original markup. The challenge is that the HTML is spread across 14 pages surrounded by all the expected blog cruft. I need some way to extract the book content from each page, concatenate it together along with chapter headings, and send the result to Pandoc. Enter BeautifulSoup.

First, I need to construct the skeleton HTML document. Rather than code my own HTML, I’m going to build it with BeautifulSoup. I start by creating a completely empty document and adding a doctype to it.

from bs4 import BeautifulSoup, Doctype
import urllib.request  # used later to fetch each chapter

doc = BeautifulSoup()
doc.append(Doctype('html'))

Next I create the html root element, then add the head and body elements. I also add a title element. The original content has fancy Unicode punctuation — left and right quotation marks, em dash, etc. — so it’s important to declare the page as UTF-8, since otherwise these characters are likely to be interpreted incorrectly. It always feels odd declaring the encoding within the content being encoded, but that’s just the way things are.

html = doc.new_tag('html', lang='en-US')
doc.append(html)
head = doc.new_tag('head')
html.append(head)
meta = doc.new_tag('meta', charset='utf-8')
head.append(meta)
title = doc.new_tag('title')
title.string = 'Interviewing Leather'
head.append(title)
body = doc.new_tag('body')
html.append(body)

If I print(doc.prettify()) then I see the skeleton I want:

<!DOCTYPE html>
<html lang="en-US">
 <head>
  <meta charset="utf-8"/>
  <title>
   Interviewing Leather
  </title>
 </head>
 <body>
 </body>
</html>

Next, I assemble a list of the individual blog posts. When I was actually writing the script, I first downloaded them locally with my favorite download tool, curl, and ran the script against local copies. I didn’t want to hit the web server each time I tested. (Note: I’ve truncated these URLs to fit in this article.)

chapters = [
    "https://banter-latte.com/2007/06/26/...",
    "https://banter-latte.com/2007/07/03/...",
    "https://banter-latte.com/2007/07/10/...",
    "https://banter-latte.com/2007/07/17/...",
    "https://banter-latte.com/2007/07/24/...",
    "https://banter-latte.com/2007/07/31/...",
    "https://banter-latte.com/2007/08/07/...",
    "https://banter-latte.com/2007/08/14/...",
    "https://banter-latte.com/2007/08/21/...",
    "https://banter-latte.com/2007/08/28/...",
    "https://banter-latte.com/2007/09/04/...",
    "https://banter-latte.com/2007/09/20/...",
    "https://banter-latte.com/2007/09/25/...",
    "https://banter-latte.com/2007/10/02/..."
]

I visit a few of these pages in my browser to determine which part of the page I want to extract. I want to look closely enough to see what I’m doing, but not so closely that I spoil the story for myself! Right clicking the content in the browser and selecting “Inspect Element” (Firefox) or “Inspect” (Chrome) pops up a pane to structurally navigate the page. “View Page Source” would work, too, especially since this is static content, but I find the developer pane easier to read. Plus it hides most of the content, revealing only the structure.

The content is contained in a div with the class entry-content. I can use a selector to isolate this element and extract its child p elements. However, it’s not quite so simple. Each chapter starts with a bit of commentary that’s not part of the book, and I don’t want to include it in my extraction. It’s separated from the real content by an hr element. There’s also a footer below another hr element, likely put there by someone who wasn’t paying attention to the page structure. It’s not quite a shining example of semantic markup, but it’s regular enough that I can manage.

<body>
  <main class="site-main">
    <div class="entry-body">
      <div class="entry-content">
        <p>A little intro.</p>
        <p>Some more intro.</p>
        <hr/>
        <p>Actual book content.</p>
        <p>More content.</p>
        <hr/>
        <p>Footer navigation junk.</p>
      </div>
    </div>
  </main>
</body>

The next step is visiting each of these pages. I use enumerate since I want the chapter numbers when inserting h1 chapter elements. Pandoc will use these to build the table of contents.

for i, chapter in enumerate(chapters):
    # Construct h1 for the chapter
    header = doc.new_tag('h1')
    header.string = 'Chapter %d' % (i + 1,)
    body.append(header)

Next I grab the page content using urllib and parse it with BeautifulSoup. I’m using a selector to locate the div with the book content.

    # Load chapter content
    with urllib.request.urlopen(chapter) as url:
        page = BeautifulSoup(url)
    content = page.select('.entry-content')[0]

Finally I iterate over the child elements of the div.entry-content element. I keep a running count of the hr elements and only extract content while exactly one hr element has been seen.

    # Append content between hr elements
    hr_count = 0
    for child in content.children:
        if (child.name == 'hr'):
            hr_count += 1
        elif (child.name == 'p' and hr_count == 1):
            child.attrs = {}
            if (child.string == '#'):
                body.append(doc.new_tag('hr'))
            else:
                body.append(child)

If it’s a p element, I copy it into the output document, taking a moment to strip away any attributes present on the p tag, since, for some reason, some of these elements have old-fashioned alignment attributes in the original content.

The original content also uses the text “#” by itself in a p to separate sections rather than using the appropriate markup. Despite being semantically incorrect, I’m thankful for this since more hr elements would have complicated matters further. I convert these to the correct markup for the final document.

Finally I pretty print the result:

print(doc.prettify())

Alternatively I could pipe it through tidy.

$ python3 extract.py | tidy -indent -utf8 > output.html

A brief inspection with a browser indicates that everything seems to have come out correctly. I won’t know for sure, though, until I actually read through the whole book. Finally I have Pandoc perform the conversion.

$ pandoc -t epub3 -o output.epub output.html 

And that’s it! It’s ready to read offline in my e-book reader of choice. The crude version of my script took around 15–20 minutes to write and test, so I had an e-book conversion in under 30 minutes. That’s about as long as I was willing to spend to get it. Tidying the script up for this article took a lot longer.

I don’t have permission to share the resulting e-book, but I can share my script so that you can generate your own, at least as long as it’s hosted at the same place with the same structure.

The Adversarial Implementation

When coding against a standard, whether it’s a programming language specification or an open API with multiple vendors, a common concern is the conformity of a particular construct to the standard. This cannot be determined simply by experimentation, since a piece of code may work correctly due only to the specifics of a particular implementation. It works today with this implementation, but it may not work tomorrow or with a different implementation. Sometimes an implementation will warn about the use of non-standard behavior, but this isn’t always the case.

When I’m reasoning about whether or not something is allowed, I like to imagine an adversarial implementation. If the standard allows some freedom, this implementation takes an imaginative or unique approach. It chooses non-obvious interpretations with possibly unexpected, but valid, results. This is nearly the opposite of djb’s hypothetical boringcc, though some of the ideas are similar.

Many argue that this is already the case with modern C and C++ optimizing compilers. Compiler writers are already creative with the standard in order to squeeze out more performance, even if it’s at odds with the programmer’s actual intentions. The most prominent example in C and C++ is strict aliasing, where the optimizer is deliberately blinded to certain kinds of aliasing because the standard allows it to be, eliminating some (possibly important) loads. This happens despite the compiler’s ability to trivially prove that two particular objects really do alias.

I want to be clear that I’m not talking about the nasal daemon kind of creativity. That’s not a helpful thought experiment. What I mean is this: Can I imagine a conforming implementation that breaks any assumptions made by the code?

In practice, compilers typically have to bridge multiple specifications: the language standard, the platform ABI, and the operating system interface (process startup, syscalls, etc.). This really ties their hands on how creative they can be with any one of those specifications. Depending on the situation, the imaginary adversarial implementation isn’t necessarily running on any particular platform. If our program is expected to have a long life, useful for many years to come, we should avoid making too many assumptions about future computers and imagine an adversarial compiler with few limitations.

C example

Take this bit of C:

printf("%d", sizeof(foo));

The printf function is variadic, and it relies entirely on the format string in order to correctly handle all its arguments. The %d specifier means that its matching argument is of type int. The result of the sizeof operator is an integer of type size_t, which has a different sign and may even be a different size.

Typically this code will work just fine. An int and size_t are generally passed the same way, the actual value probably fits in an int, and two’s complement means the signedness isn’t an issue due to the value being positive. From the printf point of view, it typically can’t detect that the type is wrong, so everything works by chance. In fact, it’s hard to imagine a real situation where this wouldn’t work fine.

However, this is still undefined behavior — a scenario where a creative adversarial implementation can break things. In this case there are a few options for an adversarial implementation:

  1. Arguments of type int and size_t are passed differently, so printf will load the argument from the wrong place.
  2. The implementation doesn’t use two’s complement, so even small positive values have different bit representations.
  3. The type of foo is given crazy padding for arbitrary reasons, making it so large that its size doesn’t fit in an int.
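
For what it’s worth, the conforming fix is tiny: either use the right conversion specifier (C99 and later) or cast the value.

printf("%zu", sizeof(foo));      /* C99: %zu matches size_t */
printf("%d", (int)sizeof(foo));  /* or cast, when the value is known to fit */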

What’s interesting about #1 is that this has actually happened. For example, here’s a C source file.

float foo(float x, int y);

float
bar(int y)
{
    return foo(0.0f, y);
}

And in another source file:

float
foo(int x, int y)
{
    (void)x;  // ignore x
    return y * 2.0f;
}

The type of argument x differs between the prototype and the definition, which is undefined behavior. However, since this argument is ignored, this code will still work correctly on many different real-world computers, particularly where float and int arguments are passed the same way (i.e. on the stack).

However, in 2003 the x86-64 CPU arrived with its new System V ABI. Floating point and integer arguments were now passed differently, and the types of preceding arguments mattered when deciding which register to use. Some constructs that worked fine, by chance, prior to 2003 would soon stop working due to what may have seemed like an adversarial implementation years before.

Python example

Let’s look at some Python. This snippet opens a file a million times without closing any handles.

for i in range(1, 1000000):
    f = open("/dev/null", "r")

Assuming you have a /dev/null, this code will work fine without throwing any exceptions on CPython, the most widely used Python implementation. CPython uses a deterministic reference counting scheme, and the handle is automatically closed as soon as its variable falls out of scope. It’s like having an invisible f.close() at the end of the block.

However, this code is incorrect. The deterministic handle closing is an implementation behavior, not part of the specification. The operating system limits the number of files a process can have open at once, and there’s a risk that this resource will run out even though none of those handles are reachable. Imagine an adversarial Python implementation trying to break this code. It could sufficiently delay garbage collection, or even have infinite memory, omitting garbage collection altogether.

Like before, such an implementation eventually did come about: PyPy, a Python implementation written in Python with a JIT compiler. It uses (by default) something closer to mark-and-sweep, not reference counting, and those handles are left open until the next collection.

>>>> for i in range(1, 1000000):
....     f = open("/dev/null", "r")
.... 
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
IOError: [Errno 24] Too many open files: '/dev/null'

A tool for understanding specifications

This fits right in with a broader method of self-improvement: Occasionally put yourself in the implementor’s shoes. Think about what it would take to correctly implement the code that you write, either as a language or the APIs that you call. On reflection, you may find that some of those things that seem cheap may not be. Your assumptions may be reasonable, but not guaranteed. (Though it may be that “reasonable” is perfectly sufficient for your situation.)

An adversarial implementation is one that challenges an assumption you’ve taken for granted by turning it on its head.

Two Games with Monte Carlo Tree Search

Monte Carlo tree search (MCTS) is the most impressive game artificial intelligence I’ve ever used. At its core it simulates a large number of games (playouts), starting from the current game state, using random moves for each player. Then it simply picks the move where it won most often. This description is sufficient to spot one of its most valuable features: MCTS requires no knowledge of strategy or effective play. The game’s rules — enough to simulate the game — are all that’s needed to allow the AI to make decent moves. Expert knowledge still makes for a stronger AI, but, for many games, it’s unnecessary to construct a decent opponent.

A second valuable feature is that it’s easy to parallelize. Unlike alpha-beta pruning, which doesn’t mix well with parallel searches of a Minimax tree, Monte Carlo simulations are practically independent and can be run in parallel.

Finally, the third valuable feature is that the search can be stopped at any time. The completion of any single simulation is as good a stopping point as any. It could be due to a time limit, a memory limit, or both. In general, the algorithm converges to a best move rather than suddenly discovering it. The good moves are identified quickly, and further simulations work to choose among them. More simulations make for better moves, with exponentially diminishing returns. Minimax, by contrast, risks never having explored the good moves at all when it’s stopped early.

To try out MCTS myself, I wrote two games employing it: a Connect Four player and a Yavalath player.

They’re both written in C, for both unix-like systems and Windows, and should be easy to build. I challenge you to beat them both. The Yavalath AI is easier to beat due to having blind spots, which I’ll discuss below. The Connect Four AI is more difficult and will likely take a number of tries.

Connect Four

MCTS works very well with Connect Four, and only requires modest resources: 32MB of memory to store the results of random playouts, and 500,000 game simulations. It stops when it hits either of those limits. With a few tweaks, it can even be run in DOSBox. In theory, increasing both limits would make for stronger moves, but in practice I can’t detect any difference. It’s like computing pi with Monte Carlo, where eventually it just runs out of precision to make any more progress.

Based on my simplified description above, you might wonder why it needs all that memory. Not only does MCTS track the win/loss ratio for each available move from the current state, it tracks the win/loss ratios for moves in the states that follow those moves. A large chunk of the game tree is kept in memory to track all of the playout results. This is why MCTS needs a lot more memory than Minimax, which can discard branches that have been searched.

A convenient property of this tree is that the branch taken in the actual game can be re-used in a future search. The root of the tree becomes the node representing the taken game state, which has already seen a number of playouts. Even better, MCTS is weighted towards exploring good moves over bad moves, and good moves are more likely to be taken in the real game. In general, a significant portion of the tree gets to be reused in a future search.

I’m going to skip most of the details of the algorithm itself and focus on my implementation. Other articles do a better job at detailing the algorithm than I could.

My Connect Four engine doesn’t use dynamic allocation for this tree (or at all). Instead it manages a static buffer — an array of tree nodes, each representing a game state. All nodes are initially chained together into a linked list of free nodes. As the tree is built, nodes are pulled off the free list and linked together into a tree. When the game advances to the next state, nodes on unreachable branches are added back to the free list.

If at any point the free list is empty when a new node is needed, the current search aborts. This is the out-of-memory condition, and no more searching can be performed.

/* Connect Four is normally a 7 by 6 grid. */
#define CONNECT4_WIDTH  7
#define CONNECT4_HEIGHT 6

struct connect4_node {
    uint32_t next[CONNECT4_WIDTH];      // "pointer" to next node
    uint32_t playouts[CONNECT4_WIDTH];  // number of playouts
    float    score[CONNECT4_WIDTH];     // pseudo win/loss ratio
};

Rather than native C pointers, the structure uses 32-bit indexes into the master array. This saves a lot of memory on 64-bit systems, and the structure is the same size no matter the pointer size of the host. The next field points to the next state for the nth move. Since 0 is a valid index, -1 represents null (CONNECT4_NULL).
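
Spelled out, that null is just the all-ones index. My rendering (the engine’s actual definition may differ slightly):

#define CONNECT4_NULL UINT32_MAX  /* (uint32_t)-1: the "null pointer" index */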

Each column is a potential move, so there are CONNECT4_WIDTH possible moves at any given state. Each move has a floating point score and a total number of playouts through that move. In my implementation, the search can also halt due to an overflow in a playout counter. The search can no longer be tracked in this representation, so it has to stop. This generally only happens when the game is nearly over and it’s grinding away on a small number of possibilities.

Note that the actual game state (piece positions) is not tracked in the node structure. That’s because it’s implicit. We know the state of the game at the root, and simulating the moves while descending the tree will keep track of the board state at the current node. That’s more memory savings.

The state itself is a pair of bitboards, one for each player. Each position on the grid gets a bit on each bitboard. The bitboard is very fast to manipulate, and win states are checked with just a handful of bit operations. My intention was to make playouts as fast as possible.
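
To illustrate the kind of check involved, here’s the classic Connect Four bitboard test. This is my own sketch using a common layout (7 bits per column: 6 playable rows plus one sentinel bit), not necessarily the engine’s exact representation:

#include <stdint.h>

/* Returns non-zero if the bitboard contains four in a row. */
static int
connect4_win(uint64_t bb)
{
    uint64_t m;
    m = bb & (bb >> 1);     /* vertical */
    if (m & (m >> 2))
        return 1;
    m = bb & (bb >> 7);     /* horizontal (column stride is 7) */
    if (m & (m >> 14))
        return 1;
    m = bb & (bb >> 6);     /* diagonal / */
    if (m & (m >> 12))
        return 1;
    m = bb & (bb >> 8);     /* diagonal \ */
    if (m & (m >> 16))
        return 1;
    return 0;
}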

struct connect4_ai {
    uint64_t state[2];         // game state at root (bitboard)
    uint64_t rng[2];           // random number generator state
    uint32_t nodes_available;  // total number of nodes available
    uint32_t nodes_allocated;  // number of nodes in the tree
    uint32_t root;             // "pointer" to root node
    uint32_t free;             // "pointer" to free list
    int turn;                  // whose turn (0 or 1) at the root?
};

The nodes_available and nodes_allocated counters aren’t necessary for correctness or speed. They’re useful for diagnostics and debugging.

All the functions that operate on these two structures are straightforward, except for connect4_playout, a recursive function which implements the bulk of MCTS. Depending on the state of the node it’s at, it does one of two things: either it expands the tree by trying an unexplored move and running a single random playout from it, or it descends into an already-explored child and recurses, recording the result in the statistics on the way back up.
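
To make that concrete, here’s a rough sketch of such a recursive playout. It uses plain pointers and hypothetical helpers (select_move, apply_move, random_playout, node_alloc) instead of the engine’s 32-bit indexes, and it omits terminal-state and out-of-memory handling, so treat it as an outline rather than the actual connect4_playout:

#include <stdint.h>

#define NUM_MOVES 7

struct node {
    struct node *child[NUM_MOVES];
    uint32_t playouts[NUM_MOVES];
    float score[NUM_MOVES];
};

/* Hypothetical helpers, declared only so the sketch hangs together. */
struct node *node_alloc(void);
int select_move(const struct node *, const uint64_t state[2]);
void apply_move(uint64_t state[2], int move, int turn);
int random_playout(uint64_t state[2], int turn);  /* play randomly to the end */

/* Run one simulation through the tree; returns the winning player. */
static int
playout(struct node *n, uint64_t state[2], int turn)
{
    int move = select_move(n, state);
    apply_move(state, move, turn);
    int winner;
    if (!n->child[move]) {
        /* Unexplored move: expand the tree, then run a pure random playout. */
        n->child[move] = node_alloc();
        winner = random_playout(state, !turn);
    } else {
        /* Already explored: descend into the child and recurse. */
        winner = playout(n->child[move], state, !turn);
    }
    /* Record the result on the way back up. */
    n->playouts[move]++;
    if (winner == turn)
        n->score[move] += 1.0f;
    return winner;
}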

That’s pretty much all there is to it.

Yavalath

Yavalath is a board game invented by a computer program. It’s a pretty fascinating story. Its strategic depth is disproportionate to its dead simple rules: get four in a row without first getting three in a row. The game revolves around forced moves.

The engine is structured almost identically to the Connect Four engine. It uses 32-bit indexes instead of pointers. The game state is a pair of bitboards, with end-game masks computed at compile time via metaprogramming. The AI allocates the tree from a single, massive buffer — multiple GBs in this case, dynamically scaled to the available physical memory. And the core MCTS function is nearly identical.

One important difference is that identical game states — states where the pieces on the board are the same, but the node was reached through a different series of moves — are coalesced into a single state in the tree. This state deduplication is done through a hash table. This saves on memory and allows multiple different paths through the game tree to share playouts. It comes at a cost of including the game state in the node (so it can be identified in the hash table) and reference counting the nodes (since they might have more than one parent).
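
The hash key for that table can be computed directly from the pair of bitboards. Something along these lines would do; the mixer here is a generic illustration, not the engine’s actual hash function:

#include <stdint.h>

static uint64_t
state_hash(const uint64_t state[2])
{
    /* Mix both bitboards into a single 64-bit key (murmur-style finalizer). */
    uint64_t h = state[0] ^ (state[1] * UINT64_C(0x9e3779b97f4a7c15));
    h ^= h >> 33;
    h *= UINT64_C(0xff51afd7ed558ccd);
    h ^= h >> 33;
    return h;
}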

Unfortunately the AI has blind spots, and once you learn to spot them it becomes easy to beat it consistently. It can’t spot certain kinds of forced moves, so it always falls for the same tricks. The official Yavalath AI is slightly stronger than mine, but has a similar blindness. I think MCTS just isn’t quite a good fit for Yavalath.

The AI’s blindness is caused by shallow traps, a common problem for MCTS. It’s what makes MCTS a poor fit for Chess. A shallow trap is a branch in the game tree where the game will abruptly end in a small number of turns. If the tree search doesn’t happen to stumble upon a trap during its random traversal, it can’t take it into account in its final decision. A skilled player will lead the game towards one of these traps, and the AI will blunder along, not realizing what’s happened until it’s too late.

I almost feel bad for it when this happens. If you watch the memory usage and number of playouts, once it falls into a trap, you’ll see it using almost no memory while performing a ton of playouts. It’s desperately, frantically searching for a way out of the trap. But it’s too late, little AI.

Another Tool in the Toolbelt

I’m really happy to have sunk a couple weekends into playing with MCTS. It’s not always a great fit, as seen with Yavalath, but it’s a really neat algorithm. Now that I’ve wrapped my head around it, I’ll be ready to use it should I run into an appropriate problem in the future.

My Journey with Touch Typing and Vim

Given the title, the publication date of this article is probably really confusing. This was deliberate.

Three weeks ago I made a conscious decision to improve my typing habits. You see, I had a dirty habit. Despite spending literally decades typing on a daily basis, I’ve been a weak typist. It wasn’t exactly finger pecking, nor did it require looking down at the keyboard as I typed, but rather a six-finger dance I developed organically over the years. My technique was optimized towards Emacs’ frequent use of CTRL and ALT combinations, avoiding most of the hand scrunching. It was fast enough to keep up with my thinking most of the time, but was ultimately limiting due to its poor accuracy. I was hitting the wrong keys far too often.

My prime motivation was to learn Vim — or, more specifically, to learn modal editing. Lots of people swear by it, including people whose opinions I hold in high regard. The modal editing community is without a doubt larger than the Emacs community, especially since, thanks to Viper and Evil, a subset of the Emacs community is also part of the modal editing community. There’s obviously something significantly valuable about it, and I wanted to understand what that was.

But I was a lousy typist who couldn’t hit the right keys often enough to make effective use of modal editing. I would need to learn touch typing first.

Touch typing

How would I learn? Well, the first search result for “online touch typing course” was Typing Club, so that’s what I went with. By the way, here’s my official review: “Good enough not to bother checking out the competition.” For a website it’s pretty much the ultimate compliment, but it’s not exactly the sort of thing you’d want to hear from your long-term partner.

My hard rule was that I would immediately abandon my old habits cold turkey. Poor typing is a bad habit just like smoking, minus the cancer and weakened sense of smell. It was vital that I unlearn all that old muscle memory. That included not just my six-finger dance, but also my NetHack muscle memory. NetHack uses “hjkl” for navigation just like Vim. The problem was that I’d spent a couple hundred hours in NetHack over the past decade with my index finger on “h”, not the proper home row location. It was disorienting to navigate around Vim initially, like riding a bicycle with inverted controls.

Based on reading other people’s accounts, I determined I’d need several days of introductory practice where I’d be utterly unproductive. I took a three-day weekend, starting my touch typing lessons on a Thursday evening. Boy, they weren’t kidding about it being slow going. It was a rough weekend. When checking in on my practice, my wife literally said she pitied me. Ouch.

By Monday I was at a level resembling a very slow touch typist. For the rest of the first week I followed all the lessons up through the number keys, never progressing past an exercise until I had exceeded the target speed with at least 90% accuracy. This was now enough to get me back on my feet for programming at a glacial, frustrating pace. Programming involves a lot more numbers and symbols than other kinds of typing, making that top row so important. For a programmer, it would probably be better for these lessons to be earlier in the series.

For that first week I mostly used Emacs while I was finding my feet (or finding my fingers?). That’s when I experienced first hand what all these non-Emacs people — people who I, until recently, considered to be unenlightened simpletons — had been complaining about all these years: Pressing CTRL and ALT key combinations from the home row is a real pain in the ass! These complaints were suddenly making sense. I was already seeing the value of modal editing before I even started really learning Vim. It made me look forward to it even more.

During the second week of touch typing I went through Derek Wyatt’s Vim videos and learned my way around the :help system enough to bootstrap my Vim education. I then read through the user manual, practicing along the way. I’ll definitely have to pass through it a few more times to pick up all sorts of things that didn’t stick. This is one way that Emacs and Vim are a lot alike.

Update: Practical Vim: Edit Text at the Speed of Thought was recommended in the comments, and it’s certainly a better place to start than the Vim user manual. Unlike the manual, it’s opinionated and focuses on good habits, which is exactly what a newbie needs.

One of my rules when learning Vim was to resist the urge to remap keys. I’ve done it a lot with Emacs: “Hmm, that’s not very convenient. I’ll change it.” It means my Emacs configuration is fairly non-standard, and using Emacs without my configuration is like using an unfamiliar editor. This is both good and bad. The good is that I’ve truly changed Emacs to be my editor, suited just for me. The bad is that I’m extremely dependent on my configuration. What if there was a text editing emergency?

With Vim as a sort of secondary editor, I want to be able to fire it up unconfigured and continue to be nearly as productive. A pile of remappings would prohibit this. In my mind this is like a form of emergency preparedness. Other people stock up food and supplies. I’m preparing myself to sit at a strange machine without any of my configuration so that I can start the rewrite of the software lost in the disaster, so long as that machine has vi, cc, and make. If I can’t code in C, then what’s the point in surviving anyway?

The other reason is that I’m just learning. A different mapping might seem more appropriate, but what do I know at this point? It’s better to follow the beaten path at first, lest I form a bunch of bad habits again. Trust in the knowledge of the ancients.

Future directions

I am absolutely sticking with modal editing for the long term. I’m really enjoying it so far. At three weeks of touch typing and two weeks of modal editing, I’m around 80% caught back up with my old productivity speed, but this time I’ve got a lot more potential for improvement.

For now, Vim will continue taking over more and more of my text editing work. My last three articles were written in Vim. It’s really important to keep building proficiency. I still rely on Emacs for email and for syndication feeds, and that’s not changing any time soon. I also really like Magit as a Git interface. Plus I don’t want to abandon years of accumulated knowledge and leave the users of my various Emacs packages out to dry. Ultimately I believe I will end up using Evil, to get what seems to be the best of both worlds: modal editing and Emacs’ rich extensibility.

How to Write Portable C Without Complicating Your Build

Suppose you’re writing a non-GUI C application intended to run on a number of operating systems: Linux, the various BSDs, macOS, classical unix, and perhaps even something as exotic as Windows. It might sound like a rather complicated problem. These operating systems have slightly different interfaces (or very different in one case), and they run different variants of the standard unix tools — a problem for portable builds.

With some up-front attention to detail, this is actually not terribly difficult. Unix-like systems are probably the least diverse and least buggy they’ve ever been. Writing portable code is really just a matter of coding to the standards and ignoring extensions unless absolutely necessary. Knowing what’s standard and what’s extension is the tricky part, but I’ll explain how to find this information.

You might be tempted to reach for an overly complicated solution such as GNU Autoconf. Sure, it creates a configure script with the familiar, conventional interface. This has real value. But do you really need to run a single-threaded gauntlet of hundreds of feature/bug tests for things that sometimes worked incorrectly in some weird unix variant back in the 1990s? On a machine with many cores (parallel build, -j), this may very well be the slowest part of the whole build process.

For example, the configure script for Emacs checks for stdlib.h, string.h, and getenv — things that were standardized nearly 30 years ago. It also checks for a slew of POSIX functions that have been standard since 2001.

There’s a much easier solution: Document that the application requires, say, C99 and POSIX.1-2001. It’s the responsibility of the person building the application to supply these implementations, so there’s no reason to waste time testing for it.

How to code to the standards

Suppose there’s some function you want to use, but you’re not sure if it’s standard or an extension. Or maybe you don’t know what standard it comes from. Luckily the man pages document this stuff very well, especially on Linux. Check the friendly “CONFORMING TO” section. For example, look at getenv(3). Here’s what that section has to say:

CONFORMING TO
    getenv(): SVr4, POSIX.1-2001, 4.3BSD, C89, C99.

    secure_getenv() is a GNU extension.

This says this function comes from the original C standard. It’s always available on anything that claims to be a C implementation. The man page also documents secure_getenv(), which is a GNU extension: to be avoided in anything intended to be portable.

What about sleep(3)?

CONFORMING TO
    POSIX.1-2001.

This function isn’t part of standard C, but it’s available on any system claiming to implement POSIX.1-2001 (the POSIX standard from 2001). If the program needs to run on an operating system not implementing this POSIX standard (i.e. Windows), you’ll need to call an alternative function, probably inside a different #if .. #endif branch. More on this in a moment.

If you’re coding to POSIX, you must define the _POSIX_C_SOURCE feature test macro to the standard you intend to use prior to any system header includes:

A POSIX-conforming application should ensure that the feature test macro _POSIX_C_SOURCE is defined before inclusion of any header.

For example, to properly access POSIX.1-2001 functions in your application, define _POSIX_C_SOURCE to 200112L. With this defined, it’s safe to assume access to all of C and everything from that standard of POSIX. You can do this at the top of your sources, but I personally like the tidiness of a global config.h that gets included before everything.
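
That config.h can be almost nothing. Under the assumptions above, a minimal version is just:

/* config.h: must be included before any system header. */
#define _POSIX_C_SOURCE 200112L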

How to create a portable build

So you’ve written clean, portable C to the standards. How do you build this application? The natural choice is make. It’s available everywhere and it’s part of POSIX.

Again, the tricky part is teasing apart the standard from the extension. I’m a long-time sinner in this regard, having far too often written Makefiles that depend on GNU Make extensions. This is a real pain when building programs on systems without the GNU utilities. I’ve been making amends (and finding some bugs as a result).

No implementation makes the division clear in its documentation, and especially don’t bother looking at the GNU Make manual. Your best resource is the standard itself. If you’re already familiar with make, coding to the standard is largely a matter of unlearning the various extensions you know.

Outside of some hacks, this means you don’t get conditionals (if, else, etc.). With some practice, both with sticking to portable code and writing portable Makefiles, you’ll find that you don’t really need them. Following the macro conventions (CC, CFLAGS, LDFLAGS, LDLIBS) will cover most situations.

You don’t need to do anything weird with the assignments. The user invoking make can override them easily. For example, here’s part of a Makefile:

CC     = c99
CFLAGS = -Wall -Wextra -Os

But the user wants to use clang, and their system needs to explicitly link -lsocket (e.g. Solaris). The user can override the macro definitions on the command line:

$ make CC=clang LDLIBS=-lsocket

The same rules apply to the programs you invoke from the Makefile. Read the standards documents and ignore your system’s man pages so as to avoid accidentally using an extension. It’s especially valuable to learn the Bourne shell language and avoid any accidental bashisms in your Makefiles and scripts. The dash shell is good for testing your scripts.

Makefiles conforming to the standard will, unfortunately, be more verbose than those taking advantage of a particular implementation. If you know how to code Bourne shell — which is not terribly difficult to learn — then you might even consider hand-writing a configure script to generate the Makefile (a la metaprogramming). This gives you a more flexible language with conditionals, and, being generated, redundancy in the Makefile no longer matters.

As someone who frequently dabbles with BSD systems, my life has gotten a lot easier since learning to write portable Makefiles and scripts.

But what about Windows?

It’s the elephant in the room and I’ve avoided talking about it so far. If you want to build with Visual Studio’s command line tools — something I do on occasion — build portability goes out the window. Visual Studio has nmake.exe, which nearly conforms to POSIX make. However, without the standard unix utilities and with the completely foreign compiler interface for cl.exe, there’s absolutely no hope of writing a Makefile portable to this situation.

The nice alternative is MinGW(-w64) with MSYS or Cygwin supplying the unix utilities, though it has the problem of linking against msvcrt.dll. Another option is a separate Makefile dedicated to nmake.exe and the Visual Studio toolchain. Good luck defining a correctly working “clean” target with del.exe.

My preferred approach lately is an amalgamation build (as seen in Enchive): carefully concatenate all the application’s sources into one giant source file. First concatenate all the headers in the right order, followed by all the C files. Use sed to strip out any local includes. You can do this all on a unix system with the nice utilities, then point cl.exe at the amalgamation for the Visual Studio build. It’s not very useful for actual development (i.e. you don’t want to edit the amalgamation), but that’s what MinGW-w64 is for.

What about all those POSIX functions? You’ll need to find Win32 replacements on MSDN. I prefer to do this by abstracting those operating system calls. For example, compare POSIX sleep(3) and Win32 Sleep().

#if defined(_WIN32)
#include <windows.h>

void
my_sleep(int s)
{
    Sleep(s * 1000);  // TODO: handle overflow, maybe
}

#else /* __unix__ */
#include <unistd.h>

void
my_sleep(int s)
{
    sleep(s);  // TODO: fix signal interruption
}
#endif

Then the rest of the program calls my_sleep(). There’s another example in the OpenMP article with pwrite(2) and WriteFile(). This demonstrates that supporting a bunch of different unix-like systems is really easy, but introducing Windows portability adds a disproportionate amount of complexity.

Caveat: paths and filenames

There’s one major complication with filenames for applications portable to Windows. In the unix world, filenames are null-terminated bytestrings. Typically these are Unicode strings encoded as UTF-8, but it’s not necessarily so. The kernel just sees bytestrings. A bytestring doesn’t necessarily have a formal Unicode representation, which can be a problem for languages that want filenames to be Unicode strings (also).

On Windows, filenames are somewhere between UCS-2 and UTF-16, but end up being neither. They’re really null-terminated unsigned 16-bit integer arrays. It’s almost UTF-16 except that Windows allows unpaired surrogates. This means Windows filenames also don’t have a formal Unicode representation, but in a completely different way than unix. Some heroic efforts have gone into working around this issue.

As a result, it’s highly non-trivial to correctly support all possible filenames on both systems in the same program, especially when they’re passed as command line arguments.
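
The divergence starts at the program’s entry point. On Windows, getting at the untranslated 16-bit arguments typically means the MSVC-specific wide entry point, roughly like this sketch:

#ifdef _WIN32
#include <wchar.h>
int
wmain(int argc, wchar_t **argv)  /* arguments arrive as 16-bit strings */
{
    (void)argc; (void)argv;
    /* ... */
    return 0;
}
#else
int
main(int argc, char **argv)      /* arguments arrive as bytestrings */
{
    (void)argc; (void)argv;
    /* ... */
    return 0;
}
#endif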

Summary

The key points are:

  1. Document the standards your application requires and strictly stick to them.
  2. Ignore the vendor documentation if it doesn’t clearly delineate extensions.

This was all a discussion of non-GUI applications, and I didn’t really touch on libraries. Many libraries are simple to access in the build (just add it to LDLIBS), but some libraries — GUIs in particular — are particularly complicated to manage portably and will require a more complex solution (pkg-config, CMake, Autoconf, etc.).

Why I've Retired My PGP Keys and What's Replaced It

tl;dr: Enchive (rhymes with “archive”) has replaced my use of GnuPG.

Two weeks ago I tried to encrypt a tax document for archival and noticed my PGP keys had just expired. GnuPG had (correctly) forbidden the action, requiring that I first edit the key and extend the expiration date. Rather than do so, I decided to take this opportunity to retire my PGP keys for good. Over time I’ve come to view PGP as largely a failure — it never reached critical mass, the tooling has always been problematic, and it’s now a dead end. The only thing it’s been successful at is signing Linux packages, and even there it could be replaced with something simpler and better.

I still have a use for PGP: encrypting sensitive files to myself for long term storage. I’ve also been using it consistently to sign Git tags for software releases. However, very recently this lost its value, though I doubt anyone was verifying these signatures anyway. It’s never been useful for secure email, especially when most people use it incorrectly. I only need to find a replacement for archival encryption.

I could use an encrypted filesystem, but which do I use? I use LUKS to protect my laptop’s entire hard drive in the event of a theft, but for archival I want something a little more universal. Basically I want the following properties:

I couldn’t find anything that fit the bill, so I did exactly what you’re not supposed to do and rolled my own: Enchive. It was loosely inspired by OpenBSD’s signify. It has the tiny subset of PGP features that I need — using modern algorithms — plus one more feature I’ve always wanted: the ability to generate a keypair from a passphrase. This means I can reliably access my archive keypair anywhere without doing something strange like uploading my private keys onto the internet.

On Enchive

Here’s where I’d put the usual disclaimer about not using it for anything serious, blah blah blah. But really, I don’t care if anyone else uses Enchive. It exists just to scratch my own personal itch. If you have any doubts, don’t use it. I’m putting it out there in case anyone else is in the same boat. It would also be nice if any glaring flaws I may have missed were pointed out.

Not expecting it to be available as a nice package, I wanted to make it trivial to build Enchive anywhere I’d need it. Except for including stdint.h in exactly one place to get the correct integers for crypto, it’s written in straight C89. All the crypto libraries are embedded, and there are no external dependencies. There’s even an “amalgamation” build, so make isn’t required: just point your system’s cc at it and you’re done.

Algorithms

For encryption, Enchive uses Curve25519, ChaCha20, and HMAC-SHA256.

Rather than the prime-number-oriented RSA as used in classical PGP (yes, GPG 2 can do better), Curve25519 is used for the asymmetric cryptography role, using the relatively new elliptic curve cryptography. It’s stronger cryptography and the keys are much smaller. It’s a Diffie-Hellman function — an algorithm used to exchange cryptographic keys over a public channel — so files are encrypted by generating an ephemeral keypair and using this ephemeral keypair to perform a key exchange with the master keys. The ephemeral public key is included with the encrypted file and the ephemeral private key is discarded.

I used the “donna” implementation in Enchive. Despite being the hardest to understand (mathematically), this is the easiest to use: it’s literally just one function (a secret key and a point in, a point out) that does everything.
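
Concretely, the asymmetric step looks something like the following sketch. The curve25519_donna() call is the donna API itself; random_bytes() and the rest of the framing are illustrative stand-ins, not Enchive’s actual code:

#include <stdint.h>
#include <string.h>

int curve25519_donna(uint8_t *out, const uint8_t *secret, const uint8_t *point);
void random_bytes(uint8_t *buf, size_t len);  /* hypothetical CSPRNG */

/* Derive a one-off shared secret against the recipient's master public key. */
static void
ephemeral_exchange(uint8_t shared[32], uint8_t epublic[32],
                   const uint8_t master_public[32])
{
    static const uint8_t basepoint[32] = {9};
    uint8_t esecret[32];
    random_bytes(esecret, sizeof(esecret));
    curve25519_donna(epublic, esecret, basepoint);     /* stored with the ciphertext */
    curve25519_donna(shared, esecret, master_public);  /* keys the stream cipher */
    memset(esecret, 0, sizeof(esecret));               /* ephemeral secret is discarded */
}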

Curve25519 only establishes the shared key, so next is the stream cipher ChaCha20. It’s keyed by the shared key to actually encrypt the data. This algorithm has the same author as Curve25519 (djb), so it’s natural to use these together. It’s really straightforward, so there’s not much to say about it.

For the Message Authentication Code (MAC), I chose HMAC-SHA256. It prevents anyone from modifying the message. Note: This doesn’t prevent anyone who knows the master public key from replacing the file wholesale. That would be solved with a digital signature, but this conflicts with my goal of encrypting files without the need of my secret key. The MAC goes at the end of the file, allowing arbitrarily large files to be encrypted single-pass as a stream.

There’s a little more to it (IV, etc.), all of which is described in detail in the README.

Usage

The first thing you’d do is generate a keypair. By default this is done from /dev/urandom, in which case you should immediately back them up. But if you’re like me, you’ll be using Enchive’s --derive (-d) feature to create it from a passphrase. In that case, the keys are backed up in your brain!

$ enchive keygen --derive
secret key passphrase:
secret key passphrase (repeat):
passphrase (empty for none):
passphrase (repeat):

The first prompt is for the secret key passphrase. This is converted into a Curve25519 keypair using an scrypt-like key derivation algorithm. The process requires 512MB of memory (to foil hardware-based attacks) and takes around 20 seconds.

The second passphrase (or the only one when --derive isn’t used), is the protection key passphrase. The secret key is encrypted with this passphrase to protect it at rest. You’ll need to enter it any time you decrypt a file. The key derivation step is less aggressive for this key, but you could also crank it up if you like.

At the end of this process you’ll have two new files under $XDG_CONFIG_DIR/enchive: enchive.pub (32 bytes) and enchive.sec (64 bytes). The first you can distribute anywhere you’d like to encrypt files; it’s not particularly sensitive. The second is needed to decrypt files.

To encrypt a file for archival:

$ enchive archive sensitive.zip

No prompt for passphrase. This will create sensitive.zip.enchive.

To decrypt later:

$ enchive extract sensitive.zip.enchive
passphrase:

If you’ve got many files to decrypt, entering your passphrase over and over would get tiresome, so Enchive includes a key agent that keeps the protection key in memory for a period of time (15 minutes by default). Enable it with the --agent flag (it may be enabled by default someday).

$ enchive --agent extract sensitive.zip.enchive

Unlike ssh-agent and gpg-agent, there’s no need to start the agent ahead of time. It’s started on demand as needed and terminates after the timeout. It’s completely painless.

Both archive and extract operate on standard input and standard output when no file is given.

Feature complete

As far as I’m concerned, Enchive is feature complete. It does everything I need, I don’t want it to do anything more, and at least two of us have already started putting it to use. The interface and file formats won’t change unless someone finds a rather significant flaw. There is some wiggle room to replace the algorithms in the future should Enchive have that sort of longevity.

OpenMP and pwrite()

The most common way I introduce multi-threading to small C programs is with OpenMP (Open Multi-Processing). It’s typically used as compiler pragmas to parallelize computationally expensive loops — iterations are processed by different threads in some arbitrary order.

Here’s an example that computes the frames of a video in parallel. Despite being computed out of order, each frame is written in order to a large buffer, then written to standard output all at once at the end.

size_t size = sizeof(struct frame) * num_frames;
struct frame *output = malloc(size);
float beta = DEFAULT_BETA;

/* schedule(dynamic, 1): treat the loop like a work queue */
#pragma omp parallel for schedule(dynamic, 1)
for (int i = 0; i < num_frames; i++) {
    float theta = compute_theta(i);
    compute_frame(&output[i], theta, beta);
}

write(STDOUT_FILENO, output, size);
free(output);

Adding OpenMP to this program is much simpler than introducing low-level threading semantics with, say, Pthreads. With care, there’s often no need for explicit thread synchronization. It’s also fairly well supported by many vendors, even Microsoft (up to OpenMP 2.0), so a multi-threaded OpenMP program is quite portable without #ifdef.
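
The one place a guard does come up is when calling OpenMP runtime functions directly rather than relying on pragmas. The conventional pattern keys off the standard _OPENMP macro, for example:

#ifdef _OPENMP
#include <omp.h>
#else
#define omp_get_thread_num() 0  /* serial fallback */
#endif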

There’s real value in this pragma API: the parallel loop above would still compile and run correctly even when OpenMP isn’t available. The pragma is ignored and the program just uses a single core like it normally would. It’s a slick fallback.

When a program really does require synchronization there’s omp_lock_t (mutex lock) and the expected set of functions to operate on them. This doesn’t have the nice fallback, so I don’t like to use it. Instead, I prefer #pragma omp critical. It nicely maintains the OpenMP-unsupported fallback.

/* schedule(dynamic, 1): treat the loop like a work queue */
#pragma omp parallel for schedule(dynamic, 1)
for (int i = 0; i < num_frames; i++) {
    struct frame *frame = malloc(sizeof(*frame));
    float theta = compute_theta(i);
    compute_frame(frame, theta, beta);
    #pragma omp critical
    {
        write(STDOUT_FILENO, frame, sizeof(*frame));
    }
    free(frame);
}

This would append the output to some output file in an arbitrary order. The critical section prevents interleaving of outputs.

There are a couple of problems with this example:

  1. Only one thread can write at a time. If the write takes too long, other threads will queue up behind the critical section and wait.

  2. The output frames will be out of order, which is probably inconvenient for consumers. If the output is seekable this can be solved with lseek(), but that only makes the critical section even more important.

There’s an easy fix for both, one that also eliminates the need for a critical section: POSIX pwrite().

ssize_t pwrite(int fd, const void *buf, size_t count, off_t offset);

It’s like write() but has an offset parameter. Unlike lseek() followed by a write(), multiple threads and processes can, in parallel, safely write to the same file descriptor at different file offsets. The catch is that the output must be a file, not a pipe.

#pragma omp parallel for schedule(dynamic, 1)
for (int i = 0; i < num_frames; i++) {
    size_t size = sizeof(struct frame);
    struct frame *frame = malloc(size);
    float theta = compute_theta(i);
    compute_frame(frame, theta, beta);
    pwrite(STDOUT_FILENO, frame, size, size * i);
    free(frame);
}

There’s no critical section, the writes can interleave, and the output is in order.

If you’re concerned about standard output not being seekable (it often isn’t), keep in mind that it will work just fine when invoked like so:

$ ./compute_frames > frames.ppm

Windows Portability

I talked about OpenMP being really portable, then used POSIX functions. Fortunately the Win32 WriteFile() function has an “overlapped” parameter that works just like pwrite(). Typically rather than call either directly, I’d wrap the write like so:

#ifdef _WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>

static int
write_frame(struct frame *f, int i)
{
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD written;
    OVERLAPPED offset = {.Offset = sizeof(*f) * i};
    return WriteFile(out, f, sizeof(*f), &written, &offset);
}

#else /* POSIX */
#include <unistd.h>

static int
write_frame(struct frame *f, int i)
{
    size_t count = sizeof(*f);
    size_t offset = sizeof(*f) * i;
    return pwrite(STDOUT_FILENO, f, count, offset) == count;
}
#endif

Except for switching to write_frame(), the OpenMP part remains untouched.

Real World Example

Here’s an example in a real program:

julia.c

Notice that, because of pwrite(), the output is written to a file rather than piped directly into ppmtoy4m:

$ ./julia > output.ppm
$ ppmtoy4m -F 60:1 < output.ppm > output.y4m
$ x264 -o output.mp4 output.y4m

output.mp4

Asynchronous Requests from Emacs Dynamic Modules

A few months ago I had a discussion with Vladimir Kazanov about his Orgfuse project: a Python script that exposes an Emacs Org-mode document as a FUSE filesystem. It permits other programs to navigate the structure of an Org-mode document through the standard filesystem APIs. I suggested that, with the new dynamic modules in Emacs 25, Emacs itself could serve a FUSE filesystem. In fact, support for FUSE services in general could be a package of his own.

So that’s what he did: Elfuse. It’s an old joke that Emacs is an operating system, and here it is handling system calls.

However, there’s a tricky problem to solve, an issue also present in my joystick module. Both modules handle asynchronous events — filesystem requests or joystick events — but Emacs runs the event loop and owns the main thread. The external events somehow need to feed into the main event loop. It’s even more difficult with FUSE because FUSE also wants control of its own thread for its own event loop. This requires Elfuse to spawn a dedicated FUSE thread and negotiate a request/response hand-off.

When a filesystem request or joystick event arrives, how does Emacs know to handle it? The simple and obvious solution is to poll the module from a timer.

struct queue requests;

emacs_value
Frequest_next(emacs_env *env, ptrdiff_t n, emacs_value *args, void *p)
{
    emacs_value next = Qnil;
    queue_lock(requests);
    if (queue_length(requests) > 0) {
        void *request = queue_pop(requests, env);
        next = env->make_user_ptr(env, fin_empty, request);
    }
    queue_unlock(requests);
    return next;
}

And then ask Emacs to check the module every, say, 10ms:

(defun request--poll ()
  (let ((next (request-next)))
    (when next
      (request-handle next))))

(run-at-time 0 0.01 #'request--poll)

Blocking directly on the module’s event pump with Emacs’ thread would prevent Emacs from doing important things like, you know, being a text editor. The timer allows it to handle its own events uninterrupted. It gets the job done, but it’s far from perfect:

  1. It imposes an arbitrary latency to handling requests. Up to the poll period could pass before a request is handled.

  2. Polling the module 100 times per second is inefficient. Unless you really enjoy recharging your laptop, that’s no good.

The poll period is a sliding trade-off between latency and battery life. If only there was some mechanism to, ahem, signal the Emacs thread, informing it that a request is waiting…

SIGUSR1

Emacs Lisp programs can handle the POSIX SIGUSR1 and SIGUSR2 signals, which is exactly the mechanism we need. The interface is a “key” binding on special-event-map, the keymap that handles these kinds of events. When the signal arrives, Emacs queues it up for the main event loop.

(define-key special-event-map [sigusr1]
  (lambda ()
    (interactive)
    (request-handle (request-next))))

The module blocks on its own thread on its own event pump. When a request arrives, it queues the request, rings the bell for Emacs to come handle it (raise()), and waits on a semaphore. For illustration purposes, assume the module reads requests from and writes responses to a file descriptor, like a socket.

int event_fd = /* ... */;
struct request request;
sem_init(&request.sem, 0, 0);

for (;;) {
    /* Blocking read for request event */
    read(event_fd, &request.event, sizeof(request.event));

    /* Put request on the queue */
    queue_lock(requests);
    queue_push(requests, &request);
    queue_unlock(requests);
    raise(SIGUSR1);  // TODO: Should raise() go inside the lock?

    /* Wait for Emacs */
    while (sem_wait(&request.sem))
        ;

    /* Reply with Emacs' response */
    write(event_fd, &request.response, sizeof(request.response));
}

The sem_wait() is in a loop because signals will wake it up prematurely. In fact, it may even wake up due to its own signal on the line before. This is the only way this particular use of sem_wait() might fail, so there’s no need to check errno.

If there are multiple module threads making requests to the same global queue, the lock is necessary to protect the queue. The semaphore is only for blocking the thread until Emacs has finished writing its particular response. Each thread has its own semaphore.

When Emacs is done writing the response, it releases the module thread by incrementing the semaphore. It might look something like this:

emacs_value
Frequest_complete(emacs_env *env, ptrdiff_t n, emacs_value *args, void *p)
{
    struct request *request = env->get_user_ptr(env, args[0]);
    if (request)
        sem_post(&request->sem);
    return Qnil;
}

The top-level handler dispatches to the specific request handler, calling request-complete above when it’s done.

(defun request-handle (next)
  (condition-case e
      (cl-ecase (request-type next)
        (:open  (request-handle-open  next))
        (:close (request-handle-close next))
        (:read  (request-handle-read  next)))
    (error (request-respond-as-error next e)))
  (request-complete next))

This SIGUSR1+semaphore mechanism is roughly how Elfuse currently processes requests.

Making it work on Windows

Windows doesn’t have signals. This isn’t a problem for Elfuse since Windows doesn’t have FUSE either. Nor does it matter for Joymacs since XInput isn’t event-driven and always requires polling. But someday someone will need this mechanism for a dynamic module on Windows.

Fortunately there’s a solution: input language change events, WM_INPUTLANGCHANGE. It’s also on special-event-map:

(define-key special-event-map [language-change]
  (lambda ()
    (interactive)
    (request-handle (request-next))))

Instead of raise() (or pthread_kill()), broadcast the window event with PostMessage(). Outside of invoking the language-change key binding, Emacs will ignore the event because WPARAM is 0 — it doesn’t belong to any particular window. We don’t really want to change the input language, after all.

PostMessageA(HWND_BROADCAST, WM_INPUTLANGCHANGE, 0, 0);

Naturally you’ll also need to replace the POSIX threading primitives with the Windows versions (CreateThread(), CreateSemaphore(), etc.). With a bit of abstraction in the right places, it should be pretty easy to support both POSIX and Windows in these asynchronous dynamic module events.
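
For the semaphore hand-off in particular, the mapping might look something like this sketch (my own wrappers, not Elfuse’s actual code):

#ifdef _WIN32
#include <windows.h>
typedef HANDLE request_sem;
#define request_sem_init(s) (*(s) = CreateSemaphore(NULL, 0, 1, NULL))
#define request_sem_wait(s) WaitForSingleObject(*(s), INFINITE)
#define request_sem_post(s) ReleaseSemaphore(*(s), 1, NULL)
#else
#include <semaphore.h>
typedef sem_t request_sem;
#define request_sem_init(s) sem_init((s), 0, 0)
#define request_sem_wait(s) while (sem_wait(s))  /* loop on signal interruption */
#define request_sem_post(s) sem_post(s)
#endif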

Chris Wellons