Tuesday, November 23, 2021

Stability and versioning: Lock yourself in at your own peril.

I usually write about Rust, but today I want to discuss something broader: flexibility in software, particularly in design and architecture. To begin, let's focus briefly on versioning.

Versioning

Prior to semantic versioning, version numbering conventions used in software releases were nearly as numerous as the releases themselves. Some had implied meaning for releases with odd vs even numbers, some would skip numbers arbitrarily, some would use letters and other non-numeric characters, and some would have unique counting strategies. Semantic versioning provided some much-needed consistency.

Consistency, however, is not the ultimate goal. You might imagine some issues with consistency itself: is it better to have some inconsistencies (mixing the good with the bad), or to be consistently bad? One of the most important goals for a software maintainer is to provide repeatability; in other words, consistency within your build process and testing infrastructure. Same word, just scoped differently. Versioning, used appropriately, can help increase the repeatability of your builds and tests.

So then, let's define "appropriate versioning". If we follow SemVer closely, it should not be possible to introduce breaking changes that negatively impact downstream dependents. If a mistake is made and a breaking change ships in a version that is not semantically breaking, then we have to do the right thing and unpublish/yank the semantically incorrect version. That's all there really is to it. I'll leave you to read the SemVer spec for the precise rules to follow, but this is all one needs to appropriately version their software.
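To make that concrete, here is a minimal sketch of the rule in Rust terms (the crate and its functions are hypothetical):

```rust
// Version 1.2.0 of a hypothetical library crate.
pub fn area(width: u32, height: u32) -> u32 {
    width * height
}

// Adding a new public function is backward compatible:
// it only requires a minor bump, 1.2.0 -> 1.3.0.
pub fn perimeter(width: u32, height: u32) -> u32 {
    2 * (width + height)
}

// Changing an existing signature breaks every downstream caller. That
// demands a major bump to 2.0.0; if it ships as 1.x by mistake, yank it.
// pub fn area(width: f64, height: f64) -> f64 { width * height }
```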

Stability

This term, "stability", is ironically such a fragile concept. For something to be stable, it must be unchanging. For something to be unchangeable, it must be perfect. Very few things are universally perfect. I like to use science to illustrate this point: the sciences are ever-evolving; knowledge is not static but accumulates and is refined over time precisely because it didn't start out perfect. It is fitting that computer science is one of these fluidly changing organizations of knowledge.

When software engineers think of stability, they are usually looking at it in the context of any of the following layers of abstraction:

  • API stability. This is what SemVer addresses directly.
  • ABI stability. The idea that binaries compiled long ago (or by different toolchains) should work transparently with binaries compiled today; see the sketch after this list.
  • Resilience. Resistance to changing behavior or breaking compatibility, but also resistance to logic bugs and the like.
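On the ABI point, Rust is a useful case study: its default type layout is deliberately unspecified and may change between compiler versions, so a stable binary interface is something you must opt into. A minimal sketch, with illustrative names:

```rust
// The default layout (`repr(Rust)`) may change between compiler versions,
// so passing this struct across a binary boundary is not ABI-safe.
pub struct Unstable {
    pub id: u32,
    pub flag: bool,
}

// `#[repr(C)]` pins the layout to the platform's C ABI, the convention
// that long-lived binary interfaces are typically built on.
#[repr(C)]
pub struct Stable {
    pub id: u32,
    pub flag: bool,
}

// `extern "C"` fixes the calling convention and `#[no_mangle]` fixes the
// symbol name, so binaries built by other toolchains can link against this.
#[no_mangle]
pub extern "C" fn stable_entry_point(s: Stable) -> u32 {
    s.id + s.flag as u32
}
```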

Resilience in service-oriented software is a very interesting topic of ongoing research, but we can frame resilience in the terms described above: namely, resistance to logic bugs, since changes in behavior or interface are better addressed through tools like SemVer.

So, what do we know about logic bugs in stable interfaces? The first thing that comes to my mind is in the C standard library. It's our old friend, the null-terminated string and its wild band of merry havoc-wreaking unsafe functions! There is good reason this has been dubbed the most expensive one-byte mistake. Now, just because C has null-terminated strings and its standard library supports them doesn't mean you have to use these abominations, even if you write code purely in C. But their sheer prevalence in the wider ecosystem sure does make it harder to integrate various libraries when you are using any of the safe alternatives; not to mention the fragmentation that each of those alternatives contributes to.
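To see that integration tax from the Rust side: every safe, length-prefixed string that crosses into a C API pays for a scan and a copy. A minimal sketch using the standard library's FFI types (the C function named in the comment is hypothetical):

```rust
use std::ffi::CString;

fn main() {
    // Rust strings carry an explicit length and may contain interior NULs.
    let safe = String::from("hello");

    // Converting to a C string must scan for interior NUL bytes and
    // allocate a copy with a trailing terminator; it can fail at runtime.
    let c_compatible = CString::new(safe).expect("interior NUL byte");

    // Only now can a pointer be handed to a C API expecting `char *`,
    // e.g. unsafe { some_c_function(c_compatible.as_ptr()) }.
    let _ptr = c_compatible.as_ptr();
}
```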

We are left with a stable interface whose intractable design makes it almost useless in practice. It may as well not exist at all! In practice, we'd be better off without null-terminated strings entirely. But in some sense, we are stuck with them for all time, and there is nothing that I, you, or anyone else can do about it. This is an important lesson about stabilization.

There is another term for stability, often used with a critical connotation:

Ossification

No doubt, many will first encounter this term when trying to understand the QUIC protocol. It has been used in various circles for far longer, though.

If you read between the lines a little, ossification sounds like an objectively bad thing. It is outside of your control, and you cannot fix it. What could be worse than that? In my opinion, being the decision-maker in that case would be worse. I wouldn't want to be held accountable for saying "this is our call; it's set in stone and it can never be changed."

Stability comes packaged with this gremlin called ossification, and they are inseparable. What sounds like a good idea right now may turn out to be a bad idea later. It happened with null-terminated strings and nullable pointers, it happened with PHP and Python, and it will happen again. Maybe the next library or application you stabilize will turn out to be totally wrong/broken/inadequate.

You can't tell the future, but you will definitely be stuck with the past.

Outside of standard libraries that ship with programming languages and compilers, it is a pretty safe bet that being totally wrong/broken/inadequate is a recoverable situation; you just patch your library or application when things need to be fixed and all is right, right? I think this actually depends on a few things:

  1. The software maintainer needs to be on top of maintenance. If they fall behind on making patches or merging PRs, then you're left with the maintenance burden yourself, often on software that you are not an expert in. It just happens that you depend on its functionality and you get stuck with the bill when the maintainer moves on to other projects.
  2. Other dependents need to maintain their software, too! That means every transitive dependency in the entire tree needs the same treatment and the same level of prioritization to receive patches in a timely manner.

To echo the sentiment of the Fastly article, as long as everyone keeps up their end of the bargain with maintenance, there isn't a whole lot that can go wrong. It's just an impractical task to herd all of those cats; that's the reality.

What, then, is there to do about standard libraries? Hopefully as little as possible! Considering the likelihood of getting a design or public interface wrong, and the resistance to change that comes with strong stability guarantees, it is in everyone's best interest that standard libraries be as small and self-contained as reasonably possible. Even some primitive types that are useful in general may be better off in a library that can freely be updated (in breaking fashion) by end users. Yes, I'm thinking of strings in C, again; but I'm also thinking about mutex poisoning in Rust; I'm thinking about Python's multiple XML interfaces and its asyncio module that is arguably poorly designed compared to curio and trio; I'm thinking about Go officially supporting largely irrelevant and insecure cryptographic primitives like RC4, DES, and MD5 (honorable mention to both pseudorandom number packages, because that's never going to confuse anyone).
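On the mutex-poisoning point, here is a minimal sketch of the contract Rust's standard library is now locked into: every caller of `lock()` is forced to confront poisoning, and most code just recovers the guard anyway:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(0_i32));

    // A thread that panics while holding the lock "poisons" the mutex.
    let poisoner = Arc::clone(&data);
    let _ = thread::spawn(move || {
        let _guard = poisoner.lock().unwrap();
        panic!("panicked while holding the lock");
    })
    .join();

    // Every subsequent caller must decide what poisoning means to them.
    let value = match data.lock() {
        Ok(guard) => *guard,
        Err(poisoned) => *poisoned.into_inner(),
    };
    println!("value = {}", value);
}
```

Third-party crates like parking_lot dropped poisoning entirely; that is exactly the kind of course correction a standard library cannot make once the design has stabilized.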

Maybe it's better to not include everything in the standard library. We just end up hearing recommendations over and over about how you shouldn't use this feature or that function because it's unsafe/insecure/deprecated/there's a better alternative. Starting with a very small surface area seems like the right way to go. This is not a proposal to start small and grow gradually, though! I don't think a standard library should grow much, if at all.

Think about it this way: Something like JSON might seem like an obvious choice for inclusion now, but in 40 years, is anyone honestly still going to use JSON? I mean, ok, fine. People are still using COBOL. But my point is that 80% of all developers are not COBOL devs. In four decades, sure, there will be some JSON stragglers just like there are some COBOL stragglers today, but I can't believe that it will remain popular considering how bad it is for so many reasons. (Let's face it! The only reason protobuf or something else hasn't entirely supplanted JSON is because web browsers have ossified their serialization format.) JSON will very probably end up like null-terminated strings; the bane of some seasoned developer's existence. They both seem just so innocent, don't they?

Be very cautious about what you choose to ossify.

Resilience

I'm not much of a words person, but when I research things that I write about, I often go to the thesaurus to find a good synonym for whatever concept I'm trying to convey. One of the top synonyms given for "flexibility" was "resilience"! That was a fascinating discovery in itself, worthy of a paragraph or two. Flexibility implies resilience because flexible things bend rather than break. When something is rigid and enough pressure is applied, it has to break at some point. And breaking is not very resilient, is it?

This definition of "bend rather than break" applies to software design in a number of relevant ways. A service provider strives for resilience to downtime and outages. A programming language designer and compiler implementer strives for resilience to breaking user code. Interface designers strive for resilience to footguns and obvious logic bugs. In each case, the person wants the software (or user) to bend rather than break.

I like it. In fact, I'm willing to boldly claim that resilience is objectively desired. If that is the case, then its synonym, flexibility, should be just as desirable, objectively. Among the definitions of this particular word, you'll find "bend rather than break", "ease of adaptation or offering many options", and (my personal favorite) "the willingness to adjust one's thinking or behavior." Doesn't that last definition sound suspiciously like science and knowledge? It's no coincidence, and I'm not just playing word games. I'm pointing out the underlying truth that things must change. It's the second law of thermodynamics.

Any resilient system will necessarily resist ossification.

What are these quotes, by the way? Don't worry about it! I just made them up. If I used Twitter, these are the kinds of one-liners I would tweet. But I don't, so I don't. You'll have to read the longform instead.

I have argued so far that the concept of "stability" has something of a duality. There's the notion that something is stable because it resists external change, and the notion that something is stable because it resists bugs; it resists stagnation. An example of the former is a strong interface stability guarantee from a library. An example of the latter is keeping your browser and OS up-to-date to fix bugs, known security vulnerabilities, and just to acquire new features. Or stated another way, the "it's not a bug, it's a feature" line of thinking is the bug.

Lock yourself in

Having set the stage for stability as a means to lock in a design or set it in stone, let's dive into that concept a bit more. There are some examples in package managers where "locking in" your dependency list is a normal best practice. npm has a package-lock file and a ci command to install packages from it. Cargo has a Cargo.lock file and installs packages from it by default (top-level only; lock files in dependencies are ignored). go.mod files serve the same basic purpose.

Aside (added May 2024):

Nix takes package manager version locking to an extreme. My very broad understanding is that it uses hashes for package references in a similar way that git uses hashes for commit references, meaning that it will scale well and be resilient; exactly the properties that we are looking for here!

I bring this up because, across many major package managers, dependency versioning has been converging on this concept of locking the precise version number of every package in the tree. And it's no surprise, because this is one of the best tools for providing repeatable builds and tests. Semantic versioning plays right into that, as I covered earlier.

There's just one problem. This is not the kind of "locking in" that I allude to in the title. I'm talking about locking yourself into a specific design that is hard to change later. I'm talking about being the decision-maker who owns the responsibility for putting a crappy middlebox on the Internet without ever updating it to support newer protocol revisions (or newer protocols, period). I'm talking about tying your hands by offering a stable API that just plain sucks. No, I take that back. I'm talking about tying your users' hands. Because it's not the author who is locking themselves into a contract; they are locking their users into one.

You may lock yourself in at your own peril, but you lock everyone else in with you at their peril.

This is something I take personal grievance with. Development philosophies like "worse is better" mean that everything I use on a daily basis is figuratively worse than it could be otherwise. The Internet is held together with bubblegum and string. Wirth's law is real, and it threatens our very existence. I'll admit, using Wirth's law to criticize Bitcoin is a bit cheeky. But sometimes you just have to be cheeky.

I've rambled on enough, and linked to about a dozen articles that you should read. I'll just leave you with one last thought: Change or die.