Identifying abstraction

Written on 2022-01-07

In this article, I'd like to explore the concept of abstraction in programming and to propose a definition that lets us think of abstractions as a design tool with clear trade-offs. This is the result of being taught for years (in school, while learning functional programming and modern C++, and generally online) that abstractions are "good", and later, by some more experienced, old-school programmers, that they are "mostly bad". Lots of hand-waving, few definitions of what abstractions actually are, and especially no convincing arguments (if any) on either side.

As a tangent, those assertions of what is "good", based solely on authority, on what sounds nice and on real-world metaphors, are doing phenomenal amounts of harm to the whole industry, with millions of programmers blindly following silly rules like clean code or TDD's "tests are more important than code". Critical thinking is a prerequisite for good design, so let's try to apply some of it here.

What is an abstraction anyway?

Of course, if you disagree with my definition of what an abstraction is, my points might make no sense to you. Therefore, I will operate on the basis that an abstraction is a piece of code exposed via an interface that doesn't leak its implementation. I see this as a gradient: code can't simply be an abstraction or not be one. For example, a simple function call, or even binding a value to a variable, could be interpreted as hiding: after all, you could have just inlined that piece of code/value everywhere you use it. On the other hand, if you don't have the faintest idea of what some abstraction is doing under the hood, it is completely useless, as you can't reason about its performance characteristics (even at the scale of "will it terminate in the next second or in 3 months?"), handling of edge cases, etc. In other words, abstraction is a measure of implementation opacity.

By this definition, "highly abstract" isn't the same as "high-level". We'll see a little later how those differ. In short, it is possible to have e.g. a high-level function, which provides lots of functionality on top of low-level details, that is still not an abstraction because all of the details leak (e.g. semantic compression raises the level of the code, but doesn't necessarily make it more abstract). This may or may not be desirable.

This isn't a dictionary definition, and I don't care about it being completely accurate; I just want to establish some common ground before getting started. So when I say "X is (not) an abstraction", it means that it is mostly (resp. mostly not) an abstraction. I don't care if you disagree with this in general, but it is required for the rest of this article.

Broad strokes

Let's start by reviewing what people generally consider to be abstractions and see whether that still holds true under our definition.

Functions are sometimes abstractions

Simple functions like lerp or dot_product are not abstractions: as a caller, you expect to know exactly what is going to happen. You probably hope that they will be inlined, that the implementation matches exactly what you have in mind and have probably read the implementation at least once to make sure of that. Those are just shorthands and names put on a bit of code to make it clearer. Note that those lead to higher level code (you deal with a mathematical concept rather than additions and multiplications at the call site) but not more abstract code.

Some other functions are abstractions. In fact, I would argue that the previous functions would be more abstract if they were simply renamed to something like get_value_between_points (or even just interpolate) and get_proxy_value_for_same_direction. In both cases, the function does the same thing, but the caller doesn't have the same expectations and likely doesn't care as much about what is actually happening under the hood. This makes changing the underlying implementation easier, but also provides fewer guarantees.

Let's take open_file as a last example. Maybe this function does something very specific, like opening a file on an ext4 filesystem with certain options via Linux system calls (note that you might not need/want a more verbose name in that case, if this is the context you are working in). Maybe it takes an arbitrary URI and supports dozens of storage backends. The former is merely a convenience aimed at people who want a specific thing and provides it directly, making it much easier to reason about what work is being done. Who knows what the latter might be doing, but it certainly provides tons of value. Both are clearly high-level functions, but they don't have the same degree of abstraction: context matters.

Classes are sometimes abstractions

Just because data is wrapped in a structure doesn't mean that it is highly abstracted, whether or not behavior is attached to it. Simple POD types like Vec3 or Pair are easy to see through. On the other hand, anything suffixed with ProcessorFactoryFactory is not only disgusting but also abstract to a very high degree: who knows what might be happening in there?

Data structures are not abstractions

Data structures (the container kind, like arrays, trees, hash maps, etc.) are not abstractions: when you use them, you should know exactly what is going on under the hood. Using special types and functions in this context is not meant to hide implementation or the underlying design but to convey intent clearly and avoid code duplication.

Abstract types like Java's Map interface are different (hence the name). When you use a Map, you have no idea what code is actually going to run, whether it is going to be thread-safe or not, how long specific operations on it are going to take, etc. Instead, you get an easy way to write generic code.

Libraries are not necessarily abstractions

Libraries are a mixed bag, just like regular code. For example, math libraries mostly provide low levels of abstraction: you likely know what you're getting when you instantiate a 4 by 4 matrix. On the other hand, something like a cross-platform GUI toolkit or a hardware abstraction layer rates much higher on the abstraction scale. You might be able to predict what they do in specific situations, but the implementation might range from writing to a memory location to making an RPC call to another subsystem depending on the target (and even then, the details may vary wildly).


Out of all the high-level constructs that are available to us, only some qualify as abstractions: some are very opaque, some very transparent, no matter what convenience they might provide over low-level details.

Abstraction trade-offs

As you might have noticed by now, abstraction is often not something that is imposed on a programmer; it is an opt-in design decision. For example, polymorphism is a good proxy for abstraction (although it is not the only factor), but it is seldom a requirement for writing a given program. As with any design decision, there are trade-offs.

On the one hand, abstraction allows thinking at a higher level, often more so than "simple" semantic compression. If a function call lets you deal with a problem without knowing much about it, you can likely iterate faster than if you had to care about how that problem will be dealt with, and therefore concentrate on other problems. As well explained in Titus Winters' Configuration, Extension, Maintainability talk, abstraction can also be a powerful tool to ease maintenance. By providing a strong abstraction to client code (rather than exposing the details or even providing knowledge about them), one can gain high leverage when it comes to refactoring. The fewer guarantees you provide, the more freedom you reserve for your future self. Finally, abstraction can facilitate code reuse, modularity and evolution. For example, being able to swap out filesystem or database backends means that new ones can be integrated more quickly, that they can be tested against a shared suite, and that one can more easily switch to a new one if it turns out to be better suited.

On the other hand, if abstraction can help you in the future, it might be strictly worse right now. Abstracted solutions are rarely optimal, because they solve more than just the given problem. They involve more code, which often can't take advantage of all the context available, be it hardware, software or, more critically, domain knowledge. It is unfortunately very common for people to call out to code that solves a much more generic problem than the one they need solved (see Casey Muratori's Simple Code, High Performance for a detailed, real-world example). You don't need a whole web browser and its JavaScript interpreter to draw a circle on a computer screen.

Treating non-abstractions as abstractions

Code is virtually never labelled with an "abstraction value", so people often see leaky things as black boxes. For example, a frightening number[1] of professional C++ programmers don't know the difference between a std::list and a std::vector, or how a std::unordered_map roughly works, even if they use them daily. This is terrible, because not only can it crush the performance of their programs, it can make them badly incorrect as well.

Data structures are an obvious example, but there are very few cases where you don't care at all about what the code you're calling is going to do.

Another case of treating a non-abstraction as an abstraction is calling an API that isn't abstract enough for some use case. For example, any call to a platform-specific API might look fine, until you need to port it to another platform.

Treating abstractions as non-abstractions

The other side of the coin is that people often treat abstractions as the best solutions to the problem they're facing. It's easy to believe that something is a universally best solution because it was written by someone smart, has lots of stars on some hosting platform or is talked about a lot. In reality, this is often wrong, as illustrated by Andreas Fredriksson in Context is Everything. In short, you might not need a generic JSON library if you only intend to extract specific data, especially if you control the source as well; you might not need a generic file-reading interface if you can deal directly with memory-mapped IO; you might not need a fancy serialization library if {read,write}v does what you need. The list goes on forever.

Parting thoughts

I don't want to give rules of thumb for how abstract your code (and the code you use) should be, but here are at least some pointers to consider.

First, always consider the least amount of abstraction that meets your performance and decoupling needs. Performance is always good, and you might know that your context won't change much (or at least that it will be worth rewriting important parts of your code if it does). In that case, reducing the amount of abstraction is probably good. On the other hand, you might anticipate running your code on different classes of devices, with different architectures, on various operating systems. In that case, it would be quite foolish not to abstract away some of the common operations that are not performance-critical.

The downsides of abstraction don't matter in all situations either. For example, if your program is always going to be IO-bound, it matters a lot less that you're not solving some computational problems optimally. By the same token, if you're writing BigCorp's go-to library for doing something that hundreds of programmers depend on, it doesn't matter that it isn't doing the best possible job for each of them (which it can't do). If it did matter to them, they should use a more custom, less abstracted solution instead.

If you decide to go leaky, try to keep a path towards abstraction in mind. While you shouldn't solve imaginary problems, it is a valuable skill to anticipate how requirements may evolve and to make sure that your code could be adapted to become more abstract. By starting with concrete code, you'll end up in a better place than if you had gone fully abstract to begin with anyway, because you'll only introduce the abstraction that is required and no more. You'll be able to keep all the concrete parts simpler and more efficient by leveraging knowledge about your environment and domain.

If you decide to go abstract, try to keep a path towards customizability. You won't provide the best solution for everyone, but that shouldn't prevent you from thinking about how to provide users with some control over what's going on[2]. See Casey Muratori's Designing and Evaluating Reusable Components for food for thought. If you have a large number of users, also beware of Hyrum's Law.

Note that abstraction is even more relative in the sense that everything starts as an abstraction in the real world. As a beginner (no matter if we start with raw machine code or JavaScript), we can only grasp a tiny amount of what is going on in the computer, at any level. This continues to be true as we learn more and start working on new projects, especially in large, foreign codebases. Abstraction is also required to keep non-trivial systems in your head: while you may be able to pierce all the layers from your configuration system down to the branch predictor of your CPU, you will need to break it all up into smaller subsystems to reason about the whole stack. This doesn't really adhere to our initial definition (because such an "abstraction" is very leaky if you know your whole stack, and is by that definition not an abstraction), but it might help you avoid insanity by letting you not think about microarchitecture while writing the routine that parses the 10-line configuration file for a long-running service.

I hope that you've gained something from this rambly mess. I don't have any worthy additions to what "best practices" should be about this topic. I do believe however that there is value in identifying how much we should peek through versus leave code alone, and where on the spectrum the code we're writing is. I've seldom seen such analysis made explicitly.

[1] Source: my omniscience (aka my limited experience, but still enough for me to be frightened). Jump back

[2] Or maybe you are working at Google on some large-scale library that you should make as tightly sealed as possible to keep maintainability up. Most people have much smaller codebases and far fewer people working on them, though. Jump back