Why I don't love Rust (either)

Written on 2021-11-14
Edited on 2021-12-27: added a section on compile-time evaluation and one on low-level obfuscation

Although I write C++ for a living, I don't write a single line of it in my spare time, I use Rust instead. People assume that it's because Rust is my favorite language, but really it's just because I don't want to have to deal with C++'s (lack of a standard) build system, its dependency management situation, all of its footguns and legacy baggage.

Now, there are also positive reasons to use Rust, it's not all about avoiding C++: the borrow checker is nice even if it's not really smart enough yet, native sum types are missed everywhere else, the tooling - for all its faults - gets out of your way... overall the development experience is pleasant and it's still easy for me to keep track of complexity and performance.

With that being said, it's not all rainbows and unicorns, and Rust, while young, has already made some big and small final decisions that make it a pain for me to use. In fact, the only reason Rust might be my favorite language is that everything else right now is even worse.

First of all, I want to clarify that this is not a general "review" of Rust. Despite the authoritative tone, I'm not saying that Rust is a bad language or that the things I don't like should change/shouldn't have been this way to begin with; I'm just stating what I don't like about it. And guess what: that's to be expected from a tool that tries to appeal to a broad audience: it can't completely please everyone (and arguably anyone). The authoritative tone is merely used because it is, in my opinion, easier to write and read. I also want to highlight that I know that designing a language is hard. I'm simply substantiating why I don't think that rUsT iS a PeRfEcT LaNgUaGe, no matter what the Rust Evangelism Strike Force says.
Everything that I'm going to talk about here relates to things that are very unlikely to change/be fixed. For example, I miss generalized const generics and generics specialization, which currently prevent Rust from competing against C++ at all for performance-sensitive applications[1], but those are coming, so this eventually won't be an issue for me anymore.
Finally, please note that this list is neither exhaustive nor ordered in any way, in part because what annoys me most isn't necessarily what is objectively the biggest issue.


Syntax

Definitely not the biggest issue, but perhaps the most obvious one. Some people find it ugly for what I'd consider the wrong reasons (e.g. the fact that it's a C-like language in syntax, the turbofish or lifetime annotations). I also consider that there will always be some inherent complexity to the syntax of low-level languages, especially if you want to provide some form of safety (and therefore not just a way of declaring ints and dereferencing them for everything that's not a function call). Still:

Lack of consistency

The Rust syntax is much more complicated than it needs to be. There are lots of subtly different pieces of syntax, and you just have to allocate some grey cells to remember them all. An argument for that would be that this avoids overloading and makes things more explicit, and that's acceptable to some extent, but there are already syntax overloads[2], so the benefit is much smaller. Besides, few people notice that, for example, fat arrows are only ever used in pattern matching. Rather, they remember that when they need pattern matching, it's a fat arrow (not a thin one or a semicolon) that they need.

Examples of syntax that I find needlessly diverse include:

There are two kinds of arrows in Rust, -> and =>. We don't need both! In fact, we don't need the second kind at all, it's only used for pattern matching and we could just use a semi-colon there!
Speaking of arrows, why don't we use them in lambda notation? The two pipe characters are only used like that in lambdas and not only do they look foreign to everyone but Ruby programmers, they are also more annoying to type than arrows.
Using {} vs ,
In several places (if/match bodies and lambdas), one can omit curly braces if the block consists of a single statement/expression. I think that imposing consistency here by forcing the use of curly braces would be fine, but I can see the benefit of lighter syntax in a common pattern. What bugs me a lot more, however, is that the comma at the end of a match arm becomes optional (but remains valid) when using curly braces. This just creates more potential for inconsistency, which is simply unnecessary, even if rustfmt will take care of it in most codebases.
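To make the inconsistency concrete, here's a small made-up example that compiles either way; the comma is mandatory after an expression arm but optional (yet still accepted) after a braced one:

```rust
// the trailing comma rules differ between expression arms and braced arms
fn describe(n: u32) -> &'static str {
    match n {
        0 => "zero",      // comma required: expression arm
        1 => { "one" }    // no comma needed after a braced arm...
        _ => { "many" },  // ...but one is still accepted
    }
}
```

Both of the last two arms are legal, so two equally idiomatic codebases can disagree on which to use.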

Multiple ways of doing the same thing

This one bothers me a lot more as it makes code harder to understand for beginners and makes lots of people waste time trying to understand the difference between two things that are semantically equivalent. Many people find this to be a non-issue because you can just use whatever you prefer, but I strongly disagree, because someone who is used to writing

int id(int x) {
    return x;
}

in C++ will have a hard time the first time they read code like

auto id(int x) -> decltype(x) {
    return x;
}

(I know that there is a place for trailing return types in C++, my point is that some people like to use them everywhere, even if "old-school" syntax would be more than adequate and universally understood)

By allowing multiple ways of doing the same thing into your language, you're making it a lot less accessible to everyone just so that a subset of your users get to write code in a way that pleases them a bit more. Those subsets eventually become dialects that you either must all learn to be proficient in various code bases, or stick to only one or a few. But then, if you encounter something new to you, it might take a considerable amount of time to look up what that new thing is, how it works and whether it maps exactly to the way you're used to doing things or whether the semantics are slightly different (and if so, how). This is dangerous, because failure to discern different semantics will lead to (maybe subtle) bugs, which in turn removes some of the esteemed fearlessness that users expect from using Rust.
Examples of that in Rust include:

Using () or [] in macro calls
vec!() and vec![] are exactly equivalent, but because people like one over the other, we get both. And now all beginners have no clue why the vec! macro uses square brackets. Maybe because arrays/vectors are special since we can index them?
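For instance, all of these produce identical vectors (braces work for macro invocations too, making it three interchangeable spellings):

```rust
// the delimiter of a macro invocation carries no meaning:
// (), [] and {} all expand to the same code
fn build() -> (Vec<i32>, Vec<i32>, Vec<i32>) {
    let a = vec![1, 2, 3];
    let b = vec!(1, 2, 3);
    let c = vec!{1, 2, 3};
    (a, b, c)
}
```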
Using where in generics
Consider those two Rust functions:
fn f<T: Read>(_: T) {}

fn g<T>(_: T)
where
    T: Read,
{}

What's the difference between them, you ask? Well, there is none. Some people will argue that one is more readable than the other (I've heard the argument both ways, so it's a poor argument in my mind), but besides that, this only causes confusion.

Personal peeve: impl blocks

Those are just terrible in my opinion. Now, I could understand the need for them if we could only have a single impl block per type, plus one per trait implementation, but that's just not the case. My biggest issues with them are that they're noisy, but most importantly that they add a new level of indentation to everything that's not a free function (i.e. most things in most code bases). The worst part is that this syntax is somewhat copied from C++, which has two pieces of syntax for the same thing, and Rust picked the worse of the two!

What I mean by this is that instead of

impl KillMeNow {
    fn urgh(&self) {
        // OH WOW, I'm two levels of indentation deep already
        // and I haven't done anything yet
    }
}

we should just have to write

fn HappyNow::nice(&self) {
    // Wait, is this C or something..? *_*
}
Notice how not only are we almost sticking to the left side, we also have one less useless block in general.

Sure, I could probably configure rustfmt to dedent everything in impl blocks by one level, but that would only address the indentation, not the noise, and it would make my code look alien to every other Rust programmer.

Also, "just write simpler code lol" isn't an argument, this is already too much indentation, even without any more control flow. "Just use 2-spaces indentation" is even less of an argument. "It's nice to have consistency between impl blocks and trait implementation": eh, maybe, but ruining methods for everyone isn't the right solution IMO.

Language design

While Rust gets a lot of things right, there are still a few things that I consider to be straight mistakes.

Lambda captures

People often make fun of C++ closures, mostly because of their verbose syntax. To this day, C++ closures remain the most elegant and cleanest that I've seen: there's a list of captured variables, a list of arguments and the body of the lambda. Each list is delimited by a pair of brackets, all of which are different to cleanly differentiate them (square brackets, parentheses, then curly brackets, which is consistent with e.g. what functions look like).
The capture list is quite clean, there's no need to specify the type of variables, only the way you want to capture them (by value, pointer, lvalue reference, etc.).
It is true, especially since C++ 20, that the shorthands for captures are getting a little wild now, which I'd argue hurts readability, especially for beginners, but at least those captures are still explicit.

In Rust, on the other hand, captures are completely implicit. If you read the header of a lambda, you have no idea what is going to be captured in the closure, or how. Sure, this allows you to write code quicker, but then you might as well write Python if you'd rather let the language decide things for you. Also, it allows the language to avoid letting you capture more/more aggressively than needed, but that can be taken care of by issuing a warning (see -Wunused-lambda-capture).
On the other hand, it means that as you write code, you might capture a lot more than is reasonable, since the compiler will happily take (mutable) references to whatever you need, or even move values. Of course this is wasteful, but even worse is that you might not notice it until you use that closure somewhere else and suddenly the borrow checker complains that it can't move some of the captures twice out of your original context or something.
On top of that, the language decides how variables are captured, the most authoritative source stating that it picks whatever is possible first in this order: &T -> &mut T -> T. So this means that if you need to immutably capture a u8, you'll take a reference to it, which is wasteful. Well, unless you consider optimizations, I suppose. Oh, actually look, the disassembly shows that it's copied even with -C opt-level=0. Just close your eyes and trust that the compiler knows better, since the language doesn't specify anything. You're not in charge here.
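A small sketch of this behavior (capture_demo is a made-up name): the capture mode is inferred per variable from how the closure body uses it, and move is the only explicit knob you get:

```rust
// nothing in a closure header says what is captured or how;
// the compiler infers a mode per variable
fn capture_demo() -> (i32, usize) {
    let mut count = 0;
    {
        let mut inc = || count += 1; // implicit &mut capture, inferred from the `+=`
        inc();
        inc();
    } // closure dropped here, `count` is usable again

    let s = String::from("hi");
    let len = move || s.len(); // `move` forces capture by value
    // println!("{}", s); // error: `s` was moved into the closure
    (count, len())
}
```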

The sad happy path (control flow)

While error handling and otherwise non-happy path control flow must not be skimped over, I find it crucial that the main thread of execution should be obvious. What I mean by this is that rather than

if (ready_for_execution()) {
    if (auto required_ptr = get_that_ptr()) {
        if (auto result = execute(required_ptr)) {
            // happy path
        }
        // painful
    }
    // handling
}
// of other cases
I'd much rather read
if (!ready_for_execution()) {
    // handle
    return;
}
auto required_ptr = get_that_ptr();
if (!required_ptr) {
    // handle
    return;
}
auto result = execute(required_ptr);
if (!result) {
    // handle
    return;
}

Note that while this may look to you like error handling, it is nothing but general control flow: it might be completely expected that we're rarely ready for execution (think of a trading system waiting for an opportunity to appear before sending an order) and it could be that the post execution handler should only be run under certain circumstances, in which case !result wouldn't indicate an error at all either.

Now, why do I prefer the second snippet?
First, the code is much easier to understand: if I read something, I need very little context to understand where I am, what I am doing, why and how I ended up here. In particular, I don't need to start counting indentation or stare at a specific column in my text editor to figure out where I am.
Second, the control flow is unidirectional, it almost exclusively unrolls top-down and not left-to-right. This means that when I'm asking myself questions such as "What if I had taken the other branch?" or "What code has been executed?", I can reason about a lot more complexity in the second case. To understand what the program is doing, you must follow its execution, and its execution is linear. Imagine that some of the handling code had return statements and not others: you'd need to build a complex state machine to figure out what could be going on, even with such a simple example. On the other hand, you can easily build mental checkpoints about where you'd land and what would get executed in the second case.
Finally, while the happy path is more dense in the first example, I don't think that it makes it more obvious, especially when the code gets more complicated.

With all that said, let's see how Rust tackles this. The way I see it, there are a few ways of dealing with this:

Try operator
I won't talk about it right now because
  1. It only applies to error handling, not generic control flow
  2. I will talk about it in the context of error handling in the next section
Manual checks
This is the straightforward way of achieving my preferred layout of control flow: you .is_this()/.is_that() something and later .get_this()/.get_that() and never panic because you do the checks properly. You do, right? Weeell, maybe not always. Or maybe you don't anymore after some fairly involved refactoring. Either way, you're losing out on most of the benefits of sum types: you don't get a compiler error as you add new unhandled variants since you're not pattern matching, and nothing is proving that your .get_*() access has been validated first.
Pattern matching
This is the straightforward way of achieving the layout of control flow that I first described and dislike. You do get all the safety that Rust offers for sum types though.
Method chaining
This is very popular and I hate it:
  1. It obscures what is happening by replacing simple control flow primitives with situation-specific methods like .ok_or_else() or .get_or_insert_with(). Good luck learning dozens of them/remembering which one does what
  2. It's really hard to navigate those chains since there are no variables (which means no variable names) to let you know where you are, what you've just done and what type you're dealing with at any step of the chain. Your IDE might help you a little here with virtual text type hints, but that doesn't replace well-named variables (and it doesn't help in most code review environments or quick edits with a less-than-optimal setup, etc.)
  3. The previous point is exacerbated by the fact that you need to deal with many return type formats. Some methods might mutate a parameter, some might return a bool, an Option, a Result or anything else. You might end up with an Option<CustomResult<Foo>>, and now you need to intertwine error handling/unwrapping and business logic in those hellish method chains. The try operator might alleviate some of that pain, but oftentimes you'll want to handle None/Err properly, not just give up. The try operator also won't help you with any of the conversions that you'll often need to perform
  4. While simple chains look linear, if you need to handle more cases than "happy or give up", you'll be left with more 2-dimensional madness than when using pattern matching
  5. For the reasons stated above, it has the downside, like the try operator, that it's only really usable for error handling
  6. Rust only allows returning from the current scope, not the one that's one step wider. As we'll see in the next section, this cripples the use of method chains in many cases of error handling
  7. Method chains promote the use of methods (obviously) and function pointers/closures for everything that they do. This means that tons of code is thrown at the optimizer in the hope that it will convert all of that into straightforward, inlined code. In practice, it might be fine, but I wouldn't be surprised if one were to hit less-than-optimal cases fairly quickly in a complex codebase. Besides, this means that supposedly simple control flow can add a lot to compile times
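To make the trade-off concrete, here's a made-up lookup written both ways; the two functions are semantically identical:

```rust
use std::collections::HashMap;

// method-chain version: compact, but the branching hides behind
// `copied` and `unwrap_or`, and you need to know what each one does
fn chained(scores: &HashMap<&str, u32>, name: &str) -> u32 {
    scores.get(name).copied().unwrap_or(0)
}

// pattern-matching version: more 2-dimensional, but the control
// flow is spelled out with plain language primitives
fn matched(scores: &HashMap<&str, u32>, name: &str) -> u32 {
    match scores.get(name) {
        Some(score) => *score,
        None => 0,
    }
}
```

At this size both are fine; the pain described above only shows up once more cases than "present or default" need handling.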

So what are we left with? Many ways of dealing with control flow, but no silver bullet. Of course, it makes sense to pick the right tool for the job, we don't need a single way of doing things if it compromises too much. But the reality is that Rust doesn't let you pick the right tool for the job here, only the least bad one, and it's always going to be painful anyways. This is less apparent in simple code, where pattern matching looks elegant, method chains are clean and the try operator makes your problems vanish, but beyond slideware, reality is less glamorous.

I'll add that while refactoring is usually nice in Rust thanks to its type system, making a method chain more complex or converting it to pattern matching as it grows is often going to give you headaches. You'll get there, the compiler will make sure of that, but it won't be pleasant[3].

Error handling

Error handling is a funny subject in Rust. Its evangelists will let you know how it is one of the best things Rust has to offer and how much better it is than in other languages. Yet I disagree and, more importantly, the Rust Library team also admits that there's a lot more work to be done here. I've said that I wouldn't talk about current issues in Rust that are expected to be fixed sometime in the future, but the work that has been done so far and the direction that this is all taking don't indicate to me that my concerns are ever going to be addressed.

Let's start with the good parts. Rust has predictable, value-based, sum-type error handling. Of course, you can still return error codes and use out parameters instead (or even std::process::exit(1) your way out of errors), but the language provides sugar that makes it easier to do your error handling the idiomatic way.
For example, Rust will warn you if you ignore return values but it has no concept of an out parameter. Also, integer return codes aren't as safe as strong types, since the compiler won't make sure that you match over all the values that you've produced for that specific error type. You also can't use the try operator with them.
On the other hand, if you use match over custom sum types, you'll have to explicitly opt out if you don't want to handle all variants, including any that you add later, and you'll get to use the try operator to simply bubble errors up.
Rust not supporting exceptions means that the control flow is (mostly[4]) predictable from anywhere, which is also a win: your error handling won't launch you some other place without you seeing it, which makes auditing code much easier. It also makes performance predictable: while you shouldn't use exceptions as control flow anyways, not having exceptions means that you can never pay their cost (both in terms of space and time).
Finally, Rust supporting destructors (via the Drop trait), means that there's no need for cleanup sections to goto like in C or manual defers like in Zig.
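A quick sketch of those good parts in action (the function is made up): a strongly-typed error bubbled up with the try operator, no error codes or out parameters involved:

```rust
use std::num::ParseIntError;

// `?` returns early with the Err variant, otherwise unwraps
// the Ok value in place
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.trim().parse()?;
    Ok(n * 2)
}
```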

First off, it's important to note that all the pitfalls of Rust's control flow also apply to its error handling, since error handling has a large intersection with control flow[5]. Pattern matching is painful, especially as you start to compose error enums together, method chains are annoying to work with or downright unusable in some situations and using C-style error codes is very error-prone.

Beyond just pattern matching on sum types that might be errors, something that is annoying is defining and composing those types in the first place, and dealing with those wide enums at the call site. There are some libraries that are meant to alleviate that pain, both at the declaration and call sites, but adding (especially non-standard) dependencies to a project isn't a good solution, particularly if it involves mixing code using different libraries.
Something else that people often overlook is how expensive that kind of explicit, value-based error handling is. It boils down to chains of conditions and jumps, which are expensive, especially on modern hardware, which thrives on predictable code. People love to hate on exceptions because they have a cost, but in practice, that cost is only paid in code size (and Rust doesn't optimize for this, so it's not an obviously bad trade-off) and in error path execution, which doesn't happen often for exceptional errors (note that exceptional errors can very well happen in very hot spots). I'm not going to argue in detail why I think that more people should consider exceptions for handling some kinds of errors in C++ (I'm reserving that for another post), but I do think that Rust not providing the option makes it suboptimal in some (many) cases, both in terms of performance and ease of use[6].

Although we can't do much about the underlying checks and jumps resulting from the use of values as errors, the try operator provides a way of greatly improving the ergonomics of dealing with them: you use it and it will bubble up errors if it encounters any. In practice though, it is a bit more complicated than that: you can't bubble just anything, and you can't bubble out of just anything either. For example, you can't use the try operator on a boolean, which means that you can't just give up based on some predicate. Conversely, it is not possible to bubble out of a void function. Those limitations are problematic to me since they occur in perfectly valid scenarios, and forking out of the usual, idiomatic error handling patterns in only some cases makes code more complicated and less readable.
The try operator also doesn't magically solve the problem of plumbing between different types. Even if you constrain yourself to using std::{option::Option,result::Result}, you'll still have to explicitly convert one into the other (and provide an Err value in one of the cases) and the variant types won't be implicitly converted from one to another. Arguably, that last point isn't desirable given Rust's design, but some way of improving conversion ergonomics would be nice.
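A minimal sketch of that plumbing (the error type is made up): going from Option to Result is explicit, and you must hand ok_or the Err value yourself before `?` can do its job:

```rust
#[derive(Debug, PartialEq)]
struct EmptyInput;

// Option -> Result needs an explicit conversion step;
// nothing is converted for you
fn first_byte(s: &str) -> Result<u8, EmptyInput> {
    let b = s.bytes().next().ok_or(EmptyInput)?;
    Ok(b)
}
```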
The try operator also encourages bubbling errors up without thinking too much about it, which means that all you're left with at some point is some value with no context attached to it. std::error::Error is supposed to support backtraces at some point, but it remains to be seen if that actually solves the problem in a practical way once it goes live.
Finally, there are proposals and lots of traction to get syntactic sugar into the language that would allow handling errors in a way that looks like checked exceptions in other languages. I doubt that this is a good idea, as it would make the language even more complex and would provide little benefit (this wouldn't change anything about performance and types would still need to be converted) and might mislead many users about what is happening. Granted, the same arguments were held against the addition of the try macro/operator, so we'll see how this evolves.

Finally, let's talk one last time about method chains. We've established that they were quite limited for general-purpose control flow, and while error handling tends to be more straightforward, we hit another major issue: method chains use closures a lot. More specifically, many standard error handling methods take closures, either to decide what should happen when a condition is met or to lazily/dynamically produce values (e.g. Result::or vs Result::or_else). The issue here is that the only scope one can return from in a closure is that closure itself. In other words, it's not possible to bail out of a function from within a closure that is evaluated as part of an error handling method chain. This sounds obvious from a language standpoint (and it is, really, I wouldn't want a function to be able to randomly goto_upper_scope_twice), but it makes method chains impractical anyways.
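A sketch of that limitation (names are made up): the return belongs to the closure, not to the enclosing function, so execution continues after the chain either way:

```rust
// one might hope the `return` bails out of `lookup` entirely,
// but it only exits the closure passed to `unwrap_or_else`
fn lookup(cache: Option<u32>) -> u32 {
    let value = cache.unwrap_or_else(|| {
        return 0; // exits the closure only
    });
    value + 1 // still runs, even when the closure's `return` fired
}
```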

Ok, I think that's enough talk about control flow and error handling, let's move on...

Language bloat

While I believe that a good language has a good deal of necessary complexity (like some way of doing generics, destructors, some way of ensuring safety and strong typing), I prefer tools that remain simple where possible. The biggest reasons for that are that a complex language is harder to learn and be productive in, easier to misuse and harder to build competing implementations for. On top of that, the more complex a language gets, the more risk there is of getting something wrong, and it only gets more complicated to fix those mistakes as the language gains traction. The obvious example of all of those pitfalls is C++, which is a disgusting, complicated monster that no one can build compilers for from scratch anymore. Another example outside of the PL sphere would be the web (with web browsers being the implementations).
Our field being so young entails that this balance is still highly subjective, but this article is about my preferences, so that's fine.

A good example of language bloat is if a feature can't be implemented in terms of the language itself, but requires calling into some magic compiler hook. This is unfortunately the case of the format_args macro:

macro_rules! format_args {
    ($fmt:expr) => {{ /* compiler built-in */ }};
    ($fmt:expr, $($args:tt)*) => {{ /* compiler built-in */ }};
}
This makes me a bit sad because it is unnecessary and implies that implementing something like argument formatting (which is considered regular code in many systems languages) requires extending the compiler, which is obviously not feasible in the vast majority of cases.
What makes me even sadder is that Rust is getting implicit named arguments in formatting macros, which makes the whole thing even more magic. I'm also still of the opinion that none of that matters; bikeshedding how formatting calls look is unimportant. On the other hand, this increases the language's complexity and introduces more ways of doing the same thing. I've explained earlier why I think that's a bad thing.
On the other hand, Rust avoids the bloat that comes with varargs by usually using macros instead. Ironically, I'm not sure that this is the best way of dealing with this (I think I quite like how Zig passes tuples to regular functions instead), but at least we don't have a complicated, broken version of varargs in the language, which is nice.

Another example of language bloat is standard intrinsics, which Rust has a lot of, and which I have issues with.
Intrinsics are nice, they give users access to special things that either their compiler or platform supports, which often results in additional functionality or performance. What can make them hard to use is the fact that they are not portable: what a platform exposes might not have a direct equivalent everywhere else (same goes for compiler intrinsics). After all, if that was not the case, they'd just be exposed as libraries or language features! So if you want to rely on them, you'll have to carefully pick an abstraction that works for you. If you're working with a platform that supports a specific feature and try to port your code to another one that doesn't support it, you might want to consider picking a different algorithm/design altogether rather than implement that feature in software. Conversely, if you're moving to a platform that has two different features to achieve that same functionality, you'll have to decide which one to pick, based on trade-offs (e.g. execution speed vs precision) that only make sense to your specific application. This is the price you have to pay for the benefits that using intrinsics provides you with.
The same goes for compiler intrinsics (like Clang's __builtin_memcpy_inline, which is platform-independent): if you depend on one, be ready to do without it to support other compilers or to pick which one is the most appropriate in the event that this new target offers more choice in that domain.
Hopefully, this illustrates why some things are intrinsics and not regular functions/language features. With that being said, what does it mean for an intrinsic to be standard? Well, it means that every Rust compiler must expose them for every platform. What if some platform doesn't support computing the sine of a float? I guess your implementation of sinf64 will be in software then ¯\_(ツ)_/¯. This isn't bad per se, but it shows that these operations should be exposed as functions rather than intrinsics.
We also have black_box, which is essentially used to prevent optimizations on a chunk of code, for example in benchmarks. However, it is provided on a "best-effort basis", a.k.a. it will do absolutely nothing with some compilers and is unpredictable with regard to portability. At this point, why not just keep this a rustc intrinsic, document it well in that context and leave it at that? That would already be hard enough to deal with correctly; making an API standard while only dictating "this does something, maybe" is pointless and dangerous.

Compile-time evaluation

Rust's compile-time evaluation closely resembles that of C++: one can define constants almost like variables (using const instead of let) as long as the expression yielding the value is a constant expression. Constant expressions can be formed from function calls, as long as those functions are marked const.

The obvious downside to this is that any function not marked const cannot be used in constant expressions. This includes code over which you don't have control.
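A minimal sketch of the model: only functions marked const are callable in constant expressions, so anything else is off limits there, whether you control it or not:

```rust
// `const fn` opts a function into compile-time evaluation
const fn square(x: u32) -> u32 {
    x * x
}

// evaluated at compile time
const AREA: u32 = square(4);

// a function without `const` cannot be used in that position:
// const BAD: u32 = std::process::id(); // error: not a const fn
```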

Another downside is that there are contexts in which it is not convenient to declare something const. One example which I recently encountered (in C++, but it would have been similar in Rust) looked fundamentally like this:

// serialize a foo_t in a buffer_t with a certain precision
void serialize(buffer_t*, foo_t const&, precision_t);

// call site
constexpr precision_t precision = /* ... */;
serialize(buffer, foo, precision);
Note how I needed to extract the initialization of the precision parameter in order to ensure that it would be constructed at compile-time. The compiler might have done that either way as an optimization had I inlined the construction of the precision parameter in the function call, but that is not guaranteed.

This is only a simple example and is not that big of an issue in practice but it keeps me wondering if some other model could be more effective. I don't know what it might look like, though.

Low-level obfuscation

This is in part a generalization of the previous section. In short: I don't know what the compiler is going to do and what (machine) code it is going to spit out. I intend to write more at length about this in another article, so I'll keep this brief.

Rust theoretically enables code generation that is close to optimal. For example, iterators can compile down to assembly that is as good as a for loop. However, that requires that the iterator is well-written and that the compiler optimizes all of its code away. Of course, the same goes for all "zero-cost abstractions", not just iterators.

My issue with this is that this requires a lot of faith, which isn't exactly engineering. One needs to trust that the pile of abstractions that they write will actually be well-optimized. Without language-level guarantees, it's not possible to be sure that ideal code will be generated. Where the language does provide guarantees, it is not always easy to spot whether they apply in a codebase or not, especially transitively. In other words, you can't at a quick glance be sure that the code that's in front of you will compile to good assembly, especially when involving auto-vectorizers (I recommend reading Matt Pharr's story of ispc for valuable insight about this). I wish that the language would make it easy for me to reason at a high level about code generation (which is something that C still rightfully gets praised for).

The next best thing is tooling. To be fair, Matt Godbolt's Compiler Explorer is a tremendous tool when it comes to that. However, it is not tractable at scale: it makes no sense to "godbolt" your whole codebase. More critically, there are no tools that I know of that can help monitor that over time to prevent regressions, either due to compiler changes or increased complexity in the code.

I have personally seen C++ code be orders of magnitude slower when using STL "zero-cost abstractions" like std::accumulate rather than the corresponding for loop. This is downright unacceptable and should be illegal by the specification, especially for anything in namespace std. It makes me dread what my code will compile to, especially in a cross-platform setting. C++ is not Rust, but bear in mind that its major compilers are much more mature than rustc is and don't just throw tons of IR at LLVM for it to optimize.

Performance, simplicity and safety are often touted as primary goals for modern systems languages. I wish that they would consider predictability of generated code as well.

In today's Rust, I know that iterators are better than for loops over arrays because they avoid bounds checking, provided that the iterator code gets optimized away, but also that the compiler might be able to prove ahead of time that a plain loop performs no out-of-bounds accesses and elide the checks there too. In other words, even to someone who has actually looked into it, it is not obvious how to iterate over an array-like type in a performant manner. This is terrible, as it adds incredible amounts of mental overhead, doubt and uncertainty to what should be trivial programming.
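To illustrate, here are the two iteration styles in question (a hedged sketch): the indexed version relies on the optimizer to prove the indices in range and elide the checks, while the iterator version never produces an index to check in the first place.

```rust
// Indexed access goes through bounds checks unless the optimizer can
// prove them away; iterator-based access sidesteps them by construction.
fn scale_indexed(xs: &mut [f32], k: f32) {
    for i in 0..xs.len() {
        xs[i] *= k; // each xs[i] may carry a bounds check in codegen
    }
}

fn scale_iter(xs: &mut [f32], k: f32) {
    for x in xs.iter_mut() {
        *x *= k; // no index, so no bounds check to elide
    }
}
```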


Unsafe

This deserves its own section because it is extremely important. Without unsafe, Rust would be a high-level language with an annoying borrow checker and likely a pile of standard structures that might be fast but couldn't be implemented by regular people. This means no custom data structures, no custom allocators, no unions, no talking to the hardware via memory-mapped IO, nothing interesting, really.

Unsafe is cumbersome

This is all in all very minor, but it bothers me anyways. Code that is unsafe-heavy (think access to unions in a CPU emulator, or anything low-level that needs to cast between different numeric types, not just data structures or "scary pointer manipulation") tends to be ugly. If you want to limit the area of your unsafe code (which I do), you'll need to write the keyword a lot. You could have helper functions for that, but this would be hiding the unsafety of it and is less direct than seeing what's going on. This is personal, but I like that aspect of C.
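A toy register type, like one might find in a CPU emulator, shows the verbosity (a hypothetical sketch): every single union field read needs its own unsafe block.

```rust
// Unsafe-heavy code in miniature: each union field access is its own
// unsafe operation, so the keyword shows up at every read site.
#[repr(C)]
union Reg {
    word: u32,
    bytes: [u8; 4],
}

fn as_word(r: &Reg) -> u32 {
    // Reading any union field requires unsafe.
    unsafe { r.word }
}

fn low_byte(r: &Reg) -> u8 {
    // First byte of the native representation (endianness-dependent).
    unsafe { r.bytes[0] }
}
```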

I agree that this isn't best practice, but not everything needs to be. The more narrow the scope of your code, the less you need to put big warning signs everywhere, especially in a part of your code where this is implied (e.g. serialization). For example, it is good to outline potentially tricky portions of C++ with {static,reinterpret}_cast, but it's just painfully heavy in some contexts.

I'm not saying that Rust is wrong here, it just bothers me sometimes.

Unsafe is more dangerous than C/C++

This one on the other hand I think is dangerous. It is nice that the Rust community frowns upon excessive/unaudited usage of unsafe, but that doesn't apply to projects that aren't part of it and real systems programming does mandate the use of unsafe operations (at least if you want to spend your time building things rather than jumping through hoops, hoping that the abstractions that you're using are zero-cost for your use case[7]).
"So what, all of C/C++/other-systems-language is unsafe, so that makes Rust better!" What does that mean exactly? When you break it down, there are two problematic patterns with the use of unsafe code, in any language. You could do something stupid in an unsafe context that would break your program immediately or corrupt its data, like dereferencing a null pointer. No language is going to save you from that, and this is precisely where you would use a feature like unsafe anyways (except obviously you'd have intended to do something like memory-mapped IO, not just dereferencing nullptr...). You could also do something that would break the invariants of your safe code. In C++, this would be something like freeing the underlying storage of a std::unique_ptr or a std::string. The same thing can happen in Rust too.

The key insight here is that unsafe code affects "safe" code: if you can't uphold invariants in unsafe code, your program is ill-formed and will do nasty things, especially after the compiler takes advantage of optimizations enabled by undefined behavior. The two things that you can do to minimize the likelihood of that happening are to either limit the places where the invariant breaks can happen (unsafe blocks) and look really really really carefully at them, or to limit how many invariants you need to uphold (or how hard they are to uphold). If you consider that every C++ program essentially runs in the context of unsafe fn main(), Rust is definitely better equipped in that domain. In terms of the quantity/nature of invariants to uphold, it gets trickier. The Rust reference states about the list of behaviors considered unsafe that "There is no formal model of Rust's semantics for what is and is not allowed in unsafe code, so there may be more behavior considered unsafe". In other words, not only are there more things that you need to look out for in unsafe Rust than in e.g. C++, but anything you do in an unsafe block that you couldn't do outside may or may not be undefined behavior, which may or may not consequently break your "safe" code. Scary, right?
In practice, the compiler probably won't generate garbage code for something that isn't well-known to be undefined behavior, but still.
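As a concrete example of the invariant juggling described above, Vec::set_len is a classic case: unsafe code must initialize the elements before claiming a length, or every subsequent safe access to the vector is undefined behavior. A sketch of the correct dance:

```rust
// Vec's invariant is that its first `len` elements are initialized.
// Unsafe code that lies about the length breaks *safe* code downstream,
// so we initialize first and only then call set_len.
fn make_filled(n: usize, byte: u8) -> Vec<u8> {
    let mut v: Vec<u8> = Vec::with_capacity(n);
    let p = v.as_mut_ptr();
    unsafe {
        // Write every element before claiming the length; doing it the
        // other way around would be undefined behavior.
        for i in 0..n {
            p.add(i).write(byte);
        }
        v.set_len(n);
    }
    v
}
```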


Async

I'm fortunate enough that I don't write a lot of async code in Rust. I don't like async Rust. It is sometimes necessary, but for the most part I can just pull in tons of dependencies, wait for them to compile for 15 minutes the first time, slap the boilerplate where it needs to be and go back to writing straight code again. Oof, it already sounds bad, doesn't it?

My biggest gripes with async Rust are its color problem, how disparate the ecosystem around it is and the general heaviness and clunkiness of it all. Yes, you can use smaller runtimes or roll your own, but that's still a lot more code or more dependencies, and now you can't interact with crates that depend on Tokio anymore, hardly a win.

Regarding the color problem, some people think that it is not a problem and others even think that it is a good thing. I understand their points but I just disagree.

Just slap it on an executor
No! If your function does something useful, I want to be able to call it from a sync context and just wait for it to complete, with no executor, no additional crates, no nothing. Maybe I don't care about performance, maybe I just want to download a goddamn page to scrape it for content and I don't care about waiting a few milliseconds for it to get there. If it takes me too much effort to look up how to get Tokio running in my project, download it, compile it and add all the boilerplate, I'll just get the thing in another language and read it from Rust.
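For what it's worth, "just wait for it" can be done with surprisingly little code on top of std alone (std::task::Wake has been stable since Rust 1.51). This is a naive sketch of a block_on, not something I'd ship:

```rust
use std::future::Future;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// A parked thread: the waker flips the flag and notifies the condvar.
struct Parker(Mutex<bool>, Condvar);

impl Wake for Parker {
    fn wake(self: Arc<Self>) {
        *self.0.lock().unwrap() = true;
        self.1.notify_one();
    }
}

// Poll the future on the current thread, sleeping until woken.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let parker = Arc::new(Parker(Mutex::new(false), Condvar::new()));
    let waker = Waker::from(parker.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        let mut ready = parker.0.lock().unwrap();
        while !*ready {
            ready = parker.1.wait(ready).unwrap();
        }
        *ready = false;
    }
}
```

Of course, this only waits on a single future and does no IO multiplexing, which is exactly why real executors exist; the point is only that a sync caller shouldn't need one to wait.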
Type safety is good, Future is just like Option
I agree with this, it is important that one can discern what's going on. I don't want the language to do stuff behind my back or to rely on documentation to figure out if a function is/can be async.
std is sync only, what are you complaining about?
I don't trust the Rust foundation to ensure that this will remain true, but even assuming that, this tells nothing about the wider ecosystem. If a significant portion of it isn't trivially compatible with my code, be it sync or async, I'm sad. If we need tons of code to be duplicated, I'm sad. If, to avoid code duplication, we need to introduce executors into sync codebases, I'm sad.

To be fair, I don't think that how languages like Go handle this is the right way to go about it (I've never messed with Erlang/Elixir, so I don't know about that). Also I'm not an async expert. From what I've seen, Zig feels a little closer to what I like, but a global

pub const io_mode = .evented;
// or
pub const io_mode = .blocking;
is terrifying to me and I'd rather explicitly pick between sync and async execution per function call. I don't care much about a little additional verbosity, but I wish the compiler would just transitively turn all the async code that I call with a specific generic parameter or keyword into sync code.

I also wish actual async code was more approachable, but that's just wishful thinking, since I also don't want a default, standard executor to magically run behind the scenes when I write async code.

Overall, I don't like async/await, coroutines and stuff like that so I'm happy that most of what I do works well with straightforward "one thread of execution per OS thread" code, since Rust definitely doesn't change my feelings about that.

Standard library

I don't have tons of complaints about the standard library, it's pretty nice overall. It's a shame that a lot of containers don't expose their allocator (or something like that), which makes them impossible to use in scenarios where a custom allocator is mandated but implementing a BTreeMap from scratch might not be worth it.
Some APIs are a bit annoying to learn: for example, the first time you want to iterate over a string, trying to grok what Chars is takes a bit of time. The same goes for most of std::process. More annoyingly, algorithms can't return pointers/iterators to you like in the C++ STL, which makes sense, Rust being Rust.
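The Chars confusion boils down to strings not being indexable: you have to pick an iteration unit first. A tiny illustration:

```rust
// Strings can't be indexed directly; you choose an iteration unit:
// chars() walks Unicode scalar values, bytes() walks raw UTF-8 bytes.
fn char_and_byte_counts(s: &str) -> (usize, usize) {
    (s.chars().count(), s.bytes().count())
}
```

For instance, `char_and_byte_counts("héllo")` yields `(5, 6)`: "é" is one char but two UTF-8 bytes.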
Overall, the standard library provides good foundational collections and decent APIs for interacting with your OS.


Build times and ecosystem

One of the most obvious issues with Rust is how terribly long its build times are. This is a Rust issue, not just a rustc problem (rustc actually uses rather cutting-edge technology): the language is complicated and build times will almost always be traded for performance or safety. I'm happy with that trade-off, but that doesn't make my projects build any faster...

On a related note, Rust is quite annoying to integrate into cross-technology projects. Cargo is a really nice tool, but it almost only cares about Rust. I hear that Bazel (which is great btw) has decent support for Rust, which is nice, but that's about it.
Integration with other languages, even with C via FFI, is also rather heavy. Again, there are decent reasons for that, but I understand why Zig enthusiasts enjoy being able to just #include C code in their projects.
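Even the simplest C call illustrates the ceremony: an extern block for the declaration plus an unsafe block at the call site, because the compiler can't check the C side (using libc's abs here):

```rust
// Declaring and calling a single C function (libc's abs) from Rust.
extern "C" {
    fn abs(x: i32) -> i32;
}

fn c_abs(x: i32) -> i32 {
    // The call is unsafe: Rust can't verify the foreign signature.
    unsafe { abs(x) }
}
```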

As a last note, I'll say that I'm not a fan of the Rust ecosystem. There are great things about it, lots of innovative crates that are simply the best out of any language out there today. Yet, many of them have integration and/or dependency issues. Many depend on a specific async runtime, don't expose sync APIs, or don't compose well with other crates in the same domain. Many also have gigantic transitive dependencies for no good reason, which makes them a lot harder to package, audit and keep secure. Many crates are also heavily depended on yet cease getting any updates as their sole maintainer loses interest in them or decides to write a better alternative to them. This is understandable of course and is to be expected from open source libraries, which often depend on the continuous efforts of single individuals. The Actix fiasco is a sad illustration of that and of how the Rust community can make this problem even worse.

That's it, most of the things I dislike about Rust. I'm sure that I forgot some and that I'll update this post as I hit them again, since I'll surely be writing tons more Rust in the years to come...

[1] Unless you don't care about staying DRY at all, in which case you could always just write C/assembly/microcode, but you don't want that if you're interested in C++/Rust anyways. Jump back

[2] Like using colons for type specification and type instantiation, parentheses for function calls and destructuring types, using square brackets to index values or call macros, mixing types and lifetimes in generic lists, etc. Jump back

[3] It is still heaps better than with a dynamic language like Python or Javascript, so we can have nice things. I just wish we could have even better ones. Jump back

[4] Rust still supports destructors, which is hidden control flow and you can always panic. Jump back

[5] Yes, error handling involves more than just the mechanical act of doing something in the face of an error, including a general plan with regards to error handling, defining surfaces around which errors can/cannot happen due to sanitization, which parts of the system are expected to fail and if so, in what way, determining with what granularity errors must be reported, etc. But this doesn't change much: what we care about from a language is what facilities it provides to realize that error handling strategy. Jump back

[6] I understand the argument that exceptions make the code harder to reason about, and while I agree with that to some extent (e.g. if an external library can throw anything at any point and all you can rely on is at best documentation detailing what might happen), I think that those cases can largely be mitigated, from both technical and social standpoints. For example, exceptions could be forbidden to cross crate boundaries. This way, what a library does would be none of your problem (you already trusted them to not panic/be fast enough before anyways, what's the difference?). Within a single crate, the use of exceptions (or lack thereof) is mostly a social problem. Don't want them in your code? Don't let them in! Want them in your code? Define where and how they can be used and enforce that during code review. Or even better, show that Rust can be better and write tools that do that for you! Jump back

[7] Quick reminder that zero-cost abstractions doesn't mean that the code that you're using is optimal, merely that what it was designed for can't be written in a more efficient way. Barring the fact that you're still paying for them in terms of compile times and abstraction overhead (how much more space it takes in your brain and how many layers you need to reason through), if your use case isn't exactly what that abstraction was built for, there likely is a better way to solve your problem. Jump back