Async Allocators


A deep dive into the potential benefits, and awkward drawbacks, of making all allocations async and fallible


Show Notes

Episode Sponsor: CodeCrafters

Transcript

Intro

James Munns: So now that Amos has raised the blast shield,

Amos Wenger: Yes, I'm good to go. I think.

Amanda Majorowicz: Good to go!

James Munns: Your poor mic arm.

Amos Wenger: Yes, exactly. That was, I think as well, I can hide now. This is great. I can act as a dungeon master.

Making allocators async is a bad great idea

James Munns: So I'm going to talk about something that's been... I talk about this a lot with a friend of mine, Eliza. This is one of those ideas that's so close to being good, but all the like real world realities of it make it obnoxious to use. But my pitch today is that making allocators async is a bad great idea. Not a great bad idea, but a bad great idea in that it is a great idea, but it is practically bad.

Amos Wenger: Mhm, mhm, I'm listening.

GlobalAlloc trait

James Munns: So at least in Rust, there's like two... you know, if you come from C, there's like malloc and free. C++ has something that I don't know, but in Rust, there's like the classic interface, which is the GlobalAlloc trait, which I'm showing on my screen, but it's got two main interfaces. Alloc, where you give it a layout, like how big and what's the padding or alignment of a type. And it infallibly gives you back a pointer to a byte, like a byte pointer, sort of like how malloc works. And then there's dealloc, which is an unsafe function where you give it that pointer and the layout again, because when you dealloc, you need to have the same layout when you unallocate something. And these are sort of like the core fundamentals. There's a lot of like things that build on top of this, but essentially like, give me a chunk of memory this big and with this alignment. And I am no longer using this chunk of memory that is this big and this alignment. This is sort of like the foundation of alloc and dealloc in Rust today.
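For reference, the two core methods James describes look like this in practice. A minimal stable-Rust sketch that round-trips one allocation through the System allocator (the helper name `round_trip` is ours, for illustration):

```rust
use std::alloc::{GlobalAlloc, Layout, System};

/// Round-trip one allocation through the stable GlobalAlloc interface:
/// `alloc` takes a Layout (size + alignment) and hands back a raw
/// `*mut u8`, and `dealloc` needs that pointer *and* the same Layout.
fn round_trip(size: usize, align: usize) -> bool {
    let layout = Layout::from_size_align(size, align).expect("valid layout");
    unsafe {
        let ptr = System.alloc(layout);
        // The only failure signal in this interface is a null pointer.
        if ptr.is_null() {
            return false;
        }
        System.dealloc(ptr, layout);
        true
    }
}
```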

Amos Wenger: I just want to mention that the reason dealloc asks for the layout as well is so that you don't have to store it inline... "in-band," I guess, in the memory itself, unlike C's free, right?

James Munns: Yeah. I think most allocators do that. Yeah. Like they shove the metadata of the allocation in the allocation. It's just hidden from the user.

Amos Wenger: If you were to take a pointer returned by malloc and look a few bytes before it, you would not get a segmentation fault. You would just get the metadata for that allocation. I'm pretty sure, but that's highly implementation-dependent. So... don't.

James Munns: It's an impl detail. Yeah. Yeah. So that's today's implementation of it, but there's some stuff missing about it. And I'm going to show you the unstable interface that I like a little bit better.

Allocator trait

James Munns: And this is the Allocator trait. So not the GlobalAlloc trait, but the unstable Allocator trait. Which has two very similar methods: alloc or allocate, instead of just returning a pointer, it returns a result NonNull slice of u8 or an allocation error. And the deallocation takes a NonNull u8 and a layout.
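On nightly, those two methods are `fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError>` and `unsafe fn deallocate(&self, ptr: NonNull<u8>, layout: Layout)`. The same fallible shape can be sketched on stable Rust by wrapping the System allocator (the free functions and the stand-in `AllocError` type here are ours, not the real trait):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::ptr::NonNull;

/// Stand-in for the unstable AllocError type.
#[derive(Debug, PartialEq)]
struct AllocError;

/// Fallible allocation in the style of the unstable Allocator trait:
/// instead of assuming success, a null pointer becomes a real error
/// value the caller has to handle.
fn try_allocate(layout: Layout) -> Result<NonNull<u8>, AllocError> {
    let ptr = unsafe { System.alloc(layout) };
    NonNull::new(ptr).ok_or(AllocError)
}

/// Matching deallocation: a NonNull pointer plus the original layout.
fn deallocate(ptr: NonNull<u8>, layout: Layout) {
    unsafe { System.dealloc(ptr.as_ptr(), layout) }
}
```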

Amos Wenger: So I know that fallible allocation was a requirement for the Rust in the Linux kernel people. Did the Allocator trait come from there or was it there before?

James Munns: I don't know where it came from, but it's definitely to address those kinds of things, because at least in like userspace programs, you most of the time assume that the allocator will never fail.

What does it mean to be "out of memory"?

James Munns: These days it's very rare to be out of memory. And this is for two reasons.

One, we have a lot. And two, your operating system lies. It will say that you have an allocation and it will do things like swap memory in and out. It will do things like, tell you that you have allocated it, but not actually wired up that memory. So if you don't touch it, it lazily allocates basically, like it lies and says that you have memory, but you don't until you actually touch it, and then it gets wired up, and things like that.

And it can also lie in time in that when you call alloc, if you allocate and that memory isn't available, it might pause the whole world, like basically unschedule your program, set up that memory for you, and then come back and give that to you.

So it can lie in a bunch of different ways that aren't exposed in the old interface. It just says like, "You will immediately get back a pointer of memory." I think technically the old GlobalAlloc trait is allowed to return a null pointer if you are out of memory, but it's a weird thing and not everything handles it well. But like the Linux kernel itself, because there is no operating system underneath Linux to lie to Linux, it has to expose this reality.

So this is one of those like abstractions that you pretend don't exist, but at some point you have to recognize that they exist once you start working on operating systems or things of, of similar caliber.

So we talked a little bit about what it means to be out of memory. On a desktop, usually like when a desktop runs out of memory for real, like when Linux says, "Okay, finally, I'm giving up. You asked for memory that I could never give you, or there's too much going on and I've decided you're not important."

Usually the reason why you also don't observe out of memory on Linux a lot of time is by the time the operating system says 'You are out of memory,' it just kills the program. Like, the program doesn't get to see 'I'm out of memory,' you get killed by the operating system. Or you get a signal that most programs don't handle, which kills you, because you have an unhandled signal, essentially. It's like having a segfault, basically.

Handling reality in "userspace"

James Munns: So, this is something that's like, extremely rare. The good default is to just kill the program. There are some things that don't like that. If you're running like a database or something like that, you might decide, 'Hey, I don't like that,' and so I might handle that signal and there are some hooks for that, but it's a 99 percent of programs don't think about this ever, because it's challenging to actually handle in a reasonable way. So just giving up is actually a pretty good outcome for this, especially on desktops where you've got virtual memory and stuff like that.

Amos Wenger: Yeah. And it's assumed that you're always going to have some breathing room. I forget what the technical term is called, but especially on servers and whatnot, you never aim for a hundred percent utilization, right? You want somewhere to grow, to realize: oh, you need to scale up. So you always want some headroom.

James Munns: Yeah, exactly. Headroom or overhead, exactly. Where this is less true is on embedded systems. And that's because embedded systems have a lot less memory, they don't typically have an MMU, so they're not like, remapping memory that could get paged out or swapped out, because you don't have a disk to swap to a lot of the time.

You don't have an operating system that can always pause the world and lie. And you might not even have threads that can be paused. There might just be the thread, which is the entire program. But you might still have cases where you only have so much memory or the memory is a little fragmented and you might make a request for memory that the system can't honor. But in my opinion, in embedded, these cases tend to be a lot more ephemeral than on a desktop. There might be a case where right now you don't have enough free contiguous memory for that. But if you were to wait like 20 milliseconds until you're done drawing the display, all of a sudden the display is going to drop that frame buffer, and that memory becomes available.

Where in the same way that the operating system could just go, "We're gonna pause this thread for a little bit. Okay, now 50 milliseconds later, memory did become available. Free up that memory, give it to this other one, unpause the world," you're good to go. But that had to be done by the operating system. It's not in-band control, it's out-of-band control by the operating system.

Amos Wenger: Yeah. Plus there's no mechanism for timeouts. It just blocks and then the OS decides how long it blocks, if it ever unblocks and fails and whatnot. But I can imagine, which is surely in the next few slides, that if you have an async method in the future, which you can drop, you can combine with a timeout to actually decide when you want to give up and say, "Okay, well, I guess I'm not using memory after all."

James Munns: Exactly. So this is an experiment that I did a while back for the operating system that I've been hacking on, Mnemos, where:

Code example: alloc & dealloc

James Munns: What if the signatures for allocation and deallocation looked like this? We have a function called alloc that takes a layout- an async function that takes a layout and returns a NonNull u8. Notice that, again, there's no error.

So this is taking a layout, but it's not returning a result. It's just returning a NonNull. And we have a non-async function called deallocate that takes a pointer. It probably should be a NonNull, but takes a pointer. And also the layout so it can do deallocation.

This is for exactly the reasons you said, because in async, you can combine futures. So I could have a future for allocation and select on it with a timeout that says, "Okay, we'll try and allocate. And if no one gets me memory in a hundred milliseconds, then just give up and either do something else or send a NAK," if you're waiting on like packet memory or something like that. You can choose in-band- or in userspace, or however you want to call it- how to respond to that.
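A sketch of what that interface could look like, based on James's description; the trait name, the `Passthrough` impl, and the one-shot `poll_once` helper are all assumptions for illustration, not the actual mnemos code:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::future::Future;
use std::pin::pin;
use std::ptr::NonNull;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Hypothetical async-allocator interface in the shape described:
/// `alloc` is async and has no error type — instead of failing, the
/// future waits until memory is available. `dealloc` stays synchronous
/// because Rust's drop isn't async.
trait AsyncAlloc {
    async fn alloc(&self, layout: Layout) -> NonNull<u8>;
    fn dealloc(&self, ptr: *mut u8, layout: Layout);
}

/// Toy impl that never has to wait: it delegates to the System
/// allocator, so its futures are ready on the very first poll. A real
/// embedded impl would register a waker and yield until freed memory
/// makes room.
struct Passthrough;

impl AsyncAlloc for Passthrough {
    async fn alloc(&self, layout: Layout) -> NonNull<u8> {
        let ptr = unsafe { System.alloc(layout) };
        NonNull::new(ptr).expect("toy impl assumes the OS never fails")
    }

    fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

/// Minimal one-shot "executor": poll a future once with a no-op waker.
/// Enough to drive the toy impl; a real program would instead race the
/// alloc future against a timeout future with a select combinator.
fn poll_once<F: Future>(fut: F) -> Option<F::Output> {
    unsafe fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    match pin!(fut).poll(&mut cx) {
        Poll::Ready(out) => Some(out),
        Poll::Pending => None,
    }
}
```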

Challenge 1: drop isn't async

James Munns: Now, one thing you'll notice is that the dealloc isn't async. And this is because at least in Rust, drop is not async, or at least today and for the near future is not async, which means if you were to drop memory, you need to deallocate that memory when it's freed. Essentially, there's no way to have an async drop today, which is annoying, but not necessarily the end of the world.

Amos Wenger: So I know there's a lot of controversy around async drop and let's not get into it, but I keep forgetting: how would we even make it work? And I think the answer is we wouldn't, mostly. But in this specific case, when thinking about memory in an embedded context and deallocating, would you ever want that to be async? What kind of network operations, or something like that, would you do when deallocating memory?

James Munns: I can tell you how I handled it. And the answer is: When you allocate memory, your allocator can choose that there might be some minimum amount of space that an allocation can be, just from the implementation of the allocator or for whatever reason it feels like.

If you ask the allocator for two bytes, it might give you 16. It will tell you that they are two, but it really has given you 16 bytes and just lied a little bit. So what I did when you allocate, every allocation is the minimum size of what you asked for, and enough room for an intrusive linked list node.

So essentially a pointer header, and a layout. So I actually have a union of whatever allocation you asked for, and a linked list node, and a layout in there. And so what happens when you call dealloc, I run the destructor on the memory, so essentially now that memory is whatever it was in the allocation before.

And then I replace what was there with a linked list node, and the layout of the actual type. And then I link the node into essentially a free list for the allocator. And because it's an intrusive linked list, it can be infinitely long, because essentially the nodes live in the freed memory itself.

And then the next time that you call alloc, or at some periodic garbage collector- not garbage collection, but like a maintenance window- it just starts draining the linked list of free items and then repopulating the allocator table with those items. Which means you can always free immediately, at the cost of the minimum allocation size being essentially three words. It's like the size, the alignment, and then the pointer to the next item, or something like that. So it ends up being like three words, which might be like 12 or 24 bytes depending on the system that you're on.

That's an implementation detail and just how I worked around it. There's nothing intrinsic about how you have to do it, but that's how I got around having to worry about: what happens if you drop everything at once- how do you have enough room to store all of that?

And the answer is, well, you're already allocating chunks of memory, so just make sure that they are big enough to recycle themselves. Which was a very fun and clever trick, and a lot of very fun, unsafe code.
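The recycling trick James describes might be sketched like this; `FreeNode` and `padded_layout` are invented names for illustration, not the mnemos implementation:

```rust
use std::alloc::Layout;
use std::ptr::NonNull;

/// Intrusive free-list node that gets written *into* a freed block:
/// a pointer to the next freed block, plus the layout of this one.
#[allow(dead_code)]
struct FreeNode {
    next: Option<NonNull<FreeNode>>,
    layout: Layout,
}

/// Every allocation is padded up so a FreeNode fits in the same bytes
/// later — effectively a union of "user data" and "free-list node".
/// That is what keeps dealloc synchronous: linking a freed block into
/// the free list needs no new memory and never waits.
fn padded_layout(requested: Layout) -> Layout {
    let node = Layout::new::<FreeNode>();
    let size = requested.size().max(node.size());
    let align = requested.align().max(node.align());
    Layout::from_size_align(size, align).expect("valid padded layout")
}
```

A two-byte request thus costs roughly three words (around 24 bytes on a 64-bit system), matching the overhead James quotes.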

But that's not actually asynchronous, right?

Amos Wenger: I can imagine. I like that trick. I didn't know about it before. I don't think you specifically answered my question though, which is: that's not actually asynchronous, right? Putting it in the potentially infinite linked list of available space- recently freed space- it's not actually asynchronous.

So when would you actually need to do something over in the network or like some, I don't know, some desk IO. I'm still thinking in terms of desktop. Okay. My brain is desktop wired and your brain is microcontroller wired. So...

James Munns: Yeah, the answer is I don't know. The funny thing about my drop list is it's almost exactly how ready lists work for async tasks. So the issue with drop right now is just the function itself in Rust is not async. There's no way to pass it the context of the executor you're running on.

It has no way of like spawning a task that does the drop and then awaiting that. There's a lot of proposed ways of doing that. In Rust, because we have this decoupling of the system and the executor you choose, there's no like blessed way that like a general async function could be scheduled on that, because something might be dropped. Like you could drop a future in a non-async context, because you've got to have a function that returns a future and then drops it, but outside of an async function.

Amos Wenger: I was going to say, I've definitely spawned tokio background tasks from drop, but I guess I just got lucky because it just happened to be dropped from an async context already. So it was...

One thing that clicked when I actually started looking into async runtimes was that the current runtime is stored in a thread local, at least in tokio. It is actually an optimization trick. You can actually send objects to a different thread and drop them from there, or, you know, if you're a short-lived program, just don't drop them at all, because it's the operating system's job to reclaim all that memory. So you don't actually have to do it.

James Munns: Yeah. And when I come up with my problems with an async allocator, this is actually the first one, is that drop isn't async. I told you how I deal with that and at least practically why it's fine. I think in the long term you would want to figure out what async drop is, but at least in my opinion, from the allocator's perspective, the allocator doesn't care about how you free that memory necessarily. There's workarounds. It'd be nice if we had async drop because then we could make dealloc async as well, but that's sort of beside the point.

Challenge 2: alloc::collections isn't async

James Munns: The more pressing, really painful thing is that the whole point of using an allocator most of the time is to have nice collection types. Things like Vecs and HashMaps and Box and Arc, and all of those lovely, nice things that you'd like. They're part of the standard library, which means you can't mess with them usually, and they usually have internal implementation details that from outside, like in userspace, you can't mess with.

And this sucks, because it means that when you're writing something, you either need to use this, like, totally separate standard library collection, and I have to write all of my own collections, which isn't horrific- I did it, basically. Not everything that's in the standard library, but a lot of them, like, Box and Arc and Vec and things like that- just so you can have, like, async constructors, basically. Because when you want to call Vec::new, you need to await that. Which actually is really nice in practice, but when you're trying to mix it with existing code that doesn't do that, it sucks because you're just duplicating it for no good reason.

Amos Wenger: And your async constructors, or your kind of wrappers around the collections, they're made possible by methods like from_raw_parts or something like that?

James Munns: Yeah. When possible you try and construct something from scratch- like Vec or Box, you can create from scratch using from_raw_parts, those kinds of things. Things like Arc, which have a private struct called ArcInner: there's no way to make that from scratch.

Amos Wenger: Do you transmute?

James Munns: No, I just have my own Arc type. When I can't use the existing collection types, like I can't create them manually and then use them. I just have completely separate types that work the same way, but aren't the same type.

Amos Wenger: Right. James, that's boring. I thought you were doing ABI crimes and like having runtime assertions that the contents of Arc, the layout is exactly what you expect it to be.

James Munns: I actually avoid those a lot of the time. But yeah, my favorite, at least the thing that makes me happy is when you do horrifically cursed unsafe things, but make them totally stable and they totally pass things like Miri. Like when I was working on that intrusive linked list for the free list and things like that, that was like my whole point.

I use like unions, I use a lot of transmute casting. I used a lot of that. But it was all stable and it was all passing Miri and testing, so that's like what brings me joy.

Challenge 3: you can't "turn off" non-async allocations

James Munns: But the last- I don't know, this is maybe a separate side of the last challenge here- is you can't turn off non-async allocations, because when someone does call Vec::new, that calls the regular allocation interfaces, which means if you're in one of these out of memory cases, you just segfault or you crash, because it returns a null pointer and you get a panic or something like that.

There's no way to force third party libraries to use this interface or even to play nice with fallible allocations. Maybe in the future when more libraries use a fallible allocations interface, you could wrap that, but it's very challenging to use this effectively. Like you have to kind of go all in on this split universe other thing, which defeats a lot of the purpose which is: I would like to move faster by using nice existing collection types, and use off the shelf libraries and things like that.

Amos Wenger: This is a much worse split for the record. It feels to me at least that this would be a much worse split than the executor situation that we currently have and which isn't as bad as people think it is.

James Munns: Yeah, it's up there with having fallible allocations really. I think- I think once Rust stabilizes fallible allocations, we will have a while where it's just awkward, because so much of existing code assumes that allocations never fail. But... they can. And I think the Linux kernel project is going to be the one that shakes this out, of like pushing more of the ecosystem of crates to be like, "Hey, you should probably have a try_new function for everything," in the same way that we have fallible constructors in a lot of cases.

And for that one, I think that the paradigm isn't so weird, but introducing async allocators would then be like a third flavor of weird, where then you have infallible, fallible, and fallible-async or infallible-async constructors.

So, I don't know, and I don't think this is widespread enough that the standard library would take it, which means it's sort of like always this weird side research thing that is really, really cool. I might start using it for some network stack stuff, but essentially I can't make it a public interface. It'll just be this like weird internal pattern that I use because it's so nice, but it just does not play nice with the rest of the universe.

Linker trick

Amos Wenger: But I feel like the only way to make it work, first of all, like you said, it is like in your own highly controlled code base. And second of all, this reminds me of the 'never panic' crate or whatever it's called, where you use it- I think they use a linker trick to make sure that you never actually end up- I was going to say calling the panic macro, but there's no such thing as calling a macro at runtime. So I don't know exactly how it works.

James Munns: Do you wanna know?

Amos Wenger: Sure.

James Munns: Essentially you define an undefined symbol, like an extern undefined symbol as your panic handler, which means that if all the panic handlers are optimized out because it's never used in the program, the linker just throws away that symbol and it never tries to resolve that unresolvable symbol.

Amos Wenger: It only works in release mode?

James Munns: Basically.

Amos Wenger: Because like in debug, you can't rely on it to be sturdy.

James Munns: There's false positives because if the optimizer can't guarantee that there are no panicking branches, even if there aren't practically any, it still fails because it's a linker error.

Amos Wenger: Right. But if we had a generic way, and it can't be a Clippy lint, right? Because it's not just in your own code. It has to be in all your transitive dependencies. If we have a generic way to ban methods and say, "No, nobody gets to call Vec::new. If it does, I want to compile time error. Don't rely on the linker. That's dirty." Then we could do something like that. I can think of other use cases. Not right now, but I'm sure we could think of other use cases.
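A rough sketch of the linker trick being discussed (the approach used by crates like no-panic); the symbol name is made up, and here the undefined symbol is declared but deliberately never called, so this sketch always links:

```rust
// Declare a symbol that is deliberately defined nowhere. The real
// trick routes the panic path through a call to it: if the optimizer
// proves that path dead, the reference disappears and the program
// links; if any panicking branch survives, linking fails with an
// "undefined reference" error instead of a runtime panic.
extern "C" {
    #[allow(dead_code)]
    fn panic_path_survived_optimization() -> !;
}

/// A function we'd want to prove panic-free. In the real pattern, the
/// overflow arm would call the undefined symbol; here it returns a
/// saturating fallback so the sketch links even without optimization.
fn double_or_saturate(x: u32) -> u32 {
    match x.checked_mul(2) {
        Some(v) => v,
        None => u32::MAX,
    }
}
```

This is also why the trick is only reliable in release mode: in debug builds the panic paths survive, the symbol reference survives with them, and the link fails even for code that never panics in practice.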

James Munns: Yep. So for now it just lives on the shelf as a fun thing, but if you're designing the next programming language out there, maybe think about having async allocators in the same way that we've finally just gotten people to admit that fallible allocators are- are maybe something that you should think about as a first class aspect of your language.

So, my hope is that maybe the next language beyond Rust gets to use this.

Episode Sponsor

CodeCrafters is a service for learning programming skills by doing.

CodeCrafters offers a curated list of exercises for learning programming languages like Rust or learning skills like building an interpreter. Instead of just following a tutorial, you can instead clone a repo that contains all of the boilerplate already, and make progress by running tests and pushing commits that are checked by the server, allowing you to move on to the next step.

If you enjoy learning by doing, sign up today, or use the link in the show notes to start your free trial. If you decide to upgrade, you'll get a discount and a portion of the sale will support this podcast.