Talking to Microcontrollers with Postcard-RPC


It should be easy to talk to all of your small computer friends

A conceptual introduction to structured communication protocols, and the design decisions behind the postcard-rpc crate

View the Presentation

Audio

Download as M4A

Show Notes

Episode Sponsor: Ladybird web browser

Transcript

You can get a lot done with 2 primitives...

James Munns: Okay, so last time I mentioned that Postcard-RPC is a protocol on top of Postcard, which is an encoding library. And I mentioned that it's called Postcard-RPC, and that there's basically two things that it can do. But my point this week, and at least what I figured out from working out how much I can take away from everything and still be useful: I found that if you have two building blocks, RPC and topics, you can do almost everything that you might want to do with communication.

I'm excited if you have counterpoints, or whether this actually is complete, but I'm gonna explain what I mean by these two different pieces.

Amos Wenger: Please do.

rpc: "remote procedure call"

James Munns: So RPC is remote procedure call, which basically means pretend that we're doing a function call over a network. This actually has a lot of connotations into it, like we have a specific client and server role, that there's a certain kind of message that goes from client to server, we call that the request, that there's a certain kind of message that goes from the server to the client, and we call that a response, and that every request has a response. And this sounds really fundamental, but it's actually a lot of important little pieces that make it easier to think about.

You can have a bunch of these in parallel or concurrently or however you want to think about it. Usually there's some way to say this is the response that goes with this request. There's some way to unwind that usually with sequence numbers. But the goal is that every request gets its own unique response.

Code example: request -> response

James Munns: And the reason that we typically call it remote procedure call is just because it looks like a function call and we're making it over the network and we're pretending that the network doesn't necessarily exist because we want to do the same kind of async function, "I give you something, you give me back something kind of thing."
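A minimal sketch of that pairing-up, in illustrative plain Rust (these names and shapes are hypothetical, not the postcard-rpc API): each outgoing request gets a fresh sequence number, and the response carrying the same number completes it, even when responses arrive out of order.

```rust
use std::collections::HashMap;

// Hypothetical wire messages; a real protocol carries serialized payloads.
struct Request { seq: u32, body: String }
struct Response { seq: u32, body: String }

// Tracks which sequence numbers are still awaiting a response.
struct Dispatcher {
    next_seq: u32,
    pending: HashMap<u32, String>, // seq -> description of the in-flight request
}

impl Dispatcher {
    fn new() -> Self {
        Dispatcher { next_seq: 0, pending: HashMap::new() }
    }

    // Assign a fresh sequence number to an outgoing request.
    fn send(&mut self, body: &str) -> Request {
        let seq = self.next_seq;
        self.next_seq = self.next_seq.wrapping_add(1);
        self.pending.insert(seq, body.to_string());
        Request { seq, body: body.to_string() }
    }

    // Match an incoming response to its request; None if we never asked.
    fn complete(&mut self, resp: &Response) -> Option<String> {
        self.pending.remove(&resp.seq)
    }
}

fn main() {
    let mut d = Dispatcher::new();
    let a = d.send("read temp");
    let b = d.send("read accel");
    // Responses can come back in any order; the sequence number pairs them up.
    assert_eq!(d.complete(&Response { seq: b.seq, body: "ok".into() }),
               Some("read accel".to_string()));
    assert_eq!(d.complete(&Response { seq: a.seq, body: "ok".into() }),
               Some("read temp".to_string()));
    // A response nobody asked for matches nothing.
    assert_eq!(d.complete(&Response { seq: 99, body: "?".into() }), None);
}
```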

This is used in a ton of different places like gRPC or JSON-RPC or Postcard-RPC, but there's a huge amount of communication that can be modeled. Not that this is the only way of communicating, but you can reasonably address a lot of problems with this kind of pattern, not everything, and I'll get to that, but...

A lot of stuff really assumes that you're doing something transactionally. And this is something that I think gets lost in a lot of pub sub or other data models or communication models, or even when you're trying to figure out how to get two things inside of your same program to talk to each other, that having sort of this transactionality and a defined role and the defined pieces of this communication is really useful more often than not.

I'm interested because you mentioned RPC last time and were like, "Ugh, RPC?" I'm interested why you have that "Ugh, RPC?" reaction.

Amos Wenger: I may have emitted the wrong "ugh", I was going to say you misinterpreted the "ugh", but it's probably on me. No, no, no. I've done a lot of RPC before. I'm interested specifically- I don't know how many more slides you've got, but the

Question: pretending the network doesn't exist, what happens if...

Amos Wenger: "We are pretending the network doesn't exist" bit is load bearing and there's a lot of questions of like, "What happens if?" And I'm looking forward to ask them to you, but this can wait for later in the presentation. If you have more slides.

James Munns: I have some more, but I don't have the slide that you're looking for. It's funny that you mentioned that, because in other languages like Python or C++ you might, like you said, just pretend that the network doesn't exist and model it as a blocking call.

But Rust has two pieces that make it possible to actually bring that reality into the code base. One is async: the fact that this is going to transactionally take some amount of time to get there and back. And actually in Postcard-RPC, I say in this slide that it's just returning a response. It's actually returning something like a result-of-result-of-result response. The first result is, "Is our connection to that remote thing still live?" The second result is, "Did that foreign entity understand our request?" Or did they just say, "I don't even know what you're asking me about?" And then the third result is, "Did that request succeed and give you back a successful response?"

And because we have nested types like results, you might still want to flatten all those results down into one result and then unwrap it, because you say: I don't care how it goes wrong. So you can still pretend if you'd like, but in Rust we can actually model both of those uncanny valley points that I feel like RPC gets a lot of flack for in other languages, where you have to pretend that it's an immediate thing, but it's really blocking until the request and response comes back.

But in Rust, we just say that's async. We don't know, it could be the same locally if we're waiting for a mutex or waiting for some channel or waiting for something like that, we just know it's going to take a non-zero amount of time, or it's allowed to take a non-zero amount of time. And then because we return a result, we can model, "Hey, this can fail in a bunch of different ways, and you can decide how granular you want to handle all of those different sources of error." So actually I think that's a very good point, but I think in Rust, with async and results, you can care as much as you want, which is a nice balance to take.
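The three layers can be sketched in plain Rust. These types are made up for illustration (postcard-rpc's actual error types differ), but they show how nesting lets the caller choose how granular to be, and how flattening throws that granularity away on purpose:

```rust
// Illustrative error layers, not postcard-rpc's real types.
#[derive(Debug, PartialEq)]
enum ConnectionError { Closed }          // is the remote thing still live?
#[derive(Debug, PartialEq)]
enum ProtocolError { UnknownEndpoint }   // did it understand the request?
#[derive(Debug, PartialEq)]
enum AppError { BadInput }               // did the request itself succeed?

// The "result-of-result-of-result" shape from the transcript.
type RpcResult<T> = Result<Result<Result<T, AppError>, ProtocolError>, ConnectionError>;

// One combined error for callers who don't care which layer failed.
#[derive(Debug, PartialEq)]
enum AnyError {
    Connection(ConnectionError),
    Protocol(ProtocolError),
    App(AppError),
}

// Collapse the three layers into a single Result.
fn flatten<T>(r: RpcResult<T>) -> Result<T, AnyError> {
    match r {
        Err(e) => Err(AnyError::Connection(e)),
        Ok(Err(e)) => Err(AnyError::Protocol(e)),
        Ok(Ok(Err(e))) => Err(AnyError::App(e)),
        Ok(Ok(Ok(v))) => Ok(v),
    }
}

fn main() {
    assert_eq!(flatten::<u8>(Ok(Ok(Ok(5)))), Ok(5));
    assert_eq!(flatten::<u8>(Ok(Ok(Err(AppError::BadInput)))),
               Err(AnyError::App(AppError::BadInput)));
    assert_eq!(flatten::<u8>(Ok(Err(ProtocolError::UnknownEndpoint))),
               Err(AnyError::Protocol(ProtocolError::UnknownEndpoint)));
    assert_eq!(flatten::<u8>(Err(ConnectionError::Closed)),
               Err(AnyError::Connection(ConnectionError::Closed)));
}
```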

Amos Wenger: I guess I'm immediately thinking of HTTP and for example idempotent requests. I was thinking of retrying because like connections drop, things happen. So my first question is where do you put the responsibility of retrying requests? And if it's not on the caller, the end user of the library, how do you know which requests are even safe to retry? How do you, you know, communicate that? Do you have nonces? I hate that this is a word in British English that doesn't mean the same thing as in protocols at all.

Do you model it? Do you just not care and it's on the caller? And if it's on the caller, then it's not really as transparent as like just using an API, then is it?

Postcard-RPC should be a protocol, not an encoding

James Munns: So this is another one of those where I said that I had to realize that Postcard-RPC should be a protocol and not an encoding. I think there is one more layer in there, which is the network stack. And I think I am getting towards this, where the answer is: Postcard-RPC doesn't. Right now it sort of assumes that every message losslessly gets there and back, which is true sometimes; especially over USB, it's unlikely to lose a message like you would over a multi-hop network. But there are other links like serial ports that I want to support where that's not true. You could have corruption of messages, and you need to handle retries and things like that.

I will likely in future episodes start talking about my research around networks. And part of that is looking at the past, at when the OSI model was introduced, and at contemporaries from the 80s and 90s like AppleTalk, which are networking protocols meant to run on, let's say, Apple machines, which at the time were about as powerful as today's microcontrollers.

So we know that they're designed in a way that is reasonable to handle on this amount of computing and still do everything else that they're supposed to do. And that's one of my longer term research items here is to find a network stack that I can bring to a bunch of different devices. To get TCP like guarantees of: Oh, if a message is lost or corrupted, it's resent.

There's some way of doing service discovery. There's some way of doing all of these things. So the answer is Postcard-RPC does not and should not in my opinion, but in the same way that I have Postcard-RPC, which stacks neatly on top of Postcard, I should have a network stack that stacks neatly on top of Postcard-RPC.

If you go: Well, I'm talking to these different microcontrollers over a bunch of different interfaces and none of them look like Ethernet or Wi-Fi. I want something that feels like TCP that I can run Postcard-RPC on top of, but is much lighter weight or less burdened with history as TCP is.

Amos Wenger: Yeah. I think specifically in the context of RPC, it makes sense to- again, you're going to think all I think about is HTTP, and it's true: boys only want one thing and it's a fast HTTP implementation on top of io_uring. And I'm doing that right now.

James Munns: Disgusting.

Headers versus body & head-of-line blocking

Amos Wenger: I know, and it's important to think about: headers versus body, and head-of-line blocking. So I will explain both of these, even though it's your half I'm taking over. So headers versus body is important because if you think of a server processing a request- and this is very much in the request-response model you have here on the slides- it may have already done the side effect just from reading the headers. And even though there might not be any request body, or there might be one but it's discarding it, or- I don't know, the effect is already there. It's already started doing something, like creating a record with that ID.

And if you do it again, if you retry, then it'll fail because that record already exists, even though it may be partial or corrupted because the request body actually dropped.

James Munns: I think this is a semantic of HTTP. I think you are absolutely right for HTTP. But, I don't think that's inherent to protocols. I think that's inherent to HTTP. For example, Postcard-RPC does have a header. It's got two things in it. One is a key, which is a hash of the endpoint name and schema.

And the second thing is a sequence number. There are no verbs in Postcard-RPC's header like HTTP has, where you might have create, delete, whatever. The only verbs are within the body itself. And this was a choice that I made of not defining the verbs in the protocol itself. So in that case, you would have to get to the body.

There is a chance in Postcard-RPC where, if you give it a key or a sequence number that it's not expecting, the protocol stack itself might not even give that to user space. It might immediately say: look, we have no handlers for that request type, so just go away. And that was that second result that I was talking about. I think it is important to define, like you were saying, but I think what you're describing is a property of HTTP, not protocols in general.
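That pre-screening can be sketched like this. The real postcard-rpc key is computed at compile time from the endpoint's path and schema; here a runtime hash of the path stands in for it, and all the names are hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for postcard-rpc's compile-time key: we just hash the path string.
fn key_for(path: &str) -> u64 {
    let mut h = DefaultHasher::new();
    path.hash(&mut h);
    h.finish()
}

// The two-field header from the transcript: which endpoint, which exchange.
struct Header {
    key: u64,
    seq: u32,
}

// The stack can reject a message before user space ever sees it:
// an unknown key means "we have no handler for that request type."
fn dispatch(hdr: &Header, known_keys: &[u64]) -> Result<(), &'static str> {
    if known_keys.contains(&hdr.key) {
        Ok(())
    } else {
        Err("no handler for that key")
    }
}

fn main() {
    let known = [key_for("ping"), key_for("accel/data")];
    assert!(dispatch(&Header { key: key_for("ping"), seq: 0 }, &known).is_ok());
    assert!(dispatch(&Header { key: key_for("format_disk"), seq: 1 }, &known).is_err());
}
```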

Amos Wenger: That's definitely- I'm talking about how HTTP solved a specific problem. But if we're thinking about Postcard-RPC: how would you implement something- an interface, a service, let's say, because service is kind of a generic RPC term- that lets you upload something? I can imagine that even on embedded. Because, I don't know, you're enrolling fingerprints and then you have to upload the image scans or something.

I can see your bias here is that the payload is small enough that it could be all in the request and it's fine, but I'm thinking, what if it's too big? What if it blocks all the other requests? Do you have quality of service going on here?

James Munns: So let me get to the other primitive, because then I'm going to start talking about how I would combine these. So the first one I've talked about is RPC. I think most things will fit in the RPC request-response box, but there are a couple of things that super don't. And they're opposites of each other.

And I described these last week as the stuff you would use WebSockets for. Which ends up being about two different things. Either streaming, where you're sending so much data that it doesn't make sense to double the number of packets that you're going through, that you are just like: I'm blasting multiple parts of a transfer.

Or things that happen very, very rarely, like notifications. So instead of polling, "Has this event happened?" you just leave the connection open, and then 30 seconds in the future you get a, "It happened!" And you just send one packet; like, you maybe have some keep-alive for keeping the socket open. But in general, you're not polling like that.

"topics", for streaming or notifications

James Munns: So the way this works in Postcard-RPC is again, there's a specific type that goes with that message. So Postcard-RPC is very opinionated that everything should be strongly typed. And these topic messages can go in one direction. They can either go PC to microcontroller, or they can go microcontroller to PC.

They're always in one direction and they're always of a given type and there's never any chance to respond to these ever. They are totally unsolicited, unidirectional, with no acknowledgement or whatever. This is more Pub/Sub-y, and the word topics comes from MQTT, which is a Pub/Sub protocol.

But I really just stole the name more than semantics or anything like that. And at least in Rust, the way we modeled this is you publish kind of like you would send over a channel and you receive by first subscribing to that topic so that you buffer up all the incoming messages and then you pop them off like messages in a bounded channel more or less.

So these are the only two primitives in Postcard-RPC. These topics have the same header that RPC endpoints have, in that they have the schema of the message and a sequence number. But the idea is that essentially you can make a lot of other fundamental communication structures by combining these.
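A rough model of a topic subscription, using a bounded std channel as the buffer. Everything here is illustrative rather than the postcard-rpc API; the point is the fire-and-forget semantics, and one plausible policy when the subscriber's buffer is full (drop the message, tell nobody, since there is no acknowledgement to send):

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender, TrySendError};

// Hypothetical topic message; real postcard-rpc topics are strongly typed
// per-topic and serialized with postcard.
#[derive(Debug, PartialEq)]
struct AccelSample { x: i16, y: i16, z: i16 }

// Subscribing hands back a bounded buffer of incoming messages.
fn subscribe(depth: usize) -> (SyncSender<AccelSample>, Receiver<AccelSample>) {
    sync_channel(depth)
}

// Publish is unsolicited and unidirectional: no response, no ack.
// Returns whether the message fit in the subscriber's buffer.
fn publish(tx: &SyncSender<AccelSample>, msg: AccelSample) -> bool {
    match tx.try_send(msg) {
        Ok(()) => true,
        Err(TrySendError::Full(_)) | Err(TrySendError::Disconnected(_)) => false,
    }
}

fn main() {
    let (tx, rx) = subscribe(2);
    assert!(publish(&tx, AccelSample { x: 1, y: 0, z: 0 }));
    assert!(publish(&tx, AccelSample { x: 2, y: 0, z: 0 }));
    // Buffer full: this one is silently dropped.
    assert!(!publish(&tx, AccelSample { x: 3, y: 0, z: 0 }));
    // The subscriber pops messages off in order, like a bounded channel.
    assert_eq!(rx.recv().unwrap(), AccelSample { x: 1, y: 0, z: 0 });
    assert_eq!(rx.recv().unwrap(), AccelSample { x: 2, y: 0, z: 0 });
}
```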

So for example, a demo that I gave during one of my workshops was: if you want to start listening to live streaming data- like you want to start listening to an accelerometer-

Code example: streaming

James Munns: you might send an RPC request to start streaming, that starts the streaming, and then you immediately get a response that says 'I started streaming.'

And then later you say, 'stop streaming.' And then you get a response that says 'I stopped.' And then the stream stops. For things like file uploads: you might say, 'I am going to start a file upload. Please give me the stream ID, or give me some unique ID for this.' And then the server says, 'Okay, I am now listening for that.' And it gives you back, 'This is upload number 27.' You can then start doing a multi part upload over topic messages, which you just include that ID in there, and then you just broadcast back. And at the end you go, 'I'm done. Are you happy or would you like me to retransmit anything?' And I'm sort of saying: well, you can build it yourself out of sticks and rocks, which is true, but at least in embedded, a lot of the time, people will only need one or two fancy things outside of the two basic structures and having a whole protocol library doesn't necessarily make sense.

I was really inspired by ZeroMQ, which has a similar sort of approach where they say we've got like four or five messaging and concurrency primitives and then we have a cookbook that says: look if you combine these primitives in this way, it will have these characteristics. It will do these things. And I think I would like to have some support library items that say like: are you going to do this multi part upload? Here's a function that does that for you or a handler that does that for you. But the only fundamental building blocks of the protocol itself are the two endpoint and topic items because it makes the protocol easier to customize and easier to reason about when you only have a couple of fundamental pieces.

Amos Wenger: I have an immediate question, because

I'm looking at the code sample that's been on the screen for a while and it has like the publish and subscribe methods are, how do I pronounce this? Parameterized? They're generic.

Turbofishing: question & answer

James Munns: They're turbofished.

Amos Wenger: They're turbofished. Exactly. Much easier to pronounce. They're turbofished with like my TX topic and my RX topic. This makes it seem like you have a finite amount of topics, because you have to declare all the types ahead of time.

But you're talking about stream IDs, because you might want to upload multiple files, and not have a fixed number of upload slots, so how would that work?

James Munns: So endpoints and messages have what's called- I call it a metadata trait, because I realized that traits are an easy way to couple types and constants with each other. So when we have an endpoint, they have a type that is the request, a type that is the response, the path, which is sort of like the URI, and then these keys, which are that pre-calculated hash of essentially the schemas and path. And the reason that all those functions get monomorphized like that, or get turbofished like that, is because if you give it this one marker trait, it's got all the information it needs. It knows what the request type is, the response type is, the key that it should include in every outgoing message, the key that it should expect in every incoming message, and how to recreate these keys if it needed to.

So the answer is you would do it in user space, in that you would just have to have a struct that had a field that was like 'Upload ID whatever.' So it wouldn't be at the protocol level. Postcard-RPC wouldn't think about it. It would just know: well, if the request type is this and the response type is this, that means that if you subscribe to this message, I know that it's always incoming messages of this specific type. So I will just deserialize every incoming message with that key in this way.
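The metadata-trait idea can be sketched like this. The real postcard-rpc trait differs in detail (it also carries the pre-computed key), but the principle is the same: one marker type couples the request type, response type, and path, so a turbofish at the call site is all a generic function needs.

```rust
// Illustrative shape of an "endpoint as metadata trait";
// not the actual postcard-rpc Endpoint trait.
trait Endpoint {
    type Request;
    type Response;
    const PATH: &'static str;
}

// A marker type: never instantiated, it only carries metadata.
struct PingEndpoint;
impl Endpoint for PingEndpoint {
    type Request = u32;
    type Response = u32;
    const PATH: &'static str = "ping";
}

// A generic function turbofished with the marker type knows every
// type and constant involved, with nothing passed at runtime.
fn path_of<E: Endpoint>() -> &'static str {
    E::PATH
}

// A "send" sketch: the bound ties the argument to the endpoint's
// request type, so you can't send the wrong payload to an endpoint.
fn fake_send<E: Endpoint>(_req: E::Request) -> &'static str {
    E::PATH // a real client would serialize, attach the key, and await
}

fn main() {
    assert_eq!(path_of::<PingEndpoint>(), "ping");
    assert_eq!(fake_send::<PingEndpoint>(42), "ping");
}
```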

Amos Wenger: So, if we're thinking about this in terms of, I guess, existing message buses- cause I've been looking at this for reasons myself- something like Apache Kafka. In your case, there's one topic per- how do you call it- per key? If there's multiple streams, it's all in user space; like you said, it's in the struct that's in the body of the messages on this topic. So again, you're going to think all I think about is HTTP, and that may well be true. But one nice feature of HTTP/2, for all its sins, is that if you don't care about a stream anymore, you can tell the other peer to stop. Cause you've made it very clear that topics are unidirectional. So in a world where everyone's well behaved- and I think that's also an assumption you're making, because in your line of work you control both ends of the connection- but in my world, we don't. In my world, the peer is the enemy.

James Munns: Yeah, that's a good point.

Amos Wenger: It's important when there's an upload coming in and you're like: no, no, no, you can stop, you just reset the stream. And you say, don't- don't send me this anymore. And if they don't stop at that, then it's a protocol error. You can just sever the underlying connection.

James Munns: That's also a layer that I don't have, but it's a good point that I'm going to have to have: the difference between- like, when you have a fixed USB connection, there's no hanging up the socket. There's actually no socket. There's just messages coming in. There's an implicit bidirectional socket. And this is one of those things I have to start getting into once you want routing or passing the messages on. Like if I'm connected over USB, but then I have someone connected to me and I want to forward messages on, you start needing that concept of a socket that I don't have right now. In a lot of embedded systems it isn't relevant. You're just directly connected; like you said, you control both sides, which I'm cheating by ignoring a lot.

And I think that's a really good point on the difference between HTTP and what I would typically use Postcard-RPC for.

HTTP semantics & CoAP

Amos Wenger: Yeah, and like you said, I keep bringing up HTTP semantics, but I think those are useful to have, because, you know, there's a standard way to signify: if you have a 2XX status code, things probably went well. If it's 100, you got something else to do; 400, it's your fault; 500, it's my fault. And then, you know, in terms of caching- I think caching is probably the thing I'm going to have the hardest time selling you- but at least retrying, or migrating ongoing streams to another connection or transport or something. Yeah, so those semantics are here for a reason. I think specifically you're making the extreme choice of leaving as much as possible to user space, but that will make it harder to implement something on- well, I, yeah, I guess maybe not. Yeah.

James Munns: Have you ever heard of CoAP?

Amos Wenger: No, what is that?

James Munns: CoAP is a really cool protocol that is like HTTP-light. It's a little more binary-focused, in that they use CBOR instead of text for a lot of things. But it's funny, because if you read how CoAP is specified, it's an interesting midpoint between where Postcard-RPC is and where HTTP is, in that it has a lot of the semantics of HTTP. It has the binary interfaces of Postcard-RPC with a slightly different encoding.

But you can tell that there's CoAP sort of in the middle. And I looked at that and I thought about making a CoAP library to pick HTTP semantics because it gives you out of the box answers of what happens when I want to do caching, what happens when I want to do proxying or reverse proxying? Because you can just say, "Well, we do it like HTTP," and people will understand it.

And I think that is a very worthwhile approach to it. And I think, like you said, I've made a different choice. I don't know if that's a good one yet. And everyone who I show it to, like Tef or whoever- I explain to them what I'm doing with Postcard-RPC and they go, "You realize you're just making a more limited version of CoAP, right?" And I go, "Yes, on purpose." And this is why I got so surprised when people asked me if I could do Postcard-RPC over HTTP, which is something someone asked me at RustNL, and I go, "The whole point of Postcard-RPC is to be the minimal rope bridge to cross a chasm when there's nothing there."

Postcard-RPC: the minimal rope bridge

James Munns: Like, if there's no roads or bridges or infrastructure crossing a chasm, I want it to be something that you can just throw over, tie it off on both sides and it will work, and as long as you're just walking back and forth, it will be everything you need and it will be so lightweight and so easy to use. When you want a performant server interface, or if you already have like a paved road from here to there, there's no point in putting Postcard-RPC on top of it because you're better served by the infrastructure that's already there. But I do a lot of that, like 'throw the ropes over a chasm that has nothing built across it yet' sort of thing.

Or I need to draw up blueprints of a bridge, and it's faster to just chuck the rope across the chasm than to draw up the blueprints for a bridge and make sure that both sides are load bearing- you know, it's a weird extended metaphor now, but the goal is to be the dumbest, simplest working thing that works reliably and predictably. And if you need more than that, I will probably say, "Either do it in user space or don't do it with Postcard-RPC." But the goal is to have something where, if you have nothing, I will give you minimum viable semantics for a protocol.

Amos Wenger: I guess the question of 'can we do Postcard-RPC on top of HTTP' is not that weird, because people do gRPC on top of HTTP, which makes me really sad and which I thought I was going to be able to ignore all my life. As you do. But then I dealt with Docker, and they do exactly that. Like, you make an HTTP/1 connection, then you switch protocols and then, "Ta-da! It's gRPC now!"

James Munns: over HTTP/2.

Amos Wenger: I think it was just a HTTP/1. Cause like the switching protocols thing is an HTTP/1 thing. HTTP/2

why would you do-

James Munns: It's ALPN. So then you switch to either HTTP/2 or gRPC, but there's definitely people doing gRPC over HTTP/2, because they want the multiplex streams and they want to be able to do those kinds of things over there. Where I know it was a question that came up for River of: will River do regular TCP proxying, but also gRPC proxying specifically. You could just pretend that it's just HTTP/2, but there's some stuff where if you wanted to be able to peek specifically into gRPC structures and things like that, you may not want to pretend that it's strictly HTTP/2.

Amos Wenger: Yeah, exactly. You can either pass through, but then you don't have any insight into what's going on, or you can deserialize everything and then reserialize it, whatever, and then you pay the cost of that. I guess a better comparison for Postcard-RPC is tarpc. I don't know if you're aware of it?

James Munns: I've heard of it before, but I'm not familiar with it.

Amos Wenger: You should check it out before next week. No, I won't assign work to you- but I could, I should!

I think, for the record- I haven't read through the entire Postcard-RPC README, but I really liked the ZeroMQ comparison. Maybe it only speaks to me because I know what ZeroMQ is. I was really hyped about it back when I wrote bindings for it in several languages.

James Munns: I didn't steal anything specifically from ZeroMQ, other than that concept of have a couple primitives and then a cookbook, rather than trying to specify the long tail of every possible usage of your protocol.

Amos Wenger: Right, but starting with what it's not is a really good strategy for open source- since one of the working names for the podcast is "open source research hype" or whatever. The best piece of advice I can give anyone who's looking to gather hype around a project is: be extremely honest about what it's not and what it's not doing yet. Don't put your wishlist on the README. Keep it on your wishlist.

James Munns: Expectation management.

Amos Wenger: Exactly. Because people will get excited about even reasonably sized miracles, but it's really hard to over promise and deliver later in open source land.

I think specifically for this one, yeah: all those questions are going to keep coming up. There's dozens of people who are into HTTP semantics, I'll have you know. So you will get those questions from other people again.

James Munns: And I'm looking for the people who are interested in TCP and UDP semantics, because I'm about to start doing the exact same treatment to a network stack that I'm doing to a protocol stack right now, where I go, "How little of a protocol can I get away with and still call it a protocol that works?" And there will be those comparisons of: why didn't you do it like TCP, or why didn't you do it like UDP, or why didn't you do it like TCP/IP?

And I have no answer for that at all right now, because I haven't even figured out how minimal I can get. And I think that's the baseline that I'm going to have to play with. In Embedded Rust, we have a TCP/IP stack called smoltcp, which is really good and very performant, and has some limitations. Like, it doesn't really address networking necessarily- like having two interfaces and doing routing between them.

Baseline: is it smaller, faster, more resource efficient?

James Munns: It's more like: I've set up one specific interface that's Wi-Fi or Ethernet or whatever, and I'm talking to it. And really that's sort of my baseline for what I've actually produced: is it smaller, is it faster, and is it more resource efficient than just doing TCP over a weird interface? Like, there's SLIP for serial ports, and there's a bunch of different serial ports or buses that I could implement it for.

I could implement just weird PHY-level stuff and pretend that it's weird Ethernet over some weird thing, like SLIP is. And that's what I really have to test myself against: would I have been better off just doing IPv4 with UDP over my weird esoteric serial ports than trying to come up with a whole new network protocol?

Like you asked in our last conversation: would I do this again? And the answer is: I'm always going to reinvent wheels just for fun, because that's what I enjoy doing. So I'm going to reinvent this wheel, but I give myself- maybe happily- a 50/50 shot of, at the end, just going, "Oh, I've made a worse version of this. Okay. Well, that was very instructive," and putting it on a shelf and never using it again. Or using some of the ideas for something else. But I think I'm going to see it through. I don't know whether it's actually going to be better than the baseline of TCP/IP, but, uh, that's what the research is for, right?

Amos Wenger: Yeah, well, I'm looking forward to it.

Episode Sponsor

This episode is sponsored by the Ladybird browser.

Today, every major web browser is funded or powered by Google's advertising empire. Choice is good, but your only choice is Google. The Ladybird browser wants to do something about this. Ladybird is a brand new browser and web engine written from scratch and free of the influences of Big Tech. Driven by a web-standards-first approach, Ladybird aims to render the modern web with good performance, stability, and security.

From its humble beginnings as an HTML viewer for the SerenityOS hobby operating system project, Ladybird has since grown into a cross-platform browser supporting Linux, macOS, and other Unix-like systems.

In July, Ladybird launched a non-profit to support development and announced a first Alpha for early adopters targeting 2026, but you can support the project on GitHub or via donations today.

Visit ladybird.org for more information and to join the mailing list.