Proxying: it's good for you!
James and Amos are back, and talking about routing, reverse proxies, and yeeting packets onto the internet.
Video
Audio
Show Notes
Episode Sponsor: Depot
- River Reverse Proxy, Poststation
- proxy server, postcard-rpc
- Domain Name System (DNS), Internet Protocol version 4 (IPv4) and version 6 (IPv6), Border Gateway Protocol (BGP)
- traceroute
- Internet Control Message Protocol (ICMP) packets
- Mara Bos, time traveling traceroute, time to live (TTL)
- HTTP/3, QUIC
- Collapse OS, dual in-line package (DIP)
- assembly to forth to Python
- Nginx, Apache HTTP Server, Caddy
- connection termination, sidecar proxy
- Squid caching and forwarding HTTP web proxy
- bufferbloat
- Go programming language, Amos' articles I want off Mr. Golang's Wild Ride and Lies we tell ourselves to keep using Golang
- postcard
- MAC address
- Variable-length quantity (varint)
Transcript
Amos Wenger: I think everyone's okay. Can you hear me fine, James?
James Munns: I can hear you just fine.
Amos Wenger: Cool. Alright, James, welcome back to season two of SDR. What do you have for us today, James?
Proxying is just dumb routing
James Munns: So, have I already told you that my brain thinks in terms of like X is just Y, like that's kind of integral to my learning experience is I go, "Oh, this is like that other thing."
Amos Wenger: I have noticed that. Yes. So proxying is the uber of routing is what you're saying?
James Munns: I don't- oh God, I don't know how to pitch this as a startup deck. I've been working a lot on proxying, both for River and then Poststation, this app that I'm developing for embedded systems is also very proxying based in that it's kind of the bridge to all of your devices and things like that. I think I've even mentioned on episodes before, I've tried to make a routing or a whole networking stack, really. Yes. And routing is a pretty important part of that. And always kind of like, there's always something where I got my feet tangled up on and routing is one of them.
And so I ended up leaning really hard into proxying for Poststation and kind of realized, after I had everything wired up, that I actually got real close to what you would want out of routing. So today is just: proxying is just dumb routing.
Amos Wenger: Okay. But you know, like when you say proxying, I think about, yeah, I mean HTTP proxying, is that what we're gonna talk about today?
Or are we talking more low level stuff like postcard-rpc based?
James Munns: I mean, it's gonna be around postcard-rpc 'cause that's literally what I've been doing since the last time we recorded an episode is I-
Amos Wenger: but you're also writing a reverse HTTP proxy, so that's, I dunno.
James Munns: It's true. But yeah, I mean,
Amos Wenger: That's where my mind goes.
That's why I ask these questions, James, but sure.
James Munns: Effectively they're very similar. But routing does a lot more. So it's almost like proxying is just kind of a subset of routing.
So like normally, when we're talking exactly like you're talking about HTTP, devices, the internet: we've got computers everywhere, and generally your computer doesn't know exactly how to get to Google's servers, or how to get from my computer to your computer, because all other computers are just somewhere else. Even to your computer, the answer is just: it's somewhere else.
When we send a message or a packet or something, we just send it to somewhere else. We say like- we have a vague idea, like when we do DNS, we might get an IP address, and then we kind of just write it on the outside of the envelope and drop it in the outgoing mailbox and hope it gets there.
We really like the computer, has no idea how it gets there, but it knows the destination and it just says "Go." And this is like generally how any computer can talk to any other computer as long as you have some concept of the destination, like an IP address or DNS address that gets turned into an IP address.
Routing is a whole thing
James Munns: You just kind of yeet it out onto the internet and it gets to the other side because routing is the thing that figures it out. Mm-hmm. And routing is one of those, like "it's a whole thing" topics. There are many-
Amos Wenger: It absolutely is.
James Munns: PhD careers in engineering, and companies whose entire jobs are to just think about routing, of how you manage that.
And a lot of it comes down to like, you only have so much overview of even an IPv4. You've got 32 bits, 4 billion addresses. If you had to keep information on all 4 billion of those devices up until maybe the last decade or two, that would be very difficult. And now that there's IPv6 it's, you know, even more outstandingly difficult.
Amos Wenger: And so much of the knowledge around routing is kept within a small circle, 'cause not many people really need to know about it. I've been in some circles where people have said, "Ah, it's always DNS," when something goes down, when there's an outage on the internet. But then you run into some deeper circles of hell where people say, "Oh, if it's not DNS, it's BGP," and you're like, "Wait a minute, what's BGP?"
And then people found out about BGP because of some major outages. But then when you look at how it's actually done in the big router companies... it's frightening. It's very scary.
James Munns: And routing is another one of those things where there's like, as it's described by the RFCs, like the internet RFCs for them, and then there's what actually happens in practice. In the same way that you can send certain packets, and it's totally valid according to the RFCs, but if you ever go through a Cisco router somewhere in the middle, it'll just drop your packet because
Amos Wenger: Absolutely.
James Munns: It's technically allowed, but it just says- no, get outta here.
Amos Wenger: Yep.
James Munns: And so there's also like those kind of things. The big takeaway here, like you said with BGP, is there's this whole system where every machine that your packet kind of goes through, every district office that it bounces to on the way to its destination, they all have sort of a partial view of the internet. Usually, like, their direct neighbors, and then maybe some big-picture things; that's where like BGP comes in and things like that.
And they'll have routing tables and no one ever has a complete picture of, or most commodity hardware has no idea of the entire internet. They just have like, "Where is the next step?" And then maybe like some big picture version of that.
Amos Wenger: Well, and the thing is, there's no, when I looked into traceroute and that kind of thing, first of all, traceroute is not representative of how packets are actually routed.
There's something I discovered recently. There's a whole page about how it's actually fricking useless, and if you notice that there's drops somewhere, it doesn't actually mean that there's a problem there. It could just mean that they're ignoring your ICMP packets because they are big routers and they have other stuff to do, like routing, actual traffic.
James Munns: Was it Mara or someone made, someone made a traceroute implementation that made all of the times go backwards? Because it's like exactly what you're saying is it's just a payload in a packet and turns out you can lie on the internet. And you can just put the timestamp in history.
Amos Wenger: There's different techniques.
Yeah. Yeah. 'Cause it's just the TTL. Traceroute with ICMP packets is a hack. So routers are not obligated by standards or whatever to actually respect that. You can do traceroute over UDP, but then that's not the same thing as TCP, which is what most applications use- ah, that's not true anymore 'cause of HTTP/3. Darn it. The point I was getting at was that...
James Munns: You were too QUIC to say that
Amos Wenger: Frigging Google.
James Munns: For Amanda: QUIC is the original name of HTTP/3.
Amos Wenger: Oh. I didn't even freaking catch the joke. Well played James. Touche.
James Munns: I'm glad I explained that
Amos Wenger: As the Americans say, pretending to be French, touche.
James Munns: That means it's staying in the, uh, in the episode.
Amos Wenger: It probably is.
I had another point, I swear. Traceroute... something. Yes, oh, there is no- like, because it's distributed systems, and I'm sure you were getting to this and I'm stealing the limelight on your episode, but there is no one version of the truth. Studying the internet is like being a historian. I don't know what's the closest thing, but you just go around and you try to ask questions of things and gather data, and you have some incomplete, ever-changing version of the truth.
And depending on where you stand, when you ask a question, you get different answers. And you don't know what's going on in certain countries that are- in different countries that have different policies about different things. Yeah.
James Munns: And it's one of those things, it just takes effort and it's how the internet has evolved.
It tends to work fairly well. But it means that you need to have sort of an idea of where things are and you need to keep track of that because they do change over time. And for embedded devices, particularly small devices, that's state and things that you have to do to keep up with figuring out where they are.
Not easy for small devices
James Munns: You know, I heard this report about this, but has that timed out? Have I gotten newer, conflicting information? Figuring out all these things. So you've got your little device that's a sensor or whatever, and it wants to send something over the internet. You have to go, "Okay, I would like to put a destination on this. Where do I send this?"
And in some cases it's just hardcoded. You just send to a place and someone else will figure it out. But if you'd like to do any kind of networking locally where you're like: well, sometimes I send it over here over my wireless network and sometimes over, you know, the serial bus that I have over here, you kind of have to keep track of some things, and that's not always easy for small devices.
It's very possible, you can get little wifi devices and things like that. But like a TCP/IP stack that does the bare minimum of routing takes up a non-trivial amount of your CPU cycles and memory and, yeah, code storage and things like that.
Amos Wenger: And it's probably way overkill.
James Munns: Well, you know... devices are going that way. What was overkill a while ago. Devices are getting fast and cheap and powerful. So...
Amos Wenger: I guess so. Yeah. Have we talked about Collapse OS on the- I'm sorry to hijack your presentation once again.
James Munns: No, I don't think we have. I've seen a bunch of projects like that though. Is that, is that the one that like bootstraps from Forth or is that a different one?
Amos Wenger: I think it's a different one, but I see the one you're thinking of. Mine is just, um, if there is an apocalypse and we suddenly live in a post apocalypse world. Uh, we are gonna lose computing. And it's, it's a shame 'cause for modern computing to exist, there's so many prerequisites.
But what they noticed is that even though we're probably not gonna be able to make chips from scratch again for a very long time, there's a ton of 16-bit chips that you can scavenge from a lot of different equipment. And so they're developing for that. Essentially it's like: you go around, you get 16-bit chips from various places, and you can actually assemble them manually with like wires and stuff. 'Cause it's still like this- what is it called? Is it pinball package? I don't know exactly what it's called.
James Munns: Are you talking about like the DIP ones? Yeah, the dual in-line package.
Amos Wenger: Yeah. It's still human scale. You can work it with your fingers and like connect them to things. And that's their premise.
I remember needing to take a minute after finding that website, like, wow, some people have put some thought, when I see doomsday preppers with a lot of canned goods, I'm like, "This is gonna last you two weeks and you're gonna get eaten by your neighbors."
James Munns: This is exactly what I was gonna draw.
Amos Wenger: Yeah. Whatever.
But when I see that, I'm like, "Huh...." that's more realistic and more scary to me somehow.
James Munns: Yeah... it's challenging as someone who likes building the world from scratch and does it as a hobby, there's a lot to build, especially if you're at the point where you can't just buy things off the shelf either.
Yeah, I've seen that and I've seen a couple of other ones where you're like, okay, if I can get something that can compute some numbers, how can I work back up? And I've seen one that bootstrapped itself from like assembly to Forth to Python, and then maybe a C compiler at some point in there, where if you have some kind of device that can run Forth, or if you can write enough assembly to get a Forth, you can write enough Forth to get a Python, and you can write enough Python to make a C compiler or something like that. It was an impressive set of bootstrapping,
Amos Wenger: But James... Why would you want a Python?
Sorry, this is a weird step. I can forgive C, but I draw the line at Python.
Amanda Majorowicz: Hang on... this is not the right podcast. We got- it's a different one, different podcast for Python.
What about proxying
James Munns: Okay. So if I'm gonna make the claim that routing is too hard for these small little guys that I write code for, then what about proxying?
So we've talked about proxying. Most people have probably heard of proxying if you've done backend stuff. Proxying is the Nginx, the River, the Apache 2 that you put in front of your production servers.
Amos Wenger: James, I like you, but did you just sneak your own work in between those? You could have mentioned Caddy. Caddy's mainstream-ish.
James Munns: Yeah, I probably shouldn't 'cause I'm not actively working on it right now. But like,
Amos Wenger: Do you wanna mention,
James Munns: I'll take it again. I'll take it again.
Amos Wenger: No, you can- no, let's keep it. I'm just thinking, do you wanna mention why people- why it's advised to have a proxy in front of your web application?
James Munns: You would probably know better than I would, but I just wanna talk about what proxying is first.
Amos Wenger: You can do that.
James Munns: Proxying in general is you put a computer in front of your computer. When a client wants to talk to who they think is the server, what they end up talking to is a proxy who gets their first chance to either terminate the TLS connection to do some kind of security check to say, hey, you're asking for things that you should not ask for, to do authentication, and then usually that proxying agent is sort of like a bouncer, and if they decide to pass that on, then they're gonna pass that on. And they could also do the job of load balancing.
There's a lot of different reasons you'd use it for proxying, so that first server might just say, okay, well I've got 10 people in the back. You're gonna go to desk number four to get your request served because they're the least busy.
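As a rough sketch of that "least busy desk" decision, here's a toy example in Rust. This isn't code from River or any real proxy; the backend addresses and connection counts are made up, and real load balancers also weigh health checks, weights, retries, and so on.

```rust
// Toy "least busy desk" picker: the bouncer/proxy hands each request to the
// backend with the fewest in-flight requests. Purely illustrative.
struct Backend {
    addr: &'static str, // hypothetical backend address
    active: usize,      // current in-flight requests
}

fn pick_least_busy(backends: &[Backend]) -> Option<&Backend> {
    backends.iter().min_by_key(|b| b.active)
}

fn main() {
    let pool = [
        Backend { addr: "10.0.0.1:3000", active: 7 },
        Backend { addr: "10.0.0.4:3000", active: 2 }, // "desk number four"
        Backend { addr: "10.0.0.9:3000", active: 5 },
    ];
    let chosen = pick_least_busy(&pool).expect("pool is not empty");
    println!("routing this request to {}", chosen.addr);
}
```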
Amos Wenger: In the explanation you just gave, there is one word that gives away that you have been working on reverse proxy.
Do you know which one it is? Terminate.
James Munns: Terminate. Yeah, it's a good one.
Amos Wenger: 'Cause when you tell people, "I'm gonna terminate your connection," what they think is you're gonna cut it off. Yeah. But what it actually means in that context, terminating TLS, means we're gonna do the last bit of work that takes all HTTP traffic and encrypts it and decrypts it.
James Munns: I think this is one of those old telephone network rules. 'cause when you talk about-
Amos Wenger: Probably.
James Munns: Talk about terminating like a connection, it's literally where you've connected it onto some device or something like that. So I think-
Amos Wenger: It's the last mile or something. Yeah. Yeah. I was just putting myself in the shoes of someone who has not worked on such a proxy, because we both have, and that's the word that stood out. Yeah.
Amanda Majorowicz: This is also very helpful for me because that's exactly how I was picturing it until you explained it to me, so thank you.
Amos Wenger: Thanks, Amanda.
James Munns: Gotcha. Yeah, and so really it just means there's a computer in front of another computer, is proxying. It means you are transparently talking to something in front of what you actually think that you are talking to, and in a lot of cases this isn't even exposed. You wouldn't know. There's no like hint, Hey, I'm talking to a proxy. The idea is that ideally, most of the time it just is transparent.
And this gets used a lot on the internet for client-server interactions. When your laptop connects to a server somewhere, it's the client and the server is the server that it's talking to.
These get used a lot for proxying because when there's this direction to it, it means that when you go in, you talk to the bouncer that gets passed on, the person on the inside passes it back to the bouncer and back.
If you were doing a true peer-to-peer connection, it's a little more difficult. Then you're kind of a full blown router at that point because you're doing kind of like anyone-to-anyone connections, where proxying is more specifically like someone on the outside comes in, gets bounced to someone privately on the inside. Someone on the inside privately replies, and then that gets taken back to the client. There's other cases where this isn't the case. If we're talking about more proxies, there's like internal, what are they called?
Sidecar proxies get used a lot. And that's for like mutual authentication, where you both have your own bouncer that sets up a secure connection between you and things like that. Yeah. But that's... a totally different topic.
Amos Wenger: I was confused about that for the longest time. So it's possible that I'm still confused about it, but I think we are defining reverse proxies.
You're thinking of the thing that is like Google front-end, or you mentioned Nginx, for something that often has one or a few websites behind it. There's also forward proxies, but those are not as used as they used to be. Wow, that's a lot of "use".
James Munns: Yeah, that's true.
Amos Wenger: They used to be used to accelerate connections.
You, you'd have like, you'd run Squid locally and it would cache the internet for you and it would just be served from disk instead.
James Munns: I think like internal business networks used to use those too.
Amos Wenger: I think some still do...
James Munns: like all your outgoing connections went through a bouncer and things like that. So yeah, you're right- I'm mostly talking about reverse proxying. But there are definitely a couple flavors of proxying.
Amos Wenger: And I'm just gonna answer my own question 'cause I raised it. One of the reasons you might want, you do want a reverse proxy, is because your application probably only speaks HTTP/1, first of all, but you wanna serve HTTP/2 and 3 to the outside. You're not gonna want to worry about TLS. Translation is a big part of it. And thirdly, performance also, if you have a bunch of different nodes around the world that can do the TLS handshake with your clients and then talk to your application, even if your application itself is only in one location, that's still gonna speed up the initial handshake and exchange. And then also just because the internet is a scary place. Like when the internet first started. Before everyone got on it, it was just like, "Yay, we can exchange information!"
But now, as soon as an IP comes live and starts accepting connections, it's like, are you a PHP thing? Can you leak credentials? I'm gonna try all the known paths.
James Munns: Are you a WordPress thing? Yeah.
I love seeing those in logs for like their HTTP server and it's like-
Amos Wenger: I have so many of these.
James Munns: WP login admin.
Amos Wenger: So instead of people attacking your application directly and potentially finding, I don't know, something in your Python code that makes it use up all the memory, something in your C code that crashes and dumps private user data, you'd rather have them attack Google or Amazon or whatever you put in front of your website.
James Munns: Exactly. Yeah. And a lot of times there tends to be a lot more security hardening. If people use commonly popular reverse proxies like Nginx or Apache, there's a lot more work that goes into hardening those. Yes. Where sometimes people put very squishy application servers behind them that go, I haven't thought of all the edge cases like bufferbloat and all of those.
Amos Wenger: Absolutely.
James Munns: And you can just configure the... you know, instead of having to make all hundred of your servers immune to the same thing, you just have the bouncer.
Amos Wenger: But on a personal note, I think it's absolutely bonkers that Nginx is still the default choice for a lot of people when it's, it's written in C. I personally, I'm on the record as like the Go hater and I use Caddy, because I may hate Go, but at least it's memory safe. So yeah. Okay. I'm losing out on 10, 15% performance, but I don't know. It feels better.
James Munns: Yeah. So I mean we have this sort of proxying setup where there's that bouncer in the middle. So we have this sort of setup where the client talks to the proxy. The proxy in this case appears to be the server. So it's acting in the server role, but as soon as it's received that request, if it decides to move on, it then turns around to the inside and it makes a request to the real server.
So in that role, it's a client. Yep. So it essentially kind of does both. It pretends to be a server to the outside and then turns around and pretends to be a client. So we get the, you know, the server talks to the proxy. The proxy talks... I screwed this up. This is what I get for skipping around slides.
And then in the other way, when the server replies to the proxy, the proxy is the client, and then same, it turns around and pretends to be the server.
So we just pretend at every step along the way that this is a direct connection. So in this case, the internal server knows nothing really about the real client unless the proxy passed it on. And in the same way, unless the proxy passed it on, the real client knows nothing about the real server, which gives you sort of this, you know, blinders you can put on where you just say, it's just a direct peer-to-peer communication of a client and a server.
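To make that role swap concrete, here's a minimal sketch of a byte-level proxy in Rust with tokio. It's not River or Poststation code; the listen and backend addresses are placeholders, and a real reverse proxy would also terminate TLS, parse HTTP, filter requests, and load balance.

```rust
// Minimal proxy sketch: accept a client connection (acting as the "server"),
// open a connection to the real backend (acting as a "client"), and shuttle
// bytes in both directions until either side hangs up.
use tokio::io::copy_bidirectional;
use tokio::net::{TcpListener, TcpStream};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // The proxy plays the server role on this socket...
    let listener = TcpListener::bind("0.0.0.0:8080").await?;
    loop {
        let (mut client, _) = listener.accept().await?;
        tokio::spawn(async move {
            // ...then turns around and plays the client role toward the real server.
            if let Ok(mut backend) = TcpStream::connect("127.0.0.1:3000").await {
                let _ = copy_bidirectional(&mut client, &mut backend).await;
            }
        });
    }
}
```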
Proxying upsides and downsides
James Munns: How much simpler could it be than that?
The thing is, we can also use this for embedded devices, and this is exactly what I'm doing with Poststation, is instead of having networking and things like that on the inside, we make things cheaper by saying, "Well, these devices could already do point to point connections."
You could connect to them directly. What Poststation introduces is, okay, well I will be your delegated proxy for this. So then Poststation is in charge of talking to the actual embedded devices, can speak with them in whatever protocol they'd like. And in the same way you were talking about the difference between HTTP/1 or whatever, it speaks exclusively a very compact, binary format of postcard to these devices.
But you could have your clients speak JSON and REST to Poststation and it will do that translation for you and pass it on. So we can make a lot of things cheaper because we don't have to think about routing. We don't have to think about, you know, sending JSON to a tiny microcontroller. We don't have to worry about any of those things.
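As a sketch of that translation step, here's a generic serde example: the same message arrives as JSON from a REST client and goes out to the device as compact postcard bytes. The message type is invented for illustration; it is not Poststation's actual API or schema.

```rust
// One message, two encodings: verbose JSON on the client side, compact
// postcard bytes toward the microcontroller. Requires serde, serde_json,
// and postcard (with its alloc feature).
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct SetMotor {
    motor_id: u8,
    speed: i16,
}

fn main() {
    // What a REST client might POST to the proxy:
    let json = r#"{ "motor_id": 3, "speed": -1200 }"#;
    let msg: SetMotor = serde_json::from_str(json).expect("valid JSON");

    // What actually goes over the wire to the device:
    let wire: Vec<u8> = postcard::to_allocvec(&msg).expect("serializable");
    println!("JSON is {} bytes, postcard is {} bytes", json.len(), wire.len());
}
```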
But the downside is, like I said, that with proxying, at least reverse proxying like we're talking about here, there's no real any-to-any comms. Like if you want a device to talk to a device, and they're in this perfect little simple-brain world where they only take incoming connections from one direction, and then all replies go back in that direction.
It's more challenging to say: Hey, please send this to a specific person. Which if you've set up all of your devices to say: Hey, I receive commands and I publish things, you know, everything coming in goes that way and everything going out goes that way. And life is simple. It's great. But if you really do want peer-to-peer networking, then it makes things a little bit more challenging.
But often for most microcontrollers or things like that, that's good enough. You might have like a small network of devices. But they're all doing their job. They're running the elbow, the wrist, and the hand motors. And the elbow doesn't usually need to talk to the hand very often. They're usually all talking back to the brain, and the brain is talking back to all of the motors.
Good enough: embedded systems, PC bridges
James Munns: Yeah. This is usually good enough, like I said: for website backends, you're usually going client to proxy to API server and back down the line. For the people who have worked on backends, that's how they think. But for embedded systems, it might be more like your PC is talking to some serial port or RS485 bridge, which then is talking to a device.
So before, I was talking about a client talking to Poststation, talking to the embedded device. And the cool thing about proxying is you can proxy multiple times, because you don't necessarily have to know, because I said it was transparent. You can proxy multiple times. So I could proxy from my computer application to Poststation, Poststation to the USB-to-serial-port adapter firmware that is then talking to another device on the far side of that, or some wireless adapter or something like that. And both of them can be very, "Hey mush brain, I don't have to know what the entire network looks like."
And so you end up with this, you know, a proxy of a proxy of a proxy of a proxy where instead of being that iconic like internet picture of every node connected to every node, you get this directed acyclic graph of networking. Where as long as you're always going downwards, you can go down until you actually hit the node you were looking for. And that node can then fire the communication back up the line and it always ends up back at the client that it was interested in talking to.
And this is one of those things I did over Christmas break is I wrote a wireless protocol for devices around my house so that now I can talk from my computer to Poststation to the adapter over the wireless network to the wireless device.
It can talk all the way back up to me, and I don't have to know that there were however many hops there are in that, but it all still generally works.
Amanda Majorowicz: There we go. Like, don't hold me accountable. Don't hold me liable for any of this.
James Munns: Yeah. But yeah, I try to not get too deep in the slides so we could actually talk about stuff.
Amos Wenger: Sure.
James Munns: Because there's some fun side effects to this. Can you see the other price you pay when you don't have a routing table? If proxying is like taking the envelope and putting it in another envelope to pass it on, when you end up doing this proxying multiple layers deep, instead of having a routing table, you end up having to build this stack of nested envelopes that have the address on each side.
And so if you want to go to a destination that's six layers deep, you take your envelope and put it in an envelope and put it in an envelope. And put it in an envelope, which I haven't decided if I love yet. It's simpler for the devices in the middle 'cause they just take the envelope out.
Do I know this person directly connected to me? Yes: pass it on. No: just throw it away or bounce it back so they don't have to keep track of routing. But if you go more than a couple hops deep, essentially you're putting like a MAC address on every layer. And so you end up paying that. But my hope is that most of these networks are fairly shallow.
Like I can't think of very many use cases where you'd go more than three or four deep, but...
Amos Wenger: Plus, I can't imagine the envelopes being that big, right?
James Munns: Right now, well, for postcard-rpc specifically, I use eight bytes, or 64 bits, as like the unique ID of devices. So I end up needing to stick that, and then one byte that says, "This is a proxy message!" in each one.
So it's nine bytes on every hop, which for us doesn't sound like a lot, but when you have like little tiny wireless networks or serial port networks, that adds up pretty quick.
Amos Wenger: You know what you should do? You should reserve one value- you should have a niche value for "this is not a proxy message," and then you can save one byte on every message.
It's not a joke. This is an actual protocol design.
James Munns: So how postcard-rpc works in this case is you have messages that have unique IDs. Mm-hmm. So what I do is I hash the schemas and get the IDs, and then, you know, I do the perfect hashing of that. So proxying ends up just being another kind of message.
So the actual header on that is just that. So that was the one byte you're kind of already paying for any kind of message. Because in my networks I'm also talking directly to the bridge device as a real device. And also it has an endpoint that says, "Please pass this on to your friends on the other network," those kinds of things.
So that one's free, but I still have to include the serial number on that. Although I could maybe be smart and then do a mapping of, I don't know. But then it just becomes a bad routing table, I think. I have to draw the line somewhere.
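Here's a rough sketch of that envelope-in-an-envelope idea. To be clear, this is not the actual postcard-rpc wire format; the marker value and layout are invented for illustration. Each hop costs one marker byte plus the eight-byte serial of the next destination, and a bridge in the middle only has to ask: is this destination one of my direct neighbors?

```rust
// Hypothetical nested-envelope framing: [marker][8-byte serial][inner frame].
const PROXY_MARKER: u8 = 0x01; // made-up value, not a real postcard-rpc key

/// Wrap an inner frame in one more "envelope" addressed to the next hop.
fn wrap_for_hop(next_hop_serial: [u8; 8], inner: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(9 + inner.len());
    out.push(PROXY_MARKER);
    out.extend_from_slice(&next_hop_serial);
    out.extend_from_slice(inner);
    out
}

/// A bridge's only decision: if the destination is a direct neighbor,
/// peel one envelope off and forward the inside; otherwise drop it.
/// No routing table required.
fn handle(frame: &[u8], my_neighbors: &[[u8; 8]]) -> Option<Vec<u8>> {
    if frame.first() != Some(&PROXY_MARKER) || frame.len() < 9 {
        return None;
    }
    let dest: [u8; 8] = frame[1..9].try_into().ok()?;
    if my_neighbors.contains(&dest) {
        Some(frame[9..].to_vec())
    } else {
        None
    }
}

fn main() {
    let payload = b"spin the wrist motor".to_vec();
    // Client -> bridge A -> bridge B -> device: wrap innermost-first.
    let for_bridge_b = wrap_for_hop(*b"DEVICE01", &payload);
    let for_bridge_a = wrap_for_hop(*b"BRIDGEB1", &for_bridge_b);
    assert_eq!(for_bridge_a.len(), payload.len() + 2 * 9); // 9 bytes per hop

    // Bridge A only knows its direct neighbor, bridge B.
    let passed_on = handle(&for_bridge_a, &[*b"BRIDGEB1"]).unwrap();
    // Bridge B only knows the device.
    let delivered = handle(&passed_on, &[*b"DEVICE01"]).unwrap();
    assert_eq!(delivered, payload);
}
```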
Amos Wenger: I was gonna ask like, did you do varint encoding? Which is where, even if you technically have a 64-bit integer, you don't always send eight bytes.
Smaller ints can kind of take fewer bytes, but it doesn't make sense for IDs, right? 'Cause they're more like unique IDs, random values.
James Munns: You assume that they're, yeah, you assume that they're randomly distributed. An eight-byte value could technically be up to 10 bytes. You only get seven bits per byte when you use a varint.
'Cause everything else in postcard is varint. But this is specifically an array of eight bytes, just so I don't pay the varint costs. Because in many cases, if you had the top bit set, just like the highest bit set in the serial number, then all of a sudden now it's the largest varint because it has the highest max value. So.
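For anyone curious what that looks like in code, here's a minimal unsigned varint encoder sketch, the LEB128-style scheme postcard (and Protobuf) use for integers: seven payload bits per byte, with the top bit meaning "more bytes follow." It shows why a serial number with its high bits set balloons to ten bytes, and why sending a fixed eight-byte array sidesteps that cost.

```rust
// Minimal unsigned varint encoder: 7 bits of payload per byte,
// high bit set means "another byte follows".
fn encode_varint_u64(mut value: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (value & 0x7F) as u8;
        value >>= 7;
        if value == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80); // continuation bit
    }
}

fn main() {
    let mut buf = Vec::new();
    encode_varint_u64(300, &mut buf);
    assert_eq!(buf, vec![0xAC, 0x02]); // small values stay small
    buf.clear();
    encode_varint_u64(u64::MAX, &mut buf);
    assert_eq!(buf.len(), 10); // worst case: ceil(64 / 7) = 10 bytes
}
```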
Amos Wenger: Varints are easy. They're like UTF-8, but for numbers and that explains everything I think. I think we're all clear on that now.
James Munns: Yeah, and it turns out they're actually really fast and this is probably just like an artifact of computers being optimized for this. Or at least on desktops, they're so memory IO bound.
The cost to actually just sit there and iterate over the top bit and things like that is actually astoundingly... I tried a bunch of other clever variable-sized integers, other than like the classic varint or LEB128 that Protobuf uses and whatever. And it was like a negligible difference, even if you were doing something.
It's one of those things where you look at it, you go, "That's gotta be way faster!" Superscalar architecture punches you in the face. Every single time you make assumptions like that.
Amos Wenger: Mm-hmm. Mm-hmm.
James Munns: Proxying, it's good for you.
Amos Wenger: All right.
Episode Sponsor
This episode is sponsored by Depot: the build acceleration platform that's on a mission to make all builds near instant. If you're tired of watching your builds in GitHub Actions crawl like the modern-day equivalent of paint drying, give Depot's GitHub Actions runners a try. They’re up to 10x faster, with unlimited concurrency, faster caching, support for Linux, macOS, and Windows, and they plug right into other Depot optimizations like accelerated container image builds and remote caching for Bazel, Turborepo, Gradle, and more.
Depot was built by developers who were tired of wasting time waiting on builds instead of shipping. It's made for teams that want to move faster and stay focused on what actually matters.
That’s why companies like PostHog use Depot to cut build times from over 3 hours to just 3 minutes, saving tens of thousands of build hours every week.
Start your free 7-day trial at depot.dev and let them know we sent you.