'eating your vegetables' for software
James discusses software traceability tools, used in safety-critical software development, and his wish for an open source version that could give all of the benefits with a minimal amount of fuss.
Video
Audio
Show Notes
Episode Sponsor: Depot
- tracing crate
- traceability, safety-critical system
- "The Pot Roast Principle"
- IBM Engineering Requirements Management DOORS (Dynamic Object Oriented Requirements System)
- Ferrocene, Sphinx (documentation generator), WHATWG (Web Hypertext Application Technology Working Group), Bikeshed, World Wide Web Consortium 'W3C', Simon Sapin & Manish Goregaokar (@manishearth)
- Ada programming language, Cucumber tests
- Modified condition/decision coverage 'MC/DC', requirements based testing
- Typst
- Traceability matrix, Distributed Version Control System (DVCS), Docs.rs
- Ferrocene Traceability Matrix, Jorge Aparicio (@japaric) and Pietro Albini at Ferrous Systems
- Postcard serialization format & postcard-RPC protocol, current version of the specification, next version of the specification
- rustc compiler, Cargo package manager, rustdoc
- Rick Astley, Rickrolling and "Never Gonna Give You Up"
Transcript
James Munns: -- slides, because I haven't looked at these in like a week and a half.
Amos Wenger: James.
Amanda Majorowicz: I was like organizing before this, like all of, like getting all of this audio and everything into the season two blah, blah, blah, DaVinci Resolve thing.
Amos Wenger: It's like you're unionizing.
Amanda Majorowicz: And then I was like, "Oh shit," like what was it season two episode three? And I was like, "What did we even do? I can't find any slides anywhere. James, like what did you present about?" And then like I opened the audio and it was just like, "And today I'm not gonna have any slides." And I'm like, "Okay, good, good."
Amos Wenger: I remember that. You were supposed to make some slides, yeah.
James Munns: Oh yeah, I should probably go back and make those slides, yeah.
Amanda Majorowicz: I have not started editing that one yet, so I did not yet, but. (Sighs)
Amos Wenger: Yeah, I don't think I'm that vain, but listening to yourself in studio grade headphones is fascinating, it's mesmerizing.
James Munns: I can't do it. Like I get like one watch of any talk that I give, then I have to not listen to it. Or I have to be in a very good mood when I listen to it, because otherwise it's just like, "UGH!"
Amos Wenger: So you're saying I am a narcissist. That's what you're saying.
James Munns: No, you have less anxiety of your own speaking patterns. I don't know.
Amos Wenger: Have I just done it more than you, is that possible?
James Munns: That's probably, yeah. You've listened to yourself a lot more than me probably, because you've done all your own editing and production.
Amos Wenger: I have been working way too much, like 60 hours last week, it's not good. And I'm not getting startup founder money, is the sad part.
Amanda Majorowicz: Oh darn.
Amos Wenger: I know.
James Munns: Yeah, I have Poststation shipped.
Amos Wenger: Welcome back to, "So... the podcast."
James Munns: So today-
Amanda Majorowicz: Okay, bye, I'm leaving.
Amos Wenger: Bye, Amanda.
James Munns: Get out of here, Amanda.
Amos Wenger: Anyway, James.
Traceability
James Munns: Okay, I have a topic that's a little out of left field, comes from my background in safety critical, but it is a thing that I wish we had more of. And that thing is traceability. And really what I wish is that we had good open source traceability tools. So, Amos, have you heard of traceability before?
Amos Wenger: I have heard of the tracing crate. Are those related?
James Munns: Okay, nope, this is one of those traditional software things: one word means 30 different things in 30 different contexts.
Amos Wenger: Is it reproducible build?
James Munns: No? No. It does come from safety critical fields. And generally what it means is it's a way to link different pieces of information together. So whether you're talking about requirements or documents to your code, to your tests, to even documents to documents like, hey, the government says we have to do this, this and this, and we promised our customers we were gonna do this, this and this. We should probably make sure that we've done all of those things or like the superset of all of those things. And then we should probably make sure that we've tested that we do all of those things, that we do them correctly. And we should probably make sure that we've tested all of the code that we've written or that there is a test for everything or we didn't just invent something that we're doing for no good reason if we didn't say we were gonna do it or we're not required to do it or things like that.
Amos Wenger: Right, do you know the classic story about this, where there's a mom that always cuts the edges off some meat before she cooks it in the oven, and eventually the daughter asks, "Why are you doing that?" and she doesn't know. So she asks her mother, who doesn't know either, and they go back all the way and it's like, "Oh, because the oven was too small. So we had to cut the edges so it would fit in." They did it for generations.
James Munns: Exactly that kind of thing. The example that was always given to me... so in safety critical fields, usually traceability is a requirement. It's one of those things you have to include in your big piece of paper that says like: yes, all of my requirements are documented, all of the code exists to fulfill a requirement, all the tests exist to fulfill code and a requirement and we're not missing anything. There's no extra code, there's no extra tests.
The example that was given to us in safety critical training was: you could put like a Pac-Man game in a piece of avionics, but you better have requirements for it and you better have tests for it, because if you just put Easter eggs in there and they don't map to some requirement, then they should not be there, because you've never said what they're supposed to do or not do. And it's a little silly, because you wouldn't wanna sit down and verify Pac-Man is working correctly to aviation standards.
Tools for this are outrageously expensive
James Munns: But I think this is one of those things where a lot of the tools that exist for this are outrageously expensive, and usually their UX is terrible, because their market is like 50 automotive companies and 10 aviation companies and some medical device companies. The iconic document capture tool for this is IBM DOORS, which is a tool straight out of the 90s, like a very fancy version of Word with way-- I don't know, I'm probably casting aspersions on it. I haven't used DOORS in 10, 15 years, but it's a tool that you only use if you have to, and you pay a lot of money to a small set of companies.
Amos Wenger: I bet.
James Munns: Because you have to--
Amos Wenger: I thought I was edgy with Apple scripts and you bring out IBM stuff, okay.
James Munns: And there are some companies, like I've worked at companies that rolled their own, but they were companies that were big enough where the cost benefit ratio, it was cheaper to pay an in-house dev tool team, like 10 people full time to work on that tool than it was to pay the licensing cost for that. Whereas if you're a smaller company or you're only making a couple products, you pay through the nose to have a tool that when you show it to a regulator, they're gonna go, "Oh yeah, you use that tool, I know how to check your configuration, I know how to check your reports, no surprises."
A lot of engineering in general, but especially safety critical engineering, is just avoiding surprises to regulators. So just making sure that they go, "Yeah, I know what I'm seeing and I know how to verify it." But I've also worked in teams where this wasn't just a checkbox, this was part of the whole process. And there's some real development flow stuff that I desperately miss from having that tooling set up. And I've not found any open source tooling... There's some open source tooling that gets near this, and there's some people who have built their own on top of it. For example, Ferrocene: their requirements document is written in Sphinx, which is a Python documentation tool. I think they either use some extensions or built some extensions, and they can annotate requirements, and that's what the Ferrocene language spec is specified in. It's some dialect of Sphinx. So it's just a way of marking up documents.
Amos Wenger: Interesting. Python would not be where I would start for something like that but I also don't do it.
James Munns: There's a lot of these tools. The other one that I can think of is from the, like the web world, WHATWG I think, they use a tool which is hilariously called Bikeshed which is a requirements capture tool, which is excellent.
Amos Wenger: It's self-aware.
James Munns: Yeah, exactly. And it's made to make these requirements documents and things like that, but--
Amos Wenger: The WHATWG is the version of the W3C that matters. Well, it's a different consortium that actually works. From what I know, I just offended like 12 different people listening to this podcast. I will send you individually penned apologies.
James Munns: These are things where I'd have to go in like DM Simon Sapin and like Manish to figure out like: okay, how should I feel about these groups? Because those are the people who have the opinions that I trust of those groups and things like that.
Amos Wenger: The logo for the WHATWG is just a question mark which I feel is very appropriate. It's how I feel about it.
You already have: docs, code and tests
James Munns: So let me explain why I miss this. At least in Rust, we have a pretty good culture of having docs, code and tests. You usually have a book written in something like mdBook, or you have doc tests, or you just have module level docs, so you have some level of docs, and those are basically requirements... or they can be thought of as taking the place of requirements. They're saying what the code should do in a non-code form, through examples or written prose or things like that.
Amos Wenger: But they're not like preconditions like you get in some other languages. Like... I think people coming from maybe Ada or something would think that what we have in Rust is not good enough. It doesn't go far enough, I think.
James Munns: There's all different kinds of things, and this is like formal verification -- not formal in the same way safety critical is, but formally mathematically verified stuff -- it's kind of like using code to write your docs. Or there's even, what is it? Cucumber tests and things like that, where you say: as a user, I want my session to be retained when I log in, and you write prose to write tests and things like that. All right, I'm presenting to the cat now.
Amos Wenger: You may continue.
James Munns: So I mean, there's different ways to slice all of this, but in general you have some kind of documentation, some kind of description of what your code's supposed to do. You have your code where you actually do it and then you probably have some kind of unit tests or integration tests where you say like, yes, it actually does what I intended and... You just kind of maintain those all as a bag of things in a repo. And like these days in open source, usually the split is better because most people are putting all three of those things in one Git repo. So they do travel together, which is nice. Like they travel as a crate or as a workspace or whatever, which is not always what like commercial or corporate projects used to do at least where you'd have like whatever Excel docs or Word docs that were captured somewhere and you might have your code in one weird source control thing and tests might be some paperwork that you give people to do on every release.
At least in open source, we tend to have a fairly grouped set of all of these assets in one place. But if you go through and you're doing some optimization, or just fixing a bug or something like that, you probably change some stuff and you probably make sure your tests work. And if you have doc tests and you changed something fundamental enough, your doc tests now break, because in Rust, the examples that you put in your docs will get run as unit tests -- at least checked that they compile, and sometimes that they run as well. But outside of that, if you rearchitected something major, or if you fundamentally added or changed something, nothing's really gonna tell you if you have a whole section of your documents that are totally not relevant anymore or totally incorrect. Or you don't know if you changed something and all of your tests pass, but you just added a bunch of new code that is completely untested and maybe doesn't work the way you think it does.
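For concreteness, here is a minimal sketch of the doc test behavior James describes. The crate name and function are invented, but the mechanism is real: cargo test compiles and runs the example inside the doc comment, so the documentation fails loudly if the API drifts.

````rust
/// Adds two numbers, saturating at u32::MAX instead of overflowing.
///
/// ```
/// // rustdoc compiles and runs this example under `cargo test`, so the
/// // docs break loudly if the API changes out from under them.
/// assert_eq!(mycrate::saturating_add(2, 2), 4);
/// assert_eq!(mycrate::saturating_add(u32::MAX, 1), u32::MAX);
/// ```
pub fn saturating_add(a: u32, b: u32) -> u32 {
    a.saturating_add(b)
}
````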
What if...
James Munns: So when I've worked in these environments before, where you had real traceability tools and you had linked all of your docs to your code, and your code to your tests, and then usually your tests back to your docs: what if, when you do a PR, instead of just the things that changed popping up, everything one step away from those things also came up for re-review, to make sure it was still accurate? So if you said this paragraph of docs goes with this chunk of code, whenever you change that code, it would present to you: hey, here's this paragraph. Is this paragraph still right? Or hey, there are these tests. You didn't tell me where the new tests for the new functionality are. Or are all these tests still testing something worthwhile, or are they just trivial, or passing for the wrong reason now?
Amos Wenger: I would imagine the value in that is only as good as the annotations that you leave in your docs sources though.
James Munns: Yeah, there's no magic here. At least in these different tools, usually you get a unique ID for functions and lines of your docs and your tests and things like that. So you have some like project global numeric store and they act as like permalinks for all of these things. When you write your tests, you might say: verifies either this function name or verifies some ID that goes with that. Or when you write the code, you might say: this goes with this text section and things like that. And you don't usually do like section 2.2f. You have some ID that is consistent even if you reorder the docs or you reorder the code or whatever. And those tools like DOORS or whatever will, they have essentially like a numeric allocator where you can assign these numbers and they only ever go up. And it's a large enough space that even over a very long project, it's like eight or nine digits worth of numbers. So you're not gonna run out of them.
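None of the syntax below comes from a real tool; it's a sketch of the linking scheme James describes, with an allocator-issued ID acting as the permalink between a doc comment, the code, and the test.

```rust
// Hypothetical scheme: REQ-00001204 is an ID handed out once by a
// project-global allocator, so it stays stable even if the docs, the
// code, or the tests get reordered or moved around.

/// Retries a failed operation up to three times before surfacing the error.
// implements: REQ-00001204
pub fn with_retry<T, E>(mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut last = op();
    for _ in 0..2 {
        if last.is_ok() {
            break;
        }
        last = op();
    }
    last
}

#[cfg(test)]
mod tests {
    // verifies: REQ-00001204
    #[test]
    fn retries_up_to_three_times_then_errors() {
        let mut calls = 0;
        let result: Result<(), ()> = super::with_retry(|| {
            calls += 1;
            Err(())
        });
        assert!(result.is_err());
        assert_eq!(calls, 3);
    }
}
```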
Amos Wenger: To be fair, if you have tests, you can run them and you can instrument the code to measure code coverage, essentially. But measuring lines is garbage. Measuring branches is not much better, because yeah, you can make sure that across all tests taken together, each branch is taken at least once, but it doesn't test all possible combinations of all possible branches. So there may be behavior that's described in the docs that is not actually tested at all. And there's no automated way to really find that out.
James Munns: Yeah, so there's two things in safety critical, depending on how safety critical you're talking. You'll get into something that's called MC/DC coverage, or modified condition/decision coverage, where, kind of like you're talking about: if you have "if A && (B || C)", you don't just need to exercise getting in and out of that if, you have to exercise all the preconditions. And the MC/DC versus DC distinction means that you can at least cheat -- or not cheat, but you don't have to prove the ones that short-circuit: if you have A && (B || C) and A is false, you can take it for granted that you don't have to come up with different test cases for B and C. So that's MC/DC versus just raw decision coverage.
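As a sketch of what that means in practice (the standard construction, with an invented decision): for three conditions, MC/DC needs only four test cases, where each pair of cases differs in exactly one condition that flips the outcome.

```rust
// The decision under test: A && (B || C).
fn decision(a: bool, b: bool, c: bool) -> bool {
    a && (b || c)
}

// Demonstrates MC/DC for `decision`: each condition is shown to
// independently flip the outcome, so 3 conditions need 4 cases
// rather than all 2^3 = 8 combinations.
#[test]
fn mcdc_minimal_set() {
    assert!(decision(true, true, false)); // baseline: true
    assert!(!decision(false, true, false)); // flip only A -> false
    assert!(!decision(true, false, false)); // flip only B (vs baseline) -> false
    assert!(decision(true, false, true)); // flip only C (vs previous) -> true
    // When A is false, short-circuiting masks B and C, which is why
    // MC/DC doesn't demand separate cases for them there.
}
```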
But the other thing is, at least in avionics, you get into something called requirements based testing, in that you can't just write tests to hit code coverage. Every test case has to be mapped to some requirement. And so you end up in this case where you can't just be like, "Ah, I need to cover this." It kind of forces you to make sure that your docs are accurate: you need to explain why there's an edge case here and what you do in response to it, or why you do something differently when there are more than 64 items -- where you might just do it linearly below that, but do binary search or something above that.
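Continuing the 64-item example as a sketch: the requirement ID, the threshold, and the function are all invented here, but this is the shape of a requirements-based test. It exists because the (hypothetical) requirement says so, not to push a coverage number up.

```rust
/// Looks up `needle` in a sorted slice.
///
/// REQ-2041 (hypothetical): a linear scan is used for 64 items or fewer,
/// because it beats binary search on small slices; binary search is used
/// above that threshold.
pub fn find(haystack: &[u32], needle: u32) -> Option<usize> {
    if haystack.len() <= 64 {
        haystack.iter().position(|&x| x == needle)
    } else {
        haystack.binary_search(&needle).ok()
    }
}

#[cfg(test)]
mod tests {
    use super::find;

    // verifies: REQ-2041 -- the test is mapped to a requirement, and it
    // deliberately exercises both sides of the documented threshold.
    #[test]
    fn both_strategies_agree_across_the_threshold() {
        for len in [1u32, 64, 65, 200] {
            let data: Vec<u32> = (0..len).collect();
            assert_eq!(find(&data, len - 1), Some((len - 1) as usize));
            assert_eq!(find(&data, len + 1), None);
        }
    }
}
```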
Amos Wenger: So what you're getting at is that you think we should not write markdown for documentation. We should not write MDBook. We should write a bunch of tests and then in the doc tests for those tests, that should be the entire documentation.
James Munns: No, actually the opposite.
Amos Wenger: I know that's not what you're actually saying, but if we did that.
James Munns: Yeah, I think that's one way of doing it, but what I really wish is that we had a way, regardless of syntax -- if I write it in Typst, or if I write it in markdown or reStructuredText or whatever -- what I want is a cargo tracing or cargo trace or some tool that isn't necessarily tied to any language ecosystem, where you'd be able to put these annotations in, with some way of understanding the links between them. So actually, when I advise teams, I tell them to keep doing whatever's effective for them. If they use a wiki for docs, keep using the wiki. It has change management, it has things like that. If you use markdown files, great. Use whatever is useful to you, but I wanna be able to add that linking metadata in a way that you can get that awareness.
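A sketch of what such annotations might look like: the trace-id marker, the PCRD IDs, and the cargo trace tool itself are all invented here. The point is that the markers live in whatever prose or code you already write, and a tool just collects the links.

```rust
//! # Wire format (prose docs, in whatever markup you already use)
//!
//! <!-- trace-id: PCRD-0007 -->
//! Integers are encoded as LEB128-style varints: least significant seven
//! bits first, with the high bit of each byte marking continuation.

// implements: PCRD-0007
pub fn encode_varint(mut value: u64, out: &mut Vec<u8>) {
    loop {
        let mut byte = (value & 0x7f) as u8;
        value >>= 7;
        if value != 0 {
            byte |= 0x80; // more bytes follow
        }
        out.push(byte);
        if value == 0 {
            return;
        }
    }
}

#[cfg(test)]
mod tests {
    // verifies: PCRD-0007 -- a hypothetical `cargo trace` would match
    // this marker against the doc paragraph and the implementation above.
    #[test]
    fn varint_matches_the_spec_paragraph() {
        let mut out = Vec::new();
        super::encode_varint(300, &mut out);
        assert_eq!(out, [0xac, 0x02]); // 300 = 0b10_0101100
    }
}
```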
It shows all the connections and gaps
James Munns: Because like I said, safety critical does this because you have to have what's called a traceability matrix, which shows that you've got 100% traceability coverage, more or less. But it's just so useful when you go: I'm changing this chunk of code -- why is this code like this? I wish language servers existed for this, so you could right click on a function and jump to the docs, or jump to the tests. Or even within a function: why does this change behavior above 64 elements? And you'd be able to jump to that and say: ah, it's like this for this reason. In the same way that we already have that ability to jump back and forth, once you have that metadata, the ability to build tools that help you with it is a stunningly useful piece of kit. Because you always know: if I change something, what are the ripple effects? And you don't change something and only realize six months later, "Why are these docs out of date?"
Amos Wenger: So you mentioned wikis earlier, but usually those have a separate revision control system than-- I would think that with the tooling that you're describing, you would need to store the docs alongside the source code, so that you can travel back in time six months and see what things were like then. Otherwise, you only get the picture of what things are now, and you need to refer to timestamps, which gets complicated with DVCS and branches and whatnot.
James Munns: Yeah, I think paid tools do that. They have integrations with Jira or the wiki, or integrations with other tools and stuff like that. I guess what I'm saying is, in open source, we actually sort of have it easier, because they do all travel together. So I do think that if you were building something, you'd wanna lean on that requirement: okay, we're just gonna say that they all travel in the same repo, and so when branches happen, it's only coherent within a single commit, or maybe through some history of that as well.
But yeah, I think you definitely make your life way easier when all three of those travel together. And yeah, the other thing you can just do is in the same way that like on Docs.rs, you can say like, what percentage of my code is documented? So you can say like: on all your public modules or public functions and things like that, now on Docs.rs, you can poke the dropdown and it'll say like "70% of functions have documentation" or whatever.
Amos Wenger: Wait, is that new?
James Munns: In the last year or so, I think, but yeah, it shows you like documentation coverage.
Amos Wenger: Oh yeah, wow, I never noticed that.
James Munns: It's relatively new.
Amos Wenger: Yeah, it is in the dropdown.
James Munns: Yeah.
Amos Wenger: Cool.
James Munns: But like, what if you could have the same thing? What if it was just easy to say like: how much of my code has docs? Or how much of my docs match my code? Or what is my test coverage? Not necessarily like what is my code coverage, but like how many places did I say: hey, I'm testing this function or I'm testing this module or I'm testing the whole system or something like that. And you can see that linkage. And then kind of to complete the circle to say like, how many tests are backing up what I claim in my docs? How confident can someone be that like, if I write a big paragraph of how this works, that didn't change six months ago when I totally changed the algorithm or something like that.
This is the Traceability Matrix
James Munns: And this, like I mentioned, this is what gets reported and is required in safety critical projects. This is the traceability matrix. And in a really cool way, the Ferrocene Project publishes their traceability matrix. So you can see, hey, here's a version of the Rust language specification. And then what is the traceability matrix of that? How does that link to code? And how does that link to validation of that in the form of tests and things like that? And I think you have to pay for the real one that you can use when you submit to whatever your governing authority is. But just as an open source artifact, you can go and look at it and you can tell like, "Ah, they said that this unit test goes to verify this portion of the language specification," which is like a really validating and useful thing to have because you know, otherwise it's hard to tell when docs get stale because they aren't tested nearly as thoroughly as code is and things like that.
Amos Wenger: Yes, I'm looking at one of the UI tests from the Ferrocene test suite. And it has-- they basically just use a bunch of normal comments. They're not even special directives. They're just ferrocene-annotations, colon, and then an identifier starting with fls_, and then some random bunch of letters. And those are all listed in, yeah, in the traceability matrix report, which I can see without paying. I don't know which part is paid, but yeah.
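Roughly, the shape of it (with an invented identifier): a plain comment in a test file links the test to a stable fls_ anchor in the Ferrocene Language Specification, and the traceability matrix is generated from those links.

```rust
// A rustc UI test, linked back to the spec paragraph it exercises.
// The fls_ identifier (invented here) is the stable anchor that the
// traceability matrix is generated from.
// ferrocene-annotations: fls_abc123xyz456

fn main() {
    let _x: u8 = 255; // the language rule under test would live here
}
```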
James Munns: I think it's one of those licensing things. I think it's free to view, but you can't use it as part of your certification argument, for those kinds of things. This is all stuff that I started when I was at Ferrous, but the actual execution was done entirely after I left. So I remember what the plans were, and I always have sort of a blurry sense of what we were originally planning versus what the folks like Jorge and Pietro, who actually executed on it after I left, did. But I give them a ton of credit, because they did it in the way that we wanted: not intrusive to the Rust language, and releasing with the same cadence, using tooling and things like that.
Because a lot of these safety critical areas, they don't release quarterly, or even yearly, a lot of the time. And the fact that they're generally keeping up with the open source project, while not being a bummer to them, says that these kinds of tools don't have to be very intrusive and painful on top of an expected quality open source workflow. If you're just shoving commits and not writing tests or whatever, this is not the first thing that you should be doing. But if you're like: I have a pretty stable project and I wanna make sure that I don't accidentally break things, or people build a lot of stuff on top of this. So right now I'm sitting down and writing a new revision of a specification for Postcard the serialization format, some internal pieces that are used in different parts, and then Postcard-RPC the protocol.
And this is really where this itch comes from: I wish I had a way to say, okay, I'm writing a human text version of how this encoding format and this protocol works. And instead of just, "Go read the RFC and maybe there's some code comments that tell you why this test exists," I wish I had the ability to say, "Okay, there are 2000 lines in this protocol definition," and from every single one of those lines, you can jump to a chunk of code or a test. Or the same way, if you're digging through the code -- why is this hashing like this? -- you can jump to the spec and it will explain exactly why it's like that. Because that's one of those things that is totally valuable, especially when you're refining things and making sure they're all still correct, that you didn't leave any loose edges. So that's where that itch comes from now: I'm writing a lot of documentation and I don't have a better way than just putting line numbers on my documentation and in my code, saying, "This is version 1.1, line 247, this paragraph," you know what I mean? And that stuff goes stale immediately and is painful. So it requires a lot of care, and tools can help with that.
Amos Wenger: How is it that we don't already have some open source tooling for that? We have open source tooling for everything -- even if it's crap, we usually have some version of it. Like, yeah, someone came from the industry and they were like, "Oh, I wish I could do the same for my..." Why, like... are you really the first person to ask for it in 2025?
It's like eating your vegetables
James Munns: I don't know! I wrote up... I'd have to go find it. I wrote up a pitch a couple of years back: if I were to do it, this is how I would do it in the Rust ecosystem. And a couple of people were like: yeah. I think it's a mix of things. One, it's seen as like a vegetable of the process. It's something you do because it's good for you, not because you enjoy it. It's like eating your vegetables, is how I should say it.
Amos Wenger: Yeah, that's a great expression.
James Munns: And so a lot of people, especially people... I don't know, this is true with any process where people go through the motions without understanding the why. I think there's a lot of people who hate doing traceability because it's just a box they check and they don't get any value from it. But if you do it well, you do it because it helps you. And actually, I miss having the ability to do it.
Amos Wenger: And I would think if any language community is receptive to this, it would be Rust, which is already subjecting itself to lifetime annotations and whatnot.
James Munns: Yeah, I think it's one of those things where the population of people who have seen it done, and seen it done well, and are interested in taking the time to build the annotations and tooling in a way that is pleasing -- or not even necessarily pleasing, but not disruptive to the development process, where it doesn't feel like eating your vegetables -- that's a very narrow set. Which is why I think a lot of that stuff becomes paid dev tools: stuff where you either have to have it, or only the people that have seen enough shit know that they need it. A lot of that becomes paid dev tools because the upkeep of it takes effort.
And especially if you weren't in Rust -- a community that has docs and tests and code in the same repo, and has a care for validation and correctness... like you're saying, I think Rust is maybe the first large scale community of people who are likely to adopt that. And I think there's just not enough overlap with the safety critical domain. And even in that population, there's a lot of people who don't get why you would want to do it, or haven't had a good experience doing it because they were just forced to do it. So I think the Venn diagram has a very narrow sliver in it.
Amos Wenger: Yeah, now that I think about it, even though the Rust people might be receptive to it, the others need it so much more, I think, because they don't have the safety provided by the language itself. So they need the whole specification, code, tests triangle much more.
James Munns: Because there's only so many invariants you can put into code. That is a thing that Rust tries to do: you try to encode your invariants so the compiler can help you enforce them. But there's some stuff that's just not reasonable to encode, or not efficient to encode, or--
Amos Wenger: I mean, you have unsafe interfaces as well. You have to deal with the real physicality of the world at some point.
James Munns: Or some like generics and type states sometimes just make the code so egregious to actually use. So like the actual enforcing makes it so unpleasant to use. It becomes awful, you know what I mean? So there's a balance for sure.
Amos Wenger: I keep thinking of my question, "Why isn't this already done and great and available for everyone?" It's because it's the docs of the docs. The docs tooling is already overlooked compared to everything else. If you look at the number of contributors to Rust core projects, I think it goes rustc first, then Cargo, then rustdoc. So if we were to make something like that, there would be like 0.5 of a person working on it because--
James Munns: It's the vegetables of the vegetables really. You know what I mean? Like it's not just making sure that you're testing and documenting, it's making sure that you're testing and documenting correctly. So I mean, yeah, like I said, it's one of those things that I think if there was a good tool for it where people could just add it as a step and it's something that you could do incrementally... You do have to hit some threshold.
Like you said, the value of it comes from how good your annotations are. So you do have to put some care into it. And I think there's some design to be had: how do you litter a document with all of these annotations without making the document worse? That's what tools like DOORS do -- it's built into the editor, where you have one column with your requirement numbers next to your actual requirements. So it's intuitive. But how do you do that in a given markdown file, or a given Typst file, or reStructuredText or whatever? And I think that's what Sphinx is doing, but that's a population that is motivated to deal with those annotations, because it's an entry criterion for existing.
Amos Wenger: I think it could happen though. Cause the whole memory safety thing was a fringe thing a decade ago: a bunch of people did it cause they saw the value before everyone else. And now the government says, "You have to do it that way." So there's been a hell of an adoption curve for that. So I could totally see something else like that taking off. We need it. Cause the documentation situation for most projects is dire. And I'm the first guilty party here. Don't look at my stuff.
Once there is a good tool - call to action!
James Munns: I think it just becomes once there's a good tool and it's desirable. Well, first people need to know to ask for it. So maybe that's what this is, is me telling people that like:
Amos Wenger: Thank you James.
James Munns: Hey, there's established art on how to make sure your docs and your code and your tests are all coherent with each other and you can use tools to help you make sure it's right instead of just 'get good.' So like that's sort of step one of knowing that there is like established art and we could go and look at those areas for ideas, but also because we're not tied to safety critical tooling, you can really focus on the UX or like, hey, we only support things where your docs and your code and your tests are in GitHub repos and you're willing to install a cargo extension or some binary tool that will check it for you.
And you probably already have CI on your repo so you can run trace check and make sure that it's still good and you know, make sure you at least never move backwards and stuff like that. So this is my call to action. If that sounds appealing to anyone, let me know. Cause I have some ideas of how I'd want to do it. I might end up building it for some of my Postcard stuff.
Amos Wenger: Please do.
James Munns: Just because I want it to be right.
Amos Wenger: I have one note. I know it's called traceability in... for real, but don't call it that. Because the word is already so overloaded.
James Munns: It's true. All right. I need some pitches for names.
Amos Wenger: Because people think tracing is logging when it's not. So people are going to think traceability is tracing, even though it's not. So you need to make up a new word. I don't know.
James Munns: Yeah. All right. I guess then email me if you have a-- Go to sdr-podcast.com/ ...contact? Is contact-- We always say the episodes page. I don't know sdr-podcast.com. There's a contact button on there. Send us an email.
Amos Wenger: We have an about page...?!
James Munns: Maybe it's on the about page.
Amos Wenger: Yeah... contact goes to the clip from the guy with the red hair. My brain is completely drawing a blank.
James Munns: Carrot Top?
Amos Wenger: Rick roll. Rick roll.
James Munns: Oh, okay. Rick Astley.
Amos Wenger: Rick Astley. Please cut all of the-- please cut the entire-- please cut me entirely out of this episode. I'm... good Lord.
James Munns: Okay. So yeah, let me know. Cause I want it to be a thing. If I write it, I at least love for some other people to help or some other people to use it. Cause otherwise people are just going to look at my repos weird, but hit me up! I'm excited for it. And I wish this was a thing.
Amos Wenger: I would try it.
James Munns: Hell yeah.
Amos Wenger: I think it's interesting.
Episode Sponsor
This episode is sponsored by Depot: the build acceleration platform that's on a mission to make all builds near instant. If you're tired of watching your builds in GitHub Actions crawl like the modern-day equivalent of paint drying, give Depot's GitHub Actions runners a try. They’re up to 10x faster, with unlimited concurrency, faster caching, support for Linux, macOS, and Windows, and they plug right into other Depot optimizations like accelerated container image builds and remote caching for Bazel, Turborepo, Gradle, and more.
Depot was built by developers who were tired of wasting time waiting on builds instead of shipping. It's made for teams that want to move faster and stay focused on what actually matters.
That’s why companies like PostHog use Depot to cut build times from over 3 hours to just 3 minutes, saving tens of thousands of build hours every week.
Start your free 7-day trial at depot.dev and let them know we sent you.