Matt (00:00): Hey, Ben.

Ben (00:00): Hi, Matt.

Matt (00:01): We should do a podcast, right?

Ben (00:02): Oh, that's a fine idea.

Matt (00:05): You and I have chatted a lot about loads of interesting things, right? But wouldn't it be interesting if we shared these views with the rest of the world?

Ben (00:13): We could do a lot worse than that idea, for sure.

Matt (00:16): Yeah. All right.

Ben (00:17): Let's give it a go. I dig it. What do you want to talk about?

Matt (00:19): Good question. Well, I was thinking the other day: you and I started out in essentially the same spot.

Ben (00:26): We kinda did, didn't we?

Matt (00:26): And then something happened, and we arced around — we went in two completely different career trajectories, and now we've landed up working together. Do you want to talk a little bit about that? I mean, I started out in the games industry. At age 16, 17, I was making games at home, hacking around. I went to university — that was a bit of a mistake, really; most of the time I spent in the computer lab basically doing video games. And that was fun, a lot of fun. That was a thing you did, right? And then I got a job doing that.

Ben (00:58): So that's when our paths diverged.

Matt (01:01): And I spent about a decade doing that. But you did something else, right? So why don't you talk about it?

Ben (01:04): Yeah. Well, when I was a kid, I very much wanted to make video games. I'd been programming to some degree in high school and really, really enjoyed it. I was a little bit into graphics, but more into AI — I very much wanted to build the bots that live in video games. And I spent most of my college career — I got a computer science degree, but I spent a lot of time playing around not only with that, but also with some graphics libraries, OpenGL and Direct3D at the time, mostly because I knew I had to have some understanding of those to get a reasonable job in the industry. But really, I just wanted to make the AIs.

Ben (01:50): When I graduated, I got an offer from a company in Houston, and at the same time I had proposed to my now-wife. We had agreed that whatever town or city we both got job offers in, that's where we were moving. This company was in Houston, and she had gotten an offer from an oil company in Houston, being a chemical engineer. So we were moving to Houston; we had planned our wedding and gotten an apartment and all this stuff. And this company calls me up and says, "Hey, Ben, we're moving to Austin. Can you move to Austin?" No, no, I cannot move to Austin. And they're like: well, we're moving to Austin, so if you want a job, then come to Austin. So, fast-forward 20 years, and I have never worked in the games industry.

Matt (02:40): Oh no.

Ben (02:41): I've worked in a lot of other industries, but not that one.

Matt (02:46): In the mid-to-late nineties, when I was in the games industry — I mean, you didn't really miss much, I don't think. It was a fabulous experience: I learned loads of stuff, crawling further and further down the tech stack towards the hardware, trying to get more explosions on the screen, trying to get more triangles drawn, trying to get all the cool special effects done. That was what I was doing. And eking out the performance from those 200 MHz machines, as they were back then, was super, super fun. But working 18-hour days, six or seven days a week, for many months was not fun. I'm very glad to be out of it in that respect. I believe it's gotten a bit better — from talking to friends of mine who are still doing it — but it's definitely an industry where it's cool to be in.
Matt (03:32): So it's sort of like Hollywood: there's a certain amount of cachet from doing it, and unfortunately the industry knows that and will happily milk the people coming in for all their work to get it. And in a lot of ways — certainly in the late nineties, when I was there — there were a lot of people like me who weren't trained computer scientists or software engineers or anything like that; we were just bedroom hackers. And if you have an entire industry founded on people making it up as they go along, there's a different tenor to the way software is engineered. In particular, testing was just not something I learned to do until, like, 15 or 20 years later, when I worked at Google for a bit and they were like: yeah, we need to write tests for these things.

Matt (04:16): And I'm like: well, we kind of had a couple of asserts in the code somewhere; we would run it, and if it passed through the asserts, it was good enough to ship. Testing to us was handing a build of the code to somebody with a VHS recorder and a PlayStation controller and saying: can you find any bugs in this? That was testing for us. But you've built — I wouldn't say a career — one of the facets of your career is talking to people about testing, and I've certainly learned a ton from you about how testing should be done. In fact, one of the things that got me thinking about this podcast and getting us online was something you said the other day: if you're doing testing right, you should be able to go faster. That was a huge eye-opener for me — realizing that it's actually true, and not just lip service. What do you mean when you say that?

Ben (05:17): Yeah, there are actually a lot of things to unpack in that statement. What do I mean by testing, and what do I mean by faster? Both of those are important things to figure out.

Matt (05:32): All of the words are up for grabs.

Ben (05:36): Pretty much — it's a 'what the meaning of is is' kind of situation. But just to explain a little: the kind of testing I'm talking about is what some people call unit tests, what some people call micro tests; I have sometimes just called them fast tests. They are tests that do not hit the file system, do not talk to the network, do not talk to a database — do not do much of anything except exercise a very small piece of code. And these tests run extremely fast: hundreds of tests per second, depending on the language you're working in.

Matt (06:08): Just to make that concrete: you've done these in languages such as Python, JavaScript... what else?

Ben (06:14): Python, Java, JavaScript, Ruby, Clojure, a little bit of C and C++ now and again — although I don't purport to be an expert in those. When I was working in Rust a little bit, I tried to write tests in Rust; that was a very interesting experience — we should talk about that sometime. But these kinds of tests serve a whole lot of purposes at once, and that's where they get their magic. They help you, in the obvious way, test correctness: they give you confidence that your code is correct. They help you design your code as you go. And they serve as a sort of executable documentation for the people who come after you — and a lot of the time, the people who come after you are just you, six months later.
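[Editor's note: a minimal sketch, in Python with pytest, of the kind of fast test Ben describes — no file system, no network, no database, just a small piece of logic exercised directly. The function under test is illustrative, not from any real project.]

    # test_discount.py -- touches nothing but memory, so hundreds of tests
    # like these can run per second under pytest. apply_discount is hypothetical.

    def apply_discount(price_cents: int, percent: int) -> int:
        """Price after an integer-percent discount, rounded down to a cent."""
        return price_cents * (100 - percent) // 100

    def test_ten_percent_off():
        assert apply_discount(1000, 10) == 900

    def test_zero_discount_leaves_price_alone():
        assert apply_discount(1234, 0) == 1234

    def test_full_discount_is_free():
        assert apply_discount(999, 100) == 0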
Matt (06:52): Or even the next day, given how my brain works these days.

Ben (06:56): Yeah, exactly.

Matt (06:58): So these are literally hundreds of tests that you can run in very short order. Do I press a button and it does this, or what other ways are there?

Ben (07:07): The way I generally work is I want to make sure the tests run on every change to the code that I make. So every time I hit save, it's a little bit like a micro-commit, right? I'm saving these changes, and I expect some result from the tests. It might be for the tests to fail in a particular way; it might be for them to pass. It's usually not more than a few seconds, maybe a minute, between saves, but it's very rapid. And every time I hit save, I want my tests to run. Maybe there's a compilation step in there first, depending on the language again. But I hit save, the tests run, I have results.

Matt (07:41): And you've literally got that up in another window, adjacent to your editor, saying: hey, I noticed you changed something, I ran the tests, and here's a bunch of green — or here's some red, and you were expecting the red because you deliberately broke the failing test, that kind of thing. And you call it a micro-commit — I think that's a really interesting way of phrasing it. It's: I have done some typing, I have done some rumination, I've moved some things around, I've made some decisions about what my code should look like. The very act of saving is my leaning back in my chair for half a second, just to think about what I'm doing next. Where am I? Is that a fair assessment?

Ben (08:17): Yeah. And you need a fair amount of screen real estate for this, because if the tests are green — they're green, that's fine, whatever. If they fail, you really want to make sure that they fail in the way you expected them to. And even more than that, before you hit that save button, you really want to have in your mind what the failure is going to be, based on your current understanding of the state of the code and the problem and everything else. Because if it doesn't meet those expectations, you might be doing something wrong. The test failing doesn't mean you're doing something wrong; the test failing means you're gradually adding behavior to the code and changing it in expected ways.
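[Editor's note: a sketch of the predict-the-failure discipline Ben is describing, in Python. The names are hypothetical; each comment records the prediction you would make before hitting save.]

    # parse_header is illustrative, not a real API.
    import re
    from types import SimpleNamespace

    # Step 1: the test comes first. Predicted result of the first run:
    # NameError, because parse_header doesn't exist yet.
    def test_parse_header_extracts_version():
        assert parse_header("VER=3;LEN=10").version == 3

    # Step 2: a deliberately empty stub. Predicted result: the failure changes
    # to AttributeError ('NoneType' object has no attribute 'version') --
    # proof that the test really exercises this code.
    def parse_header(raw):
        return None

    # Step 3: the real implementation. Predicted result: green.
    def parse_header(raw):  # redefinition stands in for the next save
        match = re.search(r"VER=(\d+)", raw)
        return SimpleNamespace(version=int(match.group(1)))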
Matt (08:54): You may be anticipating a failure that doesn't happen, and then you're like: I'm sure I had a test for this, but I've just commented out a key piece of the code, and clearly I'm not testing the opposite case. Or —

Ben (09:05): Or I'm not testing the thing that I thought. Probably the classic mistake that people both make and anticipate when they're learning these kinds of techniques is: well, if you don't trust my code to be right, why would you trust my tests to be right? And the problem is that a lot of the time this stuff is taught in very static ways — a blog post, or a pull request that somebody sends you, where you see the code and you see the tests. Seeing it in that static form, you do have a bit of a question about: how do I know this is right? The real way you get confidence that the code is properly tested is by assembling both the code and the tests in the right order, such that you see the failure that tells you: oh, this test is absolutely testing that I'm writing to this file, because I can see this error saying the directory doesn't exist — and I haven't created the directory, because I'm going to mock that out later, or whatever. Seeing that failure gives me very strong confidence that this code is doing what I think it's doing. And you won't ever see that if you just look at the resulting code and tests together. You have to see the process of how it was created to really...

Matt (10:24): I think it's a living, breathing process, which perhaps those who haven't seen it done this way — myself included, really — miss. I go to GitHub, I look at the code, I page up and down a bit, and then maybe I'll look for a test and page up and down in that as well. But that's not really capturing what you're talking about: this living, breathing, interactive thing. So when you say we go faster when we have these tests — what does faster mean?

Ben (10:55): Faster often — not always, but often — means literally the fastest possible route to working code in production. I'm taking it as an assumption — and I think this is mostly true, but maybe my perspective is off, so I'm going to put the premise out there and make sure I'm thinking about this correctly — my premise is that these days, people don't think of code as done until it's actually running somewhere. It's in the hands of a user, or it's at least in a testing environment, if not a production environment. Gone are the days when we'd write a bunch of code and throw it over the wall — kind of like you were saying with the PlayStation controllers.

Matt (11:39): Right.

Ben (11:41): Ah, it compiled — that must be good enough. I'm going to throw it over the wall to the testers, and the testers are going to maybe find some bugs, but they won't, because I'm perfect. Those days are gone, I think. So the question is how we get to the point where people can use the code we write to do their jobs, make their lives better, and accomplish whatever it is they're trying to accomplish. You want to get to that state as quickly as possible. So when I say faster, I mean faster to that state.
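[Editor's note: the run-on-save loop Ben described earlier is normally provided by tooling — pytest-watch, entr, or an IDE test runner — but a minimal polling sketch in Python, standard library only, shows how little machinery is involved. Paths are illustrative.]

    # watch.py -- rerun the suite whenever a Python file changes.
    import subprocess
    import time
    from pathlib import Path

    def snapshot(root: Path) -> dict:
        """Map each source file to its last-modified time."""
        return {p: p.stat().st_mtime for p in root.rglob("*.py")}

    def main(root: Path = Path(".")) -> None:
        last = {}
        while True:
            current = snapshot(root)
            if current != last:                   # a file was saved, added, or removed
                last = current
                subprocess.run(["pytest", "-q"])  # results within seconds of the save
            time.sleep(0.5)

    if __name__ == "__main__":
        main()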
Matt (12:06): Faster from the point you make the change to the point where it's usefully impacting somebody — be it the user, or the downstream person who's going to take your code and move on to the next step — or it's deployed and actually running in some environment somewhere, be it the actual live environment or whatever. Yeah, that makes sense to me. When I interpreted that original statement you made about testing making you go faster, the thing I internalized was that writing tests means going faster because I have the confidence to make changes I might otherwise avoid, or spend a long time picking around the edges of before actually committing — because I know that if I make a change and break something, the tests will catch it. That state is a really interesting state to be in.

Matt (12:57): And it's hard if you're trying to preach to somebody who hasn't already got a decent number of tests in their system — or who has tests, but they take two and a half hours to run. The safety belt you're describing doesn't really ring true for them. You're like: yeah, I go faster, because, hey, I'll make a change, I hit the button, I get a green, and I know I'm good to go. I check in; I don't have to do anything else. That's my process. And for me, that has definitely been liberating. In particular — some of the things we were talking about over the weekend — one of the projects I was working on, which has got nothing to do with anything, is a text editor for a dead language, right?

Matt (13:36): And the fact that I could write tests for the syntax highlighter meant I could take a bug report a user had given me, write a single test that reproduces it, make the change that looked obvious to me, and know that I hadn't broken anything else that was previously there. Whereas before, I was playing whack-a-mole: well, okay, I load it up and I click around a bit. That aspect is how I think about going faster — I could knock out a dozen bug reports in a couple of hours, knowing that I hadn't broken anything, knowing that I wasn't going to be called back and told: oh, you know that thing that stopped working? The other thing has stopped working too. And that liberates me. Obviously that was in JavaScript, which is traditionally more associated with these things, but I can also have this in my day job, in some of the C++ code I work on. And I feel that a lot of C++ engineers are missing a trick by not having a similar setup.
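[Editor's note: a hypothetical sketch of the bug-report-to-test loop Matt describes — reproduce the report as a new test next to the existing ones, make the fix, and the old tests guard against whack-a-mole regressions. This toy highlighter is illustrative, not Matt's actual editor.]

    KEYWORDS = {"if", "then", "goto"}

    def highlight(line: str) -> list[tuple[str, str]]:
        """Tag each whitespace-separated token as 'keyword' or 'text'."""
        return [("keyword" if tok in KEYWORDS else "text", tok)
                for tok in line.split()]

    def test_keyword_in_middle_of_line():
        # Pre-existing behavior: must stay green after the fix.
        assert ("keyword", "then") in highlight("x then y")

    def test_keyword_at_start_of_line():
        # Written straight from the user's bug report; fails until the fix lands.
        assert highlight("if x")[0] == ("keyword", "if")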
Ben (14:27): Well, my experiences working in C++ are limited, but I can tell you that one thing that definitely can happen is this broken-windows effect, with the compilation slowing you down. If you get into a situation where your build takes a minute — 60 seconds — you might say: well, that's a pretty fast build. But the problem is that there's a huge difference in the way you interact with something. A friend of mine once told me about this thing called the rule of eights: think about how you interact with a computer, or any sort of digital system, at 800 milliseconds versus eight seconds versus 80 seconds versus 800 seconds. 800 milliseconds, for most things, is almost instantaneous.

Matt (15:13): I like how you qualified that.

Ben (15:14): Most things!

Matt (15:15): For human-interaction purposes, sort of, all right. The gap between me pressing the fire button and my guy on screen shooting — maybe that's different. But yeah, for most things, 800 milliseconds is as near as dammit instantaneous.

Ben (15:30): Yeah, it's certainly interactive enough that you're not going to lose your train of thought. At 800 milliseconds you just keep on streaming; the flow keeps flowing. At eight seconds there's a beat, a pause, and you're waiting in anticipation: what am I going to see? You might, in a moment, get distracted. At 80 seconds you're definitely getting distracted: you're going to go check your email, you're alt-tabbing over to something — the train of thought is gone. And 800 seconds is just right out. It's like —

Matt (16:00): We're in batch-job territory.

Ben (16:03): So, thinking about the interaction in those terms: if your build gets up to 60 seconds, you've lost the flow. You're no longer in that territory. So not only do you have the burden of maintaining a fast test suite — you also, depending on your language, potentially have the burden of maintaining a fast build. But I argue those are investments that are absolutely worth making, on a number of different fronts, not the least of which is that they create a development workflow that allows you to just keep thinking. You never stop; you never get distracted; you never lose your train of thought; you just keep the flow.

Matt (16:45): Exactly — flow, I think, is the key word there. And that's probably why you and I bonded so well over the way I'd structured the C++ projects we worked on together when you came in: I've always felt that the need for long builds for most things is gone now, at least for debug builds. Essentially, a fast build lets you work on a relatively small area of your codebase and get that kind of turnaround time. I'd love it to be 800 milliseconds, but with the best will in the world it's going to be closer to eight seconds.

Ben (17:20): Eight seconds is okay. You can deal with eight seconds.

Matt (17:23): Yeah, right — with a certain amount of mental pipelining: understanding that, okay, I'm saving and running the tests now, and I'll be paging up and down looking at the code while I make that last-stage commitment, those kinds of things. But that's very, very different from the usual 30, 40, 50 seconds to do something, which is a big deal. And I agree completely about that flow state — it's such an enabler. It changes the way you write and develop your software, in the way that TDD sort of does, because it makes you be your own client to start with. And you get the very human pleasure of writing a dozen lines of code at most and saying, okay: now I have a test that's using my API in a way that doesn't exist yet.
Matt (18:06): So I'm going to build it, and it's going to say: no, there's an error. And you go, okay, I'm going to go and stub that out. Okay, run it now. Oh — now the error is that the test fails. And we're talking four seconds from the point at which you click the button to getting that. Then you start working on: well, let's make it pass. And oftentimes that can be a few minutes' worth of work, and then you get a green, and you're like: hey, I've got the reward. The endorphins kick in. I did something important; I moved towards the goal, I got my little reward, and I can move on with my life. I can even commit that — it's a useful change. Does that make sense? Or is this just you and I bonding over the fact that we both think that's the good way to make software? Because I know I've worked with people for whom leaning back in your chair and thinking — like, closing your eyes and putting your fingers on your temples,

Matt (18:56): kind of — and just thinking Zen-like for 45 minutes, followed by leaning forward and typing out the perfect code, is a valid way of developing too.

Ben (19:06): Well, that's a great point, and I think it actually speaks to something you said earlier, which is that you're trying to get confidence. That's what the tests were providing you in that instance. But there are lots of different ways to get confidence. I gave a talk at one point — I don't know if you saw it — about confidence in software engineering, saying that software engineering is essentially a faith-based activity. There's this sort of magical state: as much as we like to think it's completely objective, that it's all ones and zeros, that everything is a meritocracy, and all the other things we tell ourselves about how software engineering works, the way it actually works comes down to belief, because you have to have confidence to deploy a piece of software.

Ben (19:49): I'm going to take this, push it into production, and millions of people are potentially going to use it. How do I know that's okay? It comes from an internal sense — often based on experience, but an internal sense — of whether or not you feel confident that the code will do what you expect it to do. There are lots of different ways to get that confidence. Automated tests are one way. Manual tests are another. Observability and tracing are another. Just sitting down and thinking about the problem for 30 minutes is another. Honestly, if I'm doing anything multithreaded, that's the way I do it: I sit down and I think about it — really hard, for a really long time!

Ben (20:31): First, I ask: why are we using threads? That's the first question — can we not use threads?

Matt (20:35): Is there a way we can avoid this?

Ben (20:37): Yes, exactly. So it's not that there's one particular way. When I say that if you're doing testing right you should go faster — that's true, but it's not the only way to develop software, for sure. It's not even necessarily the right thing to do at the individual level. Like I was just saying about multithreaded code: I don't rely on unit tests to test the thread safety of multithreaded code. That's just not effective for me.

Matt (21:05): Right. And there are broadly similar things — fuzzing and the like — that test a different aspect of your code, sort of throwing the chaos monkey at it.
Ben (21:15): Right, right. So to that point: testing makes you go faster in the aggregate. It's there as a mechanism for gaining confidence, and it's a mechanism that I think people can use quite often — maybe an 80% solution for "how do I gain confidence?" There are definitely people in the world who are able to do that without writing tests; they can literally just think of the code they want to write and bang it out. But I'm definitely not one of those people, and I have not found a lot of those people in the industry. So the problem then becomes that you have to start thinking about the dynamics of working within a team. What does the team need? What does the team expect? For example, the team has to be committed, at least in some way, to building this suite of tests — otherwise you're not going to have the experience you had working on your syntax highlighter. Because if the existing behavior you're afraid of breaking isn't tested, or isn't tested well enough, you don't really have that confidence after you run the tests.

Matt (22:22): Yeah — the belief system gets destroyed slightly, and the way I test is back to my old ways, my old tricks: I run it and page up and down the logs, or I click buttons in the UI as fast as I can to try and break it. And that's not a good feeling. It's not an intersubjective thing; it's not like I can check it in and say, this — atomically — is a sealed, okay change, and anyone can check it out and have the computer check it for them. And we've worked — I've worked, definitely — with folks who are very much the sit-back-and-think-a-lot type. On a project at a previous company, I had exactly that experience. There was the lead developer, who is absolutely awesome —

Matt (23:03): one of the cleverest people I know — and very thin on the ground when it comes to tests, because he didn't make the mistakes; he just wrote the code right. And that was amazing, and it worked perfectly well. But it didn't scale when I was added to the team, because I'm not that person — I'm definitely not that person — but also I didn't understand the code he had written, and I couldn't add to it without asking him a hundred times: does this work? What about that? "Oh no, that won't work; that'll break because of something you don't know about." I just had to keep asking. And that's not a great experience for someone coming onto a team. As it happened, we eventually ended up with an expansive enough test suite that we were both happy, and then we could add more people to the team, and it was a joyous moment. But that isn't always the case.
Ben (23:46): Yeah, absolutely. And — I don't know if your experience on that team mirrors this — I've certainly found that, unfortunately, writing these kinds of fast-running tests is a bit of an all-or-nothing thing, because doing it some of the time still leaves you in a situation where, after you've made a change, you don't have confidence that it works, and you have to go test it manually anyway, or find some other mechanism for testing — and there are lots of them. The tests don't give you the confidence you need to move forward. So it's not that they have no value, but they have a tenth of the value they would have if you could maintain that confidence. And the thing that makes it hard is that the confidence has to be shared among everyone on the team.

Ben (24:35): Everyone has to believe in the tests, and everyone has to maintain them to a level that is extremely hard to quantify. You can try to do things with code coverage and all this other nonsense, but that's not really going to tell you anything. It's really hard to quantify, so you have to have this shared, communal knowledge of: what does it mean to write a good test? And do I believe these tests will tell me when something's wrong? And as soon as you lose that, it's really hard to get back. So yeah, it's hard. And this is another point — we were talking about this earlier this week — I think the reason a lot of this stuff falls down, all these different techniques we're talking about:

Ben (25:16): they're hard, right? It's hard to learn how to do this. The mindset that a lot of people in the industry have — and this is especially true of technical managers — is that writing these kinds of tests is a choice that individuals can make: to write tests or not. To me, that's pretty ridiculous. It's like: all right, you, Python programmer with two years of experience — I'm going to put you on this legacy C++ codebase, and if you don't perform to the level I expect, well, then you made some bad choices and I'm going to fire you. No: they don't have the skills. It's not a choice; you don't know how to do it. You don't know how to do it in a way that isn't "I'm going to spend the next six months learning how to mock out interactions with the socket library, to make sure I've handled all the different disconnect situations in a realistic way." That's a skill — you have to learn how to do it. So, as an industry, if we're not providing the resources and the time to teach people these skills they don't necessarily have, it is totally unrealistic of us to expect them to do it. You have to take that into account, and if you don't, you won't get good tests — or what you will get is a smattering of tests. All the tests that were easy to write will be written.
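[Editor's note: a small illustration of the skill Ben mentions — exercising disconnect handling by mocking the socket layer with Python's unittest.mock instead of opening a real connection. The read_message function is hypothetical.]

    from unittest import mock

    def read_message(sock) -> bytes:
        """Read until the peer closes; return b'' on an abrupt reset."""
        chunks = []
        try:
            while True:
                data = sock.recv(4096)
                if not data:                  # orderly shutdown by the peer
                    break
                chunks.append(data)
        except ConnectionResetError:          # abrupt disconnect mid-stream
            return b""
        return b"".join(chunks)

    def test_reset_mid_stream_returns_empty():
        sock = mock.Mock()
        sock.recv.side_effect = [b"hel", ConnectionResetError()]
        assert read_message(sock) == b""

    def test_chunks_joined_until_peer_closes():
        sock = mock.Mock()
        sock.recv.side_effect = [b"hel", b"lo", b""]
        assert read_message(sock) == b"hello"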
Matt (26:28): You're taking the words out of my mouth here — that's typically what one does. Designing software to be testable, observable — the things that we covet — is a skill in itself, and I think that's another problem. I've been mulling this thought over in my head: how would we advise somebody coming in with a project that wasn't tested? Like we just described — it didn't have the kind of build I'd like to have, if it was a native project, or it had some first step that took a long while every single time (the webpacks of this world are the new build time these days). Maybe it's not even organized to be testable in the first place.

Matt (27:13): There are a lot of things that need to be lined up to be in our lovely world. And you and I have been in the luxurious position recently of starting projects pretty much afresh, so we can start out the way we like these things to be, incrementally add on, and then the tax doesn't seem so high. But I wonder about anyone listening and thinking: well, this sounds lovely; I wish I was in this world where my tests took under eight seconds to build and run and gave me feedback. And one of the things you've said is that if you don't have a hundred percent confidence — or some threshold level, say 90% confidence, measured in whatever system of confidence you can have across the team — then it's not as valuable. An 80% confidence is much less valuable than a 90% confidence, because now everyone's a bit FUD-ish: well, I made a change and I'm not really sure — am I going to push it? Am I going to run it in the test environment forever? What am I going to do? There's this thresholding, you've said. So how can one get from a project such as I've described towards our mindset — or, I should say, your way of thinking?

Ben (28:17): This is literally the hardest thing to do with testing: taking a legacy codebase — and I'm using Michael Feathers' definition of the word legacy here, which is a codebase without tests — and putting tests around it.

Matt (28:31): That's worth underlining, actually: Michael Feathers suggests that the definition of legacy is an untested codebase, which I think is interesting in itself. I just loved that.

Ben (28:43): It's a great definition. I think it's particularly interesting when you get flexible about the definition of tested, because you can have code where it's like: I don't know that this code has ever been run.

Matt (28:57): We've all found those lines of code. You look at them and go: how did this ever work? And then you realize — it never worked. Thankfully it was never called, because if it ever had been, you'd be in trouble.

Ben (29:07): Yes, exactly. So I love that definition. But it is an unfortunate truth that the hardest thing to do is to take a legacy codebase and write tests around it, because the code was almost certainly not designed to be tested in this way, and you have no automated tools for checking that you didn't break it. You have to change it, and you don't have a mechanism to make sure those changes are okay. And I've dealt with a lot of codebases like this — not every project I've ever been on has been a greenfield project. That would be a charmed life, wouldn't it? I worked as a consultant for a while, and we worked on these kinds of codebases — that's what you do as a consultant — but also in my non-consultant career (I don't know what the adjective for that is; the normal career, I guess), in different industries, all over the place. And there are some things you can do to deal with this. The first is decomposition: try to find the seams in the software where you can start to pull things apart in a safe way, and...

Matt (30:18): The key there is safe. You've got to find something small enough that you have confidence extracting — the three lines of code buried in some horrible giant class — and saying: I'm going to make this its own little thing, I'm going to use it, and I can test that I haven't broken it by extracting those three lines.
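[Editor's note: a hypothetical sketch of that seam extraction in Python — three lines of fee logic pulled out of an untestable method into a pure function with its own fast tests.]

    # Before: buried inside a giant method nobody can test --
    #     def process_order(self, order):
    #         ...                # hundreds of lines of database and UI code
    #         fee = order.total_cents * 3 // 100
    #         if order.total_cents > 1_000_000:
    #             fee = min(fee, 25_000)
    #         ...
    # After: the same three lines as a pure function the big method now calls.

    def card_fee_cents(total_cents: int) -> int:
        """3% card fee, capped at $250 for orders over $10,000."""
        fee = total_cents * 3 // 100
        if total_cents > 1_000_000:
            fee = min(fee, 25_000)
        return fee

    def test_small_orders_pay_three_percent():
        assert card_fee_cents(10_000) == 300        # $100 order -> $3 fee

    def test_large_orders_fee_is_capped():
        assert card_fee_cents(2_000_000) == 25_000  # $20,000 order -> $250 cap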
Ben (30:35): Yep, yep. And there are very coarse-grained, replay-style tests that you can sometimes pull out of legacy systems, where it's like: okay, component A talks to component B over some protocol; we're going to tap that protocol, play in the input and record the output, and then play that back. And we can all sit in a room and agree that if we get the same output, we're pretty confident we haven't broken anything.

Matt (31:02): Yeah — like state dumping before and after. I run the whole program and redirect the output to a file, then I run a sed script to pull out all the timestamps, and then I ask: did anything change apart from the timestamps? Nope? Okay, we're good. And there actually are testing frameworks I've seen people use that have this kind of characteristic, with a workflow around an almost temporary crutch test system: you write a test, the diff comes out, and you can click a button to say "this diff is okay this time", and it blesses it there and then — or else it fails. Which is only an interim thing, but it lets you leg into writing tests the way you've just described. You say: I know that extracting those three lines of code and putting them into their own class means I can go off and test them in my own wonderful world. In fact, in the idealized world, in a compiled language, you're literally only compiling that file and its test, linking them against each other, running the test, and going: hurrah! Look, it takes 300 milliseconds — 800 milliseconds, sorry, 800 is the magic number. You can do that, and it's feasible, and you can use this golden system to give you a little bit of confidence that even the extraction hasn't broken things.
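[Editor's note: a minimal sketch of the "golden system" just described, in Python — run the program, strip the volatile parts (Matt's sed-out-the-timestamps trick), and diff against a blessed recording. The program name and paths are hypothetical; "blessing" a new diff is just promoting the latest output to be the golden file.]

    import re
    import subprocess
    from pathlib import Path

    TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

    def normalize(text: str) -> str:
        """Remove the parts everyone agreed don't matter."""
        return TIMESTAMP.sub("<timestamp>", text)

    def test_output_matches_golden_master():
        result = subprocess.run(
            ["./legacy_program", "--input", "fixture.dat"],  # hypothetical command
            capture_output=True, text=True, check=True,
        )
        actual = normalize(result.stdout)
        Path("actual.txt").write_text(actual)  # kept around to make re-blessing easy
        assert actual == normalize(Path("golden.txt").read_text())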
Ben (32:14): Right, right. And like you said, those tests should be very temporary — we don't want to live with them for a long time; they're just there to get us to a point. But another question people will ask — and this is almost orthogonal — is: okay, beyond all the technical hurdles of changing a system to be tested this way, what about the organizational hurdles? My boss isn't going to let me spend six months redesigning a system so that we can add tests. And that's a very hard argument to win. I think some pretty reasonable arguments can be made, in some situations, that — as much as it pains me as a developer — sometimes the right answer is not to make these changes, and just to limp along. You will suffer the pain of doing that, but it's hard to argue against dollars sometimes. The opportunity I've seen for this to happen, and really be a positive impact for everybody, is when people start talking about a rewrite — when people start saying: this system is old and crusty, we have the critical-mass problem, every time we fix one bug we create two more bugs, and I don't know how to come back from that. When people start talking about a rewrite, you can start offering an alternative. Now, it's hard not to be tempted by the siren song of the rewrite — oh, I get the greenfield all over again, I get to redo it — but trust me when I tell you: it is not going to be nearly as fun as you think it is.

Ben (33:43): There are some situations in which it makes sense, but more often than not, when people start talking about a rewrite, the actual best thing from an economic perspective is to start applying some of these techniques. Look at the parts of the system that are changing recently. In these old legacy systems, some parts have probably stabilized, and you know what? Those parts are never going to have automated tests. Just let it go, man — you're never going to change them. But the reason you're talking about the system in this meeting, or wherever you are, is probably that you're trying to change it. So where are you trying to change it? Where are you trying to make those changes? If you can localize your efforts to the parts that are changing frequently now, and find ways to decompose them safely and get tests around them, you can wind up with a system that has a safe area, where the changes are generally happening, and then an unsafe wilderness.

Ben (34:34): And yeah, when you go into the unsafe wilderness, it's going to be rough, man — bring the shotgun; it's going to be a hard time. But you can spend most of your days, and most of your time, in that safe area, where you can make changes with confidence, work in short cycles, and do all the things we're talking about. And you can live like that for a pretty long time. You have to find the right moment to bring up the approach, though, because otherwise the people who are cutting your paychecks are going to be like: "Really? Six months of nothing? That's what we're doing?"

Matt (35:04): That's an awesome observation — there are economics at play here, and the area under the graph sometimes doesn't warrant the time out. But like you say, if you can cut off the part of the giant monster that's causing the problems and make it a nice place to live, that's the sensible way to start. So I think that probably answers the question. And I think that's a great place for us to stop. There's a lot more for us to talk about — testing is a huge, huge area. I'd love to talk more about how to make C++ programs testable; there are some really interesting challenges there that I'd love to pick your brains on at some point, because usually if you're picking C++, you're picking it for a very specific reason, and that brings with it its own challenges. There's definitely more meat on the bone. Maybe we should even try to talk to people like Michael Feathers and see if he wouldn't mind talking to us sometime. I've got some ideas — we should speak to some people. But this has been a lot of fun. I guess we'll do this again?

Ben (36:04): Yeah, absolutely.