Seth W. Barton

Strong opinions loosely held on Web Dev, Business, Education, and Books

Stop the CYA Game

There’s a sickness that can take hold in large enough organizations: software teams that depend on each other start blaming each other for organizational failings. Once the blame game starts, there are two paths the battle can take:

  1. The organization will devolve into a fury of political intrigue powered by fear and threats. This is a pathological culture. Or…
  2. The organization will become so locked down in documents, processes, and rules, that nobody can get anything done. Each department fends for itself and enforces their rules strictly. This is a bureaucratic culture.

These are two of three options defined by the Westrum IT culture model. The third option is a healthy, “generative” culture.

I’ve never experienced a pathological culture, but I have certainly experienced a bureaucratic one, and there’s one activity that screams “I work in a bureaucratic culture” more than anything else—CYA (cover your ass).

I hate CYA because it’s bad culture. But I really hate it because it optimizes for problems that don’t exist yet.

Premature optimization is the root of all evil.

- Donald Knuth

CYA Promotes Premature Optimization

Teams who believe that they must CYA will bend over backwards purchasing insurance policies against the potential bad actions of other teams.

They might purchase an insurance policy against a bad API by wrapping it in so much error handling and logging that 50% of that error handling is code that will never run. Simpler code would have worked.
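As a sketch of what that insurance policy looks like in practice (the `fetch_user` client here is hypothetical, not from any real API), compare an over-defended wrapper with the trusting version:

```python
import logging

logger = logging.getLogger(__name__)

def get_user_defensive(client, user_id):
    """The 'insurance policy' version: layers of handling for
    failures the other team's API may never actually produce."""
    try:
        response = client.fetch_user(user_id)
    except ConnectionError:
        logger.error("connection failed for user %s", user_id)
        return None
    except TimeoutError:
        logger.error("timeout for user %s", user_id)
        return None
    if response is None:
        logger.warning("empty response for user %s", user_id)
        return None
    if "name" not in response:
        logger.warning("malformed response for user %s", user_id)
        return None
    return response

def get_user_simple(client, user_id):
    """The trusting version: let a genuinely unexpected failure
    surface loudly instead of being swallowed into logs."""
    return client.fetch_user(user_id)
```

On the happy path both do exactly the same thing; the difference is twenty-odd lines of code that exists only to guard against a team you haven’t yet seen fail.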

They might purchase an insurance policy against misinformed designers or product managers by writing comprehensive documentation about how a portion of the product ought to behave, down to every detail like which code snippets to use, what errors must be handled, and what states need to be shown to the user. Pedantic documentation like this insults the intelligence of the other teams’ programmers, the product sense of the product manager, and the taste of the designer. It’s an expensive insurance policy purchased because of a vague anxiety that someone is incompetent.

These insurance policies are not necessary, and the price you pay is loss of culture, creativity, camaraderie, time, and teamwork. What is necessary is open communication, an open codebase, clear leadership channels, and a willingness to share information and thoughts across teams.

It is a much better deal to believe that the other teams are just as smart as yours. Choose to believe that their designers have just as much taste or more. Choose to believe that their product manager gets the business case just like you do. Choose to believe that they have something to teach you.

And you know what? If it turns out that they are incompetent, then you can buy that insurance policy. But don’t do it before you know you need it. That’s premature optimization.

Documentation Hell

One thing CYA teams like to do is document, document, document.

Documents have their place, but let’s not forget what they are.

Documents are a management tool. Gross! They exist so that managers, who don’t have time to look at the code, can feel involved and up to date with the work.

Documents are wishes about how the product and system should behave.

Documents are farts in the wind—pungent for a brief moment, and then forever out of sync.

Documents are arguments to upper management that someone did wrong or right.

But documents are not the truth. The truth about the state of the system lives only in the code. The truth about what the system can or cannot do is, again, only found in the act of coding. The truth about which team is at fault and which team needs to change is in the code. Someone, somewhere, will have to make a discovery in code to fix that bug, speed up that process, or handle that edge case.

So, why do CYA teams optimize for documents over code? Because they can be more easily emailed to a manager and held up as evidence against another team. Evidence that they did their due diligence, but the other team didn’t. But the evidence doesn’t tell the whole story, only the code can.

A much better alternative is to optimize for coding. Optimize for finding out real things about the system. Optimize for trying an experiment to observe reality with real code really running on a computer. Optimize to do the thing programmers are meant to do! Proving ideas that way is much more positive culturally, and a much more effective persuasion device.

Insisting on dealing primarily with working software simultaneously sets the bar of excellence higher for every team and maximizes programmer happiness and productivity.

A programmer met with a challenge to produce working software to prove a point will have a blast. But give them a document to write? Much less fun.

Missing the Point

My next problem with CYA is that a team who is concerned with it is not aiming at the company’s goals. They contort otherwise productive goals into destructive goals. They might not say it outright, but their actions betray it.

“We want to help our customers” becomes “we want to protect our product.”

“We want to work with others” becomes “we want to protect our team.”

“We want to move technology forward” becomes “we don’t want to make a mess.”

“We want to try something new” becomes “we will sit and wait to be forced.”

I’m Protecting My Team

Managers CYA the hardest.

They’ll play games to make sure they and their reports never look bad. They will deflect questions, answer with lengthy documents, set up meetings, pad estimates, and generally waste time, all to the tune of, “I’m protecting my team.”

They create this vision of self-sacrifice and inform the team that they are taking on a wave of requests and competing priorities which the team could never handle, not in a million years.

I think, sometimes, it’s justification for a job that’s not necessary.

Programmers are not children. A manager is not a parent. Chief among a lead programmer’s jobs is to say “no” to unhelpful requests that don’t move the needle. Or, better yet, “no, but you are welcome to investigate. Pull requests welcome!”

This type of response encourages play with the actual stuff we all care about—the code and the software—over documents and vague ideas. Treating software as actually malleable and available to other teams is a powerful alternative to needing manager approval and sync-up sessions and prioritization meetings.

Those are all insurance policies for problems you don’t have yet.

Mourning My Loss

To say I have conflicted emotions about AI coding is an understatement. It’s like I’m going through the stages of grieving.

At first, I was dumbfounded. Incredulous. You can see it in my article AI Coding is Finally Here. I didn’t know what to make of it. I still don’t.

It’s been a few months since I wrote that article, and I’ve still been using AI. I’ve found some things it’s really excellent with, and some things it’s not so great with. I still write some code by hand, but usually when I go in to type something out, I think, “it would be faster to prompt this.” And so I do, and it is.

Today, that made me sad.

It’s a Murder

I am starting to miss the days when there was really only one way to get programming done: doing it all by hand, and becoming an expert in the syntax and the form and what letters to type. I’m missing the days when I had to google for documentation, read it carefully, and filter out the useful information from the stuff that didn’t apply. I’m missing that learning process. I didn’t appreciate that it was necessary learning back then, and now I’m sad that it’s not necessary any more.

It feels like I lost a friend. I liked the explorative, learning, creative work of coding. Even worse, it feels like that friend died of unnatural causes: it feels like they were murdered. Taken from me. And, to add salt to the wound, the murderer got away with it using some of my intellectual property.

It’s fair to say that with AI you still have to learn things. It’s true. With AI coding, you can’t just let it rip and not look at the code, not understand anything, and not worry. You get into hot water really fast going down that road. So, you still need to understand how the system works. But, the exploration, if there is any, isn’t deeply technical any more because any questions you have about how to implement something are already answered.

But I liked the detailed technical exploration. I liked answering questions myself like: What’s the best method for this? What’s its signature? Is that how most people use it? Oh, that doesn’t return what I thought, let’s try a different one.

When you had those types of questions in the past, you had to read and find out for yourself. That learning process got more information into your head and you became more proficient at the tool you were using.

All that kind of exploration? Gone. It was murdered.

Replaced the Wrong Parts

At work today, I needed to migrate a large chunk of code from one repository into another. It needed to move along with all of its tests, any dependencies, and some utilities that had it securely nested in the larger codebase we were breaking up. I didn’t really do any of that work. I pointed OpenCode at the problem, explained where it needed to move the files, told it to move them first and solve errors after (doing it the other way blew the context window too fast), and then I let it rip. I went to a meeting in the meantime.

When I was done with the meeting, OpenCode was done with the migration. I checked it. I tested the code. It all ran just fine.

This was not a trivial task. There were 80 files moved from the old codebase. A suite of 150 tests to migrate. It all works.

Did I read the whole PR? No, it’s 80 files for crying out loud! I just ran the code, saw that it worked, looked at the folder structure, and trusted it. When the PR got approved, I threw my head back in my chair, face in my hands, and said out loud, “I’m so cooked. What are we doing?”

More sadness came when I realized that I had just replaced the fun part of my job with a meeting.

Back from the Dead - TDD is Ancient

It’s usually uncouth to talk about test-driven development at the dinner table, but I’ve been on a tear with it recently, and my family and team members have heard enough – so I need to write it down. What have I been ranting about? How old TDD is.

The software industry has developed an interesting pattern. As an industry, we’re growing fast, and everyone is always happy to talk about how fast we’re growing. But what doesn’t get talked about is where we came from.

Software companies hire huge swathes of new graduates each year and we always come up with new technologies and methodologies–new fashions – if you will. But most of these methodologies and technologies are actually old. Some are even ancient.

New graduates typically have no idea what came before, because they’re too busy learning what’s here now. And all the experienced programmers don’t have time to explain what came before because they’re too busy teaching the current stuff – or they’re in meetings. And so it is that our industry lies in a perpetual state of inexperience. The new not knowing the history, the old being too busy to teach it. In my view, this is the primary reason we continuously regurgitate old methods and tech ideas. One of those ideas is test-driven development. In my small effort to end the cycle, I’d like to write a bit about its history.

I’ve been searching for quite some time for the history of test-driven development, and one of the earliest sources I’ve found is a 2008 interview with Jerry Weinberg, who worked on Project Mercury (the U.S. program to put a man in Earth orbit before the Soviets). In it, Jerry recounts a co-worker teaching him to write tests first way back in 1957:

Interviewer: Computers don’t break down as they used to, so what’s the motivation for unit testing and test-first programming today?

Jerry: We didn’t call those things by those names back then, but if you look at my first book (Computer Programming
Fundamentals, Leeds & Weinberg, first edition 1961 —MB) and many others since, you’ll see that was always the way we
thought was the only logical way to do things. I learned it from Bernie Dimsdale, who learned it from von Neumann.

When I started in computing, I had nobody to teach me programming, so I read the manuals and taught myself. I thought
I was pretty good, then I ran into Bernie (in 1957), who showed me how the really smart people did things. My ego was a
bit shocked at first, but then I figured out that if von Neumann did things this way, I should.

So programmers have done test-first development since very near the beginning of software. Of course, things in those days were different, and these 1957 tests were manual. If you only had a few minutes with the computing machine, you’d write your program on the punch cards and write your expected output elsewhere. When you got to see your program’s output, you could quickly compare the two and save everybody time. Jerry Weinberg said that this was seen as “the only logical way to do things,” but there are more reasons for this. In my view, the best of these is in an address from Edsger Dijkstra in 1972. Dijkstra lays out the problem beautifully:

Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to
show the presence of bugs, but is hopelessly inadequate for showing their absence. The only effective way to raise the
confidence level of a program significantly is to give a convincing proof of its correctness. But one should not first
make the program and then prove its correctness, because then the requirement of providing the proof would only increase
the poor programmer’s burden. On the contrary: the programmer should let correctness proof and program grow hand in hand.

How often do programmers today allow a “convincing proof of correctness” to grow with their program? I fear it’s not often, but this is the goal of test-driven development. As Dijkstra said, a bug can easily be proven to exist with testing after programming, but can you prove the absence of bugs with that sort of testing? Not easily. When we write tests first, we provide with our program a proof that each piece works as intended. Of course, this also does not guarantee the absence of bugs related to side effects, but it does guarantee (in theory) the absence of bugs related to the intended behaviors. This is the primary argument in defense of test-driven development – tests written before the program provide more convincing evidence of correctness.

There are more references to old-school test-driven development which I won’t bog down this article with. But I think it will suffice to say that the practice was common in many projects in the early days of computing. It was then, in the 1990s, that Kent Beck “rediscovered”, and perhaps re-popularized, test-driven development. He describes his discovery as follows:

The original description of TDD was in an ancient book about programming. It said you take the input tape, manually
type in the output tape you expect, then program until the actual output tape matches the expected output. After I’d
written the first xUnit framework in Smalltalk I remembered reading this and tried it out. That was the origin of TDD
for me. When describing TDD to older programmers, I often hear, “Of course. How else could you program?” Therefore I
refer to my role as “rediscovering” TDD.

Kent Beck’s TDD revival is why most programmers today are familiar with the practice. Even Robert Martin (one of TDD’s most ardent missionaries) learned it from Kent, saying that to learn it, they pair programmed, and Martin was struck by the granularity of the practice. Kent Beck wrote a line of test code, then a line of production code to make it pass. This, to my understanding, is the beginning of the adage “red, green, refactor”. Eventually these steps were codified into rules and The Three Laws of TDD were born:

  1. You must write a failing test before you write any production code.
  2. You must not write more of a test than is sufficient to fail, or fail to compile.
  3. You must not write more production code than is sufficient to make the currently failing test pass.
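One pass through the three laws can be compressed into a single file. This is a sketch using a hypothetical `slugify` function (my own illustration, not from the source), with the laws marked in comments:

```python
# Law 1: write a failing test before any production code exists.
# Law 2: write no more of the test than is needed to fail.
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# Law 3: write only enough production code to make that test pass.
def slugify(text):
    # Lowercase and swap spaces for hyphens: the minimum the test demands.
    return text.lower().replace(" ", "-")

# Green: the test that was red a moment ago now passes.
test_slugify_replaces_spaces()
```

The next cycle would start with another failing test (say, for punctuation), then just enough code to pass it, then a refactor — the minute-by-minute loop described below.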

The Three Laws of TDD, in my view, are the gold standard of test-driven development. They put a programmer into a minute-by-minute cycle of writing a test, making it pass, and refactoring. In my experience, following these rules as prescribed by Robert Martin and Kent Beck increases productivity, enables some ingenious algorithms and designs, and improves code stability and developer confidence. I also think it’s more fun! I won’t be going back to testing after, but I know many developers struggle to apply these rules.

Many developers cite concerns with code quality, working at the edges of systems, and working with legacy code that doesn’t have tests. Indeed, this is where TDD gets difficult or even infeasible. How do we get around these barriers? There is a way, but that’s a topic for another article.