Whack-A-Method

I Feel a Framework Coming On

[Agile Programming: Lesson One, Part Four]

I’ve been pitching to the world this notion of a gradual ascent into programming excellence, starting with just a bit of refactoring the innards of classes in a small problem space, and just a bit of test-driving in the same problem space.

I ask people to do the exercises, keeping methods really, really small, and really simple. But up until now, if someone asked, Got a Test for That?, my answer, sadly, was No, Sorry, Use Static Analysis Tools – There Are Tons of Them Out There. Lame answer.

Inspired (again) at a recent CodeMash by another whack at the excellent Ruby Koans, and having heard great things about similar Scala Koans, I wanted a koans-ish learning tool for refactoring. You know what’s cool and fun about these koans? Everything, that’s what. Learning should be fun. Yes, that’s right. You heard it here first. These test-based koans are super fun.

When I asked people to keep methods under some number of lines of code, and under some cyclomatic complexity threshold, I Wanted a Test for That. And now, indeed, I do. I have two Whack-A-Method exercises: one for refactoring a sub-optimal implementation of the BankOCRKata, and another for continuing to test-drive a pretty-good-so-far partial implementation of the BankOCRKata. The first exercise has a couple of really Eager end-to-end tests, but Hey! They give you 100% Code Coverage! Yay for anything!

So, these are Eclipse projects. If you have Eclemma installed, then you can pull down Coverage As > JUnit Test, and behold: you see your code coverage, of course. And in either Whack-A-Method exercise, you see the results of the tests for current production code behavior, AND YOU ALSO see results of tests that automagically recurse through your output folders for .class files, and ask a little jarred-up, embedded static analysis tool called CyVis (Tx, Guys!) to instantiate a test case for each of your classes, and instantiate a different test case for each of your methods.

Because I used the JUnit 4 built-in parameterized test runner extension, you don’t get much info from the tests that run green, and you cannot (at least as yet) click through to ugly classes or methods from the failures (though I’ve provided meaningful failure semantics).
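For the curious, the generated tests have roughly the shape of the standard JUnit 4 Parameterized pattern, sketched below. The hard-coded rows are hypothetical stand-ins for what the real tests collect by walking the output folders and asking the embedded CyVis jar:

    import static org.junit.Assert.assertTrue;

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class MethodComplexityTest {

        private final String methodName;
        private final int cyclomaticComplexity;

        public MethodComplexityTest(String methodName, int cyclomaticComplexity) {
            this.methodName = methodName;
            this.cyclomaticComplexity = cyclomaticComplexity;
        }

        @Parameters
        public static Collection<Object[]> methodsUnderAnalysis() {
            // Hypothetical rows: the real tests recurse through the output
            // folders and produce one (method, complexity) pair per method found.
            return Arrays.asList(new Object[][] {
                { "AccountNumber.parse", 3 },
                { "AccountNumber.isValid", 5 },
            });
        }

        @Test
        public void methodIsNotTooComplex() {
            assertTrue("Whack this method: " + methodName, cyclomaticComplexity <= 5);
        }
    }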

But Hey!  It’s totally version 0.1, and it works OK.

Caveat Lector: Yes, We are Refactoring on Red

So, in the real world, you never want to refactor on a red bar, because you cannot tell whether you are screwing up existing behavior.

And, on the other hand, in the world of koans and learning tools, tests are an awesomely addicting, engaging learning mechanism. So, friends, consider this current Whack-a-Method a proof of concept. It’s easy to try, requires no installation, works right outa the box, and is fun.

But you must indeed distinguish between red bars in the tests that cover the production code, vs whackAmethod red bars. In this pair of exercises, it’s OK if your whackAmethod tests are red while you refactor. The other tests are saving your bacon with respect to code behavior. It’s not OK if those non-whackAmethod tests are red.

Of course, you are heading toward a green bar. And you want to get there as fast as you can, and stay there as much of the time as you can.

And ultimately this tool is likely to become plugins and extensions for editors like emacs and eclipse. Not because the world needs scads of additional source analysis plugins, but because most of them are as simple as a Saturn V rocket, and this one is dead simple. And for now, as I’ve said before, you really only need a couple of metrics.

Whack-Extract Till Green

In the refactoring Whack-A-Method exercise, you just keep extracting methods and classes until everything runs green. Easy? Try it and find out!

In the TDD Whack-A-Method exercise, you just keep test-driving your code till you have fulfilled all of the requirements in the Requirements.txt file, and you keep all the tests green as much of the time as you can.

Tweak the Thresholds, Once You Know What You’re Doing

You can tweak the constants in CleanTestBase.java to tighten or loosen the method/class size and complexity thresholds. But please don’t at first. The defaults help learning to happen.
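For reference, the knobs look something like this (names and values here are illustrative, not gospel; the real constants live in CleanTestBase.java):

    public abstract class CleanTestBase {
        // Illustrative sketch of the tunable thresholds.
        protected static final int MAX_LINES_PER_METHOD = 8;
        protected static final int MAX_CYCLOMATIC_COMPLEXITY_PER_METHOD = 5;
        protected static final int MAX_LINES_PER_CLASS = 100;
    }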

Does it Eat its Own Dogfood?

Yes, it does. All the classes in the whackAmethod package are themselves fully Whack-A-Method compliant, given the default thresholds. Let’s see a plugin do THAT.

A Bit of 0.1 Framework Fer Ya

So, if you want to try this experiment on any Java codebase of your own, you totally can, with minimal copy, paste, config (well, as far as I know today — again, version 0.1).

  1. Copy the entire whackAmethod package from my test source-folder to your own test source folder. (If you want to refactor my source, go for it, lemme know, and I’ll likely give you commit rights to the project).
  2. Copy the cyvis.jar and the asm-all-2.1.jar to your project and build path.
  3. Change the constant STARTING_DIRECTORY_NAME in RecursiveClassFileFinder.java to point to the root of your output folder (it currently defaults to “bin”; see the sketch just after this list).
  4. Run the tests. The Whack-A-Method tests should all run.
  5. Contact me with any problems.
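That step-3 tweak, sketched (the constant name is from the project; the value is whatever folder your build writes class files into):

    // In RecursiveClassFileFinder.java -- Eclipse's default output folder
    // is "bin"; a Maven-style layout would want "target/classes" instead.
    private static final String STARTING_DIRECTORY_NAME = "bin";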

Agile Programming: Lesson One (Part One)

What is Agile Programming, Anyway? Why Bother?

Great questions: I’ve tried to address them in this other post. Meanwhile, for programmers who wish to learn how to become more skillful, valuable, and excellent, the paragraphs and exercises below are for you.

The Bad News: Lots of Learning, Poorly Arranged

Agile Programming with object-oriented programming languages takes a good long while to learn to do “well enough.”

One chunk of practices at a time, or in a single chunk of total mastery, the books ask you to pick up technical practices, principles, and patterns: TDD. Refactoring. Clean Code. Etc., etc.

It’s not entirely unfair to say this:

As currently described by the various books and training available, learning Agile Programming is itself a waterfall project.

Your journey will be incremental and iterative, but the books themselves are, without intending it, phase-like. Even when they provide good tutorial, they provide it in the context of an isolated practice or set of practices. Too many books presume you have read the other books already. And some of them are truly only reference, with no real tutorial at all.

Even if the learning path were better (and a better learning path is exactly what I hope to provide here), and even if you are an experienced programmer, it takes thousands of hours to master Agile Programming well enough that you can be reliably productive on a mature, healthy agile team in the real world.

“Good enough” Agile Programming skill, by that standard, requires an amount of learning equivalent to an engineering Masters Degree, coupled with many more hours of supervised residency. You’ll need to learn the material using “breakable toy” codebases, and then (in a sense), all over again in real enterprise codebases. And it’s a moving target: the craft keeps advancing in the direction of saving us and our stakeholders time and money, which is what all software craft is intended to accomplish.

By the way, please don’t call it “TDD (Test-Driven Development),” which is one critical practice in Agile Programming, but only one practice:

Calling Agile Programming TDD is like calling a car a transmission. Trust me: you’re gonna need the whole car.

The Good News: You Can Start Small

What you want is to learn a thin slice of Refactoring, along with a thin slice of TDD, and a thin slice of Clean Code, etc. One iteration at a time, one increment at a time. I intend to help you with a learning path that looks just like that.

So, as a programmer committed to becoming skillful in Agile Programming, where do you start? I have a suggestion that I’ve been marinating in for some time now. It might not surprise you, at this point, that I do not recommend starting by learning how to Test-Drive. Instead, I recommend starting with just a bit of Refactoring. Learn how to clean up small messes. Learn how to turn fairly small Java messes into spotlessly Clean Code.

Programmers should begin by learning how to make a few methods in a couple of classes spotlessly clean, exactly because only when they are truly expert and speedy at that, will they be able to make sensible choices when faced with horrific codebases, tight deadlines, mediocre colleagues, or any two of those three.

Once you’re good at cleaning small messes, and at test-driving small chunks of code, you’ll be ready for larger messes and larger test-driving projects.

Lesson One: Refactoring to a Clean Method Hierarchy

If life throws at you a decently test-protected, somewhat ugly Java class (which won’t happen all that frequently, I admit), learn how to get it into a decent Clean Hierarchy of Composed Methods — a Clean Method Hierarchy. This is my term for a class whose hierarchy is outlined and organized into short, clear, descriptive methods at each hierarchical level.

Get good at cleaning up small messes in this specific way, THEN learn how to test-drive a replacement for the refactored code. (In a separate post, I’ll try to justify why I think this makes a good starting point; meanwhile, I’m going to ask you to make that leap of faith.)

Hierarchy: Outline Within Outline

Complex systems are naturally hierarchical, and we use outlining as a tool to help the right hierarchy emerge. Books, symphonies, organizations — all tend to have natural hierarchies. (They also tend to be more network-like than hierarchical, but that’s a concern for later.) If you were writing a book called How to Build Your Own Garage, would its Table of Contents start like this?

  • Buy more two-stroke fuel
  • Fuel up the big chainsaw
  • Take down any big Willow trees out back
  • Rent a stump grinder
  • Remove the tree stumps
  • Cut the trunks into logs
  • Have logs and stump debris hauled away…

Well, I hope it would not. Like any thoughtful book author, you would likely outline your book, then re-outline, then outline again.

One outline draft, translated into Java, might look something like this:
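    public void buildYourGarage() {
        // An illustrative sketch: one flat level, every step at the same tiny grain.
        buyMoreTwoStrokeFuel();
        fuelUpTheBigChainsaw();
        takeDownTheBigWillowsOutBack();
        rentAStumpGrinder();
        removeTheTreeStumps();
        cutTheTrunksIntoLogs();
        haveLogsAndStumpDebrisHauledAway();
        // ...and on and on through grading, foundation, framing, roofing...
    }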

But of course, few books, like few Java classes, can get away with hierarchies this shallow. Again, being the thoughtful book author you are, you would sooner or later see that another hierarchical level is trying to emerge here. It seems that buildYourGarage() really wants to be just a few methods at a larger hierarchical level than the ones above:
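    public void buildYourGarage() {
        // Illustrative sketch: the same book, one hierarchical level up.
        clearTheLand();
        prepareTheSite();
        buildTheStructure();
        finishTheGarage();
    }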

The rest of our hierarchy might then look like this:
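    private void clearTheLand() {
        removeTheBigWillows();
        removeTheTreeStumps();
        haveLogsAndStumpDebrisHauledAway();
    }

    private void removeTheBigWillows() {
        buyMoreTwoStrokeFuel();
        fuelUpTheBigChainsaw();
        takeDownTheBigWillowsOutBack();
        cutTheTrunksIntoLogs();
    }

    private void removeTheTreeStumps() {
        rentAStumpGrinder();
        grindOutTheStumps();
    }

    // ...and so on down through prepareTheSite(), buildTheStructure(), and
    // finishTheGarage(), each unfolding into its own handful of short steps.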

Believe it or not, and like it or not, the consensus of Agile Programmers is that this is the sort of hierarchy into which we need to organize our code.

This is your end state: the way you want your individual classes to read when you are done test-driving or refactoring them. A class that is a Clean Method Hierarchy reads like a really well-written book outline, with clear and sensible hierarchical levels: chapters, sections, sub-sections, and sub-sub-sections. You can see how a good author has thought through a book’s natural hierarchy in a really well-organized Table of Contents.

Most Java code I read does not read this way. Indeed, much of it reads not like a Table of Contents, but like an Index that someone has re-sorted in page-number order. Like an exploded view of a car’s parts. Like the inventory list for a large hardware store after a tornado. You get the idea.

So Again, Don’t Test-Drive Clean Hierarchies First

Again, for reasons I’ll provide elsewhere, based in my coding, coaching, and training experience, I really think you just start by refactoring code that could have been test-driven to a Clean Method Hierarchy, but WAS NOT. This drives home certain principles, patterns, practices, and techniques in a way that will make your test-driving more meaningful and easier to learn later. I predict. We’ll revisit the whole idea of test-driving Clean Method Hierarchies in another post.

Some Exercises to Start With

I’ll give you the following two little code laboratories in which to practice learning how to get to a Clean Method Hierarchy state. These are inspired by Uncle Bob Martin’s Extract Till You Drop blog post. The rules for the Challenges are fairly simple:

  • Keep all existing tests, such as they are, running green as much as possible. Also try to purposefully break them sometimes, to see what kinds of coverage they give you in the code (more on that later, too).
  • Rename each entity (project, package, class, method, variable) at least twice, as you learn more and more about what it should be doing
  • By the time you are done, no method should contain more than 8 lines of code (this also applies to test methods), and most methods should be in the neighborhood of 4 or 5 lines of code, including declarations and return statements.

The “Somewhat Ugly” Challenge

Learn to start with a somewhat-ugly class (an implementation of part of a kata called the BankOCRKata), and refactor the code till it’s gorgeous. Do that lots of times (dozens), several ways. Here are three ways, right off the bat:

  1. Refactor all of the methods in the AccountNumber class, without extracting any of them to new classes. You can change variable scope however you like.
  2. Same as first way, except try not to turn method-scope variables into fields on the class. Try instead to find some other way to accomplish your method extractions (see the sketch just after this list).
  3. Same as first way, except extract groups of related methods to new classes however you see fit. Try to keep each class to no more than 3 public methods, and no more than 5 private helper methods. As you extract behavior to new classes, write new tests for the public methods in new test classes.
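To give the flavor of that second way, here is a sketch on an invented checksum method (illustrative code, not the actual AccountNumber source): instead of promoting the local variable to a field, let it travel as a parameter.

    // Before: one method, one local.
    public boolean isValid() {
        int checksum = 0;
        for (int i = 0; i < digits.length; i++) {
            checksum += (9 - i) * digits[i];
        }
        return checksum % 11 == 0;
    }

    // After: extracted, with the local passed along instead of field-ified.
    public boolean isValid() {
        return checksumOf(digits) % 11 == 0;
    }

    private int checksumOf(int[] digits) {
        int checksum = 0;
        for (int i = 0; i < digits.length; i++) {
            checksum += (9 - i) * digits[i];
        }
        return checksum;
    }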

You can find the Java code for this first Challenge here, in the Pillar Technology github repository. It’s set up to work out of the box as an Eclipse project. (BTW, as I describe here, it is now wrapped up in a little testing framework called Whack-A-Method, which can help you police your compliance with the above goals as you go.)

EM-Pathy: The “Way Uglier” Challenge

Then tackle a much uglier class, and refactor it till it’s gorgeous. Again, repeat this lots of times. You can find the Java code for this second Challenge here, in the Pillar Technology github repository. It too is set up to work out of the box as an Eclipse project.

Clean Method Hierarchy: Practices and Principles Under the Hood

In other posts, there are a few things I’ll introduce you to (should you not already know them) as you learn to master this technique of refactoring classes into well-organized hierarchies. For the exercises in this post, I believe the list includes the following items (I reserve the right to revise this list as smart people give me feedback):

You can read about several of these in more detail in this following post.

If you practice understanding and using these and only these principles, patterns, and practices, you can get to a point where, indeed, your methods and classes are very clean, and you have great test coverage.

And maybe, for a little while, that’s not just OK, but awesome. At that point, you will be ready for different learning.

What Not To Learn Yet

Eventually you will have to learn agile patterns, principles, and practices that touch on dependency injection, test doubles, storytesting/acceptance-test-driven development, complex object models (GoF Design Patterns, small and large refactorings), databases and data modeling, continuous integration, version control strategies, continuous deployment, heavyweight frameworks, convention-based frameworks, concurrency, pair-programming, domain-driven design, dynamically typed vs statically typed languages, etc, etc.

But not today. In fact, not this month, nor in the coming few months. You may very well want to explore some of that material; good for you if you do.

In the meantime, though, if you really are committed to becoming expert at Agile Programming, and if you really are puzzled about where to start, why not start by mastering the art of keeping one class gorgeously clean (which may involve breaking it up into a few classes), and gorgeously tested?

You might say that with this first bit of knowledge to master, what you are really learning to conquer is modular programming. That’s fine. We’ll have more for you to learn later, but truth be told, most programmers in the software industry have not yet mastered modular programming as initially described well more than 30 years ago — and Agile Programming gives us just the tools to measure how modular our code truly is.

I guarantee you this: Agile Programming Lesson One is a bite-sized bit of learning to start with, and it cannot possibly hurt you. If you get stuck in the above exercises, try reading ahead to this next bit of explanation, to see if it helps.

Automated Acceptance Tests: Hold on Just a Second Here

Long Live Storytests, Dang Blast It

The recent claims made by a well-known agile coaching thoughtleader notwithstanding, I work hard to get clients to adopt real Storytesting practices, with real Storytesting tools (FitNesse is still my tool of choice; I work mostly with Java teams). I will continue to do so. I find no false economy or Faustian bargain with FitNesse tests, and I suspect it is because I am coaching the use of them differently than James Shore is.

Manual Regression Testing = Really Bad Thing; Agreed

Regression testing of any kind is classically about proving that we are Building the Thing Right. For true regression protection purposes, I want manual regression tests to be replaced with a requisite blend of automated tests (using the testing triangle pattern for allocating test automation resources) plus a requisite amount of manual exploratory testing.

Whoa Nelly: Storytests Are Not About Bugs

But Storytesting / Automated Acceptance testing is really an entirely different kind of testing. It is indeed an unaffordable way to attempt to prove that we are Building the Thing Right, but in my experience, the perfect way to prove that we are Building the Right Thing. I want these kinds of tests to simply demonstrate that we mostly got the scope and Definition of Done right for a given story. This is a far cry from all of the edge cases and unhappy paths that make up complete regression protection for a story.

If, as James claims, clients are trying to use Storytests for what they are not good at, I stop it. I suggest other testing avenues for regression protection.

This difference really is critical. Storytests merely tend to prove, better than anything else, that we got the story done.

Granted, a story is not Done Done Done until we have squeezed all the defects out of it. I hope to heck the bottom of my testing triangle, where the giant, high-speed suites of tiny little xUnit isolation tests / microtests live, does the lion’s share of the regression protection for me. Yes, TDD/BDD are design practices. AND, only from my cold dead hands will you pry the regression protection those tests/specs provide me. Please, please, don’t try to use FitNesse for that work. Wrong tool, man.

The Benefits of a Precise, Deterministic Definition of Done

So if I do have awesome xUnit test suites (and a bit of other regression protection) to prove we have Built the Thing Right, my Storytests need only prove, to some customer-acceptable extent, that we have Built the Right Thing. What benefits do I give up if I give up this use of Storytesting?  Well, I have a list, but here is the number one item on it:

  1. My best tool for story breakdown. You want me to prove that a story in a web application without browser-resident behavior got done as estimated in this Sprint? Some small increment of useful service layer code or biz logic or whatever?  Storytesting is the first thing I reach for.

    Without that practice, I have teams (especially new ones) presuming that stories can only be as small as the smallest bit of browser resident behavior they evidence. That is a truly horrible thing, because then my stories can clandestinely grow to ginormous size. This leads, in turn, to late cycle iteration surprises (“Uh, it turns out that we just found out that this 6 foot-pound story is really gonna be something like 67 foot-pounds. It won’t be ready for the verification meeting tomorrow.”)

    Heck, one recent team I coached had an app with no GUI-evident behavior anywhere. FitNesse was the perfect way for them to show progress. Indeed, to them, it now seems in retrospect that Storytesting was the only way to fly. Without something like it, there would have been no way for product owners to verify anything at all.

Retiring Old Storytests

Large suites of automated functional tests, in any tool, are notoriously expensive to maintain, especially compared to xUnit microtests. FitNesse, being a web app without in-built refactoring support for things like multiple references across tables and pages, can make things worse. (People are slapping FitNesse front ends on top of Selenium suites these days, which strikes me as truly horrible for regression suites.)

Fine. Storytests are functional tests. They run slow and are very expensive to maintain. Therefore, let’s only keep our Storytests for as long as they are useful for verification, requirements-scope, and acceptance kinds of purposes.

Do I really need to prove, in Sprint n+10, that I got scope correct in Sprint n?  I suggest that I don’t. That’s old news. Deleting old Storytest suites also applies healthy pressure on the team to derive their regression protection from healthier tests and mechanisms.

Small Groups of Stakeholders Can Learn to Collaborate on Storytests

Don’t believe for a minute that this is impossible to do. I have frequently done it. I am happy to show you how to do it.

Yes it is difficult, but compared to what? Teaching teams OOD?  Teaching teams TDD? Configuring a Tomcat cluster? Please.

I’ve had several successes getting small sub-teams of developers, testers, and (critically) product owners to collaborate on Storytest design and development. No, I don’t want testers writing acceptance tests alone. No, I don’t think Product Owners can or should write such tests on their own either. And also, perhaps controversially, I am starting to think that good old-fashioned Decision-Table-style permutation tables are, as a Storytesting semantics, the sweet spot for Java Storytesting. BDD step definitions, as developed so far in at least two ways for FitNesse, leave me cold: either I have several tables referring to each other in a way that makes refactoring cumbersome, or I have complex fixture code that uses regex and annotations. I will use these things if pressed by a savvy, committed product owner, but otherwise, give me Slim Decision Tables.
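To make that concrete, here is roughly the shape I mean, on an invented domain. The table (shown in comments) follows the standard Slim decision-table conventions: one setter per input column, one method per output column:

    // |Shipping Fee Decision              |
    // |order total|member level|fee?      |
    // |25.00      |basic       |4.95      |
    // |25.00      |gold        |0.00      |
    // |75.00      |basic       |0.00      |

    public class ShippingFeeDecision {
        private double orderTotal;
        private String memberLevel;

        public void setOrderTotal(double orderTotal) {
            this.orderTotal = orderTotal;
        }

        public void setMemberLevel(String memberLevel) {
            this.memberLevel = memberLevel;
        }

        public double fee() {
            boolean freeShipping = "gold".equals(memberLevel) || orderTotal >= 50.00;
            return freeShipping ? 0.00 : 4.95;
        }
    }

Product owners read and extend the table; programmers keep the little fixture honest. That division of labor is most of the magic.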

Honestly, I have on several occasions found ways to craft suites of Decision Tables (nee ColumnFixture tables) so that they are plenty expressive for all concerned. I’ve had several teams, including product owners, fall in love with the practice and the tool. I’ll keep rolling with that.

Summary: Be Careful What You Throw Away, and Why

Used as, I would claim, originally intended, Storytests / Automated Acceptance tests are a wonderful and essential tool for agile teams. More on this in later posts. I personally cannot control scope, carve stories small enough, or show deterministic enough definition of done without them.

Yes, client teams can learn to use the practice and tools improperly, in which case, it’s our job to step in and suggest alternatives.

Let’s try to come to agreement as a community on the ROI, the uses, and the best practices and patterns for Storytesting before we declare it dead.

Your API Ran Over my DSL

When You are Programming in an OO Language, You are Always Creating Domain Specific Languages

As is often said and written, the history of programming, and programming language design, is about programs becoming more expressive, fluent, lingual. All of the Object Oriented programmers I know and trust most would say that when they program, they are creating what are, in effect, languages. So, as a community of software craftsmen, how intentional and explicit are we about programming — and teaching programming — in a “lingual way,” and what does that mean?

I care a lot about helping novice OO programmers learn OOD, because when I set out to learn it years ago, my education was flawed. And because most programmers in OO languages are very bad at OOD. And because when I try to teach OOD to others, these concepts are still not as easy to teach as I want them to be. And finally, because there is no better or more natural metaphor for expressiveness than the notion of a spoken language itself. Perhaps by definition.

I want to encourage novice OO programmers to think in an expressive, lingual, semantic way. And the terms of art in OOD are not helpful enough there.

Turns out there is an emerging popular notion, the Domain Specific Language (DSL), that is all about how expressive we are in software development. Cool! Let’s hijack that term to critique the expressiveness of the code we write.

(The definition of Domain Specific Language offered by Martin Fowler does not strictly permit us to designate something as non-fluent as a Java API as a DSL — I think this is a mistake. I use the term DSL to mean “anything that we program, for a given domain, that can and should be as fluent and lingual as possible.”)

I Dislike a Lot of OOD Terms. DSL is Not One of Them.

Don’t get me wrong. There is a rich vocabulary about being clear, clean, and expressive in programming, but none of the terms helps me the way the noun DSL helps me.

Here are some terms that are helpful, but not in the right way: programming by intention, meaningful names, expressive names, abstraction level, object modeling, domain-driven design, etc, etc. Each of these terms is either just plain hard to learn (“abstraction level”), or it is focused on too small a lingual granule (a meaningfully-named method), or it is not especially lingual in focus at all (“object modeling”).

An object model can be healthy in the sense that the classes and their methods are about the right size, and focus on about the right scope of responsibility. And it can have OK, not great names, that are not semantically coherent. And the whole thing can feel like a non-German speaker driving through Germany: “What a beautiful little town!  Where the hell are we?”

An abstraction level can be decoupled nicely, and still not be expressive in a lingual way. “Wow, nice clean scoping!  But, uh, what’s going on in here?”

A group of classes, methods, and variables can be pretty well named, in their own narrow, individual  contexts, and still not form a semantically consistent little vernacular. This is not common, surely. My point is that if we focus on programming in a lingual way, constantly focusing on how well our code reads as a language, we can get all of the stuff we want: SRP-compliance, decoupling, expressiveness, clarity, DRY, etc.

In Charges this Shiny New DSL Term

There is a sudden sexiness emerging around Domain Specific Languages (DSLs), and Martin Fowler’s book will increase the buzz. To Fowler’s mind, based on his research into the prior art around DSLs, the term should be reserved for a level of fluency that is “sentence like” in its grammatical richness. The chief application is to make problem domains, more than solution domains, very fluent and expressive, especially to non-technical stakeholders, rather in a Ubiquitous Language, Domain-Driven Design fashion. Fair enough. It’s a great idea, and frequently worth the effort. I am 110% in favor of it.

But FitNesse/Slim Given/When/Then DSLs (for example) don’t solve my problem, which is this: encouraging OO programmers to program in a lingually expressive way, within the limits of whatever programming language they are using. You can create real DSLs in Java using techniques like method chaining, and tools like Hamcrest matchers, but that ain’t exactly novice-level craft.
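Still, the basic trick is teachable. Here is a method-chaining sketch (all names invented): each builder method returns this, so the call site reads almost like a sentence — the same trick Hamcrest plays with assertThat(actual, is(equalTo(expected))):

    class Garage {
        private final int bays;
        private final boolean workbench;

        Garage(int bays, boolean workbench) {
            this.bays = bays;
            this.workbench = workbench;
        }
    }

    public class GarageBuilder {
        private int bays = 1;
        private boolean workbench = false;

        public static GarageBuilder aGarage() { return new GarageBuilder(); }
        public GarageBuilder withBays(int bays) { this.bays = bays; return this; }
        public GarageBuilder withAWorkbench() { this.workbench = true; return this; }
        public Garage build() { return new Garage(bays, workbench); }
    }

    // The call site is the payoff:
    Garage garage = GarageBuilder.aGarage().withBays(2).withAWorkbench().build();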

Fowler’s book draft defines DSL to explicitly exclude traditional command-query class APIs in languages like Java and C#. I want a term that encourages, describes, and defines what it means to make those command-query APIs as lingual as possible. I want novices to have guidelines for creating command-query APIs that form consistent, lingual semantics as collections of verbs, nouns, and other parts of speech.

That thing. That thing I just said. That’s what I want a term for. Why can’t I use DSL as a term to mean that? Well, I’m gonna. It’s too useful a term for programmers everywhere.

DSLs and Lingual Design

One typical scope for a DSL, in Java programming, is a package that adheres to all of the package coherence principles of common reuse and common closure. Those classes that tend to be released, reused, and changed together and therefore tend to cohere in their own package together, really ought to be crafted as a little language. That’s an example of what we mean by a Lingo.

And, a class can be a DSL, and should be when it can: the semantics of the method names within the class should be grammatically consistent.

Now, Lingual Design is simply an orientation toward ensuring that our command-query APIs have clear, SRP-compliant boundaries (e.g., package boundaries or class boundaries), and tend to hang together as coherent, consistent DSLs.

No, Java and C# and many strongly typed languages do not make this easy to do, and make you jump through fancy hoops to get all the way to Fowler’s definition of DSL fluency. So what!

Even without the fancy hoops, you can make classes small and expressive, and methods small and expressive. You can have separate classes for Operators/Operations and their Operands.

You Are Always Creating a Language

Whatever general purpose programming language, or Domain Indifferent Language (as Mike Hill puts it) you are using, no matter what sort of API and object model you are crafting, you are always creating another language. More or less crude, more or less fluent, more or less semantically consistent, whatever you end up making will be read by other programmers in the high hopes that it reads like a consistent little bit of spoken language.

Try thinking, for a bit of your programming, in terms of Lingual Design. Try to see the boundaries between your DSLs, and the hierarchical tiers within them.

How does it feel, and how does it work, to be intentional about OOD in this particular way?  Can this be a useful way to teach and learn OOD?

Software Execs: Do You Have Toxic Code Assets?

Simple “Clean Code” Metrics for C-Level Execs

A recent Twitter thread I was involved in goes something like this. Someone claimed that software managers and executives should not have to care whether their developers are test-driving Clean Code. They should be able to presume that their developers are always courageous, disciplined, and skillful enough to produce that level of quality. Ultimately, that quality turns into the lowest Total Cost of Ownership (TCO) for any codebase asset.

Sigh. Well, would that that were true. To my mind, it’s like saying that all consumers should presume that all cars are built as well as Hondas. Sadly, they are not.

So, yes, of course, developers should take full ownership of the extent to which they create  Clean Code. Developers should own their own levels of skill, discipline, knowledge, passion, and courage. Absolutely so. Developers should refuse to cave to pressure to hack junk out the door to meet deadlines. That’s not what I am debating. I am debating whether or not the industry has accountability systems and quality monitoring systems to ensure that developers are in fact doing all of that.  My premise is that something like the opposite of that is going on.

If managers and developers are still being rewarded for hacking junk out the door, and executives and managers cannot and do not measure the TCO consequences, well then no wonder our culture of Software Craftsmanship is not spreading. We have a crappy incentive structure.

Are Hondas still better made than GM cars, all these years later, and despite quality gains on both sides? Of course. The car industry does, in fact, have accountability systems in place to measure asset quality, duty cycles, TCO. Too much money is at stake.

As a responsible car buyer, I inform myself with exactly the least info necessary and sufficient to determine whether I am about to buy a great car or a lemon. I have data to help me predict the outcome.

Managers and executives in software cannot expect that every codebase is as good as a Chevy, nor even a Yugo. Most enterprise software these days, 10+ years into an agile movement and Software Craftsmanship movement, is still gunk that continuously deteriorates.

And managers and executives cannot see that. They are buying lemon after lemon, not knowing any better.

We want managers of developers to insist on Clean Code, so we want them to be able to tell the difference between Clean and Mud, and to hire programmers who code Clean. And we want executives to hire managers like that. These inevitably will be executives who can distinguish between managers who know Clean Code and those who do not. I posit that these executives will in turn need to know how, at a very basic level, to distinguish between Clean and Mud. Only then can they preserve their asset value, and hire delegates who can.

Two Metrics You Can Use to Measure Code Asset Deterioration

At my current client, each team self-governs several kinds of objective and subjective Clean Code quality measures, including test coverage, cyclomatic complexity per module, average module size, coupling, etc. There are all kinds of details here around issues like automated deployment, test quality and semantics, etc. They don’t publish it all; they use most of it tactically, within the team boundaries, to hold themselves and each other accountable for continuous code improvement. The teams can and should own that stuff, and they do.

But you know what?  Each of these teams is also publishing at least two metrics to other teams and straight up the management chain for their codebase assets: test coverage and cyclomatic complexity per method.  The Continuous Integration plugins publish these metrics for all to see. And all any team is held accountable for is this: do not let these numbers slip between iterations. Anyone can see historical trend graphs for these numbers for any of the projects/codebases currently covered (there are several so far, and more each month).

Yes, these two measures are imperfect and can be gamed. Yes, test coverage is a one-way metric. But let’s presume for a moment that we are not hacking the coverage config to exclude huge whacks of yucky code, and we have good-faith participation on developers’ part. If average complexity per method goes from 4 to 6 over a two-week iteration, and if test coverage slips from 80% to 60%, does that not often mean that the codebase, as an asset, probably deteriorated?  My experience has been that it does.  As an owner of such an asset for which numbers had slipped like that, would you not care, and would you not want some answers?  I would, and I counsel others to care and dig in. I hereby counsel you, if you own such assets, to care if those two numbers are slipping from week to week. If they are, I bet you dollars to donuts your software asset is getting toxic.
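If you want the gist in code form, the accountability rule really is this dumb (a hypothetical sketch, not any particular CI plugin’s API):

    // Did either published metric slip between iterations?
    public final class AssetHealthCheck {
        public static boolean slipped(double previousCoverage, double currentCoverage,
                                      double previousAvgComplexity, double currentAvgComplexity) {
            return currentCoverage < previousCoverage
                || currentAvgComplexity > previousAvgComplexity;
        }
    }

    // The scenario above -- coverage 80% to 60%, complexity 4 to 6:
    // slipped(0.80, 0.60, 4.0, 6.0) returns true. Go get some answers.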

A Culture of Accountability

So at this client, if those two metrics slip, teams hold each other accountable, and execs are learning to hold dev team managers accountable. Why not? Every car buyer understands MPG these days. Why not educate executives a little bit about how to monitor their code asset health?

Could there be a better 2 or 3 metrics to publish upstairs?  You guys are smart; you tell me. So far, these 2 are working pretty well for me. The published metrics are not sufficient to protect against asset deterioration, but so far they sure seem necessary.

So guess how this is turning out?  We are growing a top-to-bottom, side-to-side culture of Clean Code accountability in what was once a garden-variety, badly-outsource-eviscerated, procedural-hacking sort of culture. Partly by hiring lots of Been-There-Done-That agile coders, and partly with these metrics. Suddenly, managers who were only measuring cost per coding-hour (and slashing FTE jobs to get to LOW $/hour numbers) are measuring more meaningful things. Could we do better? Doubtless. Stop by, see what we are doing, and help improve it.

What metrics would you publish up the reporting chain, and between teams?  How would you help executives detect when their code assets are slipping into that horrible BigBallofMud state?

Speak up, all you shy people. ;)

The Metric I Want for Christmas

Enterprise Software Blight

I get hired to help teams learn agile software development practices. Most of the practices in my tool bag — not all, but most — come from experience, books, articles, blogs, and conferences that focus mainly on greenfield development. And as an agile consultant pal of mine, Mike Hill, says, “First step when you are digging a hole: Stop Digging!” Turning around how we launch greenfield projects, and the standards of craft, quality, feedback, accountability, and ROI we establish for them — hey, that’s obviously all good. Most teams and most enterprises are, in fact, still digging every time they launch a new project. Still making bad enterprise situations much, much worse with more stinky code.

But they are making things worse in more ways than my favorite tools reveal. And perhaps I have been, we have all been, focusing on the wrong kinds of damage. That’s what I want to explore here.

We have spent a number of years trying to help enterprises learn how to, at least, stop digging holes in the object model, in the architecture, in how the team works.

The thing is, these tools in our toolbags really do work best in greenfield situations. Meanwhile greenfield opportunities seem to be slowly drying up. Over the last 10+ years, the software best practices community has been acquiring agile experts, expertise, books, conferences, entire processes, that I think are slowly turning around greenfield project standards. If you work on a project where the issue is how to get everyone up to speed on iterating, velocity, OO, TDD, CI, build and deployment automation, and simple design on a new project, well, good for you. Count yourself extremely lucky. It’s still darned hard to do, but it can be done. It is deeply gratifying work, given enough skill, knowledge, courage, discipline, and management advocacy.

But, as came up at Agile 2008, and as has been coming up for me with clients a lot, most developers in the industry can work for years without the opportunity to start from scratch. For most of our careers, we are basically hamstrung by the legacy code issues that keep so many software development professionals living in worlds of constant emergency, constant production defect repair, and very slow progress.

Worse than this, our legacy code is accumulating faster than we can cope with it. Our release schedules and iteration schedules are more pressing, while we are increasingly dwarfed by these enormous, stinky, towering piles of crap. We really, really need a way out of this situation — and not just for one team, but for the entire enterprise. And not just in the object model, and not just in the architecture.

Legacy Complexity: It’s All Over the Place

When we do start talking about legacy codebase repair, we often start talking about how to get part of the object model under test. How to start repairing the Java, or the C++, or the C#, or whatever. As far as this goes, this too is 100% goodness. We certainly need characterization tests, opportunistic refactoring in high-traffic, high-business-value neighborhoods of the code. Again, all goodness.

But I suggest that that too might be the wrong thing for us to start with, or at least the wrong thing for us to focus most of our consulting energy on. I suggest that without a better measure of overall complexity from the top to the bottom and from back to front of the enterprise, we don’t really know the best place to start.

I have seen more and more teams engaging in agile software development followed immediately by waterfall integration and deployment.

The more I work at this, the more convinced I am that the legacy complexity that is hurting us all the most is all of this contextual enterprise complexity. Our biggest problem, and biggest potential point of leverage, is the massive legacy bureaucracy that makes inter-system integration, promotion between environments, environment configuration, version control, configuration management, and production deployment such stupendously horrific nightmares, release after release.

“Total Enterprise Software Complexity”

The main problem is not within the individual systems (as crappy as most of them are, and as tempting it is for us to start diving in and refactoring and test-protecting them). The main problem, as far as I can tell, is between all of these systems. I don’t care how many million lines of stinky legacy untested Java you have. I bet dollars to donuts most of your worst problems are actually between those piles of Java.

I read somewhere a great little discussion (I forget where) about how cyclomatic complexity, for OO code, captures or covers most of what is healthy or unhealthy about a codebase. All the other kinds of dysfunction you would likely find in stinky OO code, and might measure separately, can be covered by cyclomatic complexity. As readers of mine know, I would amend that only slightly, using Crap4J for example, to measure how well test protected the most cyclomatically complex code is. Anyway, the point is that if you are smart, you end up with a single number. Cool. I love single numbers.
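For reference, Crap4J blends exactly those two numbers into one score per method. As I understand the published formula (with coverage as a fraction from 0 to 1):

    // crap(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m)
    public static double crapScore(int cyclomaticComplexity, double coverageFraction) {
        return Math.pow(cyclomaticComplexity, 2)
                * Math.pow(1.0 - coverageFraction, 3)
                + cyclomaticComplexity;
    }

    // A complex, untested method scores horribly: crapScore(10, 0.0) is 110.0.
    // Full coverage collapses it to bare complexity: crapScore(10, 1.0) is 10.0.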

So I want a new kind of number. For a given enterprise, before I start determining where to focus my consulting, the metric I want for Christmas would be a single number that blends or relates to at least the following objective and subjective categories of enterprise mess:

  • How many total development teams do we have?
  • How many total developers do we have?
  • How many total systems do we have that are interacting with each other?
  • How many distinct technology stacks do we have in play (e.g., .Net, J2EE, SOA, AS400, Tibco, Rails, etc)?
  • How many distinct frameworks do we have in play (e.g., Struts, Spring, Hibernate, Toplink, Corba, EJB)?
  • How many total languages are we using (including SQL, Perl, shell scripts, Groovy, Ruby, XML, etc)?
  • How automated is the process of deploying from a dev machine or CI machine to a QA target? From a QA target to Production?
  • How many total lines of XML are there in play in the enterprise?  How many total lines of build-related properties files? XML is so nasty it really does deserve its own measure. XML is a powerful carcinogenic force in organic enterprise systems.
  • What is the average ratio of the lines of code in each system’s build scripts to the total lines of code in the system itself? (Feel free to substitute a better measure of build complexity here.)
  • How much automated end-to-end functional test coverage do we have? Granted (as I advocate elsewhere) you don’t want to lean forever on huge suites of automated functional tests. But as we start healing a quagmire, how many have we got?
  • And yes, what is the complexity of the average object model? How bad off is the average Java or C# or C++ project? A Crap4J number is great here.

So. For the whole frigging enterprise, I want a metric, on a scale from zero (ideal) to 1000 (doomed), that describes how much of this mess is interfering with everyone’s ability to make production deadlines, much less transition to a continuously improving, manageable, agile approach.

And I want to be able to tailor a consulting approach — somehow — for an enterprise with a “Total Enterprise Software Complexity” score of 200 very differently than I would for an enterprise with a score of 750.

That’s all I want for Christmas this year. Is that what you want too?  Let’s talk about it. :)

Caveat Lector: This really is an early draft. I throw it out for feedback. If smart enough people review it, I’ll likely be able to refine it greatly. That is my hope. So smart people, please comment and email me.

The Whiteboard-Space to Wall-Space Ratio (WBS/WS)

Filed Under: Seriously Cheap Wins

Why this is true, I really do not completely understand. I want to understand it, and not judge it, but I admit I have difficulty there.

In the kinds of companies at which I have been doing agile software development consulting — coaching, mentoring, training, development — over the past few years, there is an odd trend: lots and lots of wall space, and too little whiteboard space.

I have been seeing lots and lots of conference rooms, team rooms, and miscellaneous rooms in which software development work gets done. And there are acres of wall space around. And there are tons of ideas that must be worked through collaboratively. Brainstorming that must happen, and design and architecture, and project tracking, and planning, and learning and mentoring, and training, and you name it.

Yet, there is this incredible dearth of whiteboard space. As if whiteboards were made of platinum. My favorite example of this is the very large conference room with a 20′ table that seats 24, and at the end of it, a tiny, 4′x4′ whiteboard, folded away in a little closet of its own (as if to say, “Only to be used in dire imaginative emergencies!”). Oh, and best of all, those little round whiteboard erasers maybe 3″ in diameter. They don’t so much erase as they smear.

Closely related to this: the dry-erase marker to whiteboard ratio (DEM/WB), and the dry-eraser-size to whiteboard-size ratio (DES/WBS).

How in the world do people get any creative, collaborative work done in such environments? In high-function agile teams of yore, I have seen walls covered with whiteboard stuff, and we have blithely scribbled floor to ceiling and wall to wall on it, with genuinely useful information. When I walk into a high-function team room, this is one of the things I immediately look for: huge whiteboards slathered with passionate creation and communication and clarification.

At one past engagement, 7 or so of us on a client site shared a little room the size of a large walk-in closet, with no windows, and a single 5′ square whiteboard. We positively crammed that poor board with ideas, then took digital pix of it, then erased it and crammed it with ideas again.

Our ability to think and create and collaborate in software development can literally be constrained by the whiteboard space available to us.

Coming Soon: Whiteboards On Me

I haven’t begun doing this, but I suspect I shall shortly. When I am brought to one of those conference rooms with the tiny closeted whiteboard, I shall say “Hey, I’ll work for you tomorrow for free, if you’ll let me put up 80 square feet of whiteboard on that empty wall there, at my own expense.” I’m going to start building that into my bill rate. [My fallback position will be the one suggested by my pal Mike Gantz in the comment below: I'll bring in several whiteboards on wheels.]

Meanwhile, here is my contention around the Whiteboard-Space to Wall-Space ratio (WBS/WS). The lower it is, the more time it takes to get things done, the more waste and rework you are likely to have, and the more, in particular, people end up communicating across one week and 50 emails what could have been handled elegantly in 5 minutes with a decent whiteboard diagramming session. Talk about muda.

Go forth, agilistas, and grow the WBS/WS. Increase the DEM/WB and the DES/WBS. Every room should have at least one wall where at least half the wall space is covered with whiteboard. Every whiteboard should have at least 8 markers on its little ledge per 30 square feet. And you can get these awesome extra-large erasers that clean the boards faster and better. Every whiteboard should have one of those, regardless of size.

Surely this falls under the “cheap win” and “low hanging fruit” category for agile coaches everywhere.

Maybe I should just become a whiteboard consultant. Then I could wear my leather toolbelt and tools everywhere. I love to wear that thing. It’s all pockets and loops.

Continuous Refactoring and the Cost of Decay

Refactor Your Codebase as You Go, or Lose it to Early Death

Also, Scrub Your Teeth Twice a Day

Refactoring is badly misunderstood by many software professionals, and that misunderstanding causes software teams of all kinds – traditional and agile – to forgo refactoring, which in turn dooms them to waste millions of dollars. This is because failure to refactor software systems continuously as they evolve really is tantamount to a death-sentence for them.

To fail to refactor is to unwittingly allow a system to decay, and unchecked, nearly all non-trivial systems decay to the point where they are no longer extensible or maintainable. This has forced thousands of organizations over the decades to attempt to rewrite their business-critical software systems from scratch.

These rewrites, which have their own chronicles of enormous expense and grave peril, are completely avoidable. Using good automated testing and refactoring practices, it is possible to keep codebases extensible enough throughout their useful lifespans that such complete rewrites are never necessary. But such practices take discipline and skill. And acquiring that discipline and skill requires a strategy, commitment, and courage.

So, First of all: Refactoring – What is It?

The original meaning of the word has been polluted and diluted. Here are some of the “refactoring” definitions floating around:

  • Some view it as “gold-plating” – work that adds no business value, and merely serves to stroke the egos of perfectionists who are out of touch with business reality.
  • Some view it as “rework” – rewriting things that could, and should, have been written properly in the first place.
  • Others look at refactoring as miscellaneous code tidying of the kind that is “nice to have,” but should only happen when the team has some slack-time, and is a luxury we can do without, without any serious consequences. This view would compare refactoring to the kind of endless fire-truck-polishing and pushups that firemen do between fires. Busy work, in other words.
  • Still others look at refactoring as a vital, precise way of looking at the daily business of code cleanup, code maintenance, and code extension. They would say that refactoring is something that must be done continuously, to avoid disaster.

Of course, not all of these definitions can be right.

The original, and proper, definition of refactoring is that last one. Here I attempt to explain and justify that. But first let’s talk about where refactoring came from as a practice.

What problem does refactoring try to solve?

The Problem: “Code Debt” and the “Cost of Decay” Curve

What is Code Debt?

Warning: Mixed Metaphors Ahead

Veteran programmers will tell you that from day one, every system is trying to run off the rails, to become a monstrous, tangled behemoth that is increasingly difficult to maintain. Though it can be difficult to accept this unless you have seen it repeatedly firsthand, it is in fact true. No matter how thoughtfully we design up front and try to get it entirely right the first time, no matter how carefully we write tests to protect us as we go, no matter how carefully we try to embrace Simple Design, we inevitably create little messes at the end of each hour, or each day, or each week. There is simply no way to anticipate all the little changes, course corrections, and design experiments that complex systems will undergo in any period.

So enough of dental metaphors for a moment. Software decay is like the sawdust that accumulates in a cabinetmaker’s shop, or the dirty dishes and pots that pile up in a commercial kitchen – such accumulating mess is a kind of opportunity cost. It always happens, and it must be accounted for, planned for, and dealt with, in order to avoid disaster.

Programmers increasingly talk about these little software messes as “code debt” (also called “technical debt”) — debt that must be noted, entered into some kind of local ledger, and eventually paid down, because these little messes, if left unchecked, compound and grow out of control, much like real financial debt.

The Software “Cost of Decay” Curve

Years ago it was discovered that the cost of correcting a defect in software increases exponentially over time. Multiple articles, studies, and white papers have documented this “Cost of Change Curve” since the 1970s. This curve describes how the cost of change tends to increase as we proceed from one waterfall phase to another. In other words, correcting a problem is cheapest in requirements, more expensive in design, yet more expensive in “coding,” yet more costly in testing, yet more costly in integration and deployment. Scott Ambler discusses this from an agile perspective here, talking about how some claim that agile methods generally flatten this curve. Ron Jeffries contends, alternately, that healthy agile methods like XP don’t flatten this curve, but merely insist on correcting problems at the earliest, cheapest part of it. I agree with Ron, but I claim that’s only part of how agility (and refactoring in particular) helps us with software cost of change.

There is a different (but related) exponential curve I dub the “cost of decay curve.” This curve describes the increasing cost of making any sort of change to the code itself, in any development phase, as the codebase grows more complex and less healthy. As it decays, in other words.

Whether you are adding new functionality, or fixing bugs, or optimizing performance, or whatever, the cost of making changes to your system starts out cheap in release 1, and tends to grow along a scary curve during future releases, if decay goes unrepaired. In release 10, any change you plan to make to your BigBallofMud system is more expensive than it was in release 1. In the graph-like image below, the red line shows how the cost of adding a feature to a system grows from release to release as its decay grows.

Classic cost of decay curve.

The number of releases shown here is arbitrary and illustrative — your mileage will vary. Once more, I am not talking about how, within a project, the cost of detecting and fixing a problem increases inevitably over time, as the Cost of Change curve does. I am saying that we can use the cost of any sort of change (like adding a new feature) to measure how much our increasing decay is costing us. I am using the cost of a change to measure increasing cost of decay.

Back to the dental metaphor. If, in the last few minutes of programming, I just created a tiny inevitable mess by writing 20 lines of code to get a test to pass, and if that mess will inevitably ramify and compound if left uncorrected (as is usually true), then from the organization’s perspective, the cheapest time for the organization to pay me to clean up that mess is immediately – the moment after I created it. I have reduced future change costs by removing the decay. I have scrubbed my teeth, removing the little vermin that tend to eat, multiply, defecate, and die there (I never promised a pleasant return to the metaphor — teeth are, let’s face it, gross).

Again, if a day’s worth of programming, or a week’s worth of programming, caused uncorrected, unrefactored messes to accumulate, the same logic is imposed upon us by the cost of decay curve. The sooner we deal with the messes, the lower the cost of that cleaning effort. It’s really no different than any other “pay a bit now or pay a lot later” practice from our work lives or personal lives. We really ought to scrub our teeth.

Little software messes really are as inevitable as morning breath, from a programmer’s perspective. And nearly all little software messes do ramify, compound, and grow out of control, as the system continues to grow and change. Our need to clean up the mess never vanishes – it just grows larger and larger the longer we put it off, continuously slowing us down and costing us money. But before we talk about how these little messes grow huge, helping to give that cost of decay curve its dramatic shape, let’s talk about the worst-case scenario: the BigBallOfMud, and the Complete System Rewrite.

Worst-Case Scenario: The BigBallOfMud, and the Complete Rewrite

Most veteran programmers, whether working in procedural or object-oriented languages, have encountered the so-called BigBallOfMud pattern. Its characteristics are what make the worst legacy code so difficult or impossible to work with. These are codebases in which decay has made the cost of any change very expensive. At one shop where I consulted, morale was very low. Everybody seemed to be in the debugger all the time, wrestling with the local legacy BigBallOfMud. When I asked one of them how low morale had sunk, he said something like “You would need to dig a trench to find it.”

With a bad enough BigBallOfMud, the cost of the decay can be so high that the cost of adding the next handful of features is roughly the same as the cost of rewriting the system from scratch. This is a dreadfully expensive and dangerous outcome for any codebase that still retains significant business value. Total system rewrites often blow budgets, teams, and careers: unplanned-for resources must be found somewhere for such huge and risky efforts. Below we revisit the cost of decay curve, adding a blue line showing how we strive to increase our development capacity from release to release. At best, we can achieve that growth linearly, not exponentially.

[Figure: BigBallOfMud! Busted!]

At the point where the two lines cross, we have our BigBallOfMud. We are out of luck for this particular system: it is no longer possible to add enough resources to maintain or extend it, nor will it ever be possible again. Indeed, the cost of decay, and the cost of making any sort of change, can only continue to increase from there, until it becomes essentially infinite, and change cannot be made safely or quickly at all.
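
Extending the earlier toy model with a linear capacity line makes the crossing visible. Again, every number here is invented for illustration:

    // Toy model: exponentially growing change cost vs. linearly growing capacity.
    // The first release where cost exceeds capacity is the BigBallOfMud point.
    public class MudPoint {
        public static void main(String[] args) {
            double cost = 10_000.0;     // change cost in release 1 (assumed)
            double capacity = 40_000.0; // spendable effort in release 1 (assumed)
            for (int release = 1; release <= 30; release++) {
                if (cost > capacity) {
                    System.out.println("BigBallOfMud reached at release " + release);
                    return;
                }
                cost *= 1.15;        // decay compounds at 15% per release (assumed)
                capacity += 2_000.0; // capacity grows only linearly (assumed)
            }
            System.out.println("No crossing within 30 releases");
        }
    }

With these made-up numbers the lines cross at release 15. Past that crossing, adding people or money no longer catches the curve.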

We are then faced with a total system rewrite, because we have lost all of our refactoring opportunity, along with our ability to make any other forms of change. How many expensive, perilous total system rewrites have you seen or taken part in, in your career? How many “legacy codebases” do you know of that just could not be maintained any longer, and which had to be replaced, at great expense, by a rewrite, perhaps in a new technology or with a new approach, perhaps by a completely new team? I have personally seen several over the years. They have not all gone well.


Death by a Thousand Cuts: Time-Slicing and Matrixing

One thing I find repeatedly in dysfunctional software development shops is managers and executives who, instead of encouraging and enabling their staff to form healthy, cohesive, high-functioning, self-organizing, full-time project or product teams, essentially ask everyone to be a part-time member of lots of teams. They micromanage everybody’s workweek, or worse yet, workday.

This means that people must work Monday on one project, Tuesday through Wednesday on another, and Thursday through Friday on a third. Worse yet, I see people whose individual days are subdivided into work on various projects. This is variously called matrixing, time-slicing, and so on.

There are roles and people for whom this is no big deal, but they are few. For the average programmer on the average enterprise application of non-trivial size this continual context-switching is a big, costly, wasteful, unnecessary deal, and a terrible idea. It expends tremendous amounts of energy in thrash, churn, and waste. Let me explain.

The Cost of a Context Switch

Let’s say some executive runs a cab company that owns every cab in Dallas, Chicago, and Miami. And let’s say that for some odd reason, there is no way to hire enough cab drivers to cover all the demand in all three cities. This guy signs big contracts with, say, event coordinators, to supply all of the cabs to meet all of the demand at all of the convention centers in each city.

So, to spread his supply around so that no city or event is starved for cab drivers at 6:00 PM when all the convention attendees decide to go bar-hopping, he has a brilliant idea. He’ll make his cabbies time-slice! Each of them will work Monday in one city, get on a bus that night, and work Tuesday through Wednesday in the next city. Then the hapless cabbie will take a bus to the third city to work Thursday through Friday. Brilliant! The executive will have roughly the same number of cabbies in each city at any given time.

So, how well would this work? Cabbies would spend hours on buses, hours they never used to spend. That would be wasteful. They would have to spend a lot of time familiarizing themselves with the non-trivial street maps and optimal driving patterns of each city. That would consume more time. And every time they switched cities, they would have to refamiliarize themselves with the city they were now in.

So if the cab company executive is anything like software executives, he does measure how much time each cabbie is in a cab, working. But he does not measure how much time the cabbie spends on the bus between cities. Nor does he measure the time spent getting reacquainted with the new city. Nor does he measure the customer disappointment that results from groggy, confused cabbies taking too long to get from one place to another, or just plain getting lost.

For a programmer to switch, for example, from one complex J2EE codebase and problem domain to another, with their different architectures, designs, codebase details, configurations, etc., takes time. Just getting a complex Eclipse project checked out in its current state and up and running again can take a good while. Getting reacquainted with the current backlog of tasks takes time. Finding out what work has been done in your absence takes time. All this context switching takes lots of time. It’s wasteful time, too. It’s muda. It’s unnecessary and silly.
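
Some back-of-the-envelope arithmetic shows how fast this adds up. Every input below is an assumed placeholder, not a measurement:

    // Rough estimate of the weekly hours a matrixed programmer loses to
    // project context switches. All inputs are assumptions for illustration.
    public class SwitchCost {
        public static void main(String[] args) {
            int switchesPerWeek = 4;     // project boundaries in a sliced week (assumed)
            double hoursPerSwitch = 3.0; // re-checkout, rebuild, re-read the backlog (assumed)
            double workWeek = 40.0;
            double lost = switchesPerWeek * hoursPerSwitch;
            System.out.printf("Lost %.0f of %.0f hours (%.0f%%) to context switching%n",
                    lost, workWeek, 100.0 * lost / workWeek);
        }
    }

Even with those fairly gentle guesses, nearly a third of the week evaporates before anyone writes a line of production code.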

Another metaphor: lots of context switching per day is like asking your staff to work on the 1st floor of a building with no elevators for a couple of hours, then work on the 45th floor for a couple of hours, then hoof it back down to the 1st floor for the rest of the day. It is exactly this idiotic and unnecessary.

The managers who make these matrixing decisions do, to be fair, live in a world of continual context switching. That’s what management is about, for better or for worse. When you sign up for management, you sign up for that (though in my opinion the average management context switch is far simpler than the average programmer context switch).

These same managers are happy to notice the time programmers spend heads-down in their cubicles. But they don’t measure the cost of all those bus rides and long head-scratching sessions staring at street maps. They don’t time the long trips on the staircase. (I know, multiple metaphors require you, reader, to context-switch. Try to keep up, will you?!)

Cabbies should spend as much worktime as possible driving, in a single city. Programmers should spend as much worktime as possible coding and interacting with customers, on a single system. Managers and executives should strive to enable cohesive, self-organizing teams to spend as much time as possible focusing, collaboratively, on a single system or product. Managers should strive to minimize all kinds of muda, including context switching.

Time-Slicing and Matrixing Mask a Lack of Courageous Leadership

I believe this to be true. Managers who ask their programmers to work on several complex projects at once are ultimately compensating, poorly, for their own lack of courage to stand up to their own stakeholders and say (1) There are not enough resources to deliver everything under the sun that you are asking me for, and (2) You are going to have to prioritize your projects.

Things will go so much better if you learn about teams. Enable teams to work together and stay together on the same system for long periods.

Don’t be fooled into believing that this time-slicing is some kind of management fact of life. Hogwash. It’s a crazy practice. Don’t give in to the crazy pressure. Instead, have enough backbone to tell your stakeholders that they cannot have everything under the sun.

Because whether or not you tell your own managers and stakeholders, it is true. There are limits to your teams’ capacities, and time-slicing and matrixing everybody will not make that problem go away. In fact, it will make it worse. It will increase waste, reduce throughput, reduce quality, lower morale, increase defect rates, and increase turnover. When people have to keep switching from one thing to another all the time, back and forth, they start to go crazy.

Note: After I completed this post, Ken Ritchie made me aware of a post on a similar topic by Rick Brenner, here. It’s interesting, and parallel to my thoughts. Well worth the read; provides lots of additional good ammo for the good fight. Thanks, Ken!

Post-Agilism is a Crock

I’m trying to meter my rants — to restrict myself to a certain number per, say, 10 blogs. But here comes another one.

Here and there in the blogosphere you see folks claiming that the agile software revolution (or evolution, or paradigm shift, or whatever) is somehow over, irrelevant. You see folks claiming that mainstream software development is all better now, having absorbed the truly useful bits of “agile DNA.”

Post-Agile?

This is all such unmitigated cow poo. We’ve barely started getting agile, barely started harvesting useful patterns and techniques from it. The post-agilists are anti-agilists wearing plastic Groucho Marx noses and glasses.

I am fortunate to spend bits of time in many mainstream software shops all over the place. I get exposure to a pretty good cross-section of software development shops around the country. I visit them, I read their blogs, I read books, I go to conferences.

So the bottom line is that the software development industry is still, thank you very much, decidedly pre-agile, at best. Yes, the use of JUnit may be close to ubiquitous in mainstream North American Java development (though the use of unit testing tools in the .NET world is much less common). And yes, more and more recruiters want to see words like “agile” and “scrum” and “XP” on people’s resumes. That doesn’t mean we are at the point where more than a tiny percentage of software projects are highly successful, highly agile, highly efficient, and high-ROI.

We have a whopping 1600 people slated to attend the single major U.S. agile conference this year. What does that turn out to be, expressed as a percentage of all software developers in North America? I think you’d need several decimal places to express that fraction.

I just don’t get it. What metrics are the post-agilists looking at? Raw number of references to the word “agile” in google searches?


Where We Really (Still) Are

If we are really ready to be “post-agile,” then why do I keep seeing the following things over and over again?

  • Hilariously broken requirements processes, replete with massive “analysis paralysis,” wasteful elicitation and formal articulation, huge amounts of ambiguity, and no real prioritization. This all leads to late-cycle acceptance catastrophe, complete with “blamestorming” meetings, mutual recrimination, FUD, firings, and other kindergarten antics.
  • Codebases with nearly zero unit test coverage
  • Big Ball of Mud codebases with massive duplication, dead code, static util patterns and promiscuous global sharing, and best of all — extremely tight coupling throughout
  • Programmers with rudimentary understanding of Object Oriented principles and practices
  • Close to zero familiarity with Fowler’s Refactoring pattern language
  • Systems that fail in production deployment, over and over and over again
  • Programmers toiling, heads down, in their cubicles, tiny isolated islands in vast seas of muda and chaos

If the post-agilist age is to be a Space Age, then we are still in the Bronze Age.

If We Actually Were Ready for “Post-Agilism”…

So OK. Let’s look ahead to a possible Utopian future state. Once the average software shop has something like 85% unit test coverage of all business-critical apps. Once average method size is under something like 30 lines of code. Once the average shop has a continuous integration server that reports on code health (cyclomatic complexity, test coverage, and the like; a toy sketch of such a check appears below). Once those who write SQL and build schemas automate their changes and store everything in version control. Once more than 2% of the software world understands the Fit testing revolution.

Once the average programmer understands separation of concerns, and gives a mouse fart about code extensibility. Once we have widespread passion for continuous learning and best-practice evolution.

Once we have at least all of that, trouble me again with articles about a golden, post-agilist age.
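
Since I keep invoking cyclomatic complexity, here is a deliberately naive sketch of the kind of check a code-health build could run. Real tools (CyVis, for one) analyze compiled classes; this regex scan over method source is only an illustration of the idea:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Naive cyclomatic-complexity estimate: 1 + the number of branch points.
    // A real tool parses bytecode or an AST; this string scan is illustrative only.
    public class NaiveComplexity {
        private static final Pattern BRANCHES =
                Pattern.compile("\\b(if|for|while|case|catch)\\b|&&|\\|\\||\\?");

        static int estimate(String methodSource) {
            Matcher m = BRANCHES.matcher(methodSource);
            int branches = 0;
            while (m.find()) {
                branches++;
            }
            return 1 + branches;
        }

        public static void main(String[] args) {
            String method = "if (a && b) { for (int i = 0; i < n; i++) { x += i; } }";
            System.out.println("Estimated complexity: " + estimate(method)); // prints 4
        }
    }

Hook an estimate like that to a failure threshold in the build, and keeping methods simple stops being an aspiration and becomes a red bar.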

In the meantime, please return to your previously scheduled assortment of emergency deployment meetings, angry phone calls with betrayed stakeholders, notifications that your best and brightest programmer has just quit to move to San Francisco, and urgent emails from Mercury QTP sales reps.