Unsung Agile Principle: End-to-End Early (e2e) — Part 1

Speculation = Bad

Most of the work I do will — at some point — be reviewed by someone for completeness, accuracy, and quality. I will need that feedback. On a team, we often need to incorporate everyone’s great ideas. Again, feedback. Whether work is individual or collaborative, the less work I personally do before I get the feedback I need (customer review, team-member review, compilation, running tests, etc.), the less speculative work I have done. The less work I am likely to have to redo. The fewer defects I am likely to have introduced. The less work, as Pillar Technology VP Chris Beale says, I “put at risk.” The less likely I am to have completely wasted my time barking up the wrong tree, chasing a wild goose, you get the idea.

The agile literature is not exactly rife with formal references to the principle of end-to-end early. Yet all good agile teams practice it, more or less formally. At Pillar Technology, we strive to be explicit about this interesting agile principle, and to apply it ubiquitously and rigorously.

e2e Defined

In this and future blogs on this topic I’ll explore a few specific applications of e2e, and also what it “feels like” to have this principle woven into your DNA (I admit to some chutzpah there, but I’ve been at this a while). So, first, a couple of concepts at the Principle level, before we dive into example e2e in practice. How about starting with an attempted definition? Open to your feedback, of course:

End-to-End Early, applied to any practice, gives us faster, cheaper feedback. Keeping feedback loops small tends to drive out speculation, which reduces rework, waste, and confusion. This tends to keep costs lower, keep options open, and increase ROI.

Tiny little feedback loops are a rare commodity in software development, where traditional waterfall development defers feedback like end-to-end integration, end-to-end testing, and customer feedback until very late indeed. Many agile practices are designed to give us earlier, better feedback, but it is worthwhile understanding the e2e principle in general, and its universal power to save time and money by reducing waste and rework.

So enough theory. How does this work? How do we use this principle to make practice-level decisions?

Horizontal e2e:
Dividing a Release into Value Stories and User Stories

As part of a so-called Pillar Plan & Define phase (where we practice what we call Agile Project Scoping, in order to commit to a pretty hard dollar figure and date up front, which our clients typically insist on getting), Pillar software consultants divide valuable work into successively smaller, iteratively-deliverable chunks. They first elicit and prioritize the big chunks – Value Stories – and plan for those to be delivered in early releases. (In other agile methods these Value Stories might be called epics or themes, but would not likely be tied back to concrete expressions of business value, while our Value Stories typically are.)

Value Stories are then divided into (at least) User Stories, which in turn are prioritized, and divided into tasks (using a technique called Construction Modeling). Finally these tasks are estimated and role-assigned. When we set about to divide a Value Story into these smaller parts, we use one of a handful of techniques that focus us all on thin, “horizontal” end-to-end paths. (So yes, to speak briefly to agile purists, we go through this exercise of boiling all of this out up-front, for estimating purposes, whether or not these specific User Stories and tasks end up surviving as iterations unfold. More on this in a future blog.)

First, a definition of “thin,” since that is much of what gives us the “early” part of “end-to-end early.” The thinner and smaller the bit of business value our User Story captures, the likelier we are to deliver it quickly and without defects. The quicker and better we deliver it, the quicker we can obtain real feedback from numerous stakeholders, at different steps along the delivery path (BAs, Product Managers, users, testing staff, etc). The sooner we get that feedback, the more cheaply and easily we can make course corrections. And again: the sooner and cheaper we can make any kind of course correction, the more speculative work, rework, and waste we are “driving out of the system.”

By “horizontal” end-to-end, I mean a series of chronological steps. In particular, I mean the entire trip that a chunk of work must traverse in order to deliver its freight of business value, so that ROI can begin to be earned. So if XP calls the running, tested, delivered, and accepted state “Done Done,” then perhaps I mean “Done Done Done.”

Specifically, this includes (at least) all steps from the inception of a User Story, through requirements elicitation, acceptance test writing, design, development, user acceptance and verification, and production deployment. Even that set of steps is not enough, for most clients, to truly capture end-to-end. If a User Story is part of an application that must integrate with other applications (in a SOA architecture, for example) before stakeholders consider it truly “done,” then that integration step should be part of the end-to-end path.

End-to-end journey of a feature.

If we cannot begin to earn ROI on a User Story until, for example, some documentation is written, some sales have occurred, and some user training is complete, then perhaps those steps too should be in our definition of “end-to-end.” If we are consulting at a level that includes mentoring both software development process and business process, then we might very well go that far in our e2e definition. We might not tick something off on the backlog until people who have nothing to do with software development have verified that our feature is in the field, saving or making money.

Vertical e2e:
Designing User Stories to Traverse Architectural Layers

Technical uncertainty is project risk, pure and simple. Such risk might pounce on us at any moment from behind any piece of technology through which our data must pass (a framework, a data repository, a UI technology, an external service, you name it). To mitigate such risk, we want some of our early work on a project to deliver running, tested code that actually uses all of the risky touch points as early as possible. We want our e2e paths to give us the soonest, best possible feedback on how workable they actually are.

Martin Fowler has a great article on software architecture that wanders toward a definition of architecture that sounds like this: “architecture is the decisions that you wish you could get right early in a project, but that you are not necessarily more likely to get right than any other.” He goes on to discuss how the least reversible decisions we commit to in a design, the really big, important things (according to us), are what we might agree constitute the architecture. If we accept this definition, then we want to keep the cost of such irreversibility as low as possible, and e2e can help us quite a bit with that. Big, irreversible architectural decisions need not be committed to all at once, in one go. We can and should experiment, or at least explore, using e2e. If we can and must back out of such big decisions early, then let’s find a way to do it.

When we at Pillar create User Stories early in a release, part of our purpose is to have the data traverse “vertical” round trips through all of the anticipated architectural layers or touch points in a system. This is especially important when our architecture includes layers, frameworks, services, repositories, APIs, GUIs, or external dependencies about which we don’t yet know enough. Of course, such slices are not always truly topologically vertical, say, in an SOA context. The paths might explode all over the enterprise. But within an application, we’ll mostly have horizontal layers that we want to traverse vertically, as completely as possible.

Vertical end-to-end slice.
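The idea can be sketched in miniature. Here is a hypothetical, in-memory stand-in (all class names are mine, invented for illustration, not from any real project) showing one value making a thin round trip through three architectural layers:

```java
public class VerticalSliceDemo {
    // Hypothetical layers: in a real project each would wrap a risky
    // technology (OR/M, workflow engine, UI framework, external service).
    interface PolicyRepository { String findStatus(String policyId); }

    static class InMemoryPolicyRepository implements PolicyRepository {
        public String findStatus(String policyId) {
            return policyId.equals("P-1") ? "ACTIVE" : "UNKNOWN";
        }
    }

    static class PolicyService {
        private final PolicyRepository repo;
        PolicyService(PolicyRepository repo) { this.repo = repo; }
        String describe(String policyId) { return policyId + ":" + repo.findStatus(policyId); }
    }

    static class PolicyController {
        private final PolicyService service;
        PolicyController(PolicyService service) { this.service = service; }
        String render(String policyId) { return "<p>" + service.describe(policyId) + "</p>"; }
    }

    public static void main(String[] args) {
        // One thin slice: UI layer -> service layer -> repository and back.
        PolicyController controller =
            new PolicyController(new PolicyService(new InMemoryPolicyRepository()));
        System.out.println(controller.render("P-1")); // <p>P-1:ACTIVE</p>
    }
}
```

In a real project each layer would wrap the actual risky touch point under evaluation, and the slice would run as an automated test in week 1, not week 20.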

If a candidate open-source Java workflow engine or CMS system or OR/M framework proves unworkable for a given system, project team, or technology stack, my architectural e2e User Stories or spikes will let me know quickly enough that I can change course, making an alternate selection while that change is still possible or perhaps even cheap.

Again, I want this work to be as thin as possible, while still delivering measurable business value. Thin, small stories are also nearly always easier to estimate, plan, track, and verify. Small and thin are great good things in agility.

With these thin vertical slices, I mitigate the “irreversibility” of my architecture, and I drive more speculation out of the system, which again reduces rework, waste, and ultimately expense. Can’t use that particular workflow engine? Whew! At least we are choosing to swap it out in week 2, as opposed to week 20.

Ubiquitous e2e Example: Simple Unit Tests

E2E can apply to nearly any activity, once you are experienced at spotting speculative work (which basically means planning to deliver too big a chunk of anything, which in turn delays your feedback, and potentially incurs more rework expense for you).

A teeny-tiny example: how big should a jUnit test be? How much behavior should it specify and test? Answer: one discrete path through the “System Under Test.” This is known in Gerard Meszaros’ book, xUnit Test Patterns: Refactoring Test Code, as the Simple Test pattern, and as the “Verify One Condition per Test” principle.
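To make that concrete, here is a minimal sketch in plain Java (no jUnit dependency, so it stands alone; the PolicyPremium rules are invented for illustration) in which each test verifies exactly one condition, one discrete path through the System Under Test:

```java
// Hypothetical system under test: computes a premium surcharge from a claim count.
class PolicyPremium {
    static double surcharge(int claims) {
        if (claims < 0) throw new IllegalArgumentException("claims < 0");
        return claims == 0 ? 0.0 : claims * 50.0;
    }
}

public class PolicyPremiumTests {
    // Each test verifies one condition -- one discrete path through the SUT.
    static void testNoClaimsMeansNoSurcharge() {
        assertEquals(0.0, PolicyPremium.surcharge(0));
    }

    static void testEachClaimAddsFifty() {
        assertEquals(150.0, PolicyPremium.surcharge(3));
    }

    static void testNegativeClaimsRejected() {
        try {
            PolicyPremium.surcharge(-1);
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) { /* pass */ }
    }

    static void assertEquals(double expected, double actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " but was " + actual);
    }

    public static void main(String[] args) {
        testNoClaimsMeansNoSurcharge();
        testEachClaimAddsFifty();
        testNegativeClaimsRejected();
        System.out.println("all green");
    }
}
```

If the no-claims test breaks, you know exactly which path regressed; a single monster test covering all three paths would not tell you that.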

But lurking beneath this testing principle is the End-to-End Early principle. By test-driving, keeping my unit test simple, and getting to green quickly, I learn quickly about (A) how workable my existing design is, (B) how difficult it is to actually do the work, (C) how long it takes me to do that work, and perhaps also things like (D) unhappy-path edge cases and exception cases I had not considered earlier. If I am pairing or conducting code reviews, then I can also get faster reaction and feedback from another pair of programmer eyes, which will doubtless catch something I missed.

This is, of course, the original test-red-green-refactor cycle from classic XP. A thin end-to-end slice is nearly always just a tiny feedback loop.

Thin unit-testing End-to-End cycle.

All of this has the effects that e2e always has: faster feedback, faster and cheaper course correction, lower rework cost, time and money saved.

And in this particular example, I get additional, vital benefits: my Simple Tests tend to “localize defects” really well, which makes debugging trivial; they tend to be robust (hard to break by design or requirements changes); and they tend to be easier to read and understand, for programmers new to the codebase.

Next time on this topic: some more examples, less technical examples, and a qualitative sense of how e2e “feels.”

Death by a Thousand Cuts: Time-Slicing and Matrixing

One thing I find repeatedly in dysfunctional software development shops is managers and executives who, instead of encouraging and enabling their staff to form healthy, cohesive, high-function, self-organizing, fulltime project teams or product teams, essentially ask everyone to be part-time members of lots of teams. They micromanage everybody’s workweek, or worse yet, workday.

This means that people must do things like work Monday on one project, Tuesday through Wednesday on another, and Thursday through Friday on a third. Worse yet, I see people whose days are actually subdivided into work on various projects. This is variously called matrixing, time-slicing, etc.

There are roles and people for whom this is no big deal, but they are few. For the average programmer on the average enterprise application of non-trivial size this continual context-switching is a big, costly, wasteful, unnecessary deal, and a terrible idea. It expends tremendous amounts of energy in thrash, churn, and waste. Let me explain.

The Cost of a Context Switch

Let’s say some executive runs a cab company that owns every cab in Dallas, Chicago, and Miami. And let’s say that for some odd reason, there is no way to get enough cab drivers to cover all the demand in all three cities. This guy signs big contracts with, say, event coordinators, to supply all of the cabs to meet all of the demand at all of the convention centers in each city.

So, to try to spread his supply around so that no city and event is starving for cab drivers at 6:00 PM when all the convention attendees decide to go bar-hopping, he has this brilliant idea. He’ll make his cabbies time-slice! He’ll have each of them work Monday in one city, then get on a bus and travel to the next city on Tuesday, and work through Wednesday. Then the hapless cabbie will take a bus to the third city to work Thursday through Friday. Brilliant! The executive will have roughly the same number of cabbies in each city at any given time.

So, how well would this work? Cabbies would spend lots of time in buses they didn’t spend before. That would be wasteful. And they would have to spend a lot of time familiarizing themselves with the non-trivial street maps and optimal driving patterns of each city. That would consume more time. And every time they switched cities, they would have to refamiliarize themselves with the city they were now in.

So if the cab company executive is anything like software executives, he does measure how much time each cabbie is in a cab, working. But he does not measure how much time the cabbie spends on the bus between cities. Nor does he measure the time spent getting reacquainted with the new city. Nor does he measure the customer disappointment that results from groggy, confused cabbies taking too long to get from one place to another, or just plain getting lost.

For a programmer to switch, for example, from one complex J2EE codebase and problem domain to another, with their different architectures, designs, codebase details, configurations, etc, takes time. Just getting a complex Eclipse project checked out in its current state and up and running again can take a good while. Getting reacquainted with the current backlog of tasks takes time. Finding out what work has been done in your absence takes time. All this context switching takes lots of time. It’s wasteful time, too. It’s muda. It’s unnecessary and silly.

Another metaphor: lots of context switching per day is like asking your staff to work on the 1st floor of a building with no elevators for a couple of hours, then work on the 45th floor for a couple of hours, then hoof it back down to the 1st floor for the rest of the day. It is exactly this idiotic and unnecessary.

The managers who make these matrixing decisions do, to be fair, live in a world of continual context switching. That’s what management is about, for better or for worse. When you sign up for management, you sign up for that (though my opinion is that the average management context switch is far simpler than the average programmer context switch).
These same managers are happy to notice the time programmers spend heads-down in their cubicles. But they don’t measure the cost of all these bus rides and long head-scratching sessions staring at street maps. They don’t time the long trips on the staircase. (I know, multiple metaphors require you, reader, to context-switch. Try to keep up, will you?)

Cabbies should spend as much worktime as possible driving, in a single city. Programmers should spend as much worktime as possible coding and interacting with customers, on a single system. Managers and executives should strive to enable cohesive, self-emergent teams to spend as much time focusing, collaboratively, on a single system or product. Managers should strive to minimize all kinds of muda, including context switching.

Time-Slicing and Matrixing Mask a Lack of Courageous Leadership

I believe this to be true. Managers who ask their programmers to work on several complex projects at once are ultimately compensating, poorly, for their own lack of courage to stand up to their own stakeholders and say (1) There are not enough resources to deliver everything under the sun that you are asking me for, and (2) You are going to have to prioritize your projects.

Things will go so much better if you learn about teams. Enable teams to work together and stay together on the same system for long periods.

Don’t be fooled into believing that this time-slicing is some kind of management fact of life. Hogwash. It’s a crazy practice. Don’t give in to the crazy pressure. Instead, have enough backbone to tell your stakeholders that they cannot have everything under the sun.

Because whether or not you tell your own managers and stakeholders, it is true. There are limits to your teams’ capacities. And time-slicing and matrixing everybody will not make that problem go away. In fact, it will make it worse. It will increase waste, reduce throughput, reduce quality, reduce morale, increase defect rates, and increase turnover rates. When people have to keep switching from one thing to another all the time, back and forth, they start to go crazy.

Note: After I completed this post, Ken Ritchie made me aware of a post on a similar topic by Rick Brenner, here. It’s interesting, and parallel to my thoughts. Well worth the read; provides lots of additional good ammo for the good fight. Thanks, Ken!

Stumbling Backward into an XML Assertion DSL: Part 1

Never Saw a DSL Coming

I had some knowledge of DSLs (Domain Specific Languages) thrust upon me recently. And you know what? I rolled with it for awhile. And I’m still rolling.

A few months ago a colleague and I at Pillar were tasked with an interesting little proof of concept. We used Fit and FitNesse to prove, for a client, that we could test some business data, in a transitory XML representation, as it made its way from one application to another along a long pipeline. A transaction representing an insurance policy made its way from point A, a web application, to an XML representation that was a brute-force transliteration of that application’s Oracle database, then through an XSL transform to point B, an XML representation that more closely matched a “canonical” data model to be shared by all other enterprise applications, and then to point C, an Enterprise Data Warehouse.

The teams involved had few automated tests of any kind, and were having trouble demonstrating that any bits of data could make it from Point A through Point B to Point C intact. They wanted to assert much more than that about these transactions, of course. They wanted to assert that if certain elements showed up in the XML with certain values, certain other elements were also present with certain values. There were several subtly different varieties of these assertions they wanted to make. Each was to use XPath expressions to specify where each element in the XML was expected.
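The simplest variety, asserting that an XPath-specified element is present with a non-null value, can be checked with the standard javax.xml.xpath API. This sketch (the element names are invented) shows the idea:

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class XPathAssertion {
    // True if the XPath selects a node whose text value is non-empty.
    static boolean hasValue(String xml, String xpathExpr) {
        try {
            XPath xpath = XPathFactory.newInstance().newXPath();
            String value = xpath.evaluate(xpathExpr, new InputSource(new StringReader(xml)));
            return value != null && !value.trim().isEmpty();
        } catch (XPathExpressionException e) {
            throw new IllegalArgumentException("bad XPath: " + xpathExpr, e);
        }
    }

    public static void main(String[] args) {
        String policy = "<policy><state>renewal</state><premium/></policy>";
        System.out.println(hasValue(policy, "/policy/state"));   // element present with a value
        System.out.println(hasValue(policy, "/policy/premium")); // element present but empty
    }
}
```

The harder varieties, conditional relationships between elements, are where the notation trouble described below began.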

Now, using Fit ColumnFixture-style tables as they are typically used, we might have come up with a different table style for each of these assertions. Let Fit do all the semantic work. We might have used a single ColumnFixture extension and let it do all of the work itself, with eight gazillion little custom methods. The problem was, we were going to need to be able to throw random sets of these assertions at different sample XML files representing different insurance policy varieties, for different business units, and for different policy states (new, renewal, cancellation, etc). The test tables involved, and the underlying fixture code, threatened to get hairy within 10 minutes of spiking it.

So my smart colleague, Jason, suggested that we do something a bit outside the Fit box. He suggested we use one ColumnFixture table to specify the superset of all the assertions we might want to make about any sort of policy transaction. Who cared how many of them there were? We would load them all up using a FitNesse SetUp page. Then, with a big suite of different Fit test pages, we would throw just the subset we needed at any XML file under test. The test page would show only the tests we actually ran, the assertions we actually made, and the results we got. Simple, right?

Wrong. Great idea, but a problem with a bit of texture. While, one way or another, we were going to make it easier for business stakeholders to see that their policy transactions were making it successfully along the application and repository pipeline, we were going to have to write a fair bit of XML assertion-related Java to make it happen.

Keeping the Fixtures Thin

It was key to me, from blogs I had read, reports I had gotten firsthand, and from personal experience, that we keep the Fit fixture code just as razor thin as possible. And I managed to do that. Below is a bit of the Java fixture for loading what we chose to call an XMLTestCase — one of the specific tests we wished to throw at specific XML files.
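(The fixture code itself appeared in the original post as an image. As a rough, hypothetical reconstruction, with every name assumed and the fit.ColumnFixture superclass elided so that the sketch stands alone:)

```java
import java.util.ArrayList;
import java.util.List;

public class XMLTestCaseFixture /* would extend fit.ColumnFixture in the real code */ {
    // Fit binds table columns to public fields; all names here are assumed.
    public String testCaseName;
    public String testCaseType;
    public String valueExpression;

    // Stand-in for the static singleton the real execute() delegated to.
    static final List<String[]> TEST_CASES = new ArrayList<>();

    // In a ColumnFixture, execute() runs once per table row; it just delegates.
    public void execute() {
        TEST_CASES.add(new String[] { testCaseName, testCaseType, valueExpression });
    }
}
```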

Temporary null-conversion hacks aside, this was as thin as I could make it. The execute() method immediately delegated to a static singleton, which created and added the incoming actual “XML TestCase.” Here’s a chunk of the FitNesse table that specified these homegrown “test cases.”

FitNesse Table for Specifying XML Test Cases

Pardon the margin-spanning spread of the thing. It’s big. I cannot figure out how to show this without violating my margins, so we’ll just have to tolerate it, until/unless I adopt a more margin-capacious theme.

DSL? What DSL? What am I talking about?

So, yeah, I know. This is a lot of lead-up. Bear with me.

So this little chunk of the table shows the easy stuff — testing simply that specific XPath-specified elements show up without null values in the XML test files. The “Node A to Value” in the “test case type” column means that we want to check the value of a single node, Node A. The “not null” entries in the “value expression” column mean any value will do, as long as it is not null. But that column is where things started to get hairy for Jason and me, despite our initial efforts at avoiding hairiness. When is it ever otherwise?

So eventually we ended up with a decidedly ugly little boolean-logic-like notation for specifying that, hey, if Element A has a value of some sort, then there had better be some sort of other value in Element B. In the table, it looked like this:

Nuther FitNesse Table for Specifying XML Test Cases

Check out the entry in that rightmost “value Expression” column this time: “0 IF UDE; 1,0 IF CC”. So, with the proper legend table, it might be understandable, right? (We had such a legend table.) No, not really. It’s nearly all operands and no operators, for assertion purposes. It’s inscrutable with or without legend, so I’ll explain it.

Note the little phrase in the third column from the left: “Node A to Value based on Node B Value.” That describes what kind of assertion we want to make in this case. We want to test that if Node B has one of a permitted set of values, then our test passes if Node A has one of its own set of permitted values. So in this case, if Node B has a value of “UDE” then Node A had better have a value of 0. (That’s what “0 IF UDE” means.) And if Node B has a value of CC, then the Node A value better be one of 1 or 0. (That’s what “1,0 IF CC” means.)
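To pin those semantics down, here is a hypothetical sketch (not the actual PoC code; names and clause format assumed from the description above) of evaluating such a value expression with exactly the sort of brute-force string splitting we resorted to:

```java
import java.util.Arrays;

public class ValueExpression {
    // Evaluates an expression like "0 IF UDE; 1,0 IF CC" against actual
    // Node A and Node B values. Clause format: <allowed A values> IF <B value>.
    static boolean passes(String expression, String nodeA, String nodeB) {
        for (String clause : expression.split(";")) {
            String[] parts = clause.trim().split("\\s+IF\\s+");
            String requiredB = parts[1].trim();
            if (requiredB.equals(nodeB)) {
                // Node B matched this clause: Node A must be one of the listed values.
                return Arrays.stream(parts[0].split(","))
                             .map(String::trim)
                             .anyMatch(nodeA::equals);
            }
        }
        return true; // no clause constrains this Node B value, so nothing to assert
    }

    public static void main(String[] args) {
        System.out.println(passes("0 IF UDE; 1,0 IF CC", "0", "UDE")); // true
        System.out.println(passes("0 IF UDE; 1,0 IF CC", "2", "CC"));  // false
    }
}
```

Nearly all operands and no operators, as noted: the moment the expressions grew a third node and boolean connectives, this approach collapsed, which is the argument for a real DSL.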

Now this test design (fixture and other code aside) was not heinous to us immediately, but almost. We eventually ended up, in our little spike of a Proof of Concept, with six different kinds of these custom assertions, and our notation just evolved wildly to accommodate it all, with a kind of clumsy, yodeling abandon. One value expression, for checking a value relationship among three nodes, ended up reading like this: “not null IF ((NodeB = DIRBILL) AND (NodeC = Billing Category))”. Without proper lexical support, brother, that is heinous.

We justified this hackery by saying to ourselves and each other (and the client’s technical staff) “Look, this is a spike, a PoC. This is disposable code. If we have to do this for real, we’ll devise a real DSL of some sort, something far more fluent.” We didn’t want to end up with our own homegrown parsers and lexers (shudder), but we knew at this point that we eventually wanted a real DSL.

But we were on a tight deadline, and our task was not to prove we could produce a beautiful assertion DSL for XML element relationships. It was to show that Fit testing might work at all to prove that XML-based insurance policy transactions could take a little motorcycle ride without falling off the bike and dying in the street. We would crack the DSL nut another time. (Do those metaphors work together? No? Oh well, it’s late.)

So, for the next blog on this topic, I’ll show you a few more things:

  1. How the thin fixtures freed me up to evolve a little Java object hierarchy that specializes in making assertions about XML element relationships;
  2. How this is great because we can invoke those POJOs from any testing framework, not just Fit;
  3. How, because we had the beginnings of a lame DSL with no underlying language base, we started down a genuinely heinous road of hard-coded happy-path-only String splitting, trimming, and other crappy substitutes for real lexing and parsing;
  4. How I would like to evolve a real DSL, based on (maybe) Groovy, if we get permission to take this work a bit further.

Post-Agilism is a Crock

I’m trying to meter my rants — to restrict myself to a certain number per, say, 10 blogs. But here comes another one.

Here and there in the blogosphere you see folks claiming that the agile software revolution (or evolution, or paradigm shift, or whatever) is somehow over, irrelevant. You see folks claiming that mainstream software development is all better now, having absorbed the truly useful bits of “agile DNA.”


This is all such unmitigated cow poo. We’re barely started getting agile, harvesting useful patterns and techniques from agile. The post-agilists are anti-agilists wearing plastic Groucho Marx noses and glasses.

I am fortunate to spend bits of time in many mainstream software shops all over the place. I get exposure to a pretty good cross-section of software development shops around the country. I visit them, I read their blogs, I read books, I go to conferences.

So the bottom line is that the software development industry is still, thank you very much, decidedly pre-agile, at best. Yes, the use of jUnit may be close to ubiquitous in mainstream North American Java development (though the use of unit testing tools in the .NET world is much less common). And yes, more and more recruiters want to see words like “agile” and “scrum” and “XP” on people’s resumes. That doesn’t mean that we are at the point where more than a tiny percentage of software projects are highly successful, highly agile, highly efficient, and high-ROI.

We have a whopping 1600 people slated to attend the single major U.S. agile conference this year. What does that turn out to be, expressed as a percentage of all software developers in North America? I think you’d need several decimal places to express that fraction.

I just don’t get it. What metrics are the post-agilists looking at? Raw number of references to the word “agile” in google searches?


Where We Really (Still) Are

If we are really ready to be “post-agile,” then why do I keep seeing the following things over and over again?

  • Hilariously broken requirements processes, replete with massive “analysis paralysis,” wasteful elicitation and formal articulation, huge amounts of ambiguity, no real prioritization. This all leads to late-cycle acceptance catastrophe, replete with “blamestorming meetings,” mutual recrimination, FUD, firings, and other kindergarten antics.
  • Codebases with nearly zero unit test coverage
  • Big Ball of Mud codebases with massive duplication, dead code, static util patterns and promiscuous global sharing, and best of all — extremely tight coupling throughout
  • Programmers with rudimentary understanding of Object Oriented principles and practices
  • Close to zero familiarity with Fowler’s Refactoring pattern language
  • Systems that fail in production deployment, over and over and over again
  • Programmers toiling, heads down, in their cubicles, tiny isolated islands in vast seas of muda and chaos

If the post agilist age will be a Space Age, then we are in the Bronze Age.

If We Actually Were Ready for “Post-Agilism”…

So OK. Let’s look ahead to a possible Utopian future state. Once the average software shop has something like 85% unit test coverage of all business-critical apps. Once average codebase method size is under something like 30 lines of code. Once the average shop has a continuous integration server that reports on code health (cyclomatic complexity, test coverage, etc). Once those who write SQL and build schemas automate their changes, and store everything in version control. Once more than 2% of the software world understands the Fit testing revolution.

Once the average programmer understands separation of concerns, and gives a mouse fart about code extensibility. Once we have widespread passion about continuous learning, best practice evolution.

Once we have at least all of that, trouble me again with articles about a golden, post-agilist age.

In the meantime, please return to your previously scheduled assortment of emergency deployment meetings, angry phone calls with betrayed stakeholders, notifications that your best and brightest programmer has just quit to move to San Francisco, and urgent emails from Mercury QTP sales reps.

Learning Always Happens: a Gratitude Practice

Pleasant and unpleasant things occur to us. This is a fundamental Buddhist tenet, but also a pretty obvious fact of life. The older you get, typically, the more obvious it grows.

Life is often inconvenient, disappointing, frustrating. On the other end of the pain spectrum, life is sometimes quite painful. It sometimes seems unbearably painful. Eastern thought counsels us to learn to evolve the quality of our reaction to unpleasantness of all kinds. When we set about to evolve our “equanimity” and poise, the Masters recommend we start with the easy stuff, and work up to the hard stuff. Don’t start with something like losing your foot to the lawnmower, or your spouse having just said the one thing that is always guaranteed to completely send you off the deep end. Your temper is gone in such a case. It’s too late to work on your reaction, most likely.

No, start with things like work situations that are less comfortable or creative or productive than you hope for. I have been doing a lot of that lately. It sounds flip, but I mean it.

One of my recent spins on this is to look back at regular intervals on the lessons I have learned from unanticipated unpleasantness in my recent past. For example: in 2007, I worked in several situations that seemed, at first and on the surface, quite intractable. How can I get my job done here? How can I make progress? Why don’t these people understand me or accept my message? Blah, blah, blah. Victim-talk.

I look back now and think, My Goodness, I have learned a lot in a year. As always happens, as I always say, Learning Happened. Learning always happens, and in retrospect is so much more golden than I have recognized in the past.

So, this is to say I am grateful for my learning opportunities, past, present, and future. I am grateful for my knowledge, my experience, and also for my ignorance and my mistakes. One friend says that experience is a tough teacher because she gives the test first. Another friend says that good judgment comes from experience, which comes from bad judgment.

So again, thanks. I am grateful for learning at a faster and faster rate. The learning keeps on flowing, and keeps on happening. And most blessedly, I am learning to be more and more grateful for my learning, and for each new day of it. The number of such days is finite for us all, and puts my mostly minor inconveniences in healthy perspective.

The Fallacy of Individual Accomplishment

Your Heads-Down Cubicle-Dwellers are Mostly Wasting Their Time

This one has a “rant” tag, because it’s not a friendly post. I have seen too much pain and needless waste resulting from this problem at various large enterprises.

The larger the percentage of their workdays your individual programmers spend heads-down in their cubicles, cranking away on their keyboards, the more screwed-up your software development operation is. You may not be measuring it well, but I guarantee you that if your programmers are heads-down, cubicle-bound code-crankers 90% of the time, then much of that time is wasted. They are blocked much of the time, whether they or you notice it. They are implementing requirements poorly (building the wrong thing, or building the right thing badly), not refactoring their code adequately, doing lots of rework, re-inventing lots of wheels, or devising algorithms and designs that are completely out of sync with the person in the cubicle next door. Or all of the above. I can’t remember all of the other ways they are likely suffering. I ran out of breath.

Your continuously heads-down coders are wasting a huge amount of their time and your money, because they are trying to play baseball as if it were remote, fantasy-league baseball. They are playing what is fundamentally a team sport as if it were a network game simulation of a team sport. This is likely because you, as a manager, are insufficiently familiar with the lean manufacturing concept of muda. Read up on it, and learn how to measure and avoid its seven deadly wastes.

Furthermore, if you continually monitor who is heads-down coding in their cubes and who is not, trying to herd them back to their cubes, you are creating disincentives for people to learn, solve problems collectively and creatively, work consistently, and share. You may in fact be helping to create a culture of fear, which is the death of true productivity, much less excellence.

Without Good, Cohesive Teams, You Are Throwing Money Away

Software development managers and executives, listen up. Again, I have no interest in being gentle here. After 28 years, I am just sick and tired of the individualism and mushroom treatment I see people continuing to give their programmers.

Individual programmers who work in teams cannot accomplish anything truly useful on their own. Stop asking them to. Stop worrying about how much time they spend in each other’s cubicles, trying to learn from each other, trying to get unstuck, trying to work consistently, trying to eliminate waste and rework, trying to estimate better, trying to make sensible commitments. Even your most skilled “power-programmers” have knowledge that they must convey, problems that get them stuck, and requirements they misunderstand. And if those “silos of knowledge” leave you someday, do you want them to take all of their knowledge with them? That is what I usually see happen.

Only the entire team can actually deliver the most features, the best features, and the best-tested features per release or per iteration. Only the entire team can get the job done best, at least cost. Let me repeat that: only the entire team can get the job done best, at least cost. Let your teams of programmers actually be teams. Let them be cohesive and self-organizing, and reward them for that. Let them teach and learn from one another. Let them interrogate the business-side stakeholders and customers thoroughly and continuously, to figure out what the requirements really mean. Let them work creatively together, get inspired and imaginative together, and celebrate successes together.

By all means, hold them accountable! But hold them accountable at the team level. The team and their codebase are your software factory, not the individuals. If the entire team commits to an estimate, and they come up short, ask the entire team to devise a solution (better estimating, more spikes, more lunch-and-learns, more books, more training, more time in each other’s cubicles). Don’t second-guess or micromanage, even if you were a pretty darned good coder in your day. The new demands, the new practices, and the new technologies are different from what you remember. Let the team determine how best to deploy them.

Please also consider going the extra step of letting your teams sit in open team rooms. Preferably rooms whose walls are slathered wall to wall and floor to ceiling with whiteboards. If you have read this far, then you deserve this prize: by simply locating your team in a single open room (one that balances public and private space, but insists that real production code be written in a central, open workspace), you’ll likely double your team’s productivity. Amazing! There, don’t say I never gave you anything. :-)

Once you’ve experienced what a genuinely empowered, self-organizing team can do for you, you’ll never go back. You’ll never treat your staff the same way again.

Why did this take me so long?

I really have no excuse for taking this long to join the zillions of bloggers out there. I have some experiences to share, some music, some poetry. And I am a firm believer in the hilariously explosive zillion-to-zillion overpublishing of the blogosphere. Let us all share our gold, our dross!

So may I follow through with steady posting, daily rants and discoveries, and the journal of my life. If for no other purpose or audience, then for my kids, for whom I expect to have become inscrutable once I am gone. :-)

Kids, you may eventually be able to figure something out about me by digging through this detritus. Far more fruitful, I expect, than digging through my junk drawers.

But the flip language hides something more interesting, and worth exploring in later blogs: why did I procrastinate for so long? Especially for something so straightforward and enjoyable? Because of old, negative childhood scripts about unworthiness, shame, whatever.

So, launching a blog — just launching it — can be a major accomplishment, even for a geek. It certainly was for me.