Ugly vs Clean Code, Part Two

The TicTacToe “Ugly vs Clean” Eclipse Project

In my first post on this topic, I set the stage: I needed two implementations of the same problem domain, one ugly, one not. As promised, you can anonymously download the entire codebase discussed in this series of blog posts from a Google Code project here. It's an Eclipse project, all zipped up.

The project includes an applet, GameGUI.java, that you can run to play the game (right-click on source/ view.applet.GameGUI.java and pull down "Run As > Java Applet"). There is a first-draft README file that describes the whole shebang and suggests some exercises to try. See what you think of it all.

LegacyGame.java

The legacy version of the TicTacToe game is in legacy/ legacyGame.LegacyGame.java. Bear in mind that this version already reflects lots of little refactorings on my part, dating back to when I had characterization tests for this "class" (I've since removed all of those tests; I didn't want students and job candidates working these exercises to benefit from them). I renamed a lot of methods that started out with names like "c24occx()", assigning placeholder-quality names that I thought my characterization tests were revealing to me, like tryToFindPositionGivingSeriesOf4OnTwoOrMoreAxes(). In some cases my educated guesses were accurate; in other cases, I later learned that I was far off.
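If you've never written characterization tests, the idea is simply to pin down whatever the code does right now, sensible or not, so that later renamings and extractions have something to push against. A minimal sketch of the kind of test I mean (the recordMove()/chooseNextMove() API and the expected answer here are made up for illustration, not taken from the real LegacyGame):

    import junit.framework.TestCase;

    // A characterization test pins down current behavior, right or wrong,
    // so later refactorings can be checked against it.
    // The method names and the expected value below are hypothetical;
    // the real LegacyGame does not necessarily look like this.
    public class LegacyGameCharacterizationTests extends TestCase {

        public void testResponseToACenterOpeningIsWhateverItAlreadyIs() {
            LegacyGame game = new LegacyGame();
            game.recordMove("7,7");

            // The expected value is captured from a run of the existing code,
            // not derived from any real understanding of its algorithm.
            assertEquals("8,8", game.chooseNextMove());
        }
    }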

I extracted a few small methods from other, larger, stranger ones, naming them as meaningfully as I could at the time. I extracted lots of constants. I renamed variables. I killed a lot of dead code and inscrutable comments. I managed to extract the Java applet code (woven into the gameplay code’s DNA) into its own class. I just couldn’t stand not doing that. (Clue: what do you notice about that applet code?)

But eventually, I just gave up working with it. After person-days of JUnit poking and prodding, this codebase remained quite opaque to me. I've inferred a lot of its algorithmic meat from its external gameplay behavior, but I'm still baffled by much of it.

So this is the first measure of inextensibility we discover in a codebase: what Uncle Bob Martin calls opacity, one of the characteristics I wrote about here. As we glance through it, and as we write tests for it, we struggle to understand it.

But it seems that every month or so these days, there are new tools to help us grasp what we are up against. I ran Crap4J against LegacyGame.java, and of course it pegs the tool’s little meter at the far right, at 36.84, as if pressed forcefully against that right-hand fence, searching for a measure of even more non-test-protected cyclomatic complexity. Average Crap4J score (blue triangle) is just under 5, BTW. As you can see, that little yellow triangle is trying to leave the ballpark:

[Image: crap4j_snap.JPG, the Crap4J report for LegacyGame.java]
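For what it's worth, the per-method formula behind that meter, if I recall the Crap4J folks' published definition correctly, is roughly: cyclomatic complexity squared, scaled down by how much of the method your tests cover, plus the raw complexity. Something like this sketch (not Crap4J's actual code):

    // A sketch of the per-method CRAP formula as I understand the published
    // definition (not Crap4J's actual implementation): complexity squared,
    // scaled by how much of the method is NOT covered by tests, plus the
    // raw complexity.
    class CrapFormulaSketch {
        static double crap(int cyclomaticComplexity, double coveragePercent) {
            double uncovered = 1.0 - (coveragePercent / 100.0);
            return Math.pow(cyclomaticComplexity, 2) * Math.pow(uncovered, 3)
                    + cyclomaticComplexity;
        }
    }

With zero coverage, the uncovered term never helps you: a method of complexity 10 scores 110, and the only way down is to cover it or simplify it. That's why a big, untested legacy class buries the needle.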

So, I did determine how the legacy game manages its board state and game state, and got lots of peeks into how it determines which move to make next. Enough so that I was able to run the game from a test harness, one move at a time. This is the TestCase that pits the two games, old and new, against each other, some number of times. Currently that number is 200. You can find this code in manualTests/ manualTests.OldGameAgainstNewGameTests.java.

The source folder and package names contain the word "manual" because at first, I was printing out a representation of the board after each move taken by each game. I was examining the System.out.println() output manually, to learn.
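To give you the flavor of that harness, here is a boiled-down sketch of the idea (the Game and LegacyGame method names are invented for illustration; the real OldGameAgainstNewGameTests is in the project):

    import junit.framework.TestCase;

    // A boiled-down sketch of the game-against-game harness idea: play the two
    // implementations against each other some number of times, printing the
    // board after every move so a human can study the play by eye.
    // The Game/LegacyGame method names below are hypothetical, not the real API.
    public class OldGameAgainstNewGameSketch extends TestCase {

        private static final int NUMBER_OF_GAMES = 200;

        public void testManyGamesBetweenOldAndNew() {
            int newWins = 0, oldWins = 0, draws = 0;

            for (int i = 0; i < NUMBER_OF_GAMES; i++) {
                Game newGame = new Game();
                LegacyGame oldGame = new LegacyGame();
                boolean newGamesTurn = true;

                while (!newGame.isOver()) {
                    String square = newGamesTurn
                            ? newGame.chooseNextMove()
                            : oldGame.chooseNextMove();

                    // Both implementations see every move, so their boards stay in sync.
                    newGame.recordMove(square);
                    oldGame.recordMove(square);

                    // The "manual" part: dump the board and eyeball it.
                    System.out.println(newGame.boardAsText());
                    newGamesTurn = !newGamesTurn;
                }

                String winner = newGame.winner();   // hypothetical: "new", "old", or "draw"
                if ("new".equals(winner))      newWins++;
                else if ("old".equals(winner)) oldWins++;
                else                           draws++;
            }

            System.out.println("new: " + newWins + "  old: " + oldWins + "  draws: " + draws);
        }
    }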

It took a bit for it to dawn on me: I was doing exploratory testing.

Old Game vs New Game: Exploratory Testing

So I started with lots of high hopes, deep fears, and ignorance about my prospects of test-driving a decent version of this problem domain. My goal was for my game, if it took the first move, to beat the old game or play it to a draw most of the time. (As it turns out, I did much better than that. After my second run at this code, I ended up with a game that beats the old LegacyGame about 50% of the time, and plays it to a draw about 40% of the time. When I go first, the old game wins no more than 7% of the time.)

The new game clobbers the old game, after much research and development.

In my first test-driven version, my first few defensive algorithms were not just completely ineffectual against the old game, strategically and tactically; they were pretty badly conceived. My object model was in parts over-engineered, and in other parts procedural, sloppy, and under-engineered. I paired with my good friend Dave LeBlanc on it for an hour, and he made several forthright observations about what I had done well and what I had done poorly. My design had some real flaws. I had pretty good test coverage, but nothing like what I wanted. For the next few days I pushed this first version of the codebase as far as I could, and got it to the point where, on average, it edged out the old game if it went first. It performed OK.

But I was deeply disappointed at the results. I knew I had to rewrite it. (I can get an A+ in any course I've already taken, if I take it again enough times.) Dave had encouraged me with suggested new design approaches. I wiped the slate clean: I started over with an empty Game class and a much better sense of which strategic and tactical behaviors I wanted to test-drive, and in what order.

That’s when I started turning my attention more rigorously to the move-by-move board printouts I was logging in my manual game-against-game test harness. As I test-drove my second version, I started combing through each loss I suffered, studying the strategic setup patterns while I looked for a cleaner way to represent the basic defensive and offensive patterns. I watched carefully as the old game, ugly or not, set itself up cleverly to defeat me a couple of moves into a new game.

And as all kinds of interesting patterns emerged from this manual exploratory testing, I began to understand the problem domain much more deeply. And this, of course, made Simple Design easier, refactoring easier, and test-driving easier. I noted specific patterns, wrapped test data and failing tests around them (the sketch below shows the shape of such a test), and produced new behaviors of my own that played the game better.

[Image: exploring1.JPG, a move-by-move board printout from the exploratory test runs]
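Here is the shape of what I mean by wrapping test data and a failing test around one of those patterns. The Game.fromRows() helper, the little board literal, and the "row,col" move notation are all invented for illustration; the real tests live in the project:

    import junit.framework.TestCase;

    // The shape of a pattern-driven test: set up a board position observed in a
    // lost game, then assert the move the new game ought to make in that spot.
    // The fromRows() helper, the 5x5 board literal, and the "row,col" move
    // notation are illustrative only, not the project's real API.
    public class DefensivePatternTests extends TestCase {

        public void testCapsAnOpenEndedSeriesOfThree() {
            Game game = Game.fromRows(
                    ".....",
                    ".OOO.",
                    ".....",
                    "..X..",
                    ".X...");

            // An open-ended three must be capped immediately, at either end,
            // before it becomes an unstoppable series.
            String move = game.chooseNextMove();
            assertTrue("expected a block at either end of the open three, got " + move,
                    move.equals("1,0") || move.equals("1,4"));
        }
    }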

Then suddenly one day, I finished one particular bit of strategy: collecting all possible moves, ranked by tactical priority, and looking for any highest-priority move that also matched lower-priority moves (blocking the other player’s new series while simultaneously extending a series of our own, for example). I saw a huge new jump in my game’s performance. I added another bit of logic around responding to the other player’s first move, then another around making the first move on one of the four center-most squares on the board. With each of these well-thought-out bits of new behavior, I saw big jumps in my game’s performance against the old one. Meanwhile, there was not that much total strategic and tactical logic, and I was simplifying and consolidating as I went. I had a reasonably clean, reasonably well test-protected codebase that was kicking the other game’s keister. It was rewarding.
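That move-collection idea is simple enough to sketch. None of what follows is the project's actual Game code; it is just the shape of the selection rule: gather every candidate square with every tactical purpose it serves, then, among the squares tied at the most urgent priority, prefer one that also serves lower-priority purposes.

    import java.util.*;

    // Illustrative sketch of the move-selection idea: every candidate square is
    // tagged with all the tactical purposes it serves, ranked by priority
    // (lower number = more urgent). Among the squares that satisfy the single
    // most urgent tactic on the board, prefer one that also serves additional,
    // lower-priority purposes (a block that also extends our own series, say).
    // None of this is the project's actual code; the names are made up.
    class MoveSelectorSketch {

        static final int WIN_NOW = 0, BLOCK_WIN = 1,
                         EXTEND_OWN_SERIES = 2, BLOCK_NEW_SERIES = 3;

        static String chooseMove(Map<String, Set<Integer>> tacticsBySquare) {
            // Find the most urgent tactic any square can satisfy.
            int mostUrgent = Integer.MAX_VALUE;
            for (Set<Integer> tactics : tacticsBySquare.values())
                for (int tactic : tactics)
                    mostUrgent = Math.min(mostUrgent, tactic);

            // Among the squares matching that tactic, prefer the one that also
            // matches the most lower-priority tactics.
            String chosen = null;
            int mostPurposes = -1;
            for (Map.Entry<String, Set<Integer>> entry : tacticsBySquare.entrySet()) {
                Set<Integer> tactics = entry.getValue();
                if (tactics.contains(mostUrgent) && tactics.size() > mostPurposes) {
                    mostPurposes = tactics.size();
                    chosen = entry.getKey();
                }
            }
            return chosen;
        }
    }

Representing a square's purposes as a set means the "block and extend at the same time" case falls out of a simple size comparison, rather than needing its own special case.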

Challenge: Try the Exercise Yourself

Feel free to download, unzip, import, and play around with the codebase. Follow the instructions in the README file. Let me know what you learn, and how you would like this (or any of my exercises) to be improved.