Journal: 2005-02-03

I spent yesterday and today doing test driven development on an interesting algorithm. I'd like to post the code, because it's kind of a neat problem. Actually, I wish I had thought of it earlier, and I would have taken snapshots of the code and the unit test as I went along, so I could post an example of TDD in a real world environment, on a non-trivial algorithm. However, I'm done except for a little clean up so I'll probably just finish it and then post it. My wife reminded me that I need to find out what my company's policy is before I do. She's right, of course. I don't think the company stands to lose anything, but it's better to ask than to get in trouble.

(( Note: I am including everything on this page, even though some of the work is from yesterday and some is from tomorrow. ))


A TDD Experience Report
The Scenario

Assume I am maintaining a set of objects by receiving refresh update events (complete set snapshots) and change update events (set deltas). In the special case I am tackling here, the objects in the set also have validity periods. What I'd like to receive is an event stream so that I maintain a set of currently valid objects. For example, when time has passed causing an object to become invalid, I need some event to tell me to remove the object from the set. An example of where this would be useful was described on 2004-07-01.

I realized I could do this by creating a filter on the normal event stream. Here are my whiteboard notes from yesterday:

I realized I could simplify the problem by leaving out all synchronization. The filter object generally does not have enough knowledge of the surrounding threading issues to do an adequate job of synchronization anyway. By pushing generation of asynchronous timer events out to an outer level, I could make a robust filter using TDD, leaving the threading issues to be solved at that outer level without it having to worry about how the filter did its job.
How I Wrote It

Note: I got approval to post the code, so I have included it at the bottom of this page. You may want to open it in another window so you can follow along.

Note: For the most part I wrote the tests (and then the code to make them pass) in the order you see them in the file. The main exception is that the last test testUpdateFailure was written concurrently with testChangeUpdate - mostly because I realized there were error conditions possible when I was writing code for testChangeUpdate and I wanted to test my error handling code immediately. Also, testAdvanceToTimeBackwards was added at the end chronologically but fit better in the middle of the file.

I started by defining the basic interface. I knew I would be writing a filter, so the class would implement an event stream Listener and take a reference to a downstream Listener to pass the events along to. I needed to be able to get validity periods for the object snapshots in the update events. Since I didn't want to restrict the type of the snapshots (for example, by making them implement an interface with methods to return the validity begin and end), I immediately created an interface for an evaluator object that could determine the validity begin and end of a given snapshot - ValidityEvaluator. Interestingly, I initially didn't have the equals and hashCode methods because I didn't need (or know I needed) them until I started implementing testChangeUpdate much later. My first test, testValidAlways, validated this initial interface by checking a guaranteed straight-pass-through situation.
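The starting shape was roughly the following. Only ValidityEvaluator is named in the text; the Listener method names and the filter class name here are my guesses, since the post doesn't show this stage of the code.

```java
import java.util.Date;

// Hypothetical reconstruction of the starting interfaces.
interface Listener {
    void refreshUpdate(Object[] snapshots); // complete set snapshot
    void changeUpdate(Object[] snapshots);  // set delta
}

interface ValidityEvaluator {
    Date validityBegin(Object snapshot); // null means "valid since forever"
    Date validityEnd(Object snapshot);   // null means "valid until forever"
}

// The filter is itself a Listener and forwards events to a downstream Listener.
class TimeReleaseFilter implements Listener {
    private final Listener downstream;
    private final ValidityEvaluator evaluator;

    TimeReleaseFilter(Listener downstream, ValidityEvaluator evaluator) {
        this.downstream = downstream;
        this.evaluator = evaluator;
    }

    public void refreshUpdate(Object[] snapshots) {
        // Straight pass-through covers the testValidAlways situation,
        // where the evaluator reports everything as always valid.
        downstream.refreshUpdate(snapshots);
    }

    public void changeUpdate(Object[] snapshots) {
        downstream.changeUpdate(snapshots);
    }
}
```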

I was able to implement testRefresh and testRefreshNoInitialUpdate by just sticking snapshots in a single List. The trick was in coming up with good test cases and then the corresponding logic.

Note: in this code, a null validity begin or end means "indefinite" or "infinity". Also, ranges are in standard CS half-open form. Thus you can easily represent validity periods like (-∞, -1), [-1, 1), and [1, ∞). For comparison, previously I had used zero as a special value meaning "indefinite". The downside was that null was an illegal value and there was a hole in my range. With the new model, there are no illegal values (which saves error handling and the associated tests) and there are no holes in the range that some user-entered data could stumble into at some inopportune moment.
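A minimal sketch of that containment test under the "null means infinity" convention (the class and method names here are mine, not from the actual code):

```java
import java.util.Date;

// Half-open validity check [begin, end): a null bound means
// unbounded on that side.
class Validity {
    static boolean isValidAt(Date begin, Date end, Date t) {
        boolean afterBegin = (begin == null) || !t.before(begin); // t >= begin
        boolean beforeEnd  = (end == null)   || t.before(end);    // t <  end
        return afterBegin && beforeEnd;
    }
}
```

Note that the end instant itself is excluded (half-open), so an object whose validity ends at time T is already invalid at exactly T.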

It wasn't until I implemented testAdvanceToTime that I ran into something that wasn't in my whiteboard diagram. There are actually two types of time-based events for a snapshot - validity begin events and validity end events. Once I had to start consuming the data I had stashed, I found I couldn't just use a List. I needed a data structure where I could quickly find / add / remove events keyed by time, and where I could quickly find when the next (first) time event was. A SortedMap fit the bill. I was able to implement the necessary (minimum) functionality to make the current tests pass by creating a TimeSlot class containing two Lists, one of begin events (actually just the snapshot that would be dispatched as the core of that event) and one of end events.
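The SortedMap-of-TimeSlot structure might look something like this. The real code isn't shown at this stage, so the surrounding queue class and method names are assumptions; the essential points from the text are the SortedMap keyed by time and the TimeSlot holding separate begin and end event lists.

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// One slot per distinct timestamp, holding both kinds of time-based events.
class TimeSlot {
    final List<Object> beginEvents = new ArrayList<Object>();
    final List<Object> endEvents = new ArrayList<Object>();
}

class EventQueue {
    private final SortedMap<Date, TimeSlot> slots = new TreeMap<Date, TimeSlot>();

    // Find or create the slot for a given time.
    TimeSlot slotAt(Date t) {
        TimeSlot slot = slots.get(t);
        if (slot == null) {
            slot = new TimeSlot();
            slots.put(t, slot);
        }
        return slot;
    }

    // The next (first) pending event time, or null if nothing is queued.
    Date nextEventTime() {
        return slots.isEmpty() ? null : slots.firstKey();
    }

    // Consume and return all slots up to and including the given time.
    SortedMap<Date, TimeSlot> advanceTo(Date t) {
        SortedMap<Date, TimeSlot> due = new TreeMap<Date, TimeSlot>(
                slots.headMap(new Date(t.getTime() + 1)));
        slots.keySet().removeAll(due.keySet());
        return due;
    }
}
```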

Things really got interesting when I started on testChangeUpdate. I had come up with a good set of twelve test cases for testRefresh. For testChangeUpdate, I wanted to test all twelve secondary states against all twelve initial states. That's one hundred and forty-four test cases! And each test case would have multiple steps, since I would have to run a full advanceToTime sequence to see if the change really had the proper effect - about five steps, each with validations, per case. How could I write the unit test? Copy and paste is NOT the answer. That's where I quit last night.

I came up with my answer while lying in bed. I could try to come up with an algorithm that would predict the correct responses for each validation. However, to be an effective test it would have to be different from the algorithm I would write in the real class. I didn't even have one algorithm figured out yet, let alone two! One thing I do a lot for repetitive test cases is use object arrays to store the parameters for each case - input parameters and expected results - making the test effectively table driven. However, with two validations per step, five steps, that's ten objects per test case. There are 144 test cases, so that's 1440 times I'd have to type new Something() - far more than I wanted to. My inspiration was to pack all ten input parameters into one string. I've got plenty of utilities for unpacking strings lying around, and that brings me to only 144 array elements. I figured this was at least doable.
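The string-packing idea could be sketched like this. The actual field layout and delimiter aren't given in the text, so everything below the split is invented for illustration: one delimited string per case, carrying the change to apply plus the expected result after each step.

```java
// Hypothetical unpacking of one test case string: the first field is the
// change update to apply, the remaining fields are the expected results
// after each advanceToTime step.
class PackedCase {
    final String change;
    final String[] expected;

    PackedCase(String packed) {
        String[] fields = packed.split("\\|");
        change = fields[0];
        expected = new String[fields.length - 1];
        System.arraycopy(fields, 1, expected, 0, expected.length);
    }
}
```

A test case string might then look like `"add A 1 5|A|A|-|-|-"`: apply one change, then validate the visible set after each of five advances.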

This morning I came in and coded it up. I wrote out the code to parse the string and execute a single test case with all its steps. Then I created strings for all the secondary states of the first initial state. I implemented code until I had all those test cases working: the first initial state was actually pretty degenerate so I didn't have to do too much to make it pass. Then I worked on the strings for the second initial state. I quickly noticed that all the test case strings were the same for the first few initial states. Instead of copy and pasting, I realized I could take advantage of object arrays. I dropped an Integer into the array as a reference to the index of the element that held the actual test case strings. As testing progressed, I found a bunch more duplicate rows. My 12 states actually had some duplicates - I probably could have gotten away with 4, but then I wouldn't have had as good coverage of the corner cases. Also in retrospect, I could have extracted out rows of the array as constants and then referred to the constant multiple times rather than doing the funky Integer trick.
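The Integer-as-back-reference trick, sketched under the assumption that each row is a String array of test case strings (the helper name is mine):

```java
// A row that duplicates an earlier row is replaced by an Integer holding
// the earlier row's index; resolve() follows such references.
class CaseTable {
    static String[] resolve(Object[] rows, int i) {
        Object row = rows[i];
        while (row instanceof Integer) {
            row = rows[((Integer) row).intValue()];
        }
        return (String[]) row;
    }
}
```

As the author notes, extracting shared rows into named constants and referring to the constant several times would read more clearly than the index indirection.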

Implementing the code for all the test cases of testChangeUpdate took a bunch of changes. Notably, I had to add equals and hashCode to ValidityEvaluator, and create a special Data object to hold the snapshots and expose the custom equals and hashCode. Previously I had been using an already existing data object that I happened to have on hand. (Too bad the Java collection classes don't have an equals and hashCode equivalent to the Comparable interface. If you want to redefine those methods for a particular collection, you have to create wrappers.) I switched the Lists of the TimeSlot to Sets to make lookups fast when dequeueing an event. Specifically, I used LinkedHashSet to guarantee that the order of iteration was deterministic so that the validations in my tests would not have to account for multiple possible result orderings.
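A wrapper of the kind described might look like the sketch below. The real Data object isn't shown at this point in the text, so the notion of a "key" that defines sameness is my assumption about how its equals and hashCode would work.

```java
// Wrapping a snapshot to impose custom equals/hashCode, since the Java
// collection classes have no pluggable equivalent of Comparator for
// equality and hashing.
class Data {
    private final Object snapshot;
    private final Object key; // whatever identifies "the same" snapshot

    Data(Object snapshot, Object key) {
        this.snapshot = snapshot;
        this.key = key;
    }

    Object snapshot() { return snapshot; }

    public boolean equals(Object o) {
        return o instanceof Data && key.equals(((Data) o).key);
    }

    public int hashCode() {
        return key.hashCode();
    }
}
```

With this wrapper, a LinkedHashSet of Data objects treats two different snapshot versions with the same key as one element, while still iterating in insertion order.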

I finished all this by the end of the day. I realized that the last thing I needed was a notification when the externally maintained timer should be changed. I decided to create a separate interface for that because I didn't want to require that it be the same object as the data update listener, and I didn't want it to be part of the evaluator because the rest of the evaluator's responsibilities could be fulfilled by a single static instance. Here I ran out of time and had to catch my plane.

Note to self: in writing this, I realized there's another thing I need to test if I'm not doing so already:

The Next Day

I ended up calling the interface NextEventTimestampListener, though it took me a while to come up with a name I liked. Names are important, and that can make them a challenge. As an interesting point, I violated strict TDD (no surprise there) and wrote the interface before the tests. I had the method as

public Date nextEventTimestamp();
when I stopped to catch my plane. The next day when I started writing tests, I immediately realized I had goofed! Since this is a listener, the date must be passed in, not returned!
public void nextEventTimestamp(Date timestamp);
If I had written the test first like I was supposed to, I couldn't have made that mistake. Peel the carrot on me!

I added the mock object for NextEventTimestampListener and added extra validations to the existing tests, starting with testRefresh and working forward. I was going to skip adding them to testChangeUpdate because I didn't want to change all the strings. When I finished updating all the tests but that one, I found that I had made all the tests pass without actually putting any code to send nextEventTimestamp notifications in the change update handler! (Interestingly, this is known to happen occasionally when you are doing TDD correctly.) The reason all the tests passed even without code changes was that the only notification test after a change update was in testChangeUpdateMultiple and in that case there should be no notification. I either needed to add a new test for change updates to force a notification that could be validated, or I could alter my mega-test and all its strings. I realized I only really needed to add one validation after the change update step, rather than a validation after every step. Since I would only have to modify each string once (instead of five times), and since it would give me lots of test cases, I decided to go for it and update all the strings.
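A mock of that kind could be as small as the sketch below. The interface signature is the corrected one from the text; the recording fields of the mock are my invention.

```java
import java.util.Date;

interface NextEventTimestampListener {
    void nextEventTimestamp(Date timestamp);
}

// Minimal recording mock: remembers the last timestamp delivered and how
// many notifications were received, so tests can validate both.
class MockNextEventTimestampListener implements NextEventTimestampListener {
    Date lastTimestamp;
    int callCount;

    public void nextEventTimestamp(Date timestamp) {
        lastTimestamp = timestamp;
        callCount++;
    }
}
```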

After that, I was basically done and just trying to think of any holes in my test coverage. I added a test for advanceToTime trying to go backwards, and I added the missing update failure case mentioned in the note above.

I also added more comments, since I planned to post this on the web and wanted to look good. :) I find I'm writing fewer comments than I used to. I don't know if that's because I'm writing clearer code or just getting lazy. It's probably laziness. :(

Ta-da! Done. Here are the results:

TimeReleaseListener.java
TestTimeReleaseListener.java
Closing Thoughts

Well, if I'm going to call this an experience report, I ought to put down some lessons or conclusions as a reward for reading the whole thing, don't you think?

Mostly this just reaffirmed to me that TDD works. Having done TDD before, I fully expected it to work on this problem too. I knew this was a pretty choice problem for TDD. It's just an algorithm with a minimal state machine and minimal external dependencies. When I find such a problem, I throw TDD at it because it produces such good results.

I truly didn't know the solution when I started, and TDD managed to produce it bit by bit. The resulting code has much smaller methods than I usually write, which is generally considered a "good thing". I suspect that this may be because I was refactoring more aggressively than usual, since it is strongly encouraged in TDD. The code has a lot of X-Y symmetry, where XySymmetryBugs like to lurk. Unit tests help, of course. Luckily, I only remember one such bug happening during this project. I still haven't figured out any good way to factor the commonality out.

The control flow of the validity evaluation code is more complex than I would like, especially compared to the rest of the code. Control flow is especially convoluted in processChangeUpdate, where I did the simplest thing I could and copy and pasted the code from processProcessUpdate and wrapped a rarely used do {} while (false); around it. I'm sure it works but it seems so ugly. I've been too lazy to thoroughly analyze it to see if I could make it clearer, but I don't see any obvious fixes. None of my usual rules of thumb for simplification seem to apply. And there's so much X-Y symmetry! Any suggestions??
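For readers who haven't seen it, the do {} while (false) idiom mentioned above works like this (the method and its conditions here are a made-up illustration, not the actual processChangeUpdate logic):

```java
// break inside do { ... } while (false) acts as a structured jump to the
// end of the block, letting copied code bail out of the normal path early
// without deeply nested if statements.
class Demo {
    static String process(int value) {
        String result = "ok";
        do {
            if (value < 0) { result = "negative"; break; }
            if (value == 0) { result = "zero"; break; }
            // ... the normal processing path continues here ...
        } while (false);
        // Common cleanup or dispatch would go here, reached on every path.
        return result;
    }
}
```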

It was interesting to see that I actually got little bits of negative feedback when I cheated.

I used lots of old tricks and came up with a new one or two. TDD truly requires practice to build up your bag of tricks, er, patterns, so that you can solve problems quickly.

I think it's slick that so much tricky functionality can be pulled off by a mere filter. Now I'm very curious to see how writing the outer level code will go.
