2003-09-12 : So I was told that unit tests are a Good Thing and can make your code more robust and make refactoring safer. Fine, I'll give it a shot, I told myself. Along with some well-known philosophies, here are a few things they didn't tell me, that I learned the hard way.
After hearing a couple people rave about the wonders of unit testing, I figured I'd give it a shot myself. No problem, I thought. I know how to write good code. Well, it turned out to be harder than I expected. It took practice and experimentation before I got the hang of writing good unit tests. Writing unit tests is a distinct problem domain just like everything else. You wouldn't expect to easily be able to write good GUI code or good socket code the first time you tried. Unit tests are the same way. It's a skill that has to be learned, so keep at it. It gets easier the longer you work at it. And yes, it is worth it. :)
For some reason, people have a tendency to think that test code is not as important as production code. It's tempting to think that it's not worth as much effort, because what it does is not as critical as the code being tested. It's easy to write tests in a careless, one-off manner, as ugly as necessary to get the job done. Metaphorically, it seems like scaffolding. The work-piece in the middle is important; the stuff on the outside barely has to hold together.
That's the wrong way to approach the problem. If you write tests this way, they will get hairy and unmanageable quickly. When the code has to change, the tests will be hard to change and are likely to be abandoned. Instead, treat the tests just like production code. Invest as much effort into them as into the production code. Remove duplication aggressively and keep them refactored as best you can. (Trust me, test code will bring you some amazing new refactoring challenges!) Ugliness doesn't creep into test code - in the tests I've written, it comes rushing in at a mad dash. However, the tests need to be able to change as quickly as the production code. If you don't fight tooth and nail to keep it clean, the test code will become a liability when it comes time to change, rather than the valuable asset it can be. Heck, you spend a lot of time writing tests, so why not make them worthy of the time invested?
Metaphorically, your production code may be like a sleek jet fighter, ready to take on all comers. You don't want to take your fighter home and park it in a barn. Your manufacturing and maintenance facilities should be just as top notch, even though they will never see the field of battle. If you use shoddy tools, you'll get a shoddy jet.
Your test code and your production code should grow up as siblings, each constantly testing the other. Yes, you need to test your tests! The best way to do that is to leave bugs in your production code for a moment so you can make sure the tests catch them. Sometimes it seems like I fix more bugs in the tests than in the production code. :)
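To make the "leave bugs in for a moment" idea concrete, here's a minimal sketch using a hypothetical Account class (the class, its test, and all names are my invention, not from the original). The idea is: write the test, see it pass, then temporarily break the production code and confirm the test actually fails before undoing the break.

```java
// Hypothetical production class.
class Account {
    private int balance;

    public void deposit(int amount) {
        // To "test the test", temporarily change this line to
        // `balance -= amount;` and confirm AccountTest fails.
        balance += amount;
    }

    public int getBalance() { return balance; }
}

// Hand-rolled test in the same spirit as the article (no framework assumed).
class AccountTest {
    static void testDeposit() {
        Account a = new Account();
        a.deposit(50);
        a.deposit(25);
        if (a.getBalance() != 75) {
            throw new AssertionError("expected 75, got " + a.getBalance());
        }
    }

    public static void main(String[] args) {
        testDeposit();
        System.out.println("deposit test passed");
    }
}
```

If the test still passes after you sabotage `deposit`, the test isn't really testing anything, and that's exactly the kind of bug in the tests the paragraph above is talking about.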
You write a lot of code. Do you have to write tests for all of it? Well, that's up to you. Any testing is better than no testing.
If you are fixing a bug in an existing component that has no tests, it's OK to write a unit test that just tests the fix for that one bug. Next time you're in that section of code, you can add more.
What about new code? The best answer is test every part that you want to work. Which means, test it all. Yeah, I know. Writing tests for small things like accessors is tedious! Don't let that put you off completely - any testing is better than no testing. Follow this guideline instead: just test the parts that might fail. Ignore the tedious tests and write tests that cover the parts of the code that you know are going to be hard to get right.
Having the tests that show your tricky algorithm is working right is a great confidence booster. And once you get used to having that extra bit of confidence in the tricky parts of your code, you'll probably find you want similar confidence in the less tricky bits. Maybe even the accessors. :) It's called being test infected.
Writing accessor tests is very easy, so why get up-tight about it? Then there's the ultimate argument - once accessor tests caught bugs in my code, I became a believer. :)
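For what it's worth, here is a sketch of how cheap an accessor test is (the Person class is a made-up example, not from the original). The kind of bug it catches is usually a copy-paste slip, like a setter assigning to the wrong field.

```java
// Hypothetical class with two string properties.
class Person {
    private String name;
    private String nickname;

    public void setName(String name) { this.name = name; }
    public String getName() { return name; }

    // A classic copy-paste bug here would be `this.name = nickname;` -
    // exactly the kind of slip a tedious accessor test catches.
    public void setNickname(String nickname) { this.nickname = nickname; }
    public String getNickname() { return nickname; }
}

class PersonTest {
    static void testAccessors() {
        Person p = new Person();
        p.setName("Louis");
        p.setNickname("Lou");
        if (!"Louis".equals(p.getName())) {
            throw new AssertionError("getName returned " + p.getName());
        }
        if (!"Lou".equals(p.getNickname())) {
            throw new AssertionError("getNickname returned " + p.getNickname());
        }
    }
}
```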
Sometimes it seems like there is just no way to unit test the bit of production code you are working on. I've found that this is actually a conceptual problem. Unit tests are only one kind of test. A unit test is designed to test the smallest bit of production code that can be tested. What you are thinking of testing is surely a perfectly valid test. But perhaps it is an integration test. Most likely, there are multiple objects involved. What you really need to do is break it down and test each (and every) object independently. True, testing each of the pieces is not really the same as testing the whole pipeline. However, testing the whole pipeline is an integration test not a unit test. While it is a worthy goal, it is not the present goal. I believe that if you know that each of the parts of the pipeline is thoroughly tested, you'll feel better about the pipeline as a whole. You've gained something even if you haven't done the test you wanted to do. You can always go back and write the integration test too. Also, I suspect that the knowledge you gained about the system (and the instrumentation you added) while writing the unit tests will now make writing the integration test easier.
For example, let's say you want to test an algorithm that is supposed to manipulate database-backed objects. That's too hard, because you have to have a database set up with the right test data, and you might not be able to reach the database, and you can't run it at the same time as someone else, and the database is slow, etc. etc. It's not time to give up, because this is not a unit test. You should test the algorithm independent from the objects it is manipulating, and you should test the database-backed objects independent from the algorithm and the database.
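Here's one way that separation might look, sketched with invented names (CustomerStore, Mailer, and the rest are all hypothetical): put an interface between the algorithm and the database-backed objects, and hand the algorithm an in-memory implementation during the unit test. No database required.

```java
import java.util.ArrayList;
import java.util.List;

// The algorithm depends only on this interface, never on the database directly.
interface CustomerStore {
    List<String> activeCustomerNames();
}

// The production implementation would query the database; the unit test
// substitutes this trivial in-memory version instead.
class InMemoryCustomerStore implements CustomerStore {
    private final List<String> names;
    InMemoryCustomerStore(List<String> names) { this.names = names; }
    public List<String> activeCustomerNames() { return names; }
}

// The algorithm under test, written against the interface.
class Mailer {
    private final CustomerStore store;
    Mailer(CustomerStore store) { this.store = store; }

    List<String> greetings() {
        List<String> out = new ArrayList<>();
        for (String name : store.activeCustomerNames()) {
            out.add("Hello, " + name + "!");
        }
        return out;
    }
}
```

The test constructs a Mailer around an InMemoryCustomerStore seeded with known names and checks the greetings it produces - fast, repeatable, and runnable anywhere. Testing the real database-backed CustomerStore becomes a separate (integration) concern.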
OK, so you know that what you want to test is a small piece of a much larger conglomerate. Now the hard part is figuring out how to get at the part you want to test. This is where the tricks of the trade start coming into play and things start getting dirty. You'll likely have to introduce artifacts into production code, writing it in a way that you wouldn't have if you weren't doing unit tests. Sometimes this is clearly OK, possibly even an improvement. Sometimes I'm not sure whether the artifacts I've introduced have made the code worse, but I haven't had any problems yet, so I continue to bravely forge ahead.
The first thing to do when you want to test the parts of a pipeline or conglomerate is to slice the objects apart. The goal is to be able to set up an object that usually lives in the middle of a pipeline such that it stands alone. You attach fakes that your unit test controls to the upstream and downstream ends of the object under test. These fakes are officially called mock objects. Once you have your object thusly isolated and instrumented, your test can poke at it using the upstream mock object, and verify the responses by seeing what methods are called on the downstream mock object.
To isolate an object from its pipeline, introduce interfaces between objects that used to know each other's identity. Next, you need to be able to control which edge objects your object under test tries to talk to. One way to do this is to pass the edge objects in when constructing the object under test. If this is not possible, it's probably time to introduce a factory! I've even had to use factories of factories. Do whatever it takes - it's time to use that creativity. You may be able to use a setter after the object is constructed, but that often breaks encapsulation. If I have to break encapsulation, I use the TestingAccessor pattern described next.
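A minimal sketch of the mock-object-plus-constructor-injection idea, using invented names (Sink, Filter, RecordingSink are hypothetical, not from the original): the downstream edge is an interface, the real collaborator is passed in at construction time, and the test substitutes a hand-rolled mock that simply records the calls made on it.

```java
import java.util.ArrayList;
import java.util.List;

// The downstream edge of the pipeline, expressed as an interface so the
// object under test no longer knows its neighbor's concrete identity.
interface Sink {
    void accept(String item);
}

// The object under test: a pipeline stage that drops empty items.
class Filter {
    private final Sink downstream;

    // The edge object is passed in at construction time, so a test can
    // substitute a mock without any factory machinery.
    Filter(Sink downstream) { this.downstream = downstream; }

    void push(String item) {
        if (!item.isEmpty()) {
            downstream.accept(item);
        }
    }
}

// A hand-rolled mock object: it records every call so the test can
// verify exactly what reached the downstream end.
class RecordingSink implements Sink {
    final List<String> received = new ArrayList<>();
    public void accept(String item) { received.add(item); }
}
```

The test plays the upstream role itself by calling push, then inspects RecordingSink.received to verify that only non-empty items flowed downstream.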
Another problem I've run into while writing tests is that objects are supposed to hide their internal state, so how do you see if the object is working correctly? Here's an example: an object responds to event A by changing its internal state so that in the future it will respond to event B in a different manner. Discerning the internal state with B events is difficult - it may require a complex, extended exchange of messages with only a subtle change in behavior. Here's another example: an object is supposed to execute a complex algorithm that you want to test, but only in response to an internal signal that is difficult if not impossible to generate, such as a timer event.
I don't know what the officially accepted solution is, but there is a tactic I have used successfully without serious side effects. I call it the TestingAccessor pattern. In Java, I make a public non-static inner class called TestingAccessor, and add a method on the object to be tested called getTestingAccessor that returns an instance. Now, if there is a private method or field on the object under test that I want to access, I leave it private on the object, but add a public method to the TestingAccessor that calls the private method, or a public accessor to get or set the private field. This allows me to write the object with proper access protection so client code sees proper encapsulation, but allows me to break encapsulation during testing.
(I used to keep a reference to the testing accessor object in the main object, so I could return the same object for each call to getTestingAccessor, but I have since decided that this is unnecessary. It requires extra storage in production mode, and I can always cache the returned TestingAccessor in the test code (and if I can't, who cares about creating a few extra objects during testing?))
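Here's a sketch of the pattern as described above, hung on an invented Throttle class (the class and its fields are hypothetical; only the TestingAccessor/getTestingAccessor shape comes from the article). A fresh accessor is returned on each call, matching the no-caching decision in the parenthetical above.

```java
// Hypothetical object under test with private state the tests need to reach.
class Throttle {
    private boolean tripped;                       // private state
    private void reset() { tripped = false; }      // private behavior

    public void recordFailure() { tripped = true; }
    public boolean allows() { return !tripped; }

    // Public non-static inner class that exposes the private bits.
    // Client code sees normal encapsulation; only tests should use this.
    public class TestingAccessor {
        public boolean isTripped() { return tripped; }
        public void reset() { Throttle.this.reset(); }
    }

    public TestingAccessor getTestingAccessor() {
        // A fresh accessor per call; the test can cache it if it cares.
        return new TestingAccessor();
    }
}
```

A test can now flip the object into the hard-to-reach state directly (or inspect it) through the accessor, without loosening the access modifiers on Throttle itself.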
There are some disadvantages to this pattern. The biggest is that I find my tests end up more closely tied to the implementation of the object under test than they probably should be. It makes future changes harder. Other small problems are that the pattern is a little tricky to use when there is inheritance involved, and that it makes the code a little more messy. So far, it's a price I've been willing to pay in order to test certain hairy internal bits.
This section is unfinished.
notes: Patterns I use: Mock Objects, TestingAccessor and ?"internal strategy"?, Stepper for multithreading, SimulatedDatabase for database testing.
Here are some small points, but they are worth passing on.
Unit tests should be blazingly fast and depend on no external resources. You want to be able to run them anywhere and at any time, because if you can then you more likely will.
I haven't had the opportunity to do much pair programming, but I found it much easier to get the tests written when I had a partner. When I felt like slacking off on the tests, I'd ask, "Should we really write a test for this?" Then my partner, who probably felt like slacking off only a little less than I did, would answer "Yes, let's test it," and it would quickly get written and we'd both feel better for having done it. Now I may be putting words into my partner's mouth, but I noticed that he too wrote fewer and less thorough tests on the same code when we were working independently.
Plus, partners are great at providing a sanity check and keeping you from going long distances down silly paths. :)
|Louis K. Thomas <louisth@hotmail.com>||Auth||2003-09-13|