Because Unit Testing is the plain-Jane progenitor of Test Driven Development, it's kind of unfair that it doesn't have an acronym of its own. After all, it's hard to get programmer types to pay attention if they don't have some obscure jargon to bandy about. UT is too awkward, besides being a state abbreviation in the U.S., so for this post (and, if it catches on, future posts as well) I'll borrow from the telco folks and call unit testing Plain Old Unit Testing (POUT).
The Best of All Possible Worlds
Part of my problem with TDD has been that it claims to provide complete testing. You see, even if this is true, it runs up against my gut feeling that worse is better when it comes to unit testing. Or, to throw in yet another development aphorism, it is my suspicion that unit testing lies squarely in the realm of the 80/20 rule—80% of the value of unit testing comes from 20% of your unit tests.
While it is effectively impossible to know in advance exactly which 20% of your unit tests will matter down the line, that doesn't mean we're throwing darts at a board. Experienced developers can make pretty good guesses about where the problem areas are likely to be. And they're pretty good at writing unit tests that cover those problem areas.
Which is my way of saying (or restating, really) that the number of tests is no better a measure of value in programming than the number of lines of code.
Crossing the Beams
I think some of the confusion with TDD discussions is that TDD is an intensified version of POUT. Both POUT and TDD use unit tests. Both do so as part of a process. Both are concerned with future maintenance and functionality. The primary substantive difference is that TDD writes the unit tests before developing anything else.
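To make that one difference concrete, here's a minimal sketch in Python's `unittest`. The `slugify` function and its tests are hypothetical, invented for illustration; the point is that the very same test class works in either style. Under TDD you write `TestSlugify` first and watch it fail until `slugify` exists; under POUT you write `slugify` first and add the tests afterward.

```python
import unittest

def slugify(title):
    """Turn a post title into a URL slug: lowercase, spaces to hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Identical tests either way -- only the order of writing differs.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Time Out For POUT"), "time-out-for-pout")

    def test_collapses_extra_spaces(self):
        # split() with no argument collapses runs of whitespace
        self.assertEqual(slugify("Plain  Old   Unit Testing"),
                         "plain-old-unit-testing")
```

Run it with `python -m unittest`; nothing in the test file itself tells you whether it was written before or after the implementation.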
Unfortunately, the popularity of TDD followed the widespread use of POUT very closely. This hurts because we never had a chance to get comfortable with POUT on its own before TDD appeared on the scene. Indeed, many people went from no POUT (or just beginning to learn) straight into TDD.
What that means in practical terms is that we have a tough time separating the value of POUT from the value of TDD. Few people have done both thoroughly enough to judge them on this single point of differentiation. The inability to distinguish or track the value of each separately lies at the heart of much of the cross-talk surrounding TDD and TDD evangelism. Indeed, many of the testimonials I've seen (including many comments on a prior post) seem to assume that without TDD, no testing is done at all.
Those of us who are happy with POUT (and looking at the extra effort needed for TDD with some distaste) are left wondering what TDD offers us that we don't already have. We don't want some hogwash about forcing us to do our job. We see the value of testing and have learned to take time out for POUT. (Oh, there's a slogan you can paste up on your very own office wall: "Time out for POUT!" Remember, you read it here first.) Frankly, it seems to me that we're getting the goodies the TDD folk go on about just fine, and without having to retrain how we develop.
A Call for Differentiation
So here's my request for those of you who are using and enjoying TDD who want to invite us all to partake of its superior benefits: please couch your arguments in the assumption that I am already POUTing, that my POUTing already delivers substantial benefit, and that adopting TDD is a non-trivial training and practice burden. Thank you.