Thursday 30 January 2014

What drives the tests in TDD?

tl;dr: analysis is an under-emphasised part of TDD. I'm trying to make a case for developers to pay more attention to this part of the process.

Red-Green-Refactor

The introductory mantra in test-driven development (TDD) is red-green-refactor. That is, you write a test that fails (red), make the test pass (green) and then tidy up the code (refactor). Following this pushes you towards writing small pieces of behaviour that incrementally build up to a complete system. Another great advantage of the process is that each test gives you a small problem to solve, and that drives the code you write, hence the name. This sounds great, but putting it into practice was very difficult. Initially I'd get stuck trying to think of a test out of thin air, without something to drive it. When I got tired of being stuck and felt that I should start writing something, I would think of the tests in terms of a method that a class would have, and then work my way through the possible inputs, starting from the most basic. This would get me started, but the process wasn't as smooth as I'd hoped and I usually got stuck after the second or third test.
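
For reference, the basic cycle looks something like this minimal sketch in Python, using a hypothetical Stack class and pytest-style tests (the details are illustrative only):

    # Step 1 (red): write a small failing test for one piece of behaviour.
    def test_a_new_stack_is_empty():
        assert Stack().is_empty()

    # Step 2 (green): write just enough code to make the test pass.
    class Stack:
        def __init__(self):
            self._items = []

        def is_empty(self):
            return len(self._items) == 0

    # Step 3 (refactor): tidy the code while the tests stay green,
    # then pick the next small piece of behaviour and repeat.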

Changing the metaphor

Trying to figure out what to test at the level of the test was leading me to the same problems I had when using the code-first technique. I was missing something to drive the test. Then I read Dan North's post on behaviour-driven development (BDD) and things started to make more sense. He didn't come up with anything new; he just switched the metaphor from test to behaviour. This allowed me to re-frame the problem in terms of something more tangible; I wasn't stuck thinking about tests, I was thinking about the behaviour that the system should have. That still didn't quite do it for me though, because that behaviour has to come from somewhere.
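
A small illustration of the shift, using a hypothetical Account class purely to show the naming: thinking in terms of a method and its inputs produces one kind of test name, while thinking in terms of behaviour produces a statement about the system.

    class Account:
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                return False
            self.balance -= amount
            return True

    # Named after the method and its input:
    def test_withdraw_amount_greater_than_balance_returns_false():
        assert Account(balance=10).withdraw(20) is False

    # The same check named after the behaviour it describes:
    def test_a_withdrawal_that_would_overdraw_the_account_is_refused():
        assert Account(balance=10).withdraw(20) is False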

Analysis is important

The driving force for this behaviour is whatever real-world need the software should fulfil. The behaviour still doesn't jump out of thin air though; it requires analysis. This does get pointed out in the BDD post, but its importance gets overshadowed by the description of the framework. For me the real key is that you perform some analysis and come up with a statement that specifies how the system should work.
The statement plays a very important role because it's the bridge between the real world and the code. The statement forms this bridge by becoming the test name. That's what a test name really is: a specification for a piece of the system, and a series of well-written tests will read like a specification document. This is true whether you are writing acceptance tests or unit tests.
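
As an illustration, here are unit tests for a hypothetical ShoppingBasket; listed by a test runner, the names read like a small specification:

    class ShoppingBasket:
        def __init__(self):
            self._prices = []

        def add(self, price):
            self._prices.append(price)

        def total(self):
            return sum(self._prices)

    def test_a_new_basket_has_a_total_of_zero():
        assert ShoppingBasket().total() == 0

    def test_adding_an_item_increases_the_total_by_its_price():
        basket = ShoppingBasket()
        basket.add(5)
        assert basket.total() == 5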

Making statements

Each statement should be some true fact about the system. This part sounds easy but becomes difficult when it's tied to the code in the form of a test name. There are recommended structures for test names, such as Given-When-Then or method-input-result. These can help you out when you are stuck, and in some cases still read OK, but in reality you're writing a specification, and your test names help you do this when they are written in natural language.
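
For example, here is the same check under three naming styles, using a made-up validate_password function; to my eye the last one reads most like a specification:

    def validate_password(password):
        return len(password) >= 8

    # Given-When-Then structure:
    def test_given_a_password_when_it_has_fewer_than_8_characters_then_it_is_rejected():
        assert validate_password("short") is False

    # Method-input-result structure:
    def test_validate_password_short_input_returns_false():
        assert validate_password("short") is False

    # Natural-language statement:
    def test_passwords_shorter_than_eight_characters_are_rejected():
        assert validate_password("short") is False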

The resulting process looks like this:
Analyse the real-world problem -> make a statement about the behaviour -> write a test that proves the behaviour -> write an implementation that satisfies the behaviour
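
A sketch of one pass through the chain, with made-up details (an Invoice class and a VAT rule) standing in for real analysis:

    # Analysis: invoices sent to customers must include VAT.
    # Statement: "an invoice total includes VAT at 20 percent".

    # Test that proves the behaviour:
    def test_an_invoice_total_includes_vat_at_20_percent():
        assert Invoice(net=100).total() == 120

    # Implementation that satisfies the behaviour:
    class Invoice:
        def __init__(self, net):
            self.net = net

        def total(self):
            # net amount plus 20% VAT (whole units assumed for simplicity)
            return self.net + self.net * 20 // 100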

Each step gives feedback on the quality of the previous steps. If you find it difficult to make a statement about the behaviour, then you probably need more analysis. If you find it difficult to write a test for the statement, then maybe the statement is too general, or saying too much. If you find it difficult to write the implementation, then maybe there is another statement that you should have made first. There's lots more feedback that can occur between the steps but the important thing is that it is there and will push you back to the analysis phase if necessary.

Disclaimer

TDD is much harder than most books or posts would have you believe. It's hard because it is an analysis and design activity. Rather than giving an easy path to great software, it highlights the complexity by asking hard questions. This is a good thing because you're going to have to tackle this complexity anyway.
I find it difficult to stick to the process all of the time and sometimes I have to go with what I have, whether that's badly named tests or no tests at all. It is worth practising though, because when it works the benefits are huge.

