AJ said he had been taught that "legacy code" is defined as code that has no tests (after a quick Google, I assume this comes from Michael Feathers' book Working Effectively with Legacy Code).
But, I pointed out, if we accept this definition then all test code is itself legacy code. Who has tests for their tests - Quis custodiet ipsos custodes?
So, what was the best way to avoid the pain often associated with "legacy" code? After all, some of our tests are now over three years old. We decided:
- Make methods so short that unit testing becomes almost trivial.
- If the method is truly trivial, there is little point in testing it. (We don't like unit testing everything since that sounds like we're just ticking check boxes for the Time and Motion man).
- Do as little mocking as possible, since writing expectations can lead to hard-to-maintain code. Instead, we prefer to override methods in the (production) class that would call other classes or make network calls with methods in a (test) subclass (see the sketch after this list).
- Continue using EasyMock on the projects that already use it, but use Mockito for new projects since test code is much more succinct with Mockito (see the comparison after this list). One team member was alarmed that we would then have two mocking frameworks, but was won over by the rest of the team, who had used Mockito and said it was trivial to learn.
- If we're just quickly trying to get the test to pass rather than thinking about why it's failing, the test serves no purpose.
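As a sketch of the subclass-and-override approach from the list above (JUnit 4 assumed; the PriceService class and its fetchQuote method are hypothetical names for illustration): the production class isolates the network call in a protected method, and the test replaces it in an anonymous subclass, so no mocking framework or expectations are needed.

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Production class (shown package-private here so the sketch fits in one
// file): the only method that touches the network is protected, so a test
// subclass can substitute it.
class PriceService {
    public boolean isAffordable(String symbol, int budget) {
        return fetchQuote(symbol) <= budget;
    }

    protected int fetchQuote(String symbol) {
        // Real HTTP call elided from the sketch.
        throw new UnsupportedOperationException("network call");
    }
}

public class PriceServiceTest {

    @Test
    public void affordableWhenQuoteIsWithinBudget() {
        // Anonymous test subclass overrides the network call with a
        // canned value - no mock objects, no expectations to maintain.
        PriceService service = new PriceService() {
            @Override
            protected int fetchQuote(String symbol) {
                return 10;
            }
        };
        assertTrue(service.isAffordable("ACME", 15));
    }
}
```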
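And as a rough illustration of why we found Mockito more succinct than EasyMock (the List example is purely illustrative): stubbing is done directly on the mock, there is no record/replay cycle, and verification is optional and happens after the fact.

```java
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class MockitoSketchTest {

    @Test
    public void stubbingWithoutRecordAndReplay() {
        // Create the mock and stub a call - no replay() step as in EasyMock.
        @SuppressWarnings("unchecked")
        List<String> list = mock(List.class);
        when(list.get(0)).thenReturn("first");

        assertEquals("first", list.get(0));

        // Verification is optional and done after exercising the code.
        verify(list).get(0);
    }
}
```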
Conducted by the end user rather than the system developer, an acceptance test can range from an informal "test drive" to a planned and systematically executed series of tests.
(Software Engineering: A Practitioner's Approach, European Adaptation, Fourth Edition, p522 - Roger S. Pressman)
More than once, I've heard people use the term "acceptance tests" and assume that their audience knows they mean automated regression acceptance tests, as encouraged by frameworks such as FitNesse. Pressman's definition is more general. So, I started Googling for formal definitions of the terms the team was discussing.
Test Driven Development
The notable characteristics of TDD are:
- automated testing [1]
- regression testing
- black- and white-box testing [2]
- unit testing [1]
- test-first programming [1] ("write a little test that doesn't work, and perhaps doesn't even compile to begin with").
[2] http://www.threeriversinstitute.org/Testing%20Dichotomies%20and%20TDD.htm
The ubiquitous JUnit is an example of a TDD framework.
Team observations: test-first programming is nice but should not be dogmatically enforced; just because you're using JUnit doesn't mean you're unit testing.
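To make the test-first point concrete, a minimal sketch (JUnit 4 assumed; Calculator and add are hypothetical names): the test is written first and does not even compile until the production class exists, which is then written just to make the test pass.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Written first: this fails to compile until Calculator exists.
public class CalculatorTest {

    @Test
    public void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}

// The minimal production code added afterwards to make the test pass.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}
```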
Behaviour Driven Development
Behaviour Driven Development (to give it its British spelling) is a term coined by fellow Brit Dan North. It is a methodology that brings together:
- automated testing [3]
- regression testing [3]
- black-box testing
- acceptance testing [3]
It distinguishes itself from other methodologies in that "acceptance criteria should be executable". To this end, it uses a domain-specific language that reads like natural language, which makes the tests easy for the business to read.
JBehave is an example of a framework that facilitates this.
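A rough sketch of what executable acceptance criteria look like with JBehave (the account scenario and class names are hypothetical, and the annotation style shown is JBehave 3's, so details may vary by version): the scenario is plain text the business can read, and a Java steps class binds each line to code.

```java
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;
import static org.junit.Assert.assertEquals;

// The scenario itself lives in a plain-text story file, for example:
//
//   Given an account with a balance of 100
//   When 30 is withdrawn
//   Then the balance is 70
//
// JBehave matches each line of the story to one of the methods below.
public class AccountSteps {

    private int balance;

    @Given("an account with a balance of $amount")
    public void anAccountWithABalanceOf(int amount) {
        balance = amount;
    }

    @When("$amount is withdrawn")
    public void isWithdrawn(int amount) {
        balance -= amount;
    }

    @Then("the balance is $expected")
    public void theBalanceIs(int expected) {
        assertEquals(expected, balance);
    }
}
```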
Team observations: these tests take a long time to run, which can hamper productivity; testers and business people are supposed to write the tests, but that very rarely happens - in practice the developer writes them.
This is a very incomplete list of terms. Ideally, I want to launch Gimp and put together a matrix of which methodologies include which elements. Although they often come with some very nice frameworks, new methodologies often claim to be paradigm shifts (Dan North calls BDD "a second generation ... methodology" [3]). Really, they are emphasising ideas that have long been established.