Automated unit testing, one of the most important foundations of software quality, is still a struggle for many software development teams. Justifying the extra up-front time to the business is difficult, particularly when the team is under deadline or resource pressure. Many teams give up when confronted by huge amounts of untested, untestable legacy code. However, avoiding or delaying unit testing hurts everyone.
Automated unit testing is also widely misunderstood, which makes the case for it inconsistent or less convincing. For example, automated unit tests do not initially reduce the number or severity of code defects. Good developers should already test their code thoroughly by hand, stepping through it in a debugger where possible, and manually checking error conditions and corner cases.
Many concerns are also unfounded. For example, automated unit tests do not replace QA (testers), who check developers’ work and test at the functional level. Their different perspective can also help the team write better automated unit tests.
Many complain about brittle unit tests, only to find that the brittle “unit tests” are usually functional or integration tests, such as tests that call web services on external systems or access a shared database. Because these tests are not isolated, the unpredictable actions of others cause them to fail.
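One way to sketch the fix: isolate the external dependency behind an injected collaborator and substitute a test double, here using Python’s unittest.mock. The PriceConverter class and fetch_rate method are hypothetical names for illustration, not from any particular codebase:

```python
from unittest.mock import Mock

# Hypothetical class: converts an amount using a rate from an external service.
class PriceConverter:
    def __init__(self, rate_service):
        # The external dependency is injected, so a test can replace it.
        self.rate_service = rate_service

    def convert(self, amount, currency):
        return amount * self.rate_service.fetch_rate(currency)

# The test never touches the network: the service is a stub with a fixed rate,
# so the test is deterministic no matter what the real system is doing.
def test_convert_uses_fetched_rate():
    service = Mock()
    service.fetch_rate.return_value = 1.25
    converter = PriceConverter(service)
    assert converter.convert(100, "EUR") == 125.0
    service.fetch_rate.assert_called_once_with("EUR")
```

The real web service or shared database still deserves coverage, but in a separately scheduled integration suite, not in the unit tests that run on every build.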
Indeed, the biggest barrier to automated unit testing is software design. If a method or function cannot be unit tested, the design is flawed. For systems outside the software development team’s control, see Michael Feathers’s work on legacy code. Testable code tends to be better-designed code, too.
The main benefit of automated unit tests is that they capture the expected behavior of a single unit of code, such as a method or function. These tests can be repeated quickly and regularly with little manual effort, identifying when code changes, refactorings, or experiments break that expected behavior.
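For illustration, a minimal sketch of what “capturing expected behavior” looks like in practice; apply_discount is a hypothetical unit under test:

```python
# Hypothetical unit under test: applies a percentage discount, never below zero.
def apply_discount(price, percent):
    discounted = price * (1 - percent / 100)
    return max(discounted, 0.0)

# Each test pins down one expectation, including a corner case, so a later
# change or refactoring that breaks the behavior fails immediately.
def test_apply_discount_normal_case():
    assert apply_discount(200.0, 25) == 150.0

def test_apply_discount_never_negative():
    assert apply_discount(50.0, 150) == 0.0
```

The corner-case test is the valuable one: it records a decision (discounts never produce negative prices) that would otherwise live only in a developer’s memory.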
Software developers also forget important details as they move on to other features or projects. Capturing expectations as automated unit tests retains this experience and knowledge.
Nothing mentioned above is novel. However, even once a team agrees to add automated unit tests, questions remain. How much unit testing should developers add? How much extra time is needed? How do you explain this to non-technical stakeholders? The 20/70/0 rule answers these questions:
First, spend 20% of development time writing automated unit tests. A day’s worth of testing each week is a good compromise. This is part of the development task, not extra effort; otherwise, non-technical stakeholders will demand that it be skipped under pressure.
Second, aim for 70% code coverage. This excludes third-party or generated code, so make sure the code coverage tool can exclude it. Interestingly, technical people tend to think this target is high, especially if no automated unit tests exist yet, while less technical people ask why the remaining 30% cannot be covered.
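As one illustration, coverage.py (a common Python coverage tool) supports such exclusions through an omit setting in its configuration file; the paths below are hypothetical:

```
# .coveragerc — exclude third-party and generated code from the 70% target
[run]
omit =
    */third_party/*
    */generated/*
```

Most coverage tools in other ecosystems offer an equivalent exclusion mechanism; the point is that the 70% target should measure only the code the team actually writes.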
Third, ensure 0 failing tests. Running automated unit tests after an automated build is a critical part of continuous integration. Fix failing tests immediately.
The first rule, 20% of development time on tests, tells project managers and stakeholders how much extra time to add initially. It also allows project managers to compare the up-front cost with the time saved later (ideally more than 20%).
The second rule, 70% code coverage, tells developers what the team expects, particularly when code reviews highlight missing or poor unit tests. In an agile process, automated unit tests are part of “done” for development tasks.
Code coverage is an imperfect and heavily debated metric. Ideally, the team should target functional coverage; Behavior Driven Development (BDD) is one option. However, for a team without unit tests or a better metric, code coverage is unambiguous, automatable, and easily explained to less technical people.
The third rule, 0 failing tests, reinforces that quality is critical, again especially for less technical people.
Software developers often get caught up in technical debates, and unit tests and quality are no different. However, projects can rarely wait for perfect understanding. The 20/70/0 rule is unambiguous and understandable, even to less technical people. Attaining it, or more specifically the quality goal it represents, is still a challenge, but it is now a challenge of metrics instead of gut feel and hand waving.