It seems that a common aim when first starting out in unit testing is to obtain 100% code coverage with our unit tests. This single metric becomes the defining goal, and once it is obtained attention moves on to the next piece of functionality. After all, if you have 100% code coverage you can’t get better than that, can you?
It’s probably fair to say that it’s taken me several years and a few failed attempts at test-driven development (TDD) to finally understand why production code can still fail in code that is “100%” covered by tests! At its most fundamental level, this insight comes from realising that “100% code coverage” is not the aim of well-tested code, but a by-product of it!
Consider a basic object “ExamResult” that is constructed with a single percentage value. The object has a read-only property returning the percentage and a read-only bool value indicating a pass/fail status. The code for this basic object is shown below:
namespace CodeCoverageExample
{
    using System;

    public class ExamResult
    {
        private const decimal PASSMARK = 75;

        public ExamResult(decimal score)
        {
            if (score < 0 || score > 100)
            {
                throw new ArgumentOutOfRangeException("score", score, "'Score' must be between 0 and 100 (inclusive)");
            }

            this.Score = score;
            this.Passed = DetermineIfPassed(score);
        }

        public decimal Score { get; private set; }

        public bool Passed { get; private set; }

        private static bool DetermineIfPassed(decimal score)
        {
            return (score >= PASSMARK);
        }
    }
}
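Purely for illustration, some hypothetical calling code (not part of the example project) might use the object like this:

    // Hypothetical usage of ExamResult.
    var result = new ExamResult(82);
    Console.WriteLine(result.Score);   // 82
    Console.WriteLine(result.Passed);  // True (82 is at or above the pass mark of 75)

    // Scores outside 0-100 are rejected at construction time:
    // var invalid = new ExamResult(101);  // throws ArgumentOutOfRangeException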
For the code above, the following tests would obtain the magic “100% code coverage” figure:
namespace CodeCoverageExample
{
    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using NUnit.Framework;
    using Assert = NUnit.Framework.Assert;

    [TestClass]
    public class PoorCodeCoverage
    {
        [TestMethod]
        public void ExamResult_BadArgumentException()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(-1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_DeterminePassed()
        {
            Assert.IsTrue(new ExamResult(79).Passed);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed()
        {
            Assert.IsFalse(new ExamResult(0).Passed);
        }
    }
}
Note: The testing examples in these blog posts use both MSTest and NUnit. Decorating the test class with the MSTest attributes gives you automated test running in TFS continuous integration “out of the box”, while aliasing “Assert” to the NUnit version gives access to the NUnit assertion syntax (which I was originally more familiar with and still prefer).
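For reference, the combination of using directives relied on by the test classes in these posts (as seen in the listings above and below) is:

    // MSTest attributes ([TestClass]/[TestMethod]) make the tests visible to the TFS build's test runner.
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    // NUnit supplies the constraint-based Assert.That(...) syntax and Assert.Throws<T>(...).
    using NUnit.Framework;
    // Alias "Assert" so that unqualified Assert calls resolve to the NUnit implementation rather than MSTest's.
    using Assert = NUnit.Framework.Assert;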
Running any code coverage tool will clearly show that all the paths are being tested, but are you really protected against modifications introducing unintended logic changes? This can be checked by running through a few potential situations: changing the pass mark to 80% would cause the above unit tests to fail, but reducing it to 1% wouldn’t. If you consider that the main purpose of the unit tests is to verify that the exam result is correctly determined (and the potential consequences in the “real world” if it is not), then this sort of check is clearly not fit for purpose.

In this scenario it is critical that the edge cases are tested – the points at which a result moves from being a failure to a pass, and similarly from being a valid result to an invalid one (you can’t score less than 0% or more than 100%). Similarly, shortcuts should not be taken when asserting the state of the object under test in each individual test – don’t assume that because the “Score” property was correctly set in one test it will be correct in another (and therefore leave it untested).

The following improved unit tests verify the desired behaviour of the object in full and, in the process of this verification, cover 100% of the code. It is this change in priority that is critical when designing and developing your unit tests. Only when all the logic paths through your code are tested are your unit tests complete – and at that point you should, by default, have 100% code coverage.
namespace CodeCoverageExample
{
    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using NUnit.Framework;
    using Assert = NUnit.Framework.Assert;

    [TestClass]
    public class GoodCodeCoverage
    {
        private const decimal LOWEST_VALID_SCORE = 0;
        private const decimal HIGHEST_VALID_SCORE = 100;

        // Deliberately declared here rather than read from ExamResult: the test states
        // the expected pass/fail point explicitly, so it must match the value used by
        // the ExamResult class itself.
        private const decimal PASSMARK = 75;

        [TestMethod]
        public void ExamResult_BadArgumentException_UpperLimit()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(HIGHEST_VALID_SCORE + 1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_BadArgumentException_LowerLimit()
        {
            var ex = Assert.Throws<ArgumentOutOfRangeException>(() => new ExamResult(LOWEST_VALID_SCORE - 1));
            Assert.That(ex.ParamName, Is.EqualTo("score"));
        }

        [TestMethod]
        public void ExamResult_DeterminePassed_HigherLimit()
        {
            AssertCall(HIGHEST_VALID_SCORE, true);
        }

        [TestMethod]
        public void ExamResult_DeterminePassed_LowerLimit()
        {
            AssertCall(PASSMARK, true);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed_HigherLimit()
        {
            AssertCall(PASSMARK - 1, false);
        }

        [TestMethod]
        public void ExamResult_DetermineFailed_LowerLimit()
        {
            AssertCall(LOWEST_VALID_SCORE, false);
        }

        private void AssertCall(decimal score, bool result)
        {
            var examResult = new ExamResult(score);
            Assert.That(examResult.Score, Is.EqualTo(score));
            Assert.That(examResult.Passed, Is.EqualTo(result));
        }
    }
}
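As an aside, if the tests were run under the NUnit runner rather than through MSTest, the same boundary checks could be written more compactly with NUnit’s [TestCase] attribute. This is purely a sketch (the classic MSTest framework used above has no equivalent parameterised-test support, and the class and test names here are hypothetical):

    namespace CodeCoverageExample
    {
        using NUnit.Framework;

        [TestFixture]
        public class GoodCodeCoverageParameterised
        {
            // One case per boundary: lowest/highest failing score and lowest/highest passing score.
            // Attribute arguments cannot be decimal literals, so int values are used and
            // converted implicitly when constructing the ExamResult.
            [TestCase(0, false)]
            [TestCase(74, false)]
            [TestCase(75, true)]
            [TestCase(100, true)]
            public void ExamResult_DeterminesPassFailAtBoundaries(int score, bool expectedToPass)
            {
                var examResult = new ExamResult(score);

                Assert.That(examResult.Score, Is.EqualTo(score));
                Assert.That(examResult.Passed, Is.EqualTo(expectedToPass));
            }
        }
    }

The trade-off is that the named constants (LOWEST_VALID_SCORE, PASSMARK and so on) become literal values, so the intent of each case rests on the comment rather than on a named value.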
Additional Comment: Whilst working through these examples I considered exposing the “pass-mark” constant held in the “ExamResult” object so it could be used within our unit tests. In certain situations that could be acceptable (or even desirable). However, unless there is a specific requirement to share it, it is probably better to keep the two separate, as this forces the unit test to explicitly define the pass/fail point that it is testing.
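To make that trade-off concrete, the coupled alternative might look something like the hypothetical sketch below. It assumes PASSMARK has been made public on ExamResult, which is not how the class is written above:

    namespace CodeCoverageExample
    {
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using NUnit.Framework;
        using Assert = NUnit.Framework.Assert;

        [TestClass]
        public class CoupledToProductionConstant
        {
            // Hypothetical: assumes ExamResult.PASSMARK has been made public.
            // The test no longer states the expected pass mark itself, so if the
            // production value were accidentally changed (say from 75 to 1) the
            // assertions below would still pass and the regression would go unnoticed.
            private const decimal PASSMARK = ExamResult.PASSMARK;

            [TestMethod]
            public void ExamResult_DeterminePassed_LowerLimit()
            {
                Assert.That(new ExamResult(PASSMARK).Passed, Is.True);
            }

            [TestMethod]
            public void ExamResult_DetermineFailed_HigherLimit()
            {
                Assert.That(new ExamResult(PASSMARK - 1).Passed, Is.False);
            }
        }
    }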