In my experience, the difficulty with unit testing lies in selling the concept to an organization. Quite often, organizations push back on unit testing because development with unit testing takes longer and therefore delays delivery. In some cases that may be true, but there's more to the story. There's no way to know whether such delays are an actual consequence of unit testing or of other factors, such as poor design or incomplete requirements.
One thing's for sure: there's more, much more, to a software project than development. A non-exhaustive list includes analysis, design, deployment, support, and maintenance. Therefore, the suggestion that a software project fails because delivery may be delayed can be rejected on its face as a logical fallacy, for the simple reason that the conclusion doesn't follow from the stated premise.
In business, any effort must be justified by a positive cost-benefit. The question to confront is whether unit testing as a practice has a positive value proposition for your organization. Let's examine 10 reasons why the answer to that question is an unequivocal yes.
Before delving into the 10 reasons, let's differentiate unit testing from integration, performance, and user-acceptance testing. Unit testing is about isolating a single operation with a test fixture to determine whether that operation complies with a specification. That specification may be manifested in a requirements document, a user story, or some other type of artifact. If you're not acquainted with the technical details of unit testing, you might enjoy reading a whitepaper I wrote a few years ago that I've since published on LinkedIn. You can find the details for that whitepaper in the sidebar.
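To make that concrete, here's a minimal sketch of what "isolating a single operation with a test fixture" looks like. The tooling mentioned later in this article is .NET-oriented, but the idea is the same in any language; this sketch uses Java with JUnit 5, and the InvoiceCalculator class and its discount rule are hypothetical stand-ins for a specification, not something from this article.

```java
// A minimal sketch of a unit test, assuming a hypothetical InvoiceCalculator
// whose specification says a 10% discount applies to orders of $1,000 or more.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class InvoiceCalculatorTest {

    // The class under test is illustrative only.
    static class InvoiceCalculator {
        double total(double subtotal) {
            // Spec: orders of $1,000 or more receive a 10% discount.
            return subtotal >= 1000.0 ? subtotal * 0.90 : subtotal;
        }
    }

    @Test
    void ordersOfOneThousandOrMoreReceiveTenPercentDiscount() {
        InvoiceCalculator calculator = new InvoiceCalculator();
        assertEquals(900.0, calculator.total(1000.0), 0.001);
    }

    @Test
    void ordersUnderOneThousandAreNotDiscounted() {
        InvoiceCalculator calculator = new InvoiceCalculator();
        assertEquals(999.0, calculator.total(999.0), 0.001);
    }
}
```

Each test isolates one operation, sets up its own fixture, and checks the outcome against the stated rule; nothing else in the system is involved.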
1. Discipline and Rigor
Software developers like to think of themselves as engineers who apply logic and a scientific approach to problem solving. Nevertheless, there's a lot of bad software in production today. As a process-oriented individual, I rely on established patterns and practices to inform and guide my development.
Ask any IT director or CXO and they'll tell you, “Of course we want and require our development efforts to be disciplined.” That's your opening to ask how they actually do that. You'll often hear what amounts to a word-and-jargon salad about patterns and practices. Again, the question is: how do they actually implement those things? Unit testing is an exclusively development-oriented activity. It's the one thing you can do early in the development process that constitutes discipline and rigor in action. If a technology leader dismisses the virtues of unit testing yet insists that software developers employ discipline and rigor without it, I'd press them to reconcile those two viewpoints.
2. Does It Work?
Does your software work? This is a binary question that must be answered with yes or no. Whether expressly designed or implied, there's an expectation that software will work in a certain way. Many of these expectations get verified through other testing techniques such as integration, functional, and user-acceptance testing. Unit testing is the first bite at the apple to make sure that the software is capable of working.
Of course, just because you have unit tests and the software passes them, that doesn't mean ipso facto that your software works. Tests have to be valid to mean anything and to yield any value. Eventually, all software is tested: every time you interact with software, you're testing it. Disciplined and rigorous development demands that you know as early as possible whether your software works, and unit testing provides part of the ability to answer that question.
3. Reduce Cyclomatic Complexity
Cyclomatic complexity, as the name implies, is a measure of code complexity: how many paths can you take through a code block? The more conditional statements you have, the more complex the code block is. The more complex the code, the more difficult it is to achieve a high degree of unit-test coverage. Unless you go through the unit testing exercise, you may not become aware of such complexity.
There are ways to measure cyclomatic complexity through other means, such as the code-coverage and analysis tools that run as part of the build. But that's the build process, not the core development process. By the time you build, the code is already in place.
You must always be able to answer, with objective evidence, the question of whether your code works. Getting to yes or no is determined in part by how complex the code is. If unit tests are difficult to write because they require a lot of setup, the code that the tests cover is too complex, period. A good reference for how complexity and testability relate to one another is the book Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin. In Clean Code, the SOLID principles are discussed in detail. The one of interest here is the Single Responsibility Principle: a class and the code within it should have one and only one task. Without unit tests, all you have is an anecdotal opinion of whether your code is sufficiently simple. Real engineering and science demand objective, independent data to substantiate an opinion.
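As an illustration of how branching drives test effort, consider the hypothetical shipping-cost method below. This is a Java sketch with invented rules; it's not code from the article, only a picture of how conditionals multiply the paths a test suite has to cover.

```java
// Hypothetical example: three independent conditions yield up to eight paths
// through this one method, so full branch coverage needs several tests.
class ShippingCalculator {

    double shippingCost(double orderTotal, boolean isRushOrder, boolean isInternational) {
        double cost = 5.00;          // base rate
        if (orderTotal >= 100.0) {
            cost = 0.00;             // free shipping over $100
        }
        if (isRushOrder) {
            cost += 15.00;           // rush surcharge
        }
        if (isInternational) {
            cost += 25.00;           // international surcharge
        }
        return cost;
    }
}
```

Each additional condition roughly doubles the number of paths. When the test list for a single method starts to balloon, that's your objective signal that the method is doing more than one thing.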
4. Your Software Is Used Before Delivery
Think of the last time you bought a car. You probably test-drove the car, right? What about the last time your organization decided to implement a CRM suite? You likely tested it before you decided to purchase, right? And what if, in your evaluation, the car or software didn't work? You might say to yourself, “Gee, didn't anyone test this or check it out for defects first?” It's a rhetorical question because more likely than not, you'll move on to another alternative.
With custom software, there's no alternative: you're writing it and delivering it to your customer. Does your code work? Unit testing is one means of exercising your code to make sure it operates in conformity with its specification. If you can never get to delivery because your unit tests are difficult to write, the problem isn't with the tests. This is where unit testing opponents think they have their best argument: they see this scenario as proof that unit testing is bad because it delays software delivery.
I don't think I have to spell out just how logically flawed the naysayers' argument is. The earlier you can exercise your software, the quicker you can achieve failure. And in achieving failure, you can remediate the issues and achieve superior software in the process. The question is: how do you know whether your software fails? Unit testing is one good way.
5. Documentation
People want documentation on how software works, yet nobody really likes writing documentation. Unit tests are a form of documentation because they express how the software is supposed to work in a given context. I'm not suggesting that unit tests are what you hand your end users. But for new developers on the team, there's no better way to grok the software, specifically how it's built in terms of form, patterns, and practices. Organizations often complain about the cost of bringing a new developer online; unit tests are a great way to help reduce that cost.
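One concrete way tests serve as documentation is through descriptive test names. The sketch below (Java/JUnit 5, with hypothetical names and elided test bodies) reads like a short specification of a discount policy; nothing here comes from the article, it only illustrates the idea.

```java
import org.junit.jupiter.api.Test;

// Hypothetical test class: the names describe the expected behavior,
// so the test list doubles as living documentation of the discount rules.
class DiscountPolicyTest {

    @Test
    void loyaltyMembersReceiveFivePercentDiscount() { /* ... */ }

    @Test
    void discountIsNotAppliedToOrdersUnderTheMinimum() { /* ... */ }

    @Test
    void discountsDoNotStackBeyondTwentyPercent() { /* ... */ }
}
```

The test names alone tell a new developer what the policy is supposed to do, and unlike a wiki page, this documentation breaks the build when it goes stale.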
6. Measure the Effort Needed to Modify an Existing Feature
Software requirements change over time. We've all been in the position of needing to implement changes to existing features, and we've all been tasked with estimating how much effort a change requires. In the absence of unit tests, we guess. Such guesses are based on intuition and experience, which isn't to say they have no value; but without unit tests, only your most experienced developers can be relied upon to make them.
What if, on the other hand, you had good unit test coverage? You could spike the change and run the tests. If there are massive test failures as a result of the change, you have some decisions to make. The first is whether you implemented the change correctly. Second, assuming you implemented the change in the only way you could, you now have to confront the cost-benefit of the requested feature modification. Part of that effort might require refactoring to make your code more amenable to the requested change. Without unit tests, the SWAG taken to determine the effort likely wouldn't account for this additional work. The key takeaway is that unit tests provide an opportunity to get an objective measure of at least part of the cost of a new feature.
7. Enforce Inversion of Control/Dependency Injection Patterns
Let's assume that you have a feature that handles credit card authorizations. The feature calls out to an external service and in response, the requested charge is approved or denied. A third option is no response at all (i.e., a timeout). The application must react differently based on the response or lack of response. Let's further assume that the code is closed, in that it takes no parameters. Somewhere in the system, there's the credit card information as well as the purchase amount to make the approval request. Such code is incapable of being unit tested because the code does multiple things. In other words, it doesn't conform to the Single Responsibility Principle.
Before you can have unit tests, your code must be unit-testable.
Going back to the SOLID principles, the D in SOLID stands for Dependency Inversion. By injecting dependencies, you can mock contexts and behaviors so that you can simulate reality and test how the software reacts. If you can't write a unit test because there's no way to inject dependencies, you're signing up for a very expensive proposition. Your software will cost more to develop and support, and when the time comes for new features, good luck with that implementation. The additional costs you'll incur, plus the opportunity costs of not being able to implement new features, are what's called technical debt.
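Here's a sketch of what the credit card scenario looks like with constructor injection and a mocked gateway. The article doesn't prescribe a stack; this example uses Java with JUnit 5 and Mockito, and the PaymentGateway, AuthorizationService, and AuthorizationResult names are hypothetical.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import java.util.concurrent.TimeoutException;

class AuthorizationServiceTest {

    // Hypothetical types used for illustration only.
    enum AuthorizationResult { APPROVED, DECLINED, RETRY_LATER }

    interface PaymentGateway {
        AuthorizationResult authorize(String cardNumber, double amount) throws TimeoutException;
    }

    // The external gateway is injected, so the service does exactly one thing:
    // translate the gateway's answer (or lack of one) into an application decision.
    static class AuthorizationService {
        private final PaymentGateway gateway;

        AuthorizationService(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        AuthorizationResult authorize(String cardNumber, double amount) {
            try {
                return gateway.authorize(cardNumber, amount);
            } catch (TimeoutException e) {
                return AuthorizationResult.RETRY_LATER;   // no response at all
            }
        }
    }

    @Test
    void approvedChargeIsReportedAsApproved() throws Exception {
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.authorize(anyString(), anyDouble()))
            .thenReturn(AuthorizationResult.APPROVED);

        AuthorizationService service = new AuthorizationService(gateway);
        assertEquals(AuthorizationResult.APPROVED, service.authorize("4111111111111111", 25.00));
    }

    @Test
    void timeoutIsReportedAsRetryLater() throws Exception {
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.authorize(anyString(), anyDouble()))
            .thenThrow(new TimeoutException());

        AuthorizationService service = new AuthorizationService(gateway);
        assertEquals(AuthorizationResult.RETRY_LATER, service.authorize("4111111111111111", 25.00));
    }
}
```

The injected interface is the seam that makes all three outcomes, approved, declined, and timeout, testable in milliseconds without ever touching a real payment processor.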
8. Code Coverage
How do you know whether a line of code will ever be executed? If you have valid unit tests, you can quickly determine whether code is actually run. In my practice, I use JetBrains tools like R# and dotCover to run my unit tests and provide metrics on code coverage. At a glance, in the development context, I can quickly determine whether code is ever hit. If it's not, I have questions to ask. One question is whether I have sufficient test coverage. If I've accounted for all the scenarios and the code is still never hit, the code can be eliminated. If not, I have at least one more test to write. If the additional tests require a lot of work to set up, that's an indication of high cyclomatic complexity.
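A short, hypothetical illustration of the "is this line ever hit?" question: if no test (and no caller) ever exercises the guard branch in the Java sketch below, a coverage run will flag it as never executed, and you're left with exactly the choice described above, write the missing test or delete the dead branch. The names are invented for illustration.

```java
// Hypothetical example: if nothing ever passes a null customer ID,
// a coverage tool will show the guard branch as never executed.
class LoyaltyService {

    int pointsFor(String customerId, double purchaseAmount) {
        if (customerId == null) {
            return 0;                 // never hit? dead code or a missing test
        }
        return (int) Math.floor(purchaseAmount);
    }
}
```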
By now, it should be apparent that all of these factors relate to one another. Taken together, you can safely assume that unit testing leads to better software. That, however, is a 100K-foot statement. Without sufficient details to back that claim up, it's merely conjecture and allows the unit testing naysayers to prevail.
9. Performance
Unit test fixtures can be used by performance tools to measure the success of an operation. For example, there may be a need to operate on a hashed list. In the real world, the source of such data exists in some external data store, but for purposes of this discussion, let's presume that the code is the best place to handle a certain operation. Over time, it's been determined that the list can grow, and you don't know the rate of growth. With unit tests, you can create scenarios that range from what you do know, to what may be probable, to what may be improbable, and finally, to the absurd. With unit tests, you can create a 100K-item hash and see what happens. Unit tests provide the ability to gauge performance. Wouldn't it be nice to know before your software goes into production where its breaking points are? If its breaking point is 100K items and the most you'll ever have is 10K items, call it a day. You now have objective evidence to make the call to end an effort that solves a non-existent problem.
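As a sketch of that idea (Java/JUnit 5; the lookup operation and the one-second budget are arbitrary assumptions, not figures from the article), a unit test can build a 100,000-entry map and assert that the operation stays within a time budget:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

class HashedListPerformanceTest {

    @Test
    void scanningOneHundredThousandEntriesStaysWithinBudget() {
        // Build the "absurd" case: 100,000 entries.
        Map<String, Double> prices = new HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            prices.put("sku-" + i, i * 0.01);
        }

        // The one-second budget is an illustrative threshold.
        assertTimeout(Duration.ofSeconds(1), () -> {
            double total = 0.0;
            for (double price : prices.values()) {
                total += price;
            }
            return total;
        });
    }
}
```

A test like this is a coarse gauge, not a precise benchmark, but it's often enough to show whether the breaking point is anywhere near the data volumes you actually expect.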
10. Enable Continuous Integration (CI)
Imagine that you're on a team with 10, 15, or more developers. Each developer is working on features, some of which are cross-cutting concerns. Now imagine that one developer makes a breaking change, but you don't know it. You have a CI Server that handles the merging and compilation for the team. There are no merge conflicts and the project compiles. Therefore, no problem, at least until you run the application and it blows up!
The fact is that it's exceedingly rare for compile-time errors to get through; those errors you're forced to deal with. Runtime errors, on the other hand, only surface when the software runs. If the only time your software runs is when the application is run as a whole, either in production or in user-acceptance testing, you're setting yourself up for a world of hurt. CI servers are a wonderful thing because not only can they manage your pull requests and merge and compile your code, they can also run your tests: unit, integration, and so on.
Without unit tests, a CI environment is nearly worthless. To the extent that CI automates the merge and compilation process, there's some value, but the question remains whether your application works as specified. Without unit tests, there's no way to answer that question. Ask yourself if you've ever encountered somebody who extolls the virtues of CI but also questions the value of unit tests. If you have, you can safely put that person on the uninformed side of the ledger.
Conclusion
The most hated conclusion anyone can offer is “it depends,” because it immediately raises the question, “depends on what?” The question on the table is whether unit testing is an essential part of software development. If cost, quality, and timeliness are of no concern, the answer is no. Otherwise, the answer is yes. I don't know of any rational organization that doesn't, at the very least, have a stated preference for lower costs, higher quality, and faster delivery. In my opinion, if unit testing isn't part of your development process, you're doing it wrong, period. If things like cost and quality aren't an issue, then your development efforts are a hobby, not a business.
If you got this far in the article, it's probably because unit testing matters to you and you're trying to sell it in your organization. I'd suggest that as a first step, you do it for yourself and let the benefits speak for themselves. If you're trying to sell your organization on the propriety of unit testing, this article provides a framework for your argument.
The naysayers don't usually prevail because they have good arguments. Often, they don't have arguments at all; they just have conclusions. But here's the hard truth: they don't need arguments, because they aren't the ones advocating the adoption of something. In other words, they're not the moving party, and the moving party bears the burden of proof, period. The reason unit testing naysayers prevail is that the developer-advocate fails to carry that burden and make the case.
At the end of the day, you have to make it work. Unit testing has worked for me, and it has worked in every successful software project I've evaluated. Can your software project be successful without unit testing? Logic demands that the answer be yes. However, that success is less likely, for the 10 reasons I've set forth here.