Two of my favorite technical books are Andy Hunt and Dave Thomas's The Pragmatic Programmer and Bob Martin's Clean Code: A Handbook of Agile Software Craftsmanship. These books are born of what I call Actionable Theory. We all know that person, the “enterprise architect” type who graces us with their presence to offer up a cookbook commentary. I say “commentary” as opposed to “answer” because all too often the observation is devoid of actionable context. And all too often, the sentences begin with “you should...” Once they provide their “advice,” like a seagull, they fly away, only to return at some later time to repeat the cycle.
For the record, the books referenced above were published in 1999 and 2008, respectively. They've stuck around and are at least as relevant today as they were when first published. Recently, I've given much thought to theory and its role in software development. What we do is supposed to be utilitarian. Businesses use our work product. In general, people want to hear less about theory and more about “just getting the job done.”
Imagine building a house, brick by brick, without regard to theory. What would we end up with? It might all work out. That, however, is more about faith and hope. Faith and hope are not strategies. Consider the case of the first Tacoma Narrows Bridge, which opened in July 1940. It collapsed in November 1940. There were plans, and engineering principles were applied. But a lot of shortcuts were taken in building the bridge. Even during construction, when winds picked up, the workers could feel the movement. That is why the bridge was nicknamed “Galloping Gertie.” Once things got bad enough, there was no issue with problem recognition. There was an issue with problem resolution. Long story short, five days after a course of action was decided upon, the bridge collapsed. The real issue was not the bridge's sensitivity to aeroelastic flutter. The real issue was not adhering to first principles.
What principles do you apply to building software? Here are mine:
1. Theory Matters
Whether it's a bridge, a building, an airplane, or software, those who put practice before theory do so at their own peril. We have a very recent example with the Boeing 737 MAX 8. That's especially relevant because that tragedy represents a concrete example of the intersection between a physically engineered thing and software. There's much more to learn, but so far, it seems that not much was done correctly on that project. And like the Tacoma Narrows Bridge, it appears that there's an aggressive attempt at after-the-fact remedial steps. Can that work? Sure, anything is possible. Is it likely to work? Absent good tests, there is no quantifiable way to know whether something is more likely than not to work.
In software development, history has taught us that there are certain things which, if applied, tend to coincide with success. Can there still be failure? Of course, because other variables matter. For example, my code may be beautifully written and do exactly what it's supposed to do on my development rig. But if it's put on crap hardware in production, that won't matter much. Therefore, we test, whether it be unit-, load-, or performance-based.
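To make the unit-testing part concrete, here's a minimal sketch using Python's built-in unittest module. The apply_discount function and its business rule are hypothetical, invented purely for illustration; the point is that we check both friendly and adverse conditions rather than hoping for the best.

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: discount a price by a percentage."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    def test_happy_path(self):
        # Friendly conditions: the code does what the business asked for.
        self.assertEqual(apply_discount(100.00, 15), 85.00)

    def test_adverse_input(self):
        # Adverse conditions: bad input fails loudly, not cryptically.
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)


if __name__ == "__main__":
    unittest.main()
```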
Design patterns are another example of theory. Another good book to consider is Christopher Alexander's A Pattern Language: Towns, Buildings, Construction. If you've ever used a wiki, you can thank Ward Cunningham for that. Ward's inspiration for the wiki was born out of the Portland Pattern Repository, which was a practical result of a 1987 OOPSLA paper titled Using Pattern Languages for Object-Oriented Programs, and which drew heavily from the 1977 A Pattern Language book. Ward Cunningham collaborated with his wife Karen, a mathematician, and with Kent Beck. For the record, Ward and Kent are part of the crew that created the Agile Manifesto and are cited heavily in the Clean Code book.
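As a small, hedged illustration of what a design pattern looks like in code (not taken from any of the books above), here is a sketch of the well-known Observer pattern in Python. The OrderEvents class and its handlers are invented for this example; the pattern itself is the reusable theory.

```python
from typing import Callable


class OrderEvents:
    """Observer pattern: decouple the thing that changes from the things that react."""

    def __init__(self) -> None:
        self._subscribers: list[Callable[[str], None]] = []

    def subscribe(self, handler: Callable[[str], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, order_id: str) -> None:
        # The publisher knows nothing about billing, shipping, or email;
        # it only knows how to notify whoever asked to be notified.
        for handler in self._subscribers:
            handler(order_id)


# Usage: new reactions are added without touching the publisher's code.
events = OrderEvents()
events.subscribe(lambda order_id: print(f"billing: invoice {order_id}"))
events.subscribe(lambda order_id: print(f"shipping: pack {order_id}"))
events.publish("A-1001")
```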
Although we shouldn't sacrifice delivery for purity, at the same time we shouldn't throw things over the wall for the sake of speed, just to get something delivered. If you're thinking of the classic project management troika of Good, Fast, and Cheap (you can only choose two), you get the point. We don't build bridges to withstand 500-year floods, as that isn't practical. But we do build bridges to withstand floods. Software should be no different.
Theory matters. And while there are many reasons software can fail, one of the root causes will almost always be a failure to heed theoretical principles.
2. People, Process, and Tools. In That Order
Ultimately, people must build things. People are the ones who develop and govern processes and who develop and employ tools. Two of my favorite words to describe what's necessary for good process are Discipline and Rigor. What we do is difficult. There must be discipline to “stick to it.” At the same time, we must be flexible enough to change when needed. That's where rigor comes in: we should be as thorough as practicably possible, but no more so. I often go back to the Agile Manifesto and the associated principles, which are often misinterpreted as meaning “no documentation.” Nothing could be further from the truth. We simply value working software over comprehensive documentation. In other words, we strive to avoid the “bike-shedding” problem.
With people who can employ a good process, tools, whether they are developed or merely used, stand a chance. Without good people and process, we end up with the “leaky abstraction”: software that is supposed to buffer you from certain details but can't quite do so completely. The result is a tool that straddles the line, gets in the way, and isn't as useful. You know these tools. The same holds true for throwing tools at a problem. Without good people and process, (a) you don't know what the right tools are, and (b) even if you did, the chances that you can rigorously evaluate and use them are low.
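Here's a hedged sketch of what a leaky abstraction can look like in code. The KeyValueStore class is hypothetical and exists only to illustrate the point: it promises to hide where data lives, but callers still have to understand the file system to use it safely.

```python
class KeyValueStore:
    """Hypothetical abstraction meant to hide whether data lives in memory or on disk."""

    def __init__(self, path: str | None = None) -> None:
        self._cache: dict[str, str] = {}
        self._path = path  # None means purely in-memory

    def get(self, key: str) -> str:
        if key in self._cache:
            return self._cache[key]
        if self._path is None:
            raise KeyError(key)
        # The abstraction leaks here: OSError, encoding, and file layout are
        # file-system concerns the caller was supposed to be buffered from,
        # yet must now understand in order to handle failures correctly.
        with open(f"{self._path}/{key}.txt", encoding="utf-8") as fh:
            value = fh.read()
        self._cache[key] = value
        return value
```

Callers end up catching both KeyError and OSError and caring about how files are laid out on disk, which is exactly the straddle-the-line behavior described above.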
People and process can make up for poor tooling. Good tooling is wasted on poor people and process.
3. Do the Right Thing, in the Right Way, for the Right Reason
If you're going to cut corners, why are you doing it? Can you achieve the same end? If not, you may have to scale back so that you can do it the right way. This is just another flavor of the Good-Fast-Cheap troika, a different way of articulating the constraint that you can only pick two of the three.
4. Build Competent Software
This is a phrase I have coined: Competent Software. Competency is about having the things necessary to be successful. For example, to be a competent lawyer, among other things, you need to graduate from law school, qualify for, sit for, and pass the bar exam, fulfill your annual continuing education requirements, abide by the rules of professional responsibility, and not be suspended or disbarred from the practice of law. That is a nice checklist of items. Here is my checklist for Competent Software:
- Delivery: Delivery is a feature. Unless people can use and interact with your software, it's useless and of no value.
- Capable: Your software must be built for a specific purpose. It must be able to do what is required of it by the business. If it can't, it's useless and of no value.
- Reliable: Your software must work. It must be able to handle exceptions and, at the very least, degrade gracefully. If a dependency isn't available, it's okay that your software doesn't work; the question is, how does your software respond? Does it degrade gracefully, or does it just react with a cryptic error message? (See the sketch after this list.) If your software can't be depended upon to work, it's useless and of no value.
- Scalable: If your software needs to be available to thousands of users, then it must be designed, built, and deployed with that capability. If it isn't, although any single user may find value, to the business that sponsored the software, it's useless and of no value.
- Verifiable: It's not enough that your software works. You must be able to demonstrate that it works under friendly and adverse conditions. That's what testing is all about. We also verify through metrics and instrumentation. If you don't test completely and correctly, and don't log, instrument, and so on, you're throwing caution to the wind. Your software might work. Or you may have a Galloping Gertie on your hands. In many cases, SOX, SOC, FDA, and other regulations require a complete, working, and documented testing regime. You may have delivered capable, reliable, and scalable software. But if you can't verify it, it will be useless and of no value because it won't see the light of day.
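As promised in the Reliable item above, here is a minimal sketch of graceful degradation in Python, using only the standard library. The rate-service URL and the fallback values are hypothetical; the point is that a missing dependency produces a logged, understandable fallback rather than a cryptic error.

```python
import logging
from urllib.error import URLError
from urllib.request import urlopen

logger = logging.getLogger(__name__)

# Hypothetical stale-but-usable values to fall back on.
FALLBACK_RATES = {"USD": 1.0, "EUR": 0.9}


def get_exchange_rate(currency: str) -> float:
    """Fetch a rate from a (hypothetical) dependency, degrading gracefully."""
    try:
        # Hypothetical endpoint used only for illustration.
        with urlopen(f"https://rates.example.com/{currency}", timeout=2) as resp:
            return float(resp.read().decode("utf-8"))
    except (URLError, ValueError, TimeoutError) as exc:
        # The dependency is down or returned garbage. Log the real cause for
        # operators, then answer with a cached value instead of surfacing a
        # cryptic stack trace to the user.
        logger.warning("rate service unavailable (%s); using cached rate", exc)
        return FALLBACK_RATES.get(currency, 1.0)
```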