There may be no more intellectually lazy motivation than “conventional wisdom.” What may have been a sound choice once upon a time may not be so today. What changed? One key distinction is time, past vs. present, and everything that goes along with it, such as advances in technology. Sometimes the distinction is a new regulation. Or perhaps the distinction is a new threat vector. Regardless of degree, there is always a distinction. There's a familiar quote: “The only constant through time is change.” We must adapt to changing circumstances. The difficulty lies in how, and to what degree, to adapt. In the words that follow, I attempt to address the question of how to manage change.
Orthodoxy: A generally accepted theory, doctrine, or practice. What are the theories, doctrines, and practices you hold to be acceptable as models of good practice? There's no such thing as a general and universal “best practice.” For a discrete set of circumstances, based on a population of empirical data, we may be able to conclude that some approaches are better than others. And based on a set of criteria applied to that same population of data, we may in fact be able to conclude that one approach is best as compared to the others. But as a general rule, no one specific approach will ever be “best” for every circumstance. The reason is that there's no way to accumulate empirical evidence across every possible circumstance. It's the same reason no line of code can be proven to be absolutely bug-free: it's an unbounded problem, because there's no way to know every possible variable, and every possible value, in the contexts within which our code must execute.
Your orthodoxy is literally your “code.” It's your private law, your legislation. It may also be something adopted by your team; a good example is your team's definition of done (DoD). Each person's orthodoxy is based on experience, practice, and making (and encountering) a lot of mistakes along the way. It's the bedrock upon which everything sits. The degree to which we value trust, respect intellectual property rights, treat other people well, or write good code, and how we carry on with everything else we do professionally, is based on our own yardsticks, our own barometers. How we judge, assess, evaluate, and decide is all based on orthodoxy.
To illustrate, let's consider tools evaluation. When a shop evaluates tools, it seeks the best tool for its circumstances, within a budget. Not all circumstances are unique, and not all circumstances are common. But one set of circumstances will always be unique: the people who must use the tools to support a process, because each of us is unique. And collectively, that means our teams will be unique, for one reason or another. That's the challenge with any tool. On one hand, the tool must be generic enough to apply to a variety of circumstances. On the other hand, the tool can be so generic that the abstraction leaks inefficiencies for certain use cases. An orthodoxy I've adopted is one I've written about quite often: people, process, and tools - in that order. Tools rely upon the process that's carried out and managed by people. Too often, static tools are brought to a dynamic process for which the tool may or may not be best, as defined by the shop. The tool, in fact, may end up being the worst alternative. The tools evaluation process, like all things, is imperfect. It can't capture every use case, and there's typically a finite timebox. Nevertheless, the orthodoxy of some shops is a per se “buy-don't-make” mindset, because that's just the way it is.
There's a word in the UK, “bespoke,” which literally means to “speak for” something. In the US, the word is “custom.” Which fits better: a custom-tailored suit or one bought off the rack? Without question, the custom suit will fit better. But how much better will it fit, and does that justify the extra cost? On one hand, a shop may require “the best solution.” On the other hand, that same shop may require that no custom development effort be expended. The orthodoxy that I invoke in these scenarios is the famous project management triangle of Time, Cost, and Quality. You must pick exactly two. It may very well be that a generic offering is good enough. But what is “good enough?” That's up to you. That's your metric. That's your definition of done. That's your yardstick. They're your rubrics for decision making. They're all part of your orthodoxy.
Custom software is often eschewed, as a general rule. Custom software, on an objective basis and assuming that the right resources are brought to bear, is often the best all-up approach. But all too often, the process gets short-circuited. Does it require any sort of custom development? If yes, we'll pass. That's a categorical approach straight out of the Immanuel Kant playbook - and it may or may not end up yielding the best alternative, based on whatever the criteria may be. It may very well be that custom software, in whole or in part, isn't feasible, whether for lack of talent, money, or time, all of which directly relate to the classic project management constraints of time, cost, and quality. If your orthodoxy is based on a categorical mindset, you run the risk of sub-optimization by not sufficiently considering alternatives. If the constraints are biased, how could anybody ever conclude that the process could be billed as objective? A per se, categorical basis for decision-making, absent an objective rationale, is the antithesis of reason. Just because the goal is to attain a good result doesn't mean that one will be attained. When a bad result occurs, there's typically strong motivation to understand how and why that bad result happened. The orthodoxy of conventional wisdom that says that's “just the way it's done” will only explain the how, not the why.
The other side is the Jeremy Bentham philosophy, which is about maximum utility. Where Kant is about achieving a good result even if it involves some degree of pain for some or all, Bentham prioritizes reducing the pain as much as possible for all. The risk is the same as with the categorical approach: you get sub-optimization, because you may never be able to decide when everyone is satisfied enough. We've all been there. Software design by committee? At some point, decisions must be made. Dr. Michael Ryan of the World Health Organization (WHO) made an interesting statement in March 2021: “If you need to be right before you move, you will never win.” Dr. Ryan's press conference statement is a twist on the old adage that perfection is the enemy of the good. Rod Paddock and I have a similar saying: “Delivery is a feature.”
Which approach, categorical or utilitarian, is “best?” That's up to you, which means it's up to how you apply your orthodoxy, which is, itself, part of your orthodoxy. And that assumes there's a discernible, codified orthodoxy. The alternative is chaos. If you're searching for root causes and haven't found them yet, you may wish to look under that rock.
I just reviewed my orthodoxy regarding Test-Driven Development (TDD). TDD is an interesting one to review because, in some circles, it has become a belief system unto itself instead of just being a practice. I realized that I had made the mistake of treating the dogma as a core TDD attribute. And then I just sort of discarded the concept. It's often difficult to separate the message from the marketing, i.e., the wheat from the chaff. But once I did that, I was able to conclude that all my development will be TDD, for the basic reason that TDD is a pattern, not a practice. We know that patterns and practices are different things. Clearly, if they were the same thing, we'd only need one word. Part of my orthodoxy that I'm emphasizing is to rely on plain meaning - instead of what the marketing folks would like me to think. What practice is this pattern designed to support? That's the yardstick by which performance is measured in this context.
Let's break down TDD. What's a test? It may be a unit test and may also be an end-user's interaction with your software. Anything that exercises your code is testing your code. The goal should always be to write the best quality code we can write, under the circumstances. What's the most basic quality question we can pose? Does something work as expected? The first and best opportunity to make sure that happens is a test during development by the developer who wrote the code. How to carry out TDD, that's the practice. There are many ways to implement the pattern(s) the practice is based upon. To implement patterns via practices, we often rely on tools. Take dependency injection (DI) for example. DI is not a pattern. DI is a practice. The pattern is Inversion of Control. Separating the two concepts is important to truly understand what TDD actually is because whether you are in a green field project or working with legacy code, how you go about implementing TDD can, and likely will, be different. Regardless of specific approach, both are TDD, based on the literal phrase of “Test Driven Development.”
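To make the pattern-versus-practice distinction concrete, here's a minimal sketch in Python (the names InvoiceCalculator and TaxRateProvider are hypothetical, invented for illustration). The Inversion of Control pattern is the shape of the design: the calculator doesn't construct its own dependency, it receives one. Dependency injection is the practice of supplying that dependency, done here by constructor; a DI container or framework could do the same wiring.

```python
from abc import ABC, abstractmethod

# The pattern: Inversion of Control. InvoiceCalculator depends on an
# abstraction it does not construct; control over which implementation
# gets used belongs to the caller, not to the class itself.
class TaxRateProvider(ABC):
    @abstractmethod
    def rate_for(self, region: str) -> float: ...

class InvoiceCalculator:
    def __init__(self, tax_rates: TaxRateProvider) -> None:
        # The practice: dependency injection, here via the constructor.
        self._tax_rates = tax_rates

    def total(self, subtotal: float, region: str) -> float:
        return round(subtotal * (1 + self._tax_rates.rate_for(region)), 2)

# A test double: no database, no service call, no framework required.
class FixedRate(TaxRateProvider):
    def __init__(self, rate: float) -> None:
        self._rate = rate

    def rate_for(self, region: str) -> float:
        return self._rate

if __name__ == "__main__":
    calc = InvoiceCalculator(FixedRate(0.07))
    assert calc.total(100.00, "US-TX") == 107.00
    print("ok")
```

In production, the same calculator could be wired to a real rate service; the test simply injects a fake. The pattern is what makes the code testable; the practice is how we take advantage of it.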
Years ago, I took a categorical stance against TDD. At the same time, I was (and still am!) a unit testing and CI/CD proponent. How can Test-Driven Development be antithetical to testing? That was the logical conundrum I was facing. The answer is that it can't be, based on the plain meaning rule. And that's when I decided to revisit an orthodoxy, but not as part of a pure academic exercise. In addition to Quality Assurance (QA), the exercise was in light of the many existing and new threat vectors we confront today. All up, I finally concluded that all development must be led with unit tests that exercise specific code. That presents the first and best opportunity to verify work, and to inspect and adapt if needed. Properly applied over time, the practice will improve the quality of the software. That's part of the standard I'd expect a good Definition of Done (DoD) to be compatible with.
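As a sketch of what “leading with a unit test” can look like (normalize_sku and its behavior are hypothetical, not something from this column): the test is written first against the behavior we expect, it fails, and only then is the code written to make it pass.

```python
import unittest

# Written first: these tests fail until normalize_sku exists and behaves
# as expected. That failing test is the "driven" part of Test-Driven
# Development; the implementation below is written to satisfy it.
class NormalizeSkuTests(unittest.TestCase):
    def test_strips_whitespace_and_uppercases(self):
        self.assertEqual(normalize_sku("  ab-123 "), "AB-123")

    def test_rejects_empty_input(self):
        with self.assertRaises(ValueError):
            normalize_sku("   ")

# Written second, with just enough code to make the tests above pass.
def normalize_sku(raw: str) -> str:
    cleaned = raw.strip().upper()
    if not cleaned:
        raise ValueError("SKU must not be empty")
    return cleaned

if __name__ == "__main__":
    unittest.main()
```

Whether the test exercises brand-new code or wraps a seam around legacy code, the yardstick is the same: the test states the expectation before the code satisfies it.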
It's important that we understand those things because we're often called upon to defend the patterns we've adopted (and perhaps invented) and the practices we've adopted (and perhaps invented). Tinkering with the way we do things, and the basis for why we do things, is extremely difficult, but it's often necessary because it may very well be the root cause of your issues. Just because the issue is in your kernel and would be disruptive and expensive to address doesn't mean the issue is just going to go away. What accrues then? Technical debt. But once you have sound processes, rooted in a sound orthodoxy, you begin to realize the benefit of sound, reliable, objective, and verifiable empirical data that proves their efficacy. Getting there takes a lot of effort. And if you're wondering how to get there, start with the people. Start with your team.