Writing software is hard, particularly when the schedules keep programmers “nose to the grindstone”; every so often, it's important to take a breather, look around at the world, and see what we can find. Ironically, what we find can often help us write software better.
Psychology seems no less strange a partner to the software craftsman than philosophy, but understanding how we engage in those practices called “thought” and “feeling” improves our interpersonal skills, like knowing how to deal with annoying co-workers like yourself.
A quick multiple-choice test for you: how do you know when a politician's in trouble?
A. The news runs headshots of the politician looking grim and/or angry.
B. A current or former spouse is interviewed during prime-time television hours.
C. Talk-show hosts all start cracking jokes at the subject's expense.
D. The politician issues a statement to the press that “mistakes were made.”
The answer is “D”, of course: world events often force a politician into grim press conferences, current or former spouses sometimes engage the press with agendas of their own, and as for talk-show hosts, well, the relationship between them and the political system is long, historic, and symbiotic. Only “D” reliably signals real trouble.
“Mistakes were quite possibly made by the administrations in which I served.”
-Henry Kissinger, responding to charges that he committed war crimes during the United States' actions in Vietnam, Cambodia, and South America in the 1970s.
“Mistakes were made.” A fascinating statement if ever there was one: I challenge anyone to come up with a more facile way of acknowledging an undesirable result without admitting any flaw, fault, or hand in creating it. The admission of error is important, it seems, but not the responsibility for it. As three-word sentences go, this one ranks above all others in nuance and subtlety, saying one thing even as it means something else entirely.
Except, of course, for the phrase: “It's a feature.”
The Pyramid of Bad Decisions
Q: “You mean those who made up the stories were believing their own lies?”
A: “That's right. If you said it often enough, it would become true.”
-John Dean (Richard Nixon's White House counsel and later the man who blew the whistle on the Watergate scandal, interviewed on his role in the conspiracy)
How is it that people fall so far from the lofty ideals we hold for ourselves? Consider this scenario:
“The sheriff's department in Kern County, California, arrested Dunn, a retired high-school principal, on suspicion of murdering his wife. The detectives interviewed two people who told conflicting stories. One was a woman who had no criminal record and no personal incentive to lie about the suspect, and who had calendars and her boss to back up her account of events. The other was a career criminal facing six years in prison, who had offered to incriminate Dunn as part of a deal with prosecutors, and who offered nothing to support his story except his word. The detectives had to choose between believing the woman (and in Dunn's innocence) or the criminal (and in Dunn's guilt). They chose to believe the criminal.”
While it might be tempting to assume a vast conspiracy within the police department or a theory of collective incompetence, chances are the police, like so many of us, fell victim to cognitive dissonance, a “state of tension that occurs whenever a person holds two cognitions (ideas, attitudes, beliefs, opinions) that are psychologically inconsistent.” Dissonance is not a fun feeling, ranging from minor guilt pangs to major anguish; people quickly strive to find ways to reduce it, either by eliminating the dissonance itself or by engaging in a rationalization process that somehow brings the two discordant elements together.
An example: I had a friend in college, a pre-med student, who smoked a pack of cigarettes a week. I happened to ask him about it one day, curious how somebody who fully appreciated and understood the health risks of smoking could continue to damage himself this way. “Well, when I tried to quit, I found myself snacking a lot more. Since obesity is itself a health risk, particularly since heart disease runs in my family, I figure this is the healthier alternative. Besides, I'm down from half a pack a day, so…”
Mind you, this man graduated from college with a 4.0 GPA, and went on to become a surgeon.
But at Least That's Not Me…
As software developers, we possess an amazing array of analytical skills; we have to, in order to carry out the tasks assigned to us. We have to break complex tasks down into a set of atoms that hinge on binary decisions, expressed as language constructs and keywords. Those who've done well in our industry have honed those analytical skills even further, both because they're responsible for laying out the design of large systems and choosing technologies, and because they have to examine user requirements and requests in a way that somehow translates into technological terms. Somehow, an HR system has to be turned into a relational database, HTML, and the processing rules that keep the data from driving the users (and their customers) insane.
And yet, despite all this analytical ability, it seems like we're desperately short of self-analytical ability sometimes.
Recently, I was retained by a Fortune 500 company to help work out some Java/.NET interoperability issues for a desktop application calling some Java libraries. As background material, their enterprise architects gave me a three-hour briefing on the various services developed within the enterprise system (over 25 of them at last count), complete with a versioning scheme that requires a 10-page document to describe. When challenged about how often multi-version services actually turn out to be necessary, none of the developers I worked with could remember a single case where one had been. (In fact, many of the developers have grown increasingly disenchanted with the whole “Web services” story in general.)
It feels, in fact, like a good percentage of our industry craves complexity. Another example: many years ago, as a contractor, I worked for a company that was rebooting a project. What 25 developers couldn't do in two years, four developers (including me) did in seven months, all because we simplified the system from three tiers down to two: from CORBA/C++ running on a middle tier providing CosEvent services and transaction management, to Java servlets talking directly to the database.
Mike Clark, author of Pragmatic Project Automation, once described how projects have a “complexity budget,” a fixed amount of complexity that a given team can handle. Whether the complexity comes in the form of the build, the number of “moving parts” in the software, the languages involved, or the business functionality itself, every project has an upper limit on the amount of complexity it can support. Exceed this upper limit, and either the project will have to partition itself into pieces small enough for developers to absorb (which runs the risk that now no one person can grasp what's happening throughout the system), or developers will start calling for management to “blow up the building” and start from scratch.
Consider your own experience for a moment: how many of you have hired on to a project that sounds perfectly straightforward from the outside, only to find it requires 27 different Visual Studio projects scattered across 14 different solution files (some of the projects deliberately appearing in more than one solution), and said, “What the heck?” Why does it seem that even the simplest projects require relational databases, Web services, enterprise-service-bus topologies, and four servers?
At what point do we cry, “Stop the madness”? Certainly, agilists have for several years stressed the XP-ish ideal of “Do the simplest thing that could possibly work,” but this is not new; show me a developer who's never heard the similar “KISS” principle, and I'll show you somebody who just entered college. Clearly that's not enough. So just how do we get into these situations?
Of Consultants and Airports
We humans are an irrational lot much of the time. Consider this interesting tidbit: “you get what you pay for.” In the (supposedly) coldly rational world of economics, this makes no sense whatsoever: the value of a good or service is constant, given the circumstance in which it is consumed. If I'm hungry, a slice of pizza has a certain value to me; that value may be different from its value when I'm not hungry, but in the same context (hunger), the pizza holds that value regardless of what I pay for it. Economic theory holds that if the price of the pizza is equal to or less than my value of it, I engage in the transaction, and both producer and consumer are happy.
But what happens when the pizza is free? Curiously, strange anecdotes begin to emerge from people who have offered goods or services for free, describing behavior that can only be called “rude” or “shocking.” Offer me the free slice of pizza, and I want the one over there instead, not this one. What about a drink? And you made it with too little sauce! Instead of being ecstatic at the great deal, we peremptorily demand more. Put any price on it, even a penny, and the irrational demands tend to fade away.
The reverse is true as well: when someone is told that the big-screen TV they just purchased was on sale for half price at a different store, or came with free delivery and installation there, that individual will often begin to extol the wonderful service at the store they used, or the virtues of installing it themselves. Indeed, the more pain, discomfort, effort, or embarrassment involved, the happier they will be with having done it exactly the way they did. The emotional need to justify the action is obvious: I am a sensible, competent individual, and the idea that I went through a painful procedure, or shelled out an exorbitant amount of money, for something that could have been acquired more easily (or wasn't worth the effort at all) is incongruous with that self-image. Therefore, I begin unconsciously looking for positive reinforcement that I made the right decision; the more costly the decision, the more energy I put into finding the good things and ignoring or hiding the negative ones.
Doesn't this sound familiar?
Amongst my Java speaker friends, we have a saying: “The value of a consultant's advice is directly proportional to the number of airports he came through to deliver it.” Stories of consultants being brought in, at exorbitant rates, to suggest the exact same ideas that the company's own employees had already been making are legion.
“But,” the developer splutters, “that's management, that's not us. We're rational and analytical creatures. We would never, say, learn a new technology or language or platform or tool and then begin looking for places to use it, just to justify the time and energy we've spent on it. Only management is that crazy.”
Perhaps, but often a different scenario emerges.
Of Doomsday Cults and Architects
In the mid-'50s, the psychologist Leon Festinger and two associates infiltrated a doomsday cult that believed the world was coming to an end and that a spaceship would carry the faithful away to a safe planet. Some of the members believed more deeply than others, selling or donating their worldly possessions, but all gathered with the cult's leader that night, anxiously awaiting alien salvation.
Festinger's goal was to predict what the members of the cult would do when no doomsday arrived. He predicted that the “casual” members (those who hadn't abandoned their lives and their trappings) would simply walk away, but that those who had gone “all in” would go even deeper into the cult's collective psyche, proselytizing the cult leader's mystical abilities to others and seeking recruits. And he was right: the “all in” members, when told by the cult's leader that the world had been saved by their “incredible faith,” went from despair to ecstasy and immediately began calling the press and “witnessing” to casual passers-by on the street. The cult's prediction had failed, but not Festinger's.
Translated into our industry: a consultant begins working for a company and receives the usual technology-backgrounder briefing explaining the details of the various parts of the system. After several hours of infrastructure diagrams, clouds with arrows pointing to boxes, and projected schedules dating out to 2012 and beyond, the consultant says, “But wait, I see a hole here; you're going to run into problems with performance and scalability with all this infrastructure in place. I hate to tell you this, but I've seen it before, and it's usually extremely costly to fix.”
In a perfectly rational world, the company's architect or project lead (who designed this system from the ground up) would immediately seize the opportunity to avoid the future malignancy. But how often does that happen in practice? How likely is it instead that the architect or project lead will defend the system, weighing his projected interpretation of how the system will behave against the consultant's experience and memory, and concluding that the problem can be avoided “when we get to that point”?
How many times have you been that architect or project lead?
We architects are guilty at times of the same mistake: we have labored over the system since its infancy, and we don't want to believe that we missed something so obvious (in hindsight). The cognition that I am a sensible, honest, and intelligent architect conflicts painfully with the cognition that my work is about to be partially or completely voided by something I missed. So I begin to rationalize: “Our situation is different.” “We don't need to meet as high a performance target.” “We'll run it in a cluster (I mean, ‘the cloud’).” The deeper the dissonance, the more strongly I react, and the more aggressively I defend the status quo. Before long, I find myself defending the use of BizTalk in a desktop application, and my friends are left wondering when it was that all vestiges of sanity fled my mind. In truth, there was no single point at which I became irrational; it was a series of justifications, some of them perhaps even unconscious, that began at the moment the consultant found a flaw in my work.
This holds true at all levels of our industry, by the way, from the developer who has just entered the working world to the grizzled 30-year veteran architect and CTO. Remember “egoless programming”? That was a deliberate attempt to separate the programmer's emotional investment from the code they wrote, in an effort to help developers see criticism of the code as criticism of the code, not of the developer. Pair programming is another such attempt. Still more efforts are bound to come, until developers (and, in truth, the rest of the human race) learn to recognize their own cognitive dissonance when it occurs and work to fight against it.
In fact, fighting it may be both the simplest and the hardest thing we can ever do: we just have to recognize and admit when we're wrong. Not in the manner of the politician (“mistakes were made”), but beginning with our basic assumption that we can approach things in an objective and coldly rational manner: the human mind is a complex construct, and imagining that we can somehow set emotion aside when making a decision is fundamentally flawed. From a professional perspective, we need to admit that just as we have a personal investment in the work we've done, others have one in the work they've done as well. From there…
Well, that'll be another subject for another day. But for now, just know that the six most powerful words in the English language are “You were right, I was wrong.” Try them out next time you're in a code review.
(For those interested in a light introduction to cognitive dissonance, I highly recommend the books “Mistakes Were Made (But Not by Me)” and “On Being Certain.”)