Writing software is hard, particularly when schedules keep programmers “nose to the grindstone.” Every so often, it's important to take a breather, look around the world, and see what we can find. Ironically, what we find can often help us write software better.
Communication is a difficult thing to achieve, so why wouldn't we take full advantage of every tool we have to make it happen?
Many thousands of years ago, the ancient Egyptians found themselves facing something of a quandary: the various kingdoms they'd conquered required relatively close supervision (they were conquered states, after all, and might not be fully “on board” with the Egyptian plan for world domination yet), but each had its own language. Clearly the Egyptians didn't want to have to learn each of their slaves' languages, yet the Egyptian form of communication, hieroglyphics, was far too complicated to learn easily. Hieroglyphics, being a pictographic script, used a distinct icon to represent each individual concept. For comparison, it would be as if every word in English had its own written representation, which would mean that this article would have required just over 150 different icons thus far.
The solution to the problem, as it turned out, was to create a far simpler set of icons, each one representing a sound, a phoneme, which could be combined to form words. This was the world's first alphabet, and it was so wildly successful that the Egyptian hieroglyphic script fell so far out of use that it remained an essentially unbreakable cipher until the discovery of the Rosetta Stone in 1799.
Fast forward to the 21st century: I'm engaged in a design discussion with a group of developers and find it hard to express the vision in my mind's eye, so I immediately reach for a whiteboard marker and start scrawling pictures (boxes, arrows, clouds, and cylinders) to try to describe what I see there. Before long, the developers see enough to understand what I'm trying to describe and can begin to offer suggestions and criticism. But as soon as a new developer comes into the picture, we have to begin all over again, walking them through each box and arrow to get them up to speed.
Does anybody else see the awful parallel between UML diagrams and ancient Egyptian hieroglyphics? Or is it just me?
Pictographic Language
In the mid-'90s, right around the introduction of “object-oriented” into the mainstream, there was a strong movement to create pictographic languages to represent software design, on the grounds that “a picture is worth a thousand words.” Several different “notations” emerged, generally named after the O-O bigwig who'd invented each: the Booch notation, Rumbaugh notation, Coad/Yourdon notation, Jacobson notation, and more. Eventually they all merged into a single pictographic representation, the Unified Modeling Language, or UML, which stands to this day as the “real” pictographic language.
Unfortunately, all the wonderful things that were supposed to come along with a universal pictographic language haven't really emerged: it doesn't seem all that much easier to explain a new project. In fact, in some cases, it feels even more complicated than before, particularly when using some of the more esoteric representations UML offers, like the differences between the Classes Diagram and the Data Types Diagram representations.
(By the way, test yourself on your UML trivia: What specification number is UML currently at? Who owns the specification for UML? When was it last updated? How large is it, in pages?)
Even if we distill UML down to a few core constituent parts, usually the same set given by Martin Fowler in his book UML Distilled, other problems surface in short order. First, UML is nowhere near specific enough in certain areas to enable effective code generation from the model, which was a desired goal for at least a subset of the modeling community. Second, and more importantly, as a system scales up in size, the UML diagram grows more and more unwieldy and difficult to understand. In fact, a particularly complex UML diagram can end up being, as one friend of mine put it, “negatively useful”: just trying to use it actually reduces your understanding of the system.
Chris Sells has a great example of a “negatively useful” UML diagram on his website: it's an autogenerated UML diagram of the ActiveX Template Library, and when compressed far enough down to view on a single page, it resembles a Star Trek Borg Cube, hence the moniker assigned to it: “ATL Borg”. Check it out at http://www.sellsbrothers.com/Posts/Details/12373 if you don't believe me.
UML also clearly demonstrates a bias toward the dominant paradigm of its time. It clearly represents and models object-oriented systems, less clearly those with procedural and/or metaprogrammatic aspects, and almost completely ignores aspect-oriented, dynamic, or functional languages and approaches, all of which are present, to some degree or another, in the .NET systems of the latter half of this decade.
(Answers to the above test questions, for those who were interested: 2.1.2; the Object Management Group, the same folks who brought you CORBA; November of 2007; and it's split into two documents, Infrastructure at 224 pages, and Superstructure at 738 pages, making it just under a thousand pages in total.)
It strikes me as ironic that the Egyptians, in order to reduce the complexity of communication, moved away from pictures to smaller atoms that could be combined in various ways to form representations of concepts (words), and thousands of years later, we seek to move away from words to pictures. It also strikes me as a fundamental mistake.
Don't get me wrong, UML serves its purpose. But it's clearly not the alphabet of software that we'd hoped it would become.
Another Alphabet
Around the same time that everybody was arguing over pictorial representations, a book emerged that generated almost as much hype as UML did. I speak of Design Patterns by Gamma, Helm, Johnson, and Vlissides, which spawned an entire subsection in the programming-languages portion of the local bookstore. For a while, it was the thing to do to write a “patterns book,” even if it had little to nothing to do with patterns, per se. Eventually, the hype around patterns died down, as most hype waves do, with blame for the “failure” attributed to a variety of sources.
I Come Not to Bury Patterns, but to Praise Them
You see, patterns were never intended to offer reusable code snippets for people to drop into their implementations, but to provide a lexicon, an alphabet if you will, to allow programmers to discuss designs at a higher level without having to resort to drawing pictures.
For example, suppose I try to describe a scenario in which a particular subclassed instance of a class is called to construct class instances that obey a particular interface, allowing developers to choose between a variety of different implementations at runtime, as opposed to a different subclassed instance of that first class type, which constructs different implementations of that other interface. The purpose and intent of this design gets lost pretty quickly. But if I say, “We use an Abstract Factory to construct Strategy instances,” then with one sentence I convey both how these Strategy objects are constructed and what the intent of those constructed objects is supposed to be.
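To make that one-sentence description concrete, here is a minimal sketch of “an Abstract Factory constructing Strategy instances.” All the names (compression strategies, the two factories) are invented for illustration, not taken from any particular codebase:

```python
from abc import ABC, abstractmethod

# Strategy: one interface, interchangeable implementations.
class CompressionStrategy(ABC):
    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...

class FastCompression(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return data  # placeholder: trades ratio for speed

class SmallCompression(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return data[: len(data) // 2]  # placeholder: slower, tighter codec

# Abstract Factory: each subclass constructs a different Strategy family.
class CompressionFactory(ABC):
    @abstractmethod
    def create_strategy(self) -> CompressionStrategy: ...

class SpeedOptimizedFactory(CompressionFactory):
    def create_strategy(self) -> CompressionStrategy:
        return FastCompression()

class SizeOptimizedFactory(CompressionFactory):
    def create_strategy(self) -> CompressionStrategy:
        return SmallCompression()

def run(factory: CompressionFactory, payload: bytes) -> bytes:
    # Client code depends only on the two abstractions, so the concrete
    # strategy can be swapped at runtime by handing in a different factory.
    return factory.create_strategy().compress(payload)
```

The client picks a factory once, at runtime, and everything downstream works purely in terms of the two abstract types; that is the whole design that the sentence “we use an Abstract Factory to construct Strategy instances” compresses into nine words.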
Time for a New Alphabet?
The original Design Patterns book was published in 1994, and fifteen years later, it shows its age. While a number of the patterns have become a core part of the terminology we use every day, many of the patterns it describes have since either become part of the languages we use (thus rendering the pattern moot), or else don't seem to make as much sense as they used to. More importantly, languages have evolved significantly since the book's first publication, and features now present in our languages of choice enable ideas and designs that are simply not considered anywhere in the patterns lingo of the late '90s.
For this reason, it's become hip to suggest that “design patterns are made obsolete by (insert feature or language name here).” Quite frankly, I argue the opposite: the ubiquity of patterns as a language tool led to those ideas becoming ubiquitous in themselves, which in turn led language designers to make those concepts first-class constructs within their languages. (Or perhaps the easy availability of the terminology and concepts made the languages using them easier to use and thus easier to adopt. It's hard to tease apart cause and effect sometimes.)
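A quick sketch of what “becoming a first-class construct” looks like in practice: in a language with first-class functions and a built-in iteration protocol, the Strategy pattern collapses to passing a plain function, and Iterator is simply what generators already do. The function names here are hypothetical stand-ins, not from any real library:

```python
# Strategy as a first-class function: no interface, no subclasses.
def fast_compress(data: bytes) -> bytes:
    return data  # placeholder implementation

def small_compress(data: bytes) -> bytes:
    return data[: len(data) // 2]  # placeholder implementation

def apply_strategy(compress, payload: bytes) -> bytes:
    # The "strategy" is just whatever callable the caller hands us.
    return compress(payload)

# Iterator as a language feature: the generator protocol implements
# the GoF Iterator pattern for us, with no Iterator class in sight.
def evens(limit):
    n = 0
    while n < limit:
        yield n
        n += 2
```

The pattern hasn't been made obsolete so much as absorbed: the vocabulary (“strategy,” “iterator”) survives even after the class-diagram scaffolding disappears into the language.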
But it is clear that if we're to have better success with communicating between developers, we're going to need a new alphabet that better captures the atoms of modern software design, and soon. Or else we face the problems of the Tower of Babel, all over again.