When "selling" their methods, Agile evangelists often stress the strength of Agile methods at removing, and even preventing, errors. I used to do this myself, but I always wondered how people could resist this sales pitch. I would plead, "Don't you want quality?" And, of course, they always said, "Yes, we want quality," but they didn't buy what I was selling. Eventually, I learned the reason, or at least one of the reasons. In today's blog, I want to help today's evangelists (coaches, team leaders, managers, or whomever) by sharing what I've learned about why Agile methods can be so difficult to sell.
Another Story About Quality
In a prior essay, I told a story that demonstrated how "quality" is relative to particular persons. To test our understanding of this definition, as well as its applicability, let's read another story, one that illustrates that quality is not merely the absence of error.
One of the favorite pastimes of my youth was playing cribbage with my father. Cribbage is a card game, invented by the poet Sir John Suckling, very popular in some regions of the world, but essentially unknown in others. After my father died, I missed playing cribbage with him and was hard pressed to find a regular partner. Consequently, I was delighted to discover a shareware cribbage program for the Macintosh: "Precision Cribbage" by Doug Brent, of San Jose, CA.
Precision Cribbage was a rather nicely engineered piece of software, I thought, especially when compared with the great majority of shareware. I was especially pleased to find that it gave me a challenging game, though it wasn't good enough to beat me more than 1 or 2 games out of 10. Doug had requested a postcard from my home town as a shareware fee. I played many happy games of Precision Cribbage, so I was pleased to send Doug this minimum fee.
Soon after I sent the card, though, I discovered two clear errors in the scoring algorithm of Precision Cribbage. (Perhaps the word "precision" in the name should have been a clue. If it was indeed precise, there was no need to call it "precision." The software would have spoken for itself. I often use that observation about product names to begin my evaluation of a project. For instance, whenever a product has the word "magic" in its title, I steer clear of the whole mess.)
One error in Precision Cribbage was an intermittent failure to count hands correctly when they contained three cards of one denomination and two of another (a "full house," in poker terminology). This was clearly an unintentional flaw, because sometimes such hands were counted correctly.
The second error, however, may have been a misunderstanding of the scoring rules (which were certainly part of the "requirements" for a program that purported to play a card game). It had to do with counting hands that had three cards of the same suit when a fourth card of that suit turned up when the deck was cut. In this case, I could actually prove mathematically that the algorithm was incorrect.
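For readers who don't play cribbage, here is a minimal sketch in Python (my own illustration of the standard rule, not Doug's code) of how a flush is scored in the hand: all four held cards must share a suit to score 4 points, with a fifth point if the starter (cut) card matches; three cards of a suit plus a matching starter score nothing.

```python
# Standard cribbage flush rule for the hand (not the crib, which requires
# all five cards to match). This is an illustration of the rule, not a
# reconstruction of the program's actual code.

def flush_points(hand_suits, starter_suit):
    """hand_suits: suits of the four cards held; starter_suit: suit of the cut card."""
    if len(set(hand_suits)) != 1:       # all four hand cards must share one suit
        return 0
    if starter_suit == hand_suits[0]:   # a matching starter adds one more point
        return 5
    return 4

# Three hearts plus a heart starter: no flush, even though four hearts are visible.
print(flush_points(["H", "H", "H", "S"], "H"))  # 0
print(flush_points(["H", "H", "H", "H"], "H"))  # 5
```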
So what makes this story relevant? Simply this: even with two scoring errors in the game, I was sufficiently satisfied with the quality of Precision Cribbage to
a. keep on playing it, for at least several of my valuable hours each week
b. pay the shareware "fee," even though I could have omitted payment with no fear of retribution of any kind
In short, Precision Cribbage had great value to me, value which I was willing and able to demonstrate by spending my own time and (if requested) money. Moreover, had Doug corrected these errors, it would have added very little to the value of the software.
What's Happening to Quality?
My experience with Precision Cribbage took place some years ago, and it involved a more-or-less amateur piece of shareware. Certainly, with all we've learned over the past few decades, the rate of software errors has diminished. Or has it?
I've conducted a small survey of more modern software. Software written by professionals. Software that I use regularly. Software I paid real money for. And not software for playing games, but software used for serious tasks in my business. Here's what I found:
Out of the 20 apps I use most frequently, 16 have bugs that I have personally encountered: bugs that have cost me at least inconvenience and sometimes many hours of fix-up time, but at least one hour for each occurrence. If I value my time at a conservative $100/hour (I actually bill at $500/hour), these bugs cost me approximately $5,000 in the month of August. That's $60,000 a year, if I maintain that average.
If I consider only the purchase prices, those 20 apps cost me about $3,500. In other words, over one year, the purchase price of the software represents less than 10% of what it costs me. (And these are selected apps. The ones that are even buggier get discarded whenever I can find a plausible substitute.) Put another way, since quality is value, there's a large negative quality associated with this set of applications.
And that's only for one person. In the USA, there must be at least 100,000,000 users of personal computers. My hourly rate is probably higher than the average, so let's just estimate $10/hour, roughly minimum wage for the average person. That would give us an estimated $6,000/year per person for buggy software, which adds up to about $600,000,000,000 for the annual cost to United States workers. Even if my estimates are way off, that's not chump change.
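For anyone who wants to check the arithmetic, here's the back-of-envelope calculation in Python; every input is a rough guess of mine, not a measured figure.

```python
# Back-of-envelope sketch of the estimate above (all inputs are rough guesses).
hours_lost_per_month = 50          # implied by $5,000/month at $100/hour
value_per_hour = 10                # conservative figure for an average user, in dollars
users = 100_000_000                # rough count of U.S. personal-computer users

cost_per_user_per_year = hours_lost_per_month * 12 * value_per_hour
national_cost = cost_per_user_per_year * users

print(cost_per_user_per_year)      # 6000 dollars per person per year
print(f"{national_cost:,}")        # 600,000,000,000 dollars per year
```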
Why Is Improving Quality So Difficult?
If the payoff is so huge, why aren't we raising software quality to new levels? We could ask the same question about improving auto safety, where tens of thousands of human lives are destroyed every year in the United States. You might think that's more motivation than any number of dollars, but it doesn't work that way. Unless the person killed in the car is someone we know, we've heard about so many traffic deaths that we've grown immune to the terrible cost. In other words, it's precisely because traffic deaths are so common that we don't get awfully excited about them.
And, I believe, it's the same with software failures. They're so common that we've learned to take them with an accepting shrug. We simply reboot and get back to work. Very seldom do we even bother to switch to a different app. The old one, with all its bugs, is too familiar, too comfortable. In fact, some people obtain most of their job security precisely because of their familiarity with software bugs and ways to work around them.
In other words, the reason we're surprised that people don't generally feel motivated to improve quality is that we vastly underrate the value of the familiar. And that observation explains an interesting paradox. Agile advocates are often so eager to prove the value of Agile methods that they strive to create products with all sorts of wonderful new features. But each new feature, no matter how potentially valuable, has a downside: a negative quality value because of its unfamiliarity. The harder we strive to produce "higher quality," the lower the quality we tend to produce.
It's a classic catch-22. To convince people of the value of Agile, we need to produce software that is full of wonderful features the old software didn't possess, while at the same time the new software functions exactly the way the old software did. No wonder change is difficult.
3 comments:
You've always had me as a fan, but this post knocked it out of the park for me. Sure, I'm just an Agile guy, but I would be more than willing to have your baby.
Ok... just kidding...
On a serious note, we make the same points in our Agile QA courses, and the biggest aha moment we have is when we state: quality is relative to the context. Putting your awesome number aside about the cost of poor quality software to Americans, I wonder how much we are spending as a nation on useless defect metrics that aren't really measuring the quality of a desired product feature, but rather are measuring the ability to find bugs in software whether the market really wants that feature or not. I probably don't have enough bits in my stack to calculate that number. Great post!!!
Hi Gerry,
Interesting. I wonder how different it would be if we treated software problems more like aviation than ground transportation.
When a plane crashes, there is always a thorough investigation looking deeply for root causes. As a result, travelling by plane is extremely safe. Meanwhile, every time an aircraft so much as hiccups it's front page news.
I guess it's a function of the consequences of something going wrong. A 'fender-bender' at 35,000 feet will have an immediate and dire outcome, whereas one on the ground may result in some broken glass.
The same applies to both software and the process used to create it. If the consequences of broken software are innocuous, then the functional and economic need to ensure that it's not broken is reduced. From a process perspective, if an organization is getting away with a serial, phase-gate process without any dire consequences, then there's little incentive to change.
I'll conclude with the famous quote from Stalin, "The death of one man is a tragedy. The death of millions is a statistic." As messed up as that sounds, he hit the nail on the head with respect to human nature.
Besides the argument about familiarity with bugs, there are also the arguments about the potential prohibitive cost or unacceptable risk of fixing the bugs (as you write about in other blog posts). I think this can offer a way around the catch-22.
The way I see it, the argument to sell agile should not be based on wonderful new features, but rather on the possibility of agile speeding up the feedback loop, which makes it possible to do the same work at lower cost and with less risk, thus preserving familiarity and pleasing management at the same time.
As a side note, I'll also offer an opinion on the question "If the payoff is so huge, why aren't we raising software quality to new levels?". I think the number "$600,000,000,000" should be put in the context of (at least) the two following numbers:
1. The cost for fixing all bugs
2. The cost for working without the software (e.g. if it were not released yet due to bug fixing)
In that context I don't think the payoff is that huge.