Thursday, April 04, 2013

Meet the Seashores

Roger Ebert, the film critic, died today. I don't know what I can do to replace him in my life, because for the past 30+ years, he was the one I went to when I didn't know which movie to watch (or not watch).

I will need to replace Roger with another film critic, but I don't know how to replace him because Charlie Seashore died last month, followed a few days later by Edie Seashore. For the past 30+ years, Charlie and Edie were the ones I went to when I didn't know what to do about anything in my life, especially things more important than movies.

Edie and Charlie and I were connected in a plethora of ways. Though we lived thousands of miles apart, we visited each other often. Aside from these personal friendships, we frequently served as consultants to each other. I assisted in some of their workshops; they assisted in some of mine. These interactions were always of high intensity and deep learning, but perhaps the most intense, the most educational, was our work on our feedback book, What Did You Say?.

With the perspective of several decades, I'd have to admit that the book was a success. Though we spent not a penny on promotion, What Did You Say? became a word-of-mouth classic, selling tens of thousands of copies and changing a similar number of lives. We probably would have sold even more, but the book was hard to find in the USA–and virtually impossible to obtain overseas.

In recent years, with the changing technology in publishing, we began talking about, then working on, an electronic (ebook) version of What Did You Say?. We corrected numerous small errors and added two entirely new chapters. The work went slowly because all three of us were immersed in our other work, especially visiting clients and leading workshops. And then they died.

Because Edie and Charlie were two of the world's greatest consultants and trainers, they were always incredibly busy–and that busyness created the one flaw in their work: They seldom had the time to sit down and write about the magical things they were doing with their students and clients. Without a massive collection of written words, only a relatively few of us lucky ones had a chance to share their magic.

In my shock, my first reaction was to drop the entire project, but clients kept asking about how the book was coming along. It took me weeks just to face filing away all our drafts. The filing was so painful and slow, I just knew I could never finish the book without the rest of my team. As I filed, though, I realized that, except for some final polishing and formatting, we had already finished.

I knew then that I had to do that polishing and formatting. The ebook could be the way for thousands of people to come to know Charlie and Edie, if only in a partial way. So, I put almost everything else aside and turned my editorial crank–grinding out what became the Revised Second Edition of What Did You Say?. I invite you to get yourself a copy and get to know the Seashores. (And learn a few things about feedback.)


Wednesday, February 27, 2013

Special Questions




Note: The following tale is adapted from my book, Rethinking Systems Analysis and Design. What's the moral of this tale? After you make your own suggestions, take a look at the original tale and see what the book has to say.




Harlan Mills predicted that some day programmers would make so few errors that they'd remember every one they ever made in their entire career. I've had a long career, and I've made rather more than the one error per year that Harlan predicted. One a day might be more like it. But some errors were so gross or so costly that they stand out among the thousands.

Over forty years ago, I was an analyst/programmer for a service bureau studying a job that involved processing a million cards through the IBM 650 computer. Because of limitations on the 650's ability to read cards, the only punches allowed in the cards were alphabetics and numerics. Special characters could not be read at all.

When questioning the client in our very first meeting, I asked, "Are there any special characters in the cards?"

"No," he replied, "none whatsoever."

"Good," I said, "but I have to be sure. Are you certain that there are no special characters at all?"

"I'm quite certain. I know the data very well, and there are no special characters."

On that assurance, we went ahead with designing and programming the application, only to discover on our first production run that the system was hanging up on cards like this:
THREADED BOLT—1/2" #7
About sixty-five percent of the cards contained special characters, but when I confronted the client with this figure, he appeared genuinely puzzled. "But there are no special characters," he pleaded.

"Oh, no," I said triumphantly, "then what about this dash, slash, quote, and number .sign?"

Tuesday, February 12, 2013

Nothing New Ever Works


The best "review" of one of my books is a testimonial about how the book has been useful. Here's a letter from Jon Jagger about how a law from The Secrets of Consulting helped him help his son, Patrick, who was ill.





                      
Hi Jerry,
I just had a moment of enlightenment about the New Law I wanted to share with you...

I was giving Patrick, my son, some calpol (liquid paracetamol - he's ill off school today).
The bottle had a new plastic widget in the top.
With the bottle there was a new small syringe with a new plunger.
This was a new design - instead of simply pouring the calpol onto a teaspoon you clearly had to fill up the syringe.
Try as I might I could not get the syringe through the hole in the plastic widget in the neck of the bottle. 

So was it The New Law - Nothing new ever works? 

My beautiful wife Natalie came to my rescue.
It did work and she showed me how.

I just re-read The New Law from your book.
I noticed that all the examples, the coffee maker, the pills, the car-battery, the car, the hospital procedure were examples where the new thing was genuinely not working. But in my case the new thing WAS working.  It was ME that was not working!

From this I have realized that

1) It's easy to think the emphasis in "Nothing New Ever Works" is on the word "new" but it's equally on the word "works"!

2) Something being new is a relationship

3) Something working is a relationship

4) When I say "it's not working" what I always mean is "I can't get it working"

Also, it might give some insight into the question you pose at the end of the New Law...

"Everyone knows that new things never work."
"Then why is everyone obsessed with changing everything for something new?"
"If you answer that, you'll have something worth writing about"

Well, when things go wrong we can look for the cause outside of ourselves, or we can look for the cause inside of ourselves. 
But, looking for the cause inside of ourselves would mean WE had failed. Which is unthinkable.
Therefore the cause must be outside of ourselves. 
Viz, if it's a choice between changing the world around us, or changing the world inside us, outside wins. 
And that's one reason why we create new things! 

Cheers
Jon

Tuesday, January 15, 2013


Auld Lang Syne
by Gerald M. Weinberg

Should old acquaintance be forgot,
and never brought to mind ?
Should old acquaintance be forgot,
and auld lang syne?
The Good Old Days
Ain't it great being an old-timer? I've been around the computer business for so long I don't have to compute any more–I can make my living telling stories. Telling stories is a lot more fun than computing. I know, because in 1952, I actually was a computer. That was my job title: "computer." I was paid 90 cents an hour for inverting 10 by 10 matrices with pencil and paper–and lots of erasers. I used a huge mechanical Friden (they were the big name in computation back then), which I thought was the last word in computational equipment. It had all the processing ability of a four-function calculator (no memory) in a box about the size and weight of a dozen of today's laptops. It could do a single multiplication in under 15 seconds, while making about the same sounds as a Cuisinart.
Back in 1952, the cost of a computer (me) was 90 cents an hour plus a few hundred for the Friden. The minimum wage was about 50 cents, so computing was rather expensive. You didn't hire a computer without giving it a lot of thought. Four years later, in 1956, I was working for IBM, programming a computer. I was being paid $450 a month–about $2.50 an hour. I worked with two different computers (by then, computers were machines). The IBM 650 rented for $80 an hour–32 times my wage. The IBM 704–at that time the biggest commercial computer in the world–rented for $685 an hour–274 times my wage!

By my current definition, the 650 and the 704 were personal computers, because my relationship to the 650 was almost precisely that of most of today's PC owners. I worked on-line, one on one with the machine. I had control of all the machine's resources, but I had to know hundreds of mysterious details to use them effectively. And most of the time, I had the machine all to myself–because nobody else knew how to use it.
On the 704, the situation was similar–except you were never alone with the 704. 704s were in short supply. To use the closest one, we had to fly from California to New York and share Machine #1 with everyone else. Not only didn't we have remote computing, we didn't have jet airplanes. So any time we used the 704, we had to add about two days of travel to the cost. Even so, the travel was cheap compared with an hour or two of machine time.
Of course, nobody got "an hour or two of machine time" just like that. Time on the machine had to be scheduled weeks in advance, and was parcelled out in precious 15-minute nuggets. When you worked with the 704, you really knew your place in the universe was minuscule. You moved fast, and you didn't make any mistakes. Mistakes would waste the 704's valuable time.
The 704 scheduling rules had one exception: the FORTRAN development team. This elite team could get seemingly endless hours of time while the rest of us could only watch helplessly from behind the glass walls of the computer room. It was bad enough being unable to do my work, but being shut out for some cockamamie idea about "automatic programming" was intolerable. Those of us who had to wait behind the glass knew that FORTRAN would never fly. There was simply no way a computer could generate code as efficient as the code we expert programmers could write.

Buy or Build?

We were right. FORTRAN never did achieve the ability to generate code to equal the efficiency of a master programmer. In a few years, however, nobody cared. Machine time kept getting cheaper, while people time grew more expensive. In 1984, I can own a computer with much more power than the 704 for less than what I earn for one hour of consulting. And, if this cheap laptop didn't work, I could throw it away and have a new one delivered. Off the shelf, too, not after waiting a year to have a new one built by hand.

I still don't use FORTRAN. Why not? Because it's inefficient with machine time? No, because it's inefficient with my time. If I must write a program, I prefer APL, which may allow me to produce the same program in one-fifth the time. But if I can possibly manage it, I'll buy a program rather than write one myself. I can't always manage that, however. Even though I now consider my time to be very valuable, the economics don't always favor a purchase. For instance, I recently needed a system to make cash flow projections. A lot of terrific spreadsheet programs can do that job, so choosing one ought to have been better than writing one of my own. But it wasn't. Let me sketch my decision process.

First of all, I had to decide what I wanted. A general purpose program like a spreadsheet represents a much bigger investment than the purchase price, so I wouldn't want to make such a decision without meticulous consideration of all my future needs. On the other hand, if I am simply going to write one special purpose program for myself, I don't have to be so careful. I figured I could develop such a program in under 2 hours, so if I didn't like it, I could modify it or scrap it with no big loss.

Once I knew the features I wanted in a spreadsheet program, I'd have to select one. If I wanted to do that intelligently, I'd have to survey the field to narrow down a hundred candidates to a short list of five or six. Then I'd have to gather the information on each of these, hopefully getting a demonstration and reading a couple of unbiased reviews. By the time I was finished, I could easily have spent three days on the selection process.

If I lived in a less remote place, I might be able to accomplish all this in a full day by visiting a computer store, but we don't have one of those in Tererro, New Mexico. As it is, once I decided which app I wanted, I would still have to wait a few days then drive to Albuquerque to pick up the mail. (We don't have mail delivery in Tererro, nor do we have internet access.) Then I would be stuck installing it on my own, which from past experience would require at least half a day, plus a couple of frustrating long distance phone calls.

Once I had the spreadsheet installed, I would turn to the job of converting my existing files to fit the requirements of the app. I could perhaps write a program to do this, but because the file formats of apps usually aren't well documented, I'd have to estimate at least a couple of days to get it working, if I were lucky. When writing my own program, of course, I simply use the file formats I already have.

Next I'd have to learn to use the spreadsheet. I would have to estimate about 4 hours to learn to do the first simple task–it takes at least 4 hours to learn to use any app, if you don't have a tutor at your side. The second task would probably be much easier, and I would start to regain my investment in the software.

How big is that investment? Writing my own program–deciding what I want, writing the code, and testing–might take 3 hours. Buying a package–deciding what I want, selecting the best package, installing, file conversion, and learning to use–might take more than 50 hours. In addition, the off-the-shelf program could cost me as much as $500, but if my time is worth more than minimum wage, the cash will be the smaller part of my investment. 
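
To put those numbers side by side, here's a quick back-of-the-envelope comparison in Python. The hourly rate is an assumption–substitute your own–and the other figures are the round numbers from the paragraphs above.

-----
HOURLY_RATE = 25.0   # assumed value of my time, in dollars per hour

build_hours = 3      # decide what I want, write the code, test it
buy_hours = 50       # decide, select, install, convert files, learn the package
buy_price = 500.0    # worst-case purchase price of the off-the-shelf package

build_cost = build_hours * HOURLY_RATE
buy_cost = buy_hours * HOURLY_RATE + buy_price

print("Build:", build_cost)   # 75.0
print("Buy:  ", buy_cost)     # 1750.0
print("Cash is the smaller part of buying:", buy_price < buy_hours * HOURLY_RATE)
-----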

The Real World

Of course, nobody actually buys software this way. Most of the time, I call up a friend who says that ClunkyCalc is the greatest spread since butter. I run down to the closest computer store and buy one–no time deciding what I want, no time selecting the package, and a friend to help me install it and teach me to use it while we share a bottle of our favorite beverage.
This way is not only cheaper, it's more fun. But there's still the question of converting those files. We begin to see why the software industry is moving away from isolated packages to integrated suites. Once you get hooked into one vendor's file formats, your decision on the next package is terribly skewed in favor of that vendor. For instance, after two years of using a new word processor, I had a large folder filled with text–six or eight book manuscripts, a hundred articles, and several hundred miscellaneous pieces of text, including financial reports.

When I became dissatisfied with that word processor, it was almost impossible for me to consider a substitute that couldn't handle the old file formats.
In today's world, the choice of hardware or software is determined by tradeoffs involving a variety of issues, such as:
• How much are you paid for your time, or how much do you value it?
• How much do you have invested in your current system, and how much would it cost you to convert out of it?
• How much cash or credit do you have?
• What skills do you already have, such as programming, typing, or using a particular package?
• Who do you know, and how much can you count on them?

The Time Dimension
If you examine these questions, you'll see that each of them has an implicit time dimension. The older you are, the more likely you are to be paid higher wages, have lots of data stored in file cabinets or on magnetic media, have money in the bank, know a variety of skills on older systems, and maintain a network of people who can be called upon for advice.

Being an old-timer has a lot of advantages, but there's another side to the coin. Your higher wages may mean you can't afford to play with new ideas. Your vast store of data may lock you in to old systems which are less efficient than what you could buy today. Your money in the bank may convince you that it's better to buy solutions than work them out yourself, so you lose the opportunity to learn new things. Your old skills are another barrier to new learning, and your old friends may not, in fact, be reliable guides to today's technology.

When you've been in computing for half a century, you have wrestled with these tendencies for a long time. FORTRAN was only the first of many innovative ideas I scorned. Most of the new ideas did turn out to be useless, as I predicted, but there were a few that made me eat crow. If I hadn't been ready to humble myself–to drop some of my old success–and start back at square one with the beginners–I would have washed out of the computer business a half-century ago. I know because many of my colleagues did.

But in this business, you don't have to be around as long as I have to start suffering from technological senility. The system you bought two years ago is totally obsolete, but you're locked in by your huge investment in files and old skills. In fact, the clumsier the system is, the more you had to invest to make it workable–so the more you're locked in. If you want to try something new, your old buddies are threatened, and if you want to keep their friendship, you'll slip back into the good old ways, scoffing at the young whippersnappers who "don't really appreciate the good old days of computing, when we pioneers all had arrows in our backs."

How do you beat this tendency to become an old fogey before your time? I believe you must plan to invest at least 10% of your time and money investigating new things, and 20% would be better. You have to be ready and willing to take a total loss on your investigations. If you never read a stupid article, never go to a useless conference, and never buy a crippled piece of software, you're playing it too safe. You have to be willing to risk making a fool of yourself, otherwise you're sure to make a fool of yourself.

THE END

Monday, December 17, 2012

Gifts for Any Time, but Especially Now


It's the week before Christmas,
And all through the house,
Every creature was fretting
And feeling like a louse.

Without any warning
A gift I had gotten
From a friendly old colleague
Whose gift I'd forgotten.

But then I remembered
Don't feel like a schnook
In just a few minutes
I can send a fine book

No wrapping, no breakage,
No address to scrawl,
No trade-ins, no trouble,
'Cause one size fits all.

So boot up your browser,
There's no need to grovel
In a fistful of keystrokes
You can send an e-novel.

Naturally, I recommend one of my novels for a fun read:

Women of Power Series
   Mistress of Molecules
   The Hands of God
   Earth’s Endless Effort

The Stringers Series
   First Stringers: or eyes that do not see
   Second Stringers: the sole advantage

The Residue Class Mystery Series
   Freshman Murders
   Where There’s a Will There’s a Murder

The Aremac Series
   The Aremac Project
   Aremac Power: Inventions at Risk
  

For just $4.99, you can go to http://www.geraldmweinberg.com/Site/Novels.html and send your friend an engaging, exciting story, one that also carries one or more science/technology themes: my attempt to put the science back in science fiction and the tech back in techno-thrillers.


The themes in The Freshman Murders are Computers, Culture, and Genealogy.

The themes in Where There's a Will There's a Murder are Mathematics and Anthropology.

The themes in First Stringers and Second Stringers are Physics, Chemistry, and Social Psychology.

The themes in Mistress of Molecules are Chemistry and Politics.

In The Hands of God, the themes are Parallel Computing, Neurophysiology, and Prosthetics.

The Aremac Project and Aremac Power emphasize software testing, security, and the risks and rewards of invention.

Earth’s Endless Effort features large, unconventional computers and contact with alien species.

And, of course, always Computers! And always an exciting story.


And if you've already gifted everyone on your list, I invite you to try one of my novels for yourself. As always, if you don’t like it, I’ll gladly give you your money back.

Monday, October 01, 2012

Why People Don't Instantly Buy Into Agile Methods: A Catch-22



When "selling" their methods, Agile evangelists often stress the strength of Agile methods at removing, and even preventing, errors. I used to do this myself, but I always wondered how people could resist this sales pitch. I would plead, "Don't you want quality?" And, of course, they always said, "Yes, we want quality," but they didn't buy what I was selling. Eventually, I learned the reason, or at least one of the reasons. In today's blog, I want to help today's evangelists (coaches, team leaders, managers, or whomever) by sharing what I've learned about why Agile methods can be so difficult to sell.

Another Story About Quality
In a prior essay, I told a story that demonstrated how "quality" is relative to particular persons. To test our understanding of this definition, as well as its applicability, let's read another story, one that illustrates that quality is not merely the absence of error.
One of the favorite pastimes of my youth was playing cribbage with my father. Cribbage is a card game, invented by the poet Sir John Suckling, very popular in some regions of the world, but essentially unknown in others. After my father died, I missed playing cribbage with him and was hard pressed to find a regular partner. Consequently, I was delighted to discover a shareware cribbage program for the Macintosh: "Precision Cribbage" by Doug Brent, of San Jose, CA.
Precision Cribbage was a rather nicely engineered piece of software, I thought, especially when compared with the great majority of shareware. I was especially pleased to find that it gave me a challenging game, though it wasn't good enough to beat me more than 1 or 2 games out of 10. Doug had requested a postcard from my home town as a shareware fee. I played many happy games of Precision Cribbage, so I was pleased to send Doug this minimum fee.
Soon after I sent the card, though, I discovered two clear errors in the scoring algorithm of Precision Cribbage. (Perhaps the word "precision" in the name should have been a clue. If it was indeed precise, there was no need to call it "precision." The software would have spoken for itself. I often use that observation about product names to begin my evaluation of a project. For instance, whenever a product has the word "magic" in its title, I steer clear of the whole mess.)
One error in Precision Cribbage was an intermittent failure to count correctly hands with three cards of one denomination and two of another (a "full house," in poker terminology). This was clearly an unintentional flaw, because sometimes such hands were counted correctly.
The second error, however, may have been a misunderstanding of the scoring rules (which were certainly part of the "requirements" for a program that purported to play a card game). It had to do with counting hands that had three cards of the same suit when a fourth card of that suit turned up when the deck was cut. In this case, I could actually prove mathematically that the algorithm was incorrect.
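(For readers who don't play cribbage, here is a minimal Python sketch of the two scoring rules in question, as I've described them above; the card representation is invented for illustration.)
-----
from itertools import combinations

def pair_points(ranks):
    """2 points for every distinct pair of equal ranks among the five cards.
    A 'full house' (three of one rank plus two of another) scores 6 + 2 = 8."""
    return 2 * sum(1 for a, b in combinations(ranks, 2) if a == b)

def flush_points(hand_suits, starter_suit):
    """A flush requires all FOUR hand cards in one suit (4 points),
    5 if the starter matches as well. Three matching hand cards plus a
    matching starter score nothing; that was the second error."""
    if len(set(hand_suits)) == 1:
        return 5 if starter_suit == hand_suits[0] else 4
    return 0

print(pair_points([7, 7, 7, 4, 4]))                # 8: the full house
print(flush_points(["H", "H", "H", "S"], "H"))     # 0: not a flush
-----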
So what makes this story relevant? Simply this: even with two scoring errors in the game, I was sufficiently satisfied with the quality of Precision Cribbage to
a. keep on playing it, for at least several of my valuable hours each week
b. pay the shareware "fee," even though I could have omitted payment with no fear of retribution of any kind
In short, Precision Cribbage had great value to me, value which I was willing and able to demonstrate by spending my own time and (if requested) money. Moreover, had Doug corrected these errors, it would have added very little to the value of the software.

What's Happening to Quality?
My experience with Precision Cribbage took place some years ago, and occurred in a more-or-less amateur piece of shareware. Certainly, with all we've learned over the past few decades, the rate of software errors has diminished. Or has it?
I've conducted a small survey of more modern software. Software written by professionals. Software that I use regularly. Software I paid real money for. And not software for playing games, but software used for serious tasks in my business. Here's what I found:
Out of the 20 apps I use most frequently, 16 have bugs that I have personally encountered–bugs that have cost me at least inconvenience and sometimes many hours of fix-up time, but at least one hour for each occurrence. If I value my time at a conservative $100/hour (I actually bill at $500/hour), these bugs cost me approximately $5,000 in the month of August. That's $60,000 a year, if I maintain that average.
If I consider only the purchase prices, those 20 apps cost me about $3,500. In other words, over one year, the purchase price of the software represents less than 10% of what it costs me. (And these are selected apps. The ones that are even buggier have been discarded any time I can find a plausible substitute.) In other words, since quality is value, there's a large negative quality associated with this set of applications.
And that's only for one person. In the USA, there must be at least 100,000,000 users of personal computers. My hourly rate is probably higher than the average, so let's just estimate $10/hour, roughly minimum wage for the average person. That would give us an estimated $6,000/year per person for buggy software, which adds up to about $600,000,000,000 for the annual cost to United States workers. Even if my estimates are way off, that's not chump change.
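For anyone who wants to check the arithmetic, here is the whole back-of-the-envelope estimate in one place; every figure is a rough assumption from the paragraphs above, not a measurement.
-----
my_rate = 100                    # dollars/hour, the conservative value of my time
my_monthly_cost = 5_000          # roughly 50 lost hours per month at that rate
my_yearly_cost = my_monthly_cost * 12
print(my_yearly_cost)            # 60,000 dollars per year for one person (me)

lost_hours_per_year = my_yearly_cost / my_rate   # about 600 hours
users = 100_000_000              # rough count of U.S. personal-computer users
average_rate = 10                # dollars/hour, roughly minimum wage
national_cost = users * lost_hours_per_year * average_rate
print(f"{national_cost:,.0f}")   # 600,000,000,000 dollars per year
-----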
Why Is Improving Quality So Difficult?
If the payoff is so huge, why aren't we raising software quality to new levels? We could ask the same question about improving auto safety, where tens of thousands of human lives are destroyed every year in the United States. You might think that's more motivation than any number of dollars, but it doesn't work that way. Unless the person killed in the car is someone we know, we've heard about so many traffic deaths that we've grown immune to the terrible cost. In other words, it's precisely because traffic deaths are so common that we don't get awfully excited about them.
And, I believe, it's the same with software failures. They're so common that we've learned to take them with an accepting shrug. We simply reboot and get back to work. Very seldom do we even bother to switch to a different app. The old one, with all its bugs, is too familiar, too comfortable. In fact, some people obtain most of their job security precisely because of their familiarity with software bugs and ways to work around them.
In other words, we're surprised that people don't generally feel motivated to improve quality because we vastly underrate the value of the familiar. And that observation explains an interesting paradox. Agile advocates are often so eager to prove the value of Agile methods that they strive to create products with all sorts of wonderful new features. But each new feature, no matter how potentially valuable, has a downside–a negative quality value because of its unfamiliarity. The harder we strive to produce "higher quality," the lower the quality we tend to produce.
It's a classic catch-22. To convince people of the value of Agile, we need to produce software that is full of wonderful features that the old software didn't possess, at the same time the new software functions exactly the way the old software did. No wonder change is difficult.

Sunday, September 23, 2012

Agile and the Definition of Quality


Some Agile writers have called me "the grandfather of Agile." I choose to interpret that comment as a compliment, rather than a disparagement of my advanced age. As befits a grandfather, much of my most influential writing was done long before the Agile movement appeared on stage. As a result, newcomers on the scene often fail to see the connection between those writings and today's Agile movement.
I'm planning to use my blog to correct that situation, with a series of articles relating specific material to Agile basics. I'm starting with this blog entry about my definition of "quality"–often quoted but not always understood. The essay is adapted from the very first chapter of How Software is Built, which in turn was adapted from the first volume of Quality Software Management. <http://www.geraldmweinberg.com/Site/QSM_vol_1.html>



A Bug in the Family
My sister's daughter, Terra, is the only one in the family who has followed Uncle Jerry in the writer's trade. She writes fascinating books on the history of medicine, and I follow each one's progress as if it were one of my own. For that reason, I was terribly distressed when her first book, Disease in the Popular American Press, came out with a number of gross typographical errors in which whole segments of text disappeared. I was even more distressed to discover that those errors were caused by an error in the word processing software she used–CozyWrite, published by one of my clients, the MiniCozy Software Company.
Terra asked me to discuss the matter with MiniCozy on my next visit. I located the project manager for CozyWrite, and he acknowledged the existence of the error.
"It's a rare bug," he said.
"I wouldn't say so," I countered. "I found over twenty-five instances in her book."
"But it would only happen in a book-sized project. Out of over 100,000 customers, we probably didn't have 10 who undertook a project of that size as a single file."
"But my niece noticed. It was her first book, and she was devastated."
"Naturally I'm sorry for her, but it wouldn't have made any sense for us to try to fix the bug for 10 customers."
"Why not? You advertise that CozyWrite handles book-sized projects."
"We tried to do that, but the features didn't work. Eventually, we'll probably fix them, but for now, chances are we would introduce a worse bug–one that would affect hundreds or thousands of customers. I believe we did the right thing."
As I listened to this project manager, I found myself caught in an emotional trap. As software consultant to MiniCozy, I had to agree, but as uncle to an author, I was violently opposed to his line of reasoning. If someone at that moment had asked me, "Is CozyWrite a quality product?" I would have been tongue-tied.
How would you have answered?

The Relativity of Quality
The reason for my dilemma lies in the relativity of quality. As the MiniCozy story crisply illustrates, what is adequate quality to one person may be inadequate quality to another.

Finding the relativity
If you examine various definitions of quality, you will always find this relativity. You may have to examine with care, though, for the relativity is often hidden, or at best, implicit.
Take for example Crosby's definition:
"Quality is meeting requirements."
Unless your requirements come directly from heaven (as some developers seem to think), a more precise statement would be:
"Quality is meeting some person's requirements."
For each different person, the same product will generally have different "quality," as in the case of my niece's word processor. My MiniCozy dilemma is resolved once I recognize that
a. To Terra, the people involved were her readers.
b. To MiniCozy's project manager, the people involved were (the majority of) his customers.

Who was that masked man?
In short, quality does not exist in a non-human vacuum, but every statement about quality is a statement about some person(s).
That statement may be explicit or implicit. Most often, the "who" is implicit, and statements about quality sound like something Moses brought down from Mount Sinai on a stone tablet. That's why so many discussions of software quality are unproductive: It's my stone tablet versus your Golden Calf.
When we encompass the relativity of quality, we have a tool to make those discussions more fruitful. Each time somebody asserts a definition of software quality, we simply ask,
"Who is the person behind that statement about quality."
Using this heuristic, let's consider a few familiar but often conflicting ideas about what constitutes software quality:

a. "Zero defects is high quality."
1. to a user such as a surgeon whose work would be disturbed by those defects
2. to a manager who would be criticized for those defects

b. "Lots of features is high quality."
1. to users whose work can use those features–if they know about them
2. to marketers who believe that features sell products

c. "Elegant coding is high quality."
1. to developers who place a high value on the opinions of their peers
2. to professors of computer science who enjoy elegance

d. "High performance is high quality."
1. to users whose work taxes the capacity of their machines
2. to salespeople who have to submit their products to benchmarks

e. "Low development cost is high quality."
1. to customers who wish to buy thousands of copies of the software
2. to project managers who are on tight budgets

f. "Rapid development is high quality."
1. to users whose work is waiting for the software
2. to marketers who want to colonize a market before the competitors can get in

g. "User-friendliness is high quality."
1. to users who spend 8 hours a day sitting in front of a screen using the software
2. to users who can't remember interface details from one use to the next

The Political Dilemma
Recognizing the relativity of quality often resolves the semantic dilemma. This is a monumental contribution, but it still does not resolve the political dilemma:
More quality for one person may mean less quality for another.
For instance, if our goal were "total quality," we'd have to do a summation over all relevant people. Thus, this "total quality" effort would have to start with a comprehensive requirements process that identifies and involves all relevant people. Then, for each design, for each software engineering approach, we would have to assign a quality measure for each person. Summing these measures would then yield the total quality for each different approach.
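If anyone were rash enough to attempt that summation, it would look something like this Python sketch; the stakeholders, weights, and scores are all invented for illustration.
-----
# quality[design][person]: the value each person assigns to each proposed design
quality = {
    "design_A": {"surgeon_user": 9, "marketer": 3, "project_manager": 6},
    "design_B": {"surgeon_user": 4, "marketer": 8, "project_manager": 7},
}

# Whose opinion counts, and how much? This is the (usually hidden) political step.
weight = {"surgeon_user": 0.5, "marketer": 0.3, "project_manager": 0.2}

for design, scores in quality.items():
    total = sum(weight[person] * value for person, value in scores.items())
    print(design, round(total, 2))   # design_A: 6.6, design_B: 5.8
-----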
In practice, of course, no software development project ever uses such an elaborate process. Instead, most people are eliminated by a prior process that decides:
Whose opinion of quality is to count when making decisions?
For instance, the project manager at MiniCozy decided, without hearing arguments from Terra, that her opinion carried minuscule weight in his "software engineering" decision. From this case, we see that software engineering is not a democratic business. Nor, unfortunately, is it a rational business, for these decisions about "who counts" are generally made on an emotional basis.

Quality Is Value To Some Person
The political/emotional dimension of quality is made evident by a somewhat different definition of quality. The idea of "requirements" is a bit too innocent to be useful in this early stage, because it says nothing about whose requirements count the most. A more workable definition would be this:
"Quality is value to some person."
By "value," I mean, "What are people willing to pay (do) to have their requirements met." Suppose, for instance, that Terra were not my niece, but the niece of the president of the MiniCozy Software Company. Knowing MiniCozy's president's reputation for impulsive emotional action, the project manager might have defined "quality" of the word processor differently. In that case, Terra's opinion would have been given high weight in the decision about which faults to repair.

The Impact on Agile Practices
In short, the definition of "quality" is always political and emotional, because it always involves a series of decisions about whose opinions count, and how much they count relative to one another. Of course, much of the time these political/emotional decisions–like all important political/emotional decisions–are hidden from public view. Most of us software people like to appear rational. That's why very few people appreciate the impact of this definition of quality on the Agile approaches.
What makes our task even more difficult is that most of the time these decisions are hidden even from the conscious minds of the persons who make them. That's why one of the most important actions of an Agile team is bringing such decisions into consciousness, if not always into public awareness. And that's why development teams working with an open process (like Agile) are more likely to arrive at a more sensible definition of quality than one developer working alone. To my mind, a team with even one secret component isn't Agile at all.
Customer support is another emphasis in Agile processes, and this definition of quality guides the selection of the "customers." To put it succinctly, the "customer" must actively represent all of the significant definitions of "quality." Any missing component of quality may very likely lead to a product that's deficient in that aspect of quality. As a consultant to supposedly Agile teams, I always examine whether or not they have active participation of a suitable representation of diverse views of their product's quality. If they tell me, "We can be more agile if we don't have to bother satisfying so many people," then they may indeed be agile, but they're definitely not Agile.


Sunday, September 16, 2012

I am a music box.

Though I embody the finest science and craft, made entirely by hand, I exist only to create beauty and pleasure.

I can be played closed, as a mystery.

I can be played open, when every part is open for inspection.

Yet though every part can be seen, I cannot be understood as a mechanical object.
I need the touch of human fingers to wind me with energy, adjust my gears, and start my music.

Without human contact, I am merely a lifeless decoration.

Wednesday, August 08, 2012

Mistakes that Win


We all make mistakes.

We all try to eliminate mistakes.

But sometimes, mistakes are our best friends.

One of the most common mistakes is arrogance—the belief that we know what we're doing. When we're arrogant, we think our knowledge is complete—particularly in our own work.

Here's an example. Johanna Rothman asked me to write a foreword for her terrific book, Hiring the Best. As I read the book, I realized that Johanna had made a horrible mistake in marketing the book. She said the book was for managers who do the hiring, but what she failed to see about her own work was an even bigger audience: people trying to be hired. Fortunately, that mistake, that omission, could be easily corrected.

Of course, I never make such mistakes, right?

Wrong!

I recently began publishing a series of books on Experiential Learning. In response to the second volume, Jason Reid wrote the following letter:

I participated in PSL this past May. ... I recently had the opportunity to conduct "project debriefs" with several coworkers. We didn't have a standard at my company for how these meetings should run, so I was free to design the agendas. I don't recall what sparked the connection, but I eventually thought of your book, Experiential Learning 2: Invention, and of some of the knowledge invention activities we performed during PSL. I figured that those activities were designed to assist learning after hands-on experiences, and I thought, "What's more hands-on than actual work?" So I determined that my goal for the debriefs would be to help my coworkers learn from their experiences on their projects, and your book was a gold mine for questions to ask them.

Each meeting went very well and I enjoyed them immensely. I lost count of the number of times I heard, "That's a great question!" from the person I was helping. While the mechanics of the meetings were limited to one-on-one discussion, I look forward to incorporating more of the activities in your book into future debriefs.

I now have a bruise on my forehead, from slapping myself when I read Jason's letter. He had caught me making the same mistake I had caught Johanna making: underestimating the market for my own book.

Fortunately, from now on, I will remind people that Experiential Learning 2: Invention is "a gold mine" of questions and exercises useful for conducting retrospectives. If that leads people to read the book, then I have managed to profit from a friend pointing out my arrogance and stupidity.

Are your friends helpful in this way?

Wednesday, June 27, 2012

Is it real or is it Agile? (Part 1)


Well, I'm writing my blog again, now that my book, Experiential Learning: Inventing, is published on LeanPub.com. But after only one week, the feedback has started, and I feel a need to respond.
My friend and colleague, Markus Gaertner was the first to write, all the way from Germany:
"I just started reading your second book on Experiential Learning. One thing that confused me is in the chapter Design and Development Inventions where you make a positive reference to agile processes. This confused me since you usually write in a timeless manner, and I don't consider agile processes to be timeless in--say--20 years or so."
The passage in question read like this:
-----
Sometimes the specification turns on the meaning of a word, such as “support,” or “height.” The trouble is, you don’t know in advance which word it turns on ... that depends on the design possibilities. Therefore, an organization or process that discourages back-and-forth communication will generally do a poorer job of designing things. That’s one of the reasons agile processes can work so well.
-----
My first reaction to Markus's comment was that he was exactly right. My half-century of experience tells me that in 20 years, the "agile" craze will have petered out, just like so many before it--structured programming, HIPO, Nassi-Schneiderman charts, and dozens of others. So most readers in 2030 or so won't even recognize the word "agile" as a code.
Yet forgetting the code doesn't mean that "agile" principles will have disappeared as good programming practices. I was specifically referring to two sentences from the Agile Manifesto (you know, that document that a dozen of the guys worked out a decade ago, without the help of any women):
1. "Business people and developers must work together daily throughout the project."
2. "The most efficient and effective method of conveying information to and within a development team is face-to-face conversation."
That's what I meant by "back-and-forth communication," and I believe those sentences will still describe effective programming practice a generation from now (as they did a generation ago, and two generations ago).
So Markus is right. There's no reason for me to date my book by using the code word, "agile." I published my Experiential Learning series with LeanPub.com so it could be a dynamic e-book, changing as the world changed and I learned to be smarter. So, I will update the next version with something like Markus's suggested wording:
"That's one of the reasons that software development works so well with bi-directional communication in place."
 (I'll also make a batch of changes suggested by Dani--who didn't even recognize the code word, "agile," in 2012--plus whatever other wisdom arrives from my readers.)
But, as usual, I'm not finished responding to Markus and Dani. In a later blog, I'm going to continue with some thoughts about what "agile" really is.

Thursday, June 21, 2012

Experiential Learning vol 2: Inventing

Regular readers of this blog have probably noticed a reduction in my posting frequency in recent weeks. Perhaps you'll all forgive me when I tell you I've been distracted from blogging by finishing volume 2 of my Experiential Learning series, called Inventing or Invention, I can never remember which. Anyway, it says "invention" on the cover, and can be found here.

Anyway, it's about the part of experiential learning that comes after the experience--the part where we invent the learnings we've found during the experience. It's what converts an ordinary experience into a learning experience. If you're teaching experientially, you'll want to learn how to facilitate invention--but that's not all. The book is full of techniques I personally use to extract learnings from all my experiences, whether in a class or in life.

As we say in life-learning, "First you pay the tuition, then the learning is optional." If you want to take advantage of the learning you've paid for with your life, Experiential Learning: Volume 2, Invention is the book for you.

Friday, June 08, 2012

Shaping a Team


A Correspondent Writes
I have taken over a group of folks that I need to shape into a team. There are many issues including getting developers to write unit tests consistently, training my test engineers and deploying more test automation. More worrisome is that they do not want to change out of a poor pattern of behaviors. I suppose since they hit their delivery schedule they think things are OK. On the plus side, they say they are committed to quality.

What would you look more into? Tackle first? Is there an inspiring story I might share at my upcoming team building event to highlight the need to change?

Any advice is welcome and greatly appreciated.


Jerry Replies
Well, you're certainly experiencing a classical problem, one I've described in a number of places, including my Becoming a Technical Leader (in terms of my pinball expertise). They're stuck on a plateau, and it's going to take some skilled leadership to move them up to the next level.

The first thing you have to do is create a safe environment that will protect them while they are changing to new practices. Although those practices must be designed to improve the quality of their work, there is no doubt that at first they will slow them down and probably hurt quality. That's why they need protection.

Start small, with some step that ideally they will choose from a list you develop together. Choose something that's as sure to succeed as possible and that will help them in some obvious way. In other words, you want to start with a guaranteed success, and then build up from there.

 Beyond that, I suggest you read my books on change

- BECOMING A CHANGE ARTIST
- CHANGE: Planned & Unplanned
- Change Done Well



Thursday, May 31, 2012

Writers Need Feedback


I recently received an email containing the following paragraph:

"A tester peer of mine here in town recently told me a great story of how your book, Perfect Software, helped save one of his tester's jobs. He gave the book to his Manager and it changed the manager's mind about testing and the need for good testers. I've encouraged my peer to contact you with the story in more detail and will keep doing that."

I love to hear stories of how my writing is influencing real people to change their world (for the better, I hope). I specifically intended Perfect Software and Other Illusions about Testing for the purpose of educating managers and others who require a better understanding of software testing if they are to do a better job.

Do You Have a Story?

I'd love to hear your story of how one of my writings helped you do a better job. I'd even like to hear your story of how one of my writings led you to do a worse job. I need this kind of feedback if I'm to do a better job myself.

In fact, I'd even like to hear stories about how other authors' writings helped or hindered your work. Or, even better, about writings that helped improve your life. Or made it worse.

I suppose I should give an example. Okay, like many smart people, I used to use my intelligence to think of reasons I should be miserable. That kind of thinking made me a rather miserable person. Then I read Bertrand Russell's little book, The Conquest of Happiness. Russell's words showed me that I could use my intelligence to be happy, not miserable. They changed my life.

They say that the pen is mightier than the sword. Well, we don't use pens much any more (or swords, for that matter), but there's still plenty of power in our keyboards. With all that power, we need to know if we're using it to make people stronger or to lop off their heads.

So, let me hear from you, and I'll try to pass your feedback on to the whole world of writers.