
Sunday, April 02, 2017

Complexity: Why We Need General Systems Thinking


It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so. - Will Rogers

The first step to knowledge is the confession of ignorance. We know far, far less about our world than most of us care to confess. Yet confess we must, for the evidences of our ignorance are beginning to mount, and their scale is too large to be ignored!

If it had been possible to photograph the earth from a satellite 150 or 200 years ago, one of the conspicuous features of the planet would have been a belt of green extending 10 degrees or more north and south of the Equator. This green zone was the wet evergreen tropical forest, more commonly known as the tropical rain forest. Two centuries ago it stretched almost unbroken over the lowlands of the humid Tropics of Central and South America, Africa, Southeast Asia and the islands of Indonesia.

... the tropical rain forest is one of the most ancient ecosystems ... it has existed continuously since the Cretaceous period, which ended more than 60 million years ago. Today, however, the rain forest, like most other natural ecosystems, is rapidly changing. ... It is likely that, by the end of this century very little will remain. - Karl Deutsch 

This account may be taken as typical of hundreds filling our books, journals, and newspapers. Will the change be for good or evil? Of that, we can say nothing—that is precisely the problem. The problem is not change itself, for change is ubiquitous. Neither is the problem in the man-made origin of the change, for it is in the nature of man to change his environment. Man’s reordering of the face of the globe will cease only when man himself ceases.

The ancient history of our planet is brimful of stories of those who have ceased to exist, and many of these stories carry the same plot: Those who live by the sword, die by the sword. The very source of success, when carried past a reasonable point, carries the poison of death. In man, success comes from the power that knowledge gives to alter the environment. The problem is to bring that power under control.

In ages past, the knowledge came very slowly, and one man in his life was not likely to see much change other than that wrought by nature. The controlled incorporation of arsenic into copper to make bronze took several thousand years to develop; the substitution of tin for the more dangerous arsenic took another thousand or two. In our modern age, laboratories turn out an alloy a day, or more, with properties made to order. The alloying of metals led to the rise and fall of civilizations, but the changes were too slow to be appreciated. A truer blade meant victory over the invaders, but changes were local and slow enough to be absorbed by a million tiny adjustments without destroying the species. With an alloy a day, we can no longer be sure.

Science and engineering have been the catalysts for the unprecedented speed and magnitude of change. The physicist shows us how to harness the power of the nucleus; the chemist shows us how to increase the quantity of our food; the geneticist shows us how to improve the quality of our children. But science and engineering have been unable to keep pace with the second-order effects produced by their first-order victories. The excess heat from the nuclear generator alters the spawning pattern of fish, and, before adjustments can be made, other species have produced irreversible changes in the ecology of the river and its borders. The pesticide eliminates one insect only to the advantage of others that may be worse, or the herbicide clears the rain forest for farming, but the resulting soil changes make the land less productive than it was before. And of what we are doing to our progeny, we still have only ghastly hints.

Some have said the general systems movement was born out of the failures of science, but it would be more accurate to say the general systems approach is needed because science has been such a success. Science and technology have colonized the planet, and nothing in our lives is untouched. In this changing, they have revealed a complexity with which they are not prepared to deal. The general systems movement has taken up the task of helping scientists unravel complexity, technologists to master it, and others to learn to live with it.

In this book, we begin the task of introducing general systems thinking to those audiences. Because general systems is a child of science, we shall start by examining science from a general systems point of view. Thus prepared, we shall try to give an overview of what the general systems approach is, in relation to science. Then we begin the task in earnest by devoting ourselves to many questions of observation and experiment in a much wider context. 

And then, having laboriously purged our minds and hearts of “things we know that ain’t so,” we shall be ready to map out our future general systems tasks, tasks whose elaboration lies beyond the scope of this small book.

[Thus begins the classic, An Introduction to General Systems Thinking]

Sunday, February 05, 2017

Fuzz Testing and Fuzz History

In 2016 I added a paragraph to the Wikipedia page on "fuzz testing." Later, the paragraph was edited out because it "lacked reference." The editor, however, suggested that I blog the paragraph and then use the blog as a reference, so the paragraph could be included. So, here's the paragraph:

(Personal recollection from Gerald M. Weinberg) We didn't call it fuzzing back in the 1950s, but it was our standard practice to test programs by inputting decks of punch cards taken from the trash. We also used decks of random number punch cards. We weren't networked in those days, so we weren't much worried about security, but our random/trash decks often turned up undesirable behavior. Every programmer I knew (and there weren't many of us back then, so I knew a great proportion of them) used the trash-deck technique.
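For readers who never saw the technique in action, here is a minimal sketch of the same trash-deck idea in modern dress. It's Python rather than punch cards, and the fixed-format "card" parser is invented purely for illustration:

    import random
    import string

    def parse_record(card: str) -> dict:
        """Hypothetical program under test: reads a fixed-format 'card'."""
        return {
            "name": card[0:8].strip(),
            "quantity": int(card[8:12]),   # expects four digits here
            "code": card[12:18],
        }

    def random_card(length: int = 80) -> str:
        """Stand-in for a card pulled from the trash: arbitrary characters."""
        return "".join(random.choice(string.printable) for _ in range(length))

    if __name__ == "__main__":
        random.seed(1957)  # repeatable run
        for i in range(10_000):
            try:
                parse_record(random_card())
            except ValueError:
                pass  # cleanly rejecting garbage input is acceptable behavior
            except Exception as exc:
                # Anything else is the "undesirable behavior" the
                # trash decks used to turn up.
                print(f"card #{i} crashed the parser: {exc!r}")

The point is not this particular parser; it's that feeding a program input it was never designed for is a cheap way to discover how it fails.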

The subject of software testing has many myths and distortions. This story of fuzz testing has several morals:

1. This type of testing was so common that it had no name. Apparently, it was given the name "fuzz testing" around 1988, and the namers were thus given credit in the Wikipedia article for "inventing" the technique.

2. This is just one example of how "history" is created after the fact by human beings, and what they write becomes "facts." That's why I believe there are no such things as "facts"—not in the sense of "truths."

3. In any case, this is one example of why we ought to be wary of labeling "inventors" of various techniques and technologies. For instance, Gutenberg is often labeled the "inventor" of moveable type, though moveable type existed and was widely used long before Gutenberg. Gutenberg used this idea in ways that hadn't been employed before. That was his "invention," and a worthy one it was, but if we're to understand the way technology develops, we have to be more precise in our definition of what was invented and by whom.

Finally, I have no idea who "invented" fuzz testing. It certainly wasn't me.

NOTE: If someone would like to update the fuzz testing article on Wikipedia, they're welcome to reference this blog post.

Thursday, November 10, 2011

Iterative Development: Some History

As an old guy who's been around computing since 1950, I'm often asked about the early history of computing. I appreciate efforts to capture some of our history, and I try to contribute when my aging memory doesn't play tricks on me.

Back in 2003, Craig Larman and Victor R. Basili compiled an interesting article, Iterative and Incremental Development: A Brief History. I made several contributions to their history, but they did much, much more. Here's a small sample of what I told them:

We were doing incremental development as early as 1957, in Los Angeles, under the direction of Bernie Dimsdale [at IBM's Service Bureau Corporation]. He was a colleague of John von Neumann, so perhaps he learned it there, or assumed it as totally natural. I do remember Herb Jacobs (primarily, though we all participated) developing a large simulation for Motorola, where the technique used was, as far as I can tell, indistinguishable from XP.

When much of the same team was reassembled in Washington, DC in 1958 to develop Project Mercury, we had our own machine and the new Share Operating System, whose symbolic modification and assembly allowed us to build the system incrementally, which we did, with great success. Project Mercury was the seed bed out of which grew the IBM Federal Systems Division. Thus, that division started with a history and tradition of incremental development.

All of us, as far as I can remember, thought waterfalling of a huge project was rather stupid, or at least ignorant of the realities… I think what the waterfall description did for us was make us realize that we were doing something else, something unnamed except for "software development."


Larman and Basili's article has a whole lot more to say, and as far as I know, is an accurate history. I strongly recommend that all in our profession give it a good read. We should all know these things about ourselves.

Monday, June 06, 2011

Beyond Agile Programming

After being in the computing business now for more than half a century, one thing worries me more than almost anything else: our lack of a sense of history. In order to contribute my bit to addressing that problem, I've posted this essay—one that's sure to infuriate many of my readers, including some of my best friends. So first let me tell you how it came about.

While reformatting my book, Rethinking Systems Analysis and Design, for release as an e-book, I noticed a few places that might need updating to present realities. The version I was using was more than 20 years old, from just after the peak of excitement about "structured programming." In particular, there was a whole section entitled "Beyond Structured Programming." As I contemplated updating that section, it dawned on me that I could update it almost completely by substituting the name of any more recent "movement" (or fad) for the word "structured."

I also knew how smart most of my readers are, so I figured they would see the same possibility without my updating a thing. Instead of changing the book, I decided to update the section and publish it on this blog. Why? Because I think it shows an important pattern—a script where only the names have changed over at least five decades. So, here is the section with "agile" substituted for "structured," just as "structured" had been substituted for some other fad a generation earlier.

The Restructured Essay
Before I proceed further with the task of rethinking systems analysis and design, I'd like to express myself on the subject of another great "rethinking" in programming—the agile programming revolution. Although this essay was written a generation ago (now two generations), and the agile programming "revolution" is now an exhausted fad (for most programmers), most of what this essay says still applies—though to the next rethinking fad, and the next, and the next. I believe it will still apply long after I'm no longer writing new editions. Why? Because our industry seems to require a new fad every decade to keep itself from being bored. So, just apply the lessons to whatever fad happens to be dominating the computer press at the time you're reading this.

Before anyone becomes overly enthusiastic about what the rest of this book says, I want to take stock of what this great agile rethinking has done. I don't claim to be starting a new revolution of the magnitude most of the fads claim, so I'd like people to realize how slow and how small the agile programming movement has been, in case they think this book is going to make much difference.

My own personal stock-taking on the subject of agile programming is based on visits to some forty installations on two continents over the past ten years, plus a few hundred formal and informal discussions with programmers, analysts, managers, and users during the same period. Because of the conditions under which these visits and interviews took place, I would estimate the sample is quite heavily biased toward the more progressive organizations. By "progressive," I mean those organizations more likely to:
• Send staff to courses
• Hire outside consultants, other than in panic mode
• Encourage staff to belong to professional organizations, and to attend their meetings.
Consequently, my stock-taking is likely to be rather optimistic about the scope and quality of the effects of agile programming.

The first conclusion I can draw from my data is this:
Much less has been done than the press would have you believe.
I interpret the word "press" very loosely, including such sources as:
• Enthusiastic upper management
• The trade press
• The vendors and their advertising agencies
• The universities, their public relations staffs, and their journals
• The consulting trade.
Although this may be the most controversial of my observations, it is the most easily verified. All you need do is ask for examples of agile programming—not anecdotes, but actual examples of agile behavior and agile-produced code. If you're given any examples at all, you can peruse them for evidence of following the "rules" of agile programming. Generally, you will find:

a. Five percent can be considered thoroughly agile.

b. Twenty percent can be considered to follow agile practices sufficiently to represent an improvement over the average code of 1990.

c. Fifty percent will show some evidence of some attempt to follow some "agile rules," but without understanding and with little, if any, success.

d. Twenty-five percent will show no evidence of influence by any ideas about programming (not just agile) from the past twenty years.

Please remember: these percentages apply to the code and behavior you will actually see in response to your request. If you ask software organizations at random for "agile examples," about two-thirds will manage to avoid giving you anything. We can merely speculate what they do, and what their code contains.

My second conclusion:
There are rather many conceptions of what agile programming ought to look like, all of which are reasonably equivalent if followed consistently.
The operative clause in this observation seems to be "if followed consistently." Some of these conceptions are marketed in books and/or training courses. Some are purely local to a single installation, or even to one team in an installation. Most are mixtures of some "patented" method and local adaptations.

My third observation:
Methods representing thoughtful adaptations of "patented" and "local" ideas on agile programming are far more likely to be followed consistently.
In other words, programmers seem disinclined to follow an agile methodology when it is either:
1. Blind following of "universal rules"
2. Blind devotion to the concept: anything "not invented here" must be worthless.

I have a fourth observation to make, but first I must pause and relate the effect these observations have on many readers, perhaps including you. I recall a story about a little boy who was playing in the schoolyard rather late one evening. A teacher who had been working late noticed the boy and asked if he knew what time it was.
"I'm not sure," the boy said, "but I know it isn't six o'clock yet."
"And how do you know that?" the teacher asked.
"Because I'm supposed to be home at six, and I'm not home."
When I make my first three observations about agile programming, I get a similar reaction—something like this:
"These can't be right, because if they were right, why would there be so much attention to agile programming?"

In spite of its naive tone, the question deserves answering. The answer can serve as my fourth observation:

Agile programming has received so much attention for the following reasons:
• The need is very great for some help in programming.
• To people who don't understand programming at all, it seems chaotic, so the term "agile" sounds awfully promising.

• The approach actually works, when it is successfully applied, so there are many people willing to give testimonials, even though their percentages among all programmers may not be great.

• The computer business has always been driven by marketing forces, and marketing forces are paid to be optimistic, and not to distinguish between an idea and its practical realization.

In other words, the phrase "agile programming" is similar to the phrase "our latest computer," because each phrase can be used interchangeably in statements such as these:

• "If you are having problems in information processing, you can solve them by installing our latest computer."

• "Our latest computer is more cost effective and easier to use."

• "Your people will love our latest computer, although you won't need so many people once our latest computer has been installed."

• "Conversion? No problem! With our latest computer, you'll start to realize savings in a few weeks, at most."

So actually, the whole agile programming pitch was pre-adapted for the ease of professionals, who have always believed "problems" had "solutions" which could be mechanically applied.

My final observation is related to all of the others:
Those installations and individuals who have successfully realized the promised benefits of agile programming tend to be the ones who don't buy the typical hardware or software pitch, but who listen to the pitch and extract what they decide they need for solving their problems. They do their own thinking, which includes using the thoughts of others, if they're applicable. By and large, they were the most successful problem solvers before agile programming, and are now even more successful.

There's yet another lesson in all this that's much bigger than agile programming or any new hardware or software or process:

Our profession contains few, if any, easy solutions. Success in problem solving comes to those who don't put much faith in the latest "magic," but who are willing to try ideas out for themselves, even when those ideas are presented in a carnival of public relations blather.
Based on this lesson, I'd like to propose a new "programming religion," a religion based on the following articles of faith:

• There's no consistent substitute for a thorough understanding of your problem, though sometimes people get lucky.

• There's no solution applicable to every problem, and what may be the best approach in one circumstance may be precisely the worst in another.

• There are many useful approaches applicable to more than one problem, so it pays to become familiar with what has worked before.

• The trick to problem solving is not just "know-how," but "know-when"—which lets you adapt the solution method to the problem, and not vice versa.

• No matter how much you know how or know when, some problems won't yield to present knowledge, and some aspects of the problem nobody currently understands, so humility is always in order.

I realize writing a book is not the most humble thing a person can do, but it's what I do best, and how I earn my living. I'd be embarrassed if anyone took this book too seriously. We don't need another "movement" just now, unless it is something analogous to a bowel movement—something to flush our system clean of waste material we've accumulated over the years.

Where to read the original
If you want to check on my historical work, you can find the original essay (and many others) in Rethinking Systems Analysis and Design, available as an ebook on Smashwords (where you can probably read it in the free sample), Kindle, and Barnes and Noble.


Problem-Solving Leadership Workshop
Reminder: The second (and last) PSL Workshop for 2011 will take place in Albuquerque, New Mexico, USA, August 28-September 2, 2011. Only a few places left for participants, so for more information, see <http://www.estherderby.com/workshops/problem-solving-leadership-psl>

Thursday, May 05, 2011

Project Mercury

I was one of the workers on Project Mercury. My job was to design and implement the space tracking network, and particularly the multiprogrammed operating system that ran the entire network. Many readers have asked me about Mercury, so here's a little something courtesy of SPACE.com:

See how the first American astronauts flew in space on NASA's Mercury space capsules in this SPACE.com infographic.
Source: SPACE.com (All about our solar system, outer space and exploration)

Sunday, February 27, 2011

Who Can Alienate Readers Better?

I'm an author who's old enough to remember when the people who ran "Big Publishing" were book people—people who had some fairly decent intuition about books and the people who read them (in other words, their products and their customers). My first book was published by McGraw-Hill. They were the biggest of the big, but they treated me with respect. For example, when I spotted trouble on my royalty statement, the situation was handled personally by the company president (one of the McGraws).

Four McGraw-Hill books later, the company was having some trouble over a bogus Howard Hughes biography, and turned down every new project for a year—including my latest manuscript, The Psychology of Computer Programming. I was naive enough to be shocked that a publisher might turn down a good book, so I thought I must have done something wrong. After moping in self-doubt for a year, I recovered sufficiently to circulate the book to four publishers and was offered a contract by each of them. I chose Van Nostrand.

A year later, when the printed book was delivered, I went down to NYC to receive my first copy from the hand of my editor (a ritual I had practiced with McGraw-Hill). When I suggested we go to my editor's office to sit down and talk, he told me he didn't have an office—because he had just been fired.

Turns out he'd been fired by the corporate executives for publishing my book. In the interval since contract signing, Van Nostrand had been purchased by Litton Industries, along with (as I recall) four other publishers. The idea was to convert publishing to a "proper" business model—and this was the first such acquisition/consolidation, the one that began this new era in the publishing industry.

This new model included taking editorial responsibility out of the hands of the editors (real book people) and putting it into the hands of the executives (real business people).

Apparently their business intuition told them the book wouldn't sell; that intuition didn't work. In spite of fantastic order-fulfillment screw-ups (another byproduct of the acquisition/consolidation, but that's another story), The Psychology of Computer Programming outsold all other similar books in Van Nostrand's inventory. It's still selling. I got the rights back (another stupid business decision by the executives), and the book is still selling steadily after almost 40 years, with over 250,000 copies in a dozen languages. (It will soon be out as an eBook.)

And, after 40 years, these business executives are still clueless about that "book business," as opposed to their "book business." If you don't believe that, watch them screwing up the eBook business in just about every imaginable way. (Nobody said they weren't creative.) For instance, here's what Macmillan CEO John Sargent recently had to say about libraries and ebooks:

    "That is a very thorny problem”, said Sargent. In the past, getting a book from libraries has had a tremendous amount of friction. You have to go to the library, maybe the book has been checked out and you have to come back another time. If it’s a popular book, maybe it gets lent ten times, there’s a lot of wear and tear, and the library will then put in a reorder. With ebooks, you sit on your couch in your living room and go to the library website, see if the library has it, maybe you check libraries in three other states. You get the book, read it, return it and get another, all without paying a thing. “It’s like Netflix, but you don’t pay for it. How is that a good model for us?"

    "If there’s a model where the publisher gets a piece of the action every time the book is borrowed, that’s an interesting model." - from http://go-to-hellman.blogspot.com/2010/03/ebooks-in-libraries-thorny-problem-says.html


If you don't understand what's wrong with this statement, take a look at the article and comments, "Friday Alert: HarperCollins in cagematch with Macmillan to see who can alienate readers better." <http://dearauthor.com/wordpress/2011/02/25/friday-alert-harpercollins-in-cagematch-with-macmillan-to-see-who-can-alienate-readers-better/>

Or, if that's not helping, take a look at past history—for example, the reaction of the Western Union executives when the technology for voice-over-wire (telephone) became available. Or, study the music industry executives' bungling of the digital music scene.
Whichever example you choose, it's always the same pattern of response to new science or new technology: The people on top of the existing industry always try to stifle the new in order to preserve the old. They bungle, and that opens the door for all sorts of brash newcomers. Brash, that is, until they become the fat cats and play the same bungling role when the next innovation comes along—as it always does.

The only question is "Who will be the brash newcomers this time around?"

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Find my eBook novels and nonfiction listed at these stores

• Barnes and Noble bookstore: http://tinyurl.com/4eudqk5

• Amazon Store: http://amazon.com/-/e/B000AP8TZ8

• Apple Store: http://apple.com

• Smashwords Store:
http://www.smashwords.com/profile/view/JerryWeinberg?ref=JerryWeinberg
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tuesday, January 04, 2011

The Universal Pattern of Huge Software Losses


What Do Failures Cost?
Some perfectionists in software engineering are overly preoccupied with failure, and most others don't rationally analyze the value they place on failure-free operation. Nonetheless, when we do measure the cost of failure carefully, we generally find that great value can be added by producing more reliable software. In Responding to Significant Software Events, I give five examples that should convince you.

The national bank of Country X issued loans to all the banks in the country. A tiny error in the interest rate calculation added up to more than a billion dollars that the national bank could never recover.

A utility company was changing its billing algorithm to accommodate rate changes (a utility company euphemism for "rate increases"). All this involved was updating a few numerical constants in the existing billing program. A slight error in one constant was multiplied by millions of customers, adding up to X dollars that the utility could never recover. The reason I say "X dollars" is that I've heard this story from four different clients, with different values of X. Estimated losses ranged from a low of $42 million to a high of $1.1 billion. Given that this happened four times to my clients, and given how few public utilities are clients of mine, I'm sure it's actually happened many more times.

I know of the next case through the public press, so I can tell you that it's about the New York State Lottery. The New York State legislature authorized a special lottery to raise extra money for some worthy purpose. As this special lottery was a variant of the regular lottery, the program to print the lottery tickets had to be modified. Fortunately, all this involved was changing one digit in the existing program. A tiny error caused duplicate tickets to be printed, and public confidence in the lottery plunged with a total loss of revenue estimated between $44 million and $55 million.

I know the next story from the outside, as a customer of a large brokerage firm:
One month, a spurious line of $100,000.00 was printed on the summary portion of 1,500,000 accounts, and nobody knew why it was there. The total cost of this failure was at least $2,000,000, and the failure resulted from one of the simplest known errors in COBOL coding: failing to clear a line in a print area.
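The COBOL details are long gone, but the class of error (reusing an output area without clearing it between records) is easy to reproduce in any language. Here is a small Python sketch of the same mistake; the account data and the print layout are invented for illustration:

    def print_statements(accounts):
        # 'line' stands in for a COBOL print-line area, reused for every account
        line = {"label": "", "amount": ""}
        for acct in accounts:
            line["label"] = acct["name"]
            if acct.get("special_charge"):
                line["amount"] = f"${acct['special_charge']:,.2f}"
            # BUG: line["amount"] is never cleared, so the previous account's
            # charge carries over onto every later statement.
            print(f"{line['label']:<20}{line['amount']}")

    print_statements([
        {"name": "Account A", "special_charge": 100000.00},
        {"name": "Account B"},
        {"name": "Account C"},
    ])
    # Accounts B and C each show a spurious $100,000.00.
    # The one-line fix: reset line["amount"] = "" at the top of the loop.

Multiply that one stale field by a million and a half accounts and you have the brokerage firm's two-million-dollar statement run.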

I know this story, too, from the outside, as a customer of a mail-order company, and also from the inside, as their consultant. One month, a new service phone number for customer inquiries was printed on each bill. Unfortunately, the phone number had one digit incorrect, producing the number of a local doctor instead of the mail-order company. The doctor's phone was continuously busy for a week until he could get it disconnected. Many patients suffered, though I don't know if anyone died as a result of not being able to reach the doctor. The total cost of this failure would have been hard to calculate except for the fact that the doctor sued the mail-order company and won a large settlement.

The Pattern of Large Failures
Every such case that I have investigated follows a universal pattern:

1. There is an existing system in operation, and it is considered reliable and crucial to the operation.

2. A quick change to the system is desired, usually from very high in the organization.

3. The change is labeled "trivial."

4. Nobody notices that statement 3 is a statement about the difficulty of making the change, not the consequences of making it, or of making it wrong.

5. The change is made without any of the usual software engineering safeguards, however minimal, that the organization has in place.

6. The change is put directly into the normal operations.

7. The individual effect of the change is small, so that nobody notices immediately.

8. This small effect is multiplied by many uses, producing a large consequence.

Whenever I have been able to trace management action subsequent to the loss, I have found that the universal pattern continues. After the failure is spotted:

9. Management's first reaction is to minimize its magnitude, so the consequences continue for somewhat longer than necessary.

10. When the magnitude of the loss becomes undeniable, the programmer who actually touched the code is fired—for having done exactly what the supervisor said.

11. The supervisor is demoted to programmer, perhaps because of a demonstrated understanding of the technical aspects of the job. [not]

12. The manager who assigned the work to the supervisor is slipped sideways into a staff position, presumably to work on software engineering practices.

13. Higher managers are left untouched. After all, what could they have done?

The First Rule of Failure Prevention
Once you understand the Universal Pattern of Huge Losses, you know what to do whenever you hear someone say things like:

• "This is a trivial change."

• "What can possibly go wrong?"

• "This won't change anything."

When you hear someone express the idea that something is too small to be worth observing, always take a look. That's the First Rule of Failure Prevention:

Nothing is too small to be worth observing.

It doesn't have to be that way
Disaster stories always make good news, but as observations, they distort reality. If we consider only software engineering disasters, we omit all those organizations that are managing effectively. But good management is so boring! Nothing ever happens worth putting in the paper. Or almost nothing. Fortunately, we occasionally get a heart-warming story, such as Financial World's account of Charles T. Fisher III of NBD Corporation, one of its award-winning CEOs for the Eighties:

"When Comerica's computers began spewing out erroneous statements to its customers, Fisher introduced Guaranteed Performance Checking, promising $10 for any error in an NBD customer's monthly statement. Within two months, NBD claimed 15,000 new customers and more than $32 million in new accounts."

What the story doesn't tell is what happened inside the Information Systems department when they realized that their CEO, Charles T. Fisher III, had put a value on their work. I wasn't present, but I could guess the effect of knowing each prevented failure was worth $10 cash.

The Second Rule of Failure Prevention
One moral of the NBD story is that those other organizations do not know how to assign meaning to their losses, even when they finally observe them. It's as if they went to school, paid a large tuition, and failed to learn the one important lesson—the First Principle of Financial Management, which is also the Second Rule of Failure Prevention:

A loss of X dollars is always the responsibility of an executive whose financial responsibility exceeds X dollars.

Will these other firms ever realize that exposure to a potential billion dollar loss has to be the responsibility of their highest ranking officer? A programmer who is not even authorized to make a long distance phone call can never be responsible for a loss of a billion dollars. Because of the potential for billion dollar losses, reliable performance of the firm's information systems is a CEO level responsibility.

Of course I don't expect Charles T. Fisher III or any other CEO to touch even one digit of a COBOL program. But I do expect that when the CEOs realize the value of trouble-free operation, they'll take the right CEO-action. Once this happens, this message will then trickle down to the levels that can do something about it—along with the resources to do something about it.

Learning from others
Another moral of all these stories is that by the time you observe failures, it's much later than you think. Hopefully, your CEO will read about your exposure in these case studies, not in a disaster report from your office. Better to find ways of preventing failures before they get out of the office.

Here's a question to test your software engineering knowledge:
What is the earliest, cheapest, easiest, and most practical way to detect failures?

And here's the answer that you may not have been expecting:

The earliest, cheapest, easiest, and most practical way to detect failures is in the other guy's organization.

Over my half-century in the information systems business, there have been many unsolved mysteries. For instance, why don't we do what we know how to do? Or, why don't we learn from our mistakes? But the one mystery that beats all the others is why don't we learn from the mistakes of others?

Cases such as those cited above are in the news every week, with strong impact on the general public's attitudes about computers. But they seem to have no impact at all on the attitudes of software engineering professionals. Is it because they are such enormous losses that the only safe psychological reaction is, "It can't happen here (because if it did, I would lose my job, and I can't afford to lose my job, therefore I won't think about it)"?

(Adapted from Responding to Significant Software Events: http://www.smashwords.com/books/view/35783)

Sunday, June 14, 2009

Was There Process Before Agile?

Reader June Kim recently wrote:

On p.328 of Volume 4 of your Quality Software Management series, the first line goes:

"Here's another example, mixing incremental development and hacking:"

In this Chapter 18, the process models you mention and describe are Waterfall, Cascade, Iterative Enhancement, and Prototyping (with Hacking and Rapid Prototyping as its variants). Incremental development isn't among them.

"Incremental development," as you wrote on that page, doesn't appear anywhere earlier in the chapter.

Is Agile An Example of Incremental Development?

QUESTION: Which process model are you referring to when you wrote "incremental development"?
Is it one among the four process models that you mentioned earlier in the chapter, or is it something different?

ANSWER: First, you have to remember that when QSM was written, there was no "Agile" development craze. We were doing various development processes, some of which were given capital letters, some of which were not, and some of which were "owned" by certain advocates. I was trying to be descriptive then, not to favor anybody's pet process.

There were some "owned" processes (using the hated word, "methodology," and charging tens of thousands of dollars for shelves full of notebooks which nobody read). I suppose some of them are still around, but most of the organizations I work with are now smarter than to fall for that fallacy. For the most part, every organization custom-tailored its own process (or, in most cases, processes, plural).

Of course, that's still true today. I don't find many organizations using some "pure" version of agile.

Where Do You Place Agile?

QUESTION: I am interested in where you would put the Agile process. I think Agile (XP and Scrum, for example) is closest to Rapid Prototyping as you described it.

ANSWER: Historically, the people who first named "Agile" processes were borrowing the best of all these methods. You could also say that any agile process is a cascade (or iterative enhancement if you're actually putting each iteration's output into use). Agile is much more than these processes, making explicit many team practices to support the iterative nature of the development.

What Happens When the Customer Won't Participate?

QUESTION: If so, it has the same danger when the customer isn't willing to be an integral part of the process.

ANSWER: That's always the case, no matter what the method, if the customer is reluctant to participate. (Until the end, when they whine, "But that's not what we wanted.")

QUESTION: What could you do in this case? Drop Agile process?

ANSWER: It's not an Agile process if the customer (or surrogate) isn't participating. In fact, I would drop any customer who doesn't participate. That's the rule I use in all my consulting, too. I don't believe you can help people who aren't willing to help themselves.

QUESTION: Or, make the customer be the part of the process? Then how? This is still a hard question to me, even with 10 years of experience in agile.

ANSWER: It's definitely one of the hardest questions with Agile or any method. Hard for most technical leaders because they lack the training and skills to work with reluctant customers.

So, I train them in these skills (a major goal of the AYE Conference and the PSL workshop), but primarily everything starts by simply pausing the work unless and until the customer has been identified and persuaded to participate.