Monday, June 23, 2014

The History of an eBook

I'm a bit late with this latest excerpt from one of my books. I was busy last week, doing the grunt work of formatting an ebook (the fourth edition of The Handbook of Technical Reviews), correcting typos in another (What Did You Say?), updating software and hardware, and having my eyes replaced (corneal transplants).

Speaking of new eyes, that's pretty much what this week's post is about: General Systems Design Principles: Passive Regulation, which Dani and I co-authored. This week, I'm printing excerpts from the preface to that book, which should give you some sense of what it's about and how it came about. Next week, I hope to publish an excerpt from the companion volume, General Systems Design Principles: Active Regulation, which Dani and I also wrote together. This Preface actually introduces both books, which form volumes 2 and 3 of my General Systems series. But more of that later.
Preface to the E-book Edition
With books, as with children, you never know what you've produced until they've grown up. When Jerry's book An Introduction to General Systems Thinking was first published, we imagined it would be used by practical systems designers in a variety of disciplines, not as a university text. To be sure, Dani used it in a graduate seminar in Anthropology, but she was biased. Then, after a few years, Jerry was surprised yet delighted to discover how many other courses were using it.

About the time of that discovery, we first published the parent of this volume, under the title On the Design of Stable Systems. Looking at the original preface, it's easy to see that―perhaps influenced by the earlier companion volume―we thought we were producing an academic work. That impression undoubtedly influenced the title, leading us to the rather abstract and distant-sounding On the Design of Stable Systems. (We actually received orders from some horse owners who wanted to design stables for their mounts.) It also influenced the original publisher to promote the book solely as a text, where it was only moderately successful.

On the other hand, some very practical people managed to discover the book despite the publisher's promotion. We received many kind letters testifying to its practical usefulness—and complaining about the title. So, when the opportunity came to reprint the book, we sought to make two changes. First, we found a publisher who was more attuned to the people who design and build real systems in the real world. Second, we changed the title.

The new title, we hope, gives just the right impression of the book's contents and usefulness. It is a practical book about systems design―not just computer systems, but information systems in the larger sense: human organizations of all kinds, and systems in nature, like organisms, species, or forests.

With such a general scope, it obviously cannot be a "nuts and bolts" or "recipe" book, yet it has many extremely practical applications to the daily work of people who design, for example, information processing systems, training programs, business organizations, parks, or cities.

The field of software development has recently become enamored of "methodologies"―integrated step-by-step approaches to developing and maintaining systems. Ours is not another methodology book―though the general principles we describe have an ancestral relationship to some of the popular methodologies. We believe they will make learning any methodology easier, because they are general principles―the sorts of things you need to know regardless of what methodology you use, or what kind of system you're designing or deciphering.

We considered using the title The Secrets of Systems Design, after Jerry's book The Secrets of Consulting, but we thought that would be confusing. It is about "secrets," though―the deep thinking processes we use in our consulting and practice. These general ways of thinking about systems enable us to get quickly to the heart of existing organizations and information systems, to visualize new systems, and to design training and other interventions to catalyze the transformation from one to the other. Although we don't promise that such deep insights are easy, we've seen how much they can empower you. In the end, that's the only good reason for publishing a book, whatever the title.

Now, as we enter the e-book era, we've made some further changes to make the book more accessible. Based on reader feedback, we've come to realize that we had written two books in one. Both books are about stability and design, but the first half of the original book was about passive stability—through aggregates. We have separated out those first eight chapters for this volume: Passive System Regulation: General Design Principles. We've done this to make each book more focused and less expensive. We hope they now better fit your needs as a reader.

Original Preface
This book is the result of an 18-year collaboration between two people, in two different disciplines, who share a fascination and love for the human animal. Whether from the vantage point of computers or anthropology, we are excited by the capacities of the human mind and alarmed by some of its products.

Our disciplines seem to begin at opposite poles―machine systems and social systems―but they converge as soon as people enter the picture. Thus the social scientist is concerned with the cultural meaning of "math anxiety," while the computer scientist teaches programmers to overcome their "people anxiety."

Both our disciplines daily come to grips with the subtle interplay between system and environment. Cultures and computers both exhibit the effects of adaptation to a constantly changing environment. And anthropologists and computer scientists equally balk at the difficulties of studying conservation and persistence.

This collaboration has been nourished, too, on another level. Our professional activities oscillate between the abstract and the concrete, the theoretical and the practical. The computer scientist designs program logic and enables people to work productively in teams. The anthropologist wrestles with concepts in the classroom and drinks wine with natives in the field. We are accustomed to switching modes, shifting communication styles, all the time. It is perhaps this instability in our professional lives that has taught us to value uncertainty and to make a virtue of indeterminacy.

General systems thinking has been, for us, a way of understanding the complexity of our own lives. We sincerely hope it will be the same for you.



Sunday, June 08, 2014

Some General Systems Laws

 Continuing my weekly series of excerpts from my books, this week I picked a short section from An Introduction to General Systems Thinking that describes and derives a couple of important general systems laws.

Although what we observe depends on our characteristics as observers, it does not depend entirely on those characteristics. There are two extreme views on this subject—the "realist" and the "solipsist." The "solipsist" imagines there is no reality outside of his own head, while the "realist" flatters himself that what is in his head is all real. Both suffer from the same disease.

There are two components to any observation: a duality long known and often forgotten. Galileo
... distinguished between primary qualities of matter and secondary qualities—the former inherent in matter itself, the latter the product of the interaction of the body possessing certain primary qualities with the sense organs of a human or animal observer.
Many of Galileo's intellectual heirs have forgotten this distinction, making it appear prudent for us to frame a law, which we shall call The Generalized Thermodynamic Law:

More probable states are more likely to be observed than less probable states, unless specific constraints exist to keep them from occurring.
In spite of the risk of antagonizing physicists, we call this law the Generalized Thermodynamic Law because it has two important parts that correspond to very general cases of the First and Second Laws of Thermodynamics. The First Law, we recall, concerns the conservation of something called "energy," which seems to conform to a rather severe constraint "out there." The Second Law, however, is of a different type, being concerned with the limited powers of observers when viewing systems of large numbers of particles. By analogy with these laws and with Galileo's primary and secondary qualities, we may reframe our law:

The things we see more frequently are more frequent:

1. because there is some physical reason to favor certain states (the First Law)
or
2. because there is some mental reason (the Second Law).
Because of the predominance of realist beliefs, it is hardly necessary to elaborate on reason 1, though there is more to it than meets the eye. Our purpose in propounding this law is to correct the tendency to overdo realist thinking—to the point of stifling thought. We shall therefore illustrate the Second Law with an extended example.

Which of the two bridge hands in Figure 4.9 is more likely to be seen in a normal bridge game? (You really need know nothing about bridge—we are speaking essentially of dealing 13 cards in an honest deal.)


[Figure 4.9. Two bridge hands: Hand 1 and Hand 2.]
Most bridge players readily answer that Hand 2 is the more likely, but statisticians tell us that the two likelihoods are the same. Why? In an honest deal, any precisely specified hand of 13 cards has to be as likely as any other hand so specified. Indeed, that is what statisticians mean by an "honest deal," and it also agrees with our general systems intuition based on the Principle of Indifference. Why should the deck care what names are painted on its faces?

But the intuition of bridge players is different. We want to know why they instinctively pick Hand 2 to be more likely than Hand 1. The reason must lie in the structure of the game of bridge, the arbitrary rules that attach significance to certain otherwise indifferent card combinations.

When we learn a card game, we learn to ignore certain features which for that game are not usually important. Games such as War and Old Maid, where suits are not important, may be played without taking any notice of suits whatsoever.

In bridge, high cards are usually important, and cards below 10 are usually not important. That's why bridge players are always impressed when a bridge expert makes a play involving the "insignificant" five of hearts. Bridge books implicitly recognize the unimportance of low cards by printing them as anonymous x's, just as the government implicitly recognizes our unimportance by reducing us to unnamed statistics. In a typical bridge lesson, we might see Hand 3 of Figure 4.10. Seeing this hand, we may understand the bridge player's difficulty by asking the analogous question:

Which is more likely, Hand 1 or Hand 3?

[Figure 4.10. Hand 3: a "hand" that is not a hand.]
Most bridge players are only dimly aware that Hand 3 is not a "hand" at all, but symbolizes a set of hands. Hand 2 happens to be one member of the set we call Hand 3. When a bridge player looks at a hand such as Hand 2, he unconsciously performs various lumping operations, such as counting "points," counting "distribution," or ignoring the "small" cards. Therefore, when we show him Figure 4.9 he thinks we are asking

Which is more probable, a hand like Hand 1 or a hand like Hand 2?

There is, in the bridge player's mind, only one hand like Hand 1, since it is the only guaranteed unbeatable grand slam in spades, the highest suit. Hand 2, however, is quite "ordinary," and any hand in the set "Hand 3" will be more or less like it. Since the set we call Hand 3 contains more than a million individual hands, the odds are at least a million to one in favor of getting a hand like Hand 2! Because of his past experience, the bridge player has translated our question to another—one to which his answer is perfectly correct.
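To check the arithmetic behind that claim, here is a small Python sketch. It's our illustration, not the book's, and since Figure 4.10 isn't reproduced here, the pattern of x's is a hypothetical stand-in for Hand 3. The point is only that a precisely specified hand is one among billions, while a lumped "hand" covers millions of concrete hands.

```python
from math import comb

# Any precisely specified 13-card hand in an honest deal is exactly as
# likely as any other: one chance in C(52, 13).
total_hands = comb(52, 13)
print(f"1 in {total_hands:,}")  # 1 in 635,013,559,600

# A lumped "hand" like Hand 3 fixes a few high cards and fills out each
# suit with anonymous x's. Assumed pattern (hypothetical, since the figure
# isn't shown): each suit's x's are drawn from the eight cards below the ten.
xs_per_suit = [3, 3, 2, 2]      # assumed number of x's in each suit
hands_in_set = 1
for k in xs_per_suit:
    hands_in_set *= comb(8, k)  # ways to fill in that suit's x's
print(f"{hands_in_set:,} concrete hands match the pattern")  # 2,458,624
```

Even this modest pattern covers well over a million concrete hands, which is why the odds so heavily favor getting "a hand like Hand 2."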

We habitually translate questions in such ways. Consider what the statistician did with our original question, which asked

Which is more likely to be seen ...
Because we ordinarily speak rather casually, he translates "be seen" into "occur," which leads him to an important mistake. In actual bridge games, Hand 1 will be seen much more often than Hand 3! Why? Because although Hand 3 may occur more often, it will rarely be seen—that is, specifically noticed by any of the players. Although Hand 1 may not occur very often, if it ever does occur it is sure to make the morning papers, because it is such a sensational hand in the game of bridge.

The importance of hands and the way they are lumped are obvious mental reasons for seeing some "hands" more frequently than others. These arguments are based on the assumption of an "honest deal," which is another way of saying that there are no physical reasons to favor any particular hand. Those who have played much bridge, though, know that there are such physical reasons. If a player leaves the table for a few minutes during a friendly game, he may come back to find he has been dealt 13 spades. If he is not too gullible, he will look up and laugh, because he will know that his friends have rigged the hand as a practical joke.

But how does he know they have rigged the hand? Well, because he knows that in an honest deal, such a hand is vastly improbable. Yet so is any other hand, but we do not think that every hand comes from a dishonest deal. Whenever we observe a state both conspicuous and improbable, we are faced with a quandary. Do we believe our observation or do we invoke some special hypothesis? Do we call the newspaper, or do we accuse our friends of rigging? Only a fool would call the newspaper. If he did, no sensible person would believe him.

In the same manner, conservatism is introduced into scientific investigation by the very assumption that observations must be consonant with present theories. An observation is more likely to be simply discarded as "erroneous" if it is out of consonance with theory. If the observation is nonrepeatable, it is therefore lost forever, leading to a selection for "positive" results.

Such selection can be seen most vividly when researchers dig up historical data, and especially in connection with old astronomical observations needed to give a time perspective not obtainable in the present. Robert Newton calls these procedures for assigning dates and localities to questionable observations the "identification game," and nicely shows the way the procedures can lead to the "classical" results even if a table of random numbers is used as the "raw" data to be identified.

The complete substitution of theory for observation is, of course, not scientific. Even worse is going through the motions of observing, but discarding as "spurious" every observation that does not fit theory—like the Viennese ladies who weigh themselves before entering Demel's Tea Room. If they're down a kilo, they have an extra mochatorte, and if they're up a kilo they pronounce the scale "in error" and have an extra mochatorte anyway.

This, then, is the problem. Raw, detailed observation of the world is just too rich a diet for science. No two situations are exactly alike unless we make them so. Every license plate we see is a miracle. Every human being born is a much greater miracle, being a genetic combination which has less than 1 chance in 10^100 of existing among all possible genetic combinations. Yet the same is true for any particular state—in the superobserver sense—of any complex system.

"A state is a situation which can be recognized if it occurs again." But no state will ever occur again if we don't lump many states into one "state." Thus, in order to learn at all, we must forego some potential discrimination of states, some possibility of learning everything. Or, codified as The Lump Law:

If we want to learn anything, we mustn't try to learn everything.
Examples? Wherever we turn they are at hand. We have a category of things called "books" and another called "stepladders." If we could not tell one from the other, we would waste a lot of time in libraries. But suppose we want a book off the top shelf and no stepladder is at hand. If we can relax our lumping a bit, we may think to stack up some books and stand on them. When psychologists try this problem on people, some take hours to figure out how to get the top-shelf book, and some never do.

It's the same in any field of study. If psychologists saw every white rat as a miracle, there would be no psychology. If historians saw every war as a miracle, there would be no history. And if theologians saw every miracle as a miracle, there would be no religion, because every miracle belongs to the set of all miracles, and thus is not entirely unique.


Science does not, and cannot, deal with miracles. Science deals only with repetitive events. Each science has to have characteristic ways of lumping the states of the systems it observes, in order to generate repetition. How does it lump? Not in arbitrary ways, but in ways determined by its past experience—ways that "work" for that science. Gradually, as the science matures, the "brain" is traded for the "eye" until it becomes almost impossible to break a scientific paradigm (a traditional way of lumping) with mere empirical observations.
________________________________________________________________
I hope you're enjoying these weekly excerpts. You might also enjoy some similar lessons cast in the format of fictional accounts of smart people using such principles to solve interesting life problems or crack tough mysteries. If you think you might like a slightly different approach, take a look at my fiction website.

Sunday, June 01, 2014

Facts and Fantasies about Feedback

Continuing my weekly posting of excerpts from all my books, this week I offer the opening of the book on feedback I wrote with Charlie and Edie Seashore: What Did You Say?

* * *

Life is one man gettin' hugged for sneakin' a kiss 'n another gettin' slapped.
Most people buy books on subjects they know about, but want to know more about–not on subjects they know nothing about. If that's true of you, then you may already know quite a bit about feedback–that's why you've picked up this book.
It's a good thing when a reader begins a book with a head start on the subject–yet it can also create problems. In this instance, there are so many different meanings and connotations to the word "feedback"–depending on the discipline or specialized field in which it is being used–that each reader may know something different about feedback.
Chapter 1. What is Feedback?
Here is the idea about feedback upon which we are basing this book. Feedback can be defined as:
  information about past behavior
  delivered in the present
  which may influence future behavior.
Examples of Feedback at Work
To clarify this definition, here are three examples of feedback:
Example #1.  Alice, an accounting supervisor in a construction company, had an outstanding record for high quality work carried out in a timely manner. She often wondered why other people got promoted to jobs for which she was better qualified. Brenda, who worked for her, also noticed that Alice always came in second. One day, at lunch, she remarked, "You know, Alice, I think a big reason you don't get promoted is that you lack visibility, professionally and in the community." Alice gave a speech on the new tax code at the local chapter of the Administrative Management Society, and became chair of the public library board's financial committee. In five months, she was promoted.
Example #2.  Arthur, an assistant manager in a branch bank, also had an outstanding record for high quality work carried out in a timely manner. He, too, often wondered why other people got promoted to jobs for which he was better qualified. Brent, his manager, also noticed that Arthur always came in second. One day, on the golf course, he remarked, "You know, Arthur, I think a big reason you don't get promoted is that you lack visibility, professionally and in the community." Arthur was a panel member in a debate on auditing at the local chapter of the Auditors' Club, and became fund-raising chairman of the zoological society. His opinions on the panel offended a great many people, and his stand on refrigeration for the polar bear cages irritated even more. In five months, he lost his job.
Example #3.  Amy, a sales team leader in a recreational equipment company, also had an outstanding record for high quality work carried out in a timely manner. She, too, often wondered why other people got promoted to jobs for which she was better qualified. Her colleague, Bert, also noticed that Amy always came in second. One day, in the elevator, he remarked, "You know, Amy, I think a big reason you don't get promoted is that you lack visibility, professionally and in the community." Amy said, "Thank you for telling me that," but as she wasn't interested in promotions, she didn't do anything about it. In five months, she was still in the same job, doing high quality work carried out in a timely manner–and quite happy about it. Two years later, she got promoted to sales manager, and was very happy about that, too.
Feedback May Influence Future Behavior
Alice, Arthur, and Amy each received feedback about their approach to their work, including things they did and didn't do. Indeed, they each received the same feedback: "A big reason you don't get promoted is that you lack visibility, professionally and in the community." But since the influence of feedback depends on who receives it, they each experienced a different outcome.
Example #1.  Alice used the information about her visibility to try some new approaches, and succeeded in reaching her objective, a promotion.
Example #2.  Arthur used the information about his visibility to try some new approaches, and succeeded in offending a lot of important people. His reward was getting fired.
Example #3.  Amy heard the information, appreciated it for the way it was intended, but regarded it as irrelevant to her career. She did essentially nothing, but life went on, anyway.
The concept of "feedback" comes from cybernetics, the theory of control. We can see from these three examples, however, that while feedback may influence future behavior, it doesn't necessarily control anything.
What The Process of Feedback Looks Like
Feedback in cybernetics emphasizes the concept of a closed loop in a system providing a control function. The thermostat controlling the temperature in a room and the automatic pilot controlling the motions of an airplane are both classic examples of cybernetic feedback systems.
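To make the closed loop concrete before we apply it to people, here is a minimal sketch of the thermostat example in Python. It's our illustration, not the book's, and the temperatures and rates are invented; each pass around the loop corresponds to the six steps in the diagram below.

```python
# A minimal sketch of a cybernetic feedback loop: a thermostat (B)
# regulating a room (A). All numbers are invented for illustration.
room_temp = 15.0   # degrees, the starting state of A
setpoint = 20.0    # the thermostat's target

for minute in range(30):
    heater_on = room_temp < setpoint           # B notices A's state and responds
    room_temp += 0.5 if heater_on else -0.2    # A changes in response to B
    print(f"t={minute:2d}  temp={room_temp:4.1f}  heater={'on' if heater_on else 'off'}")
```

The temperature climbs to the setpoint and then hovers there: the loop never "finishes," it just keeps cycling.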
In other words, cybernetics tells us that feedback is a relationship between two systems which can be visualized in a simple diagram (Figure 1.1):
________________________________________
1. A does something 

2. B notices what A does or doesn't do

3. B responds to what A did or didn't do

4. A notices that B responds

5. A decides what if anything to do about B's response

6. A does something (which takes us back to 1.)
________________________________________
Figure 1.1. The simple feedback diagram.

Applied to our three human examples, what this diagram says is that:
  A (Alice, Arthur, or Amy) does something.
  B (Brenda, Brent, or Bert) notices what A does, or doesn't do.
  B responds to what A does, or doesn't do.
  A notices that B responds.
  A makes a decision what, if anything, to do about B's response.
Although the diagram may seem simple, notice that, in the language of cybernetics, it forms a "closed loop." Once A responds to B responding to A's response, then the whole cycle reverses and repeats.
A loop represents a relationship between two systems, in this case, two people. It could go on indefinitely, or it can come to a stop. Once it gets going, the closed loop resembles the chicken-egg problem. It may start with the most trivial action by A, but the final interaction can explode all out of proportion to the beginning.
In such a situation, it's not always reasonable to say that A's opening action is the cause of the interaction. This breakdown of simple cause and effect is one of the reasons that feedback is sometimes so confusing. On the other hand, without the concept of closed feedback loop, we may never be able to understand human interactions at all.
Why is Feedback Important?
If we want to build, maintain, or test our relationships, feedback is our only source of information. Without feedback, how could we test the reality of our perceptions, reactions, observations, or intentions? If we want to share our feelings, what other way do we have but feedback? If we want to influence someone to start, stop, or modify their behavior, how else but feedback? In short, feedback is critical every time you interact with anybody, about anything.
Carl Rogers, the psychologist, observed that one of our most powerful needs is to be heard and understood. Without feedback, what would keep us from inventing our own reality? Without feedback, how could we distinguish between what's going on inside us and what's happening in the rest of the world?
At work, we see many examples of feedback, because feedback is fundamental to helping anyone who wishes to improve their performance, reach an objective, or avoid unpleasant reactions to their efforts. Feedback enables people to join with other people to achieve more than any one could achieve alone. Feedback also lets us avoid people who will obstruct our efforts.
Feedback is also important for keeping performance the same when the environment changes. For instance, now that she has a new job, Alice will need feedback to adjust her outside activities to her new situation.
Under some conditions, feedback may become critical to our very survival. Without response to stimulation, people withdraw, hallucinate, and eventually die. In the work situation, the worst punishment you can inflict on a person is to isolate them from all co-workers, with nothing whatsoever to do.
Why is feedback so universally important? Our environment is constantly changing, so we can't survive unless we adapt, grow, and achieve with others. But, unless we can do magic, we need information about how we performed in the past in order to improve our performance in the future.
The Secrets of Interaction
Which brings us to the subject of you. If you want to change your behavior in some way, or to preserve your behavior in a changing environment, it's not likely to happen by magic. You will need some kind of feedback.
Why? Feedback is a systems concept. We are used to thinking of airplanes or computers or businesses or governments as systems. You may not think of yourself that way, but you're a system, too. Of course, you're a much more complex system than, say, an airplane. Although airplanes need very expensive feedback control systems, you need the finest feedback that money can buy.
In fact, if you want to make significant changes in your life, you'll probably need something better than money can buy. You'll probably need information from other people about what impact you have on them, and that can only be obtained by a process of give and take.
We call that kind of information interpersonal feedback. This book is about how to give interpersonal feedback, how to take it, and how to make the most of what you give and take. It's about sifting the important from the irrelevant, distinguishing the information from the distortions, and seeing the patterns in a series of specific instances. It's about adding new information without becoming confused or incoherent. And, finally, it's about how to maximize the conditions in which you can share your thoughts and feelings with others.
Examples of Interpersonal Feedback at Work
Although interpersonal feedback is essential in all aspects of your life, we're going to focus on your work life, where you already have an invaluable collection of experiences. If something you learn at work happens to influence other aspects of your life, consider it a bonus.
Let's look at one series of examples of feedback that one person might receive at work. Sally is an accountant, and all these things happened to her on one Friday. Are any of them familiar in your own typical day at work?
During a morning meeting with clients, Sally corrects an error in one of Jack's figures. The client is angry with Jack. After the meeting, Jack screams at Sally and tells her that she did the same thing two years ago. Then the boss calls her in to his office where he congratulates her on her sharp vision.
At lunch in the cafeteria, Sally coughs without covering her mouth. Willard turns the other way, Sylvia raises her eyebrow, and Jack makes no visible response.
After lunch, Sally interviews a potential new client. The client compliments her on her suit, but later she hears that he has decided to engage a competing firm. Sally is sorry to lose an account, but also pleased because she is sick and tired of getting feedback from clients on her clothes instead of her work. She was not looking forward to giving yet another client that feedback.
At coffee break, Sylvia tells Sally that Willard is upset because Sally is too friendly with the boss.
Her daughter's teacher calls to say that her daughter is very upset about the small amount of time she spends with her mother. The teacher wants to know when they can schedule a conference to figure out how Sally can take time from her work to help her daughter make a smoother transition into first grade.
In the elevator at the end of the day, Sally tells a joke. Jack laughs, Sylvia snorts, and Willard pinches Sally's shoulder.
These few examples illustrate the wide range of interaction that comes under the title of "interpersonal feedback at work," yet these are only a few of the thousands of instances of feedback that Sally receives in a single day.
All this information may be "better than money can buy," but if you are like Sally, you'll see how easily she could be confused when she tries to use this feedback to improve or maintain her performance at work. Not all the feedback is about her and her impact. Some of it is about what someone else thinks she is doing, or wants her to be doing, or even who Sally reminds them of.
Not only that, but the timing of the feedback is often unfortunate. The feedback often arrives days, months, or years late. It comes in different ways from many people at the same time, or from the same person at different times. And, when it comes, it often finds Sally off balance and vulnerable. Is this familiar, too?
It's not surprising that Sally doesn't always receive the feedback as intended, or act on it even if she does. In most cases, after an hour or so, she won't even remember what she heard.
Or maybe you also find yourself on the sending end of feedback. Perhaps your job requires it, or perhaps you just like to give more than you like to receive. We'll also be writing about why some people's feedback is heard much as they intend, while others can't seem to get their message across no matter how hard they try.

If you're interested in how to use feedback more effectively within an organization, then the place to start is right here–learning how to use feedback more effectively with the individuals who make up that organization.

Feedback in Fiction

For another view of the consequences of feedback, take a look at my novel, The Aremac Project. When one of the protagonists experiences an accident that renders her unable to communicate, she is unable to provide feedback about a murder she witnessed. Without normal feedback, how can she save her life and the lives of the ones she loves?

Friday, May 23, 2014

Overconstraint: Some Effects on S/W Design

Continuing my weekly excerpt series, the following sections are taken from Exploring Requirements, Volume 2: First Steps to Design.



16.5 Overconstraint
Relaxing a constraint increases the size of the solution space, and therefore may increase the number of potential solutions. Conversely, tightening a constraint decreases the size of the solution space and may eventually result in an overconstrained set of requirements. In this event, finding any acceptable solution would be very difficult, or even impossible.
It will certainly be impossible to find a solution if there are conflicting constraints, as when one constraint says Superchalk must be longer than four inches and another says it must be shorter than three inches. If you attempt to sketch this solution space, you'll find it has zero area.
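To see why conflicting constraints leave nothing, you can treat each constraint as an interval and intersect them. This is a minimal sketch of ours, not the book's; the function and variable names are invented.

```python
# Each constraint on Superchalk's length is an interval, and the solution
# space is the intersection of all the intervals.
def intersect(a, b):
    """Return the intersection of two intervals (lo, hi), or None if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

longer_than_4 = (4.0, float("inf"))  # "must be longer than four inches"
shorter_than_3 = (0.0, 3.0)          # "must be shorter than three inches"
print(intersect(longer_than_4, shorter_than_3))  # None: zero-area solution space

shorter_than_6 = (0.0, 6.0)          # relaxing the second constraint
print(intersect(longer_than_4, shorter_than_6))  # (4.0, 6.0): solutions exist
```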
Even when the solution space doesn't have zero area, it may still be impossible to find any real solution possibility falling within it. Of course, this unfortunate condition won't become apparent until design. Experienced designers, though, can often guess what is happening, especially as they watch the solution space shrinking.
If the solution space is actually void of possible solutions, you will have to negotiate one constraint against another. If you can't do that, you'll have to drop the project. There are worse things than dropping a project in the requirements stage, such as dropping it in the post-implementation stage. So be on the lookout for an empty solution space, and don't be afraid to suggest termination if it's clear the project is overconstrained.
When and how you suggest termination is extremely important. Sometimes people are implicitly pre-negotiating constraints because they have an image of the system becoming overconstrained. For example, they won't mention something they really want, or they'll start trying to suppress someone else's idea.
Einstein once said, "A theory should be as simple as possible, but no simpler." We can paraphrase Einstein and say,
Requirements should be as constrained as possible but no more constrained.
Applying this principle is equivalent to saying,
The solution space should be as large as possible, but no larger.
That's why it's a much better idea to deal with constraints explicitly.
16.6 Psychology of Constraints
Mathematically, we know excessive constraints limit the size of the solution space, but even worse, they limit us psychologically.
16.6.1 The tilt concept
If you watch people play pinball, you'll notice some of them tilt the machine occasionally, while others never tilt. The ones who tilt too much are not good players, because they are unable to restrain themselves. But the ones who never tilt are terrible players, because they restrain themselves excessively. Why? Because tilting shows the players just how far they can safely bump the machine. If you never exceed your limits, you never learn what your limits are. So, this is the tilt concept:
If you never tilt, you don't know if you're using your full resources.
As designers, we are often intimidated by constraints. We toss out an idea, then:
• Someone says, "That information is confidential," so we stop pursuing an idea which may be essential to a superior design.
• Someone points to a seven-foot shelf of standards manuals and declares, "Doing what you propose violates our standards," so we drop a promising idea.
• Someone frowns and whispers, "The boss would never approve of what you're proposing." Even though it's merely an idea for a brainstorm, and not a proposal, we shrivel into a ball and change the subject, never daring to check it out.
Much of the time, most of us tend to operate at some safe distance from the constraint boundaries, because we're afraid of tilting something. Thus, none of us search the entire solution space, but only a reduced solution space, as suggested in Figure 16-8.


Figure 16-8. Most designers don't search the entire solution space, but merely a reduced solution space determined by their fear of constraints.
16.6.2 Breaking constraints
You can always overcome this limitation by probing politely, but firmly. At one of Jerry's clients, the requirements team indicated the product had to be shipped by November 1. As this seemed an impossible overconstraint, Jerry asked, "Where does that date come from?" Everybody in the room got a shocked and frightened look, and one man lowered his voice and said, "The boss told us."
If Jerry hadn't understood the tilt concept, he would have become just as frightened as they were—and dropped the subject. But he was an outsider anyway, so he asked for a break and went down the hall to talk to the boss. "There seems to be some uncertainty in the other room," he said. "What deadline did you set on this project, anyway?"
"November 1."
Jerry might have stopped there, too, but the tilt concept kept him going: "That date may turn out to be an overconstraint. Can you explain where it comes from?"
The boss looked a little puzzled. "Constraint? I didn't mean it to be a constraint. They asked me when I wanted it, so I said it would be nice to have it by November 1. I didn't want it after November 1 because it would interfere with our year-end processing, but after January 15 would be just fine. Heck, as long as I get it by June there won't be any problem."
In other words, there was a hole in the solution space, between November 1 and January 15, but there was plenty of solution space on the other side of the hole.
When Jerry returned to the requirements room, the team was shocked to hear the news. "You must be a super negotiator," they said, their eyes sparkling with admiration.
"No," Jerry said, "but I'm a terrific pinball player."
16.6.3 The self-esteem-bad-design cycle

Now we can see one reason the quality of requirements work depends both on the personality of the requirements team and on the perceived safety of the working environment. A team with low self-esteem will be afraid to check overconstraints, and consequently will create a more difficult design task. Then, because the design task is extra difficult, they won't do as good a job as they might have. As a result, their self-esteem will drop, and they'll be even worse on the next project. Only a skilled intervention from outside can break such a vicious cycle of low self-esteem and bad design.

Postscript
Many of my Women of Power novels capture principles of requirements gathering in exciting stories, as object lessons. Check out The Women of Power.

Even better, buy one of the novels, read it, and if you don't learn something from it, let me know and I'll refund your purchase price along with a bonus book. - Jerry

Sunday, May 18, 2014

Methodologies Aren't Enough

This week we continue with sample essays from my various books. The topic this week is taken from the beginning of Exploring Requirements, Volume 1.

Some of our readers would readily agree with the need to resolve the ambiguities in requirements, but they would argue that the problem is not with the techniques but with the technicians. To do a better requirements job, they would claim, you should remove the people from the process and instead use a methodology. Indeed, their preference would be a totally automated methodology, with no people in the development process whatsoever.
When we started teaching software development in 1958, there were few organized development methods. Today, however, packaged methodologies flood the market, and almost everyone who develops software has some sort of organized process. For many years, computer-aided software engineering (CASE) and computer-aided design (CAD) dominated the news, both promising to eliminate people from the process. More recently, the Agile movement seems to have rediscovered people, and its practitioners are seeing the need for a book about "people-oriented" tools to support their approach. But regardless of the approach, it must ensure everyone gets the right requirements, and gets the requirements right. Won't automated tools do the job without people? We think not, and the story of the Guaranteed Cockroach Killer will tell you why.
1.1 CASE, CAD, and the Cockroach Killer
For many years, a man in New York made a living selling his Guaranteed Cockroach Killer through the classified ads. When you sent your check for five dollars, he cashed it and sent you the kit shown in Figure 1-1.



Figure 1-1. The Guaranteed Cockroach Killer kit. All you have to do is place the roach on block A and then strike it with block B.
The instructions read:
1. Place cockroach on block A.
2. Hit cockroach with block B.
If you follow these instructions, you are guaranteed to kill the cockroach. By rights, there should have been a third instruction:
3. Clean up mess. 
Quite likely, nobody ever needed the third instruction—which also meant nobody could collect on the iron-clad guarantee.
Now, what has the Guaranteed Cockroach Killer to do with automated tools? One CASE document told us CASE tools comprise three basic application development technologies:
1. analysis and design workstations to facilitate the capture of specifications
2. dictionaries to manage and control the specification information
3. generators to transform specifications into code and documentation
In our experience, "analysis and design workstations" resemble block A of the cockroach killer. "Dictionaries" resemble block B. If you can somehow figure out the specifications, the workstations will "capture" them and the dictionaries will "manage" them. Once these two steps are done, the generators will clean up the mess and provide your product.
These automated tools are guaranteed to do their job if you simply do your part. This book is about doing your part: getting the roach on the block, and convincing it to stand still long enough for you to deliver the crushing blow.

Whether it's roaches or requirements, the hard part is catching them and getting them to stand still. Generating the code and cleaning up the squashed roach are messy jobs, but in a different way than the first two steps. That's why we think you'll still need the tools described in this book. Not only that, but the better your automated tools, the more you'll need our people-oriented tools. You can see why in the book sample at Smashwords.

Saturday, May 10, 2014

Excessive Realism in Teaching Simulations

Continuing My Weekly Series of Book Excerpts

(from Experiential Learning: Simulation)

A simulation doesn’t have to be “real” to be successful as an experiential learning tool. What has to be real are the feelings it stimulates in the participants, for feelings are what drive learning. Indeed, too much realism can be harmful to the goal of learning. If the simulation is too close to the participants’ real situation, they may be unable to be objective about what they did and what resulted from what they did.

The following sections offer some other effects you may experience if your simulation hits too close to home.

Too accurate detail
Scaling difficulties aren’t the only reason we don’t attempt direct simulations to obtain meaningful measurements, but they’re a more than adequate one. We’re not looking for numbers. It’s not answers we seek, but insights. Consequently, when you design a direct simulation, don’t waste your time trying to find “exact” information about the factors in the simulation.

There are other reasons, as well, for not attempting to use a more realistic project. It’s possible to do so, and we have done it often for software projects. Usually, though, it takes more time than people are willing to devote to “a class,” and, even more important, it generates far more data than we can analyze effectively in hours or even days. But we have done such simulations, for example, in a semester-long college class in software development. Teams in such classes have developed useful software tools that became successful commercial products. One team even developed a hardware computer that executed a high-level language as its machine language.

Too embarrassing
When teaching within a real organization, there may be a much stronger reason to avoid too much accuracy in a direct simulation. If the simulation reveals something embarrassing, anyone who might look bad will try to discredit the entire learning cycle.

A case in point: We were engaged to conduct a week-long workshop for a university’s IT department, to help them improve the quality and timeliness of their in-house-developed software. On Tuesday, we conducted a four-hour simulation of their development process. One of the inventions showed the bad effects of management’s unwillingness to follow some simple suggestions by the testing team.

Both sides–the testers and the developers–were excited to learn how these management actions, or inactions, affected quality and schedules. The participants were so excited, we were sure the final three days of the workshop would result in significant productive changes in the way they developed software. But nothing ever happened.

On Wednesday morning, half the students–the development team–simply didn’t show up. When we investigated, we learned that the development manager had forbidden his employees to attend the remainder of the class. When we confronted him about his decision, he said, “It’s dangerous to let developers and testers get to know one another. If they become friends, the testers will not want to embarrass the developers by finding their bugs.”

Unnecessary detail
Paradoxically, realism often interferes directly with learning from a simulation. For some years, as part of our consulting, we used a simulation of a software development project in which the class was to organize and produce an actual software tool. We were hoping the participants would discover many project pitfalls–information they could apply directly on their jobs.

To add realism to the project, we added a number of the pitfalls we were hoping they would learn to recognize and perhaps avoid. For instance, one of the learning leaders acted as the top executive above their project, and every fifteen minutes or so, he would call a “progress meeting.” We were hoping they would learn to appreciate how much such frequent meetings could disrupt a project. We were pleased to see how well these meetings actually disrupted their project.
But after a few classes, we began to see a pattern in the invention phase. Yes, the participants recognized how much frequent status meetings disrupted the project, but they said, “If management didn’t keep interrupting us to check on progress, we would have done fine. You prevented us from succeeding.”

In other words, they were saying, “We don’t have to learn something or change our behavior. We need our management to change.” If the class had involved their executive management, that might have been a good result, but it was not the result we were seeking.

We decided to remove that detail from the simulation. The learning leader who played the role of executive management called no meetings at all. But guess what? The project was still messed up–late and out of control. In the invention phase, they said, “If management didn’t keep interrupting us to check on progress, we would have done fine.”

But this time, they couldn’t blame their learning leaders. It was their own teammates who kept checking status, so now they had to look inward to discover what it was about status meetings that needed to change. By not trying to insert so much detail in the simulation, we allowed the pitfalls to arise from the participants’ own behavior–behavior they could actually do something about.

In realistic simulations, less is more. It’s more because it’s more realistic to let their failures (or successes) arise from their own behaviors, not from intervention by the learning leaders. Once we learned this principle, we began pruning our simulations to the lowest level of leader participation. For instance, participants argued that they would have been more successful without the specification changes we were introducing randomly. Such changes are a familiar obstacle to anyone who has ever developed software, but again, by adding this detail, we simply gave them another excuse–another way to avoid responsibility for their own unproductive behavior. When we stopped introducing such spec changes, their own developers kept thinking of “nifty features” to introduce into their own product.

Almost but not quite
But realism isn’t the only problem with detail. Just providing detail itself may mess up the learning value of a simulation. The more details you supply, the more you seem to invite the participants to blame slight flaws in some detail you do supply. It’s as if the participants say, “It all seemed so real that we were thrown off course when this detail (pick one) wasn’t quite right.”

Even if that detail was essentially unrelated to the learning goals of the simulation, it seems to provide an excuse for failure–and thus an excuse for not learning.

Sunday, May 04, 2014

Tinkering With Toys

Continuing our weekly series of excerpts from Jerry's books, today's post is taken from the Experiential Learning series, book 2, Inventing.

Tinkering with Toys is an exercise we designed to explore the relationships among design criteria, documentation difficulty, and system maintenance. It involves one team building and then documenting a device from Tinkertoys, then passing that documentation to another team that must rebuild the device. To give you an idea of the richness of the exercise, which is described in the book, we'll instead list some of the lessons participants have learned.

Each outcome is different, but over the years certain lessons have emerged as common to many of the experiences with this exercise. Some of these lessons are important observations about experiential learning in general, so we’ll start the chapter with that handful, saving the exercise-specific lessons for the end.

Can learning be fun?
Learning from structured experiences can be great fun–and that’s a problem. Many people believe learning and fun are incompatible. They believe that to learn, you must suffer. Looking at an experiential session with that attitude, you are quite likely to believe that people are wasting time, and learning is not happening.

What follows are four essential lessons people need to learn if they’re to overcome the anti-fun myth and become successful participants in an experiential learning session.

Just because you’re having fun doesn’t mean you aren’t learning.
Someone coming in from outside and seeing you at work with your Tinkertoys might not appreciate that you are, in fact, doing something useful. We tend, in our society, to distinguish between “work” and “play”. Part of this tendency is a kind of envy or nostalgia on the part of managers, who are no longer allowed the style of “play” we call “technical work”. (Of course, managers “play” in executive lunches and other activities which some technical people cannot recognize as useful work).

Just because you’re learning doesn’t mean you feel right about having fun.
Most of us have so internalized the work-play dichotomy that when we’re having fun, we look around to see if someone is watching. And, even if nobody is watching, we feel guilty, somehow, about what we’re doing. If only there were some way to measure what we’re accomplishing, we could free ourselves from these guilt feelings, but many of us oppose being measured, too. (And with good reason, if the measurements are not soundly based).

Just because you’re not having fun doesn’t mean you’re not learning.
When the fun is over in the first part of the Tinkertoy exercise and someone has to “pay” for it by documenting the creative mess, a lot of people think that’s the “lesson.” They’ve seen the “point” of the exercise, so why go on to the bitter end? Well, our exercises–unlike schools but much like “life”–don’t have one single “point.” There’s much to be learned in working through to the “bitter end.” For instance, sometimes we have to learn that the apparent end isn’t the end at all.

Just because you’re having fun or not having fun doesn’t mean you are learning anything.
More than anything else, what you learn from experiences depends upon the attitude with which you approach them. In some situations, we just keep repeating our old behaviors, even though they weren’t very successful in the past. In other situations, we repeat our old behavior because it was fun, not really caring if it was accomplishing anything. That view, at least, has some inner logic: you may not get the job done, but you have a good time.

The other way round has no logic at all. If you find yourself having a miserable time, then turn on your learning faculties full blast so you will learn to avoid the situation in the future.