Thursday, July 31, 2014

The Tester's Library

This week, I'm trying something different—The Tester's Library.

I've written many books, which is good, but sometimes the sheer number creates a financial problem for my fans. To make collecting the books easier on fans' wallets, I've begun to put together bargain-priced bundles of books to form beginning libraries for different audiences. At the moment, I'm planning a Tester's Library, a Developer Library, a Consultant's Library, and a Manager's Library. Perhaps more will come later.

The first one, The Tester's Library, can be found at

I won't release them all at once, leaving a bit of breathing room between one library and the next. This week, I've started with the Tester's Library, which consists of eight five-star books that every software tester should read and re-read. As bound books, this collection would cost over $200. Even as e-books, their price would exceed $80, but in this bundle, their cost is only $49.99. Here are the books, and why they should be in your library:

  • Perfect Software and Other Illusions About Testing
James Bach says, "Read this book and get your head straight about testing. I consider Jerry (Weinberg) to be the greatest living tester."
Perfect Software sets the stage for the bundle by answering the questions that puzzle people the most, questions whose answers must be on the lips of every professional software tester:
• Why do we have to bother testing?
• Why not just test everything?
• What is it that makes testing so hard?
• Why does testing take so long?
• Is perfect software even possible?
• Why can't we just accept a few bugs?

  • Are Your Lights On?: How to Figure Out What the Problem Really Is
The tester's fundamental job is to identify problems in systems. Whether you are a novice or a veteran, this powerful little book will make you more effective at precisely identifying and describing problems. Any tester involved in product and systems development will appreciate this practical illustrated guide, which was first published in 1982 and has since become a cult classic. Are Your Lights On? provides an entertaining look at ways to improve one's thinking power and the power to communicate effectively about problems discovered in testing:
  • First, how to identify the problem.
  • Second, how to determine the problem's owner.
  • Third, how to discover where the problem came from.
  • Fourth, how to decide whether or not to solve it.
Delightfully illustrated with 55 line drawings by artist Sally Cox, the book has changed the way thousands of testers think about the job of producing quality software.

  • Handbook of Technical Reviews (4th edition)
Experienced testers know that technical reviews are probably the most powerful testing tool. Every tester should participate in reviews, and this book explains how to do it.
One reviewer said, "For me there are many, many valuable lessons in this book. Not only does it provide a step-by-step explanation of how to run software reviews and how to get them accepted in the organization, what is even more important is that everywhere the "why" behind choices is explained. That allows me to transfer sound principles to a wide variety of settings. In every company reviews "work" slightly differently, and this book has helped me figure out how to match the implementation to the specific setting."
"Quite apart from the great content, I found the writing style a delight: witty, chock full of wisdom, and a breeze to get through. At over 400 pages it "looks" like a tome, but I went through it like a breeze. And I keep returning to it, which says a lot about the depth of coverage."

  • An Introduction to General Systems Thinking
For many years, An Introduction to General Systems Thinking has been hailed as an innovative introduction to systems theory, with applications in software development and testing, medicine, engineering, social sciences, architecture, and beyond. Used in university courses and professional seminars all over the world, the text has proven its ability to open minds and sharpen thinking.
A reviewer wrote: "In computing, a timeless classic is anything that is worth reading for any reason other than to obtain a historical context after five years. If that still holds true after twenty five years, then it is truly an extraordinary piece of work. That label applies to this book. It is not about computing per se, but about how humans think about things and how 'facts' are relative to time, our personal experience and environmental context."
"This is a book that is a true classic, not only in computing but in the broad area of scholarship. It is partly about the philosophy and mechanisms of science; partly about designing things so they work but mostly it is about how humans view the world and create things that match that view. This book will still be worth reading for a long time to come and it is on my list of top ten computing books."

  • What Did You Say?: The Art of Giving and Receiving Feedback
Perhaps the most important—and most difficult—of the tester's jobs is giving information to developers about problems in the software they produced. This brief and engaging book can be of use to anyone who has to interact with other people. You'll enjoy the "read" so much that you may not realize how much you have gained - all in words of one syllable! 
  • How to offer feedback when asked (or hired) to do so.
  • Why feedback tells more about the giver than the receiver.
  • How feedback is distorted or resisted by the receiver's point of view and defense mechanisms.
  • How humans have struggled to understand each other's responses.
One reviewer wrote: "If I had the power to transport one book back in time and send it to myself, this would be the one. This is the book I needed when I became a people manager. It's also the book I needed when I began to raise my kids. In fact, I can't think of a time in my life when I did not wish I had more of the skills this book teaches. A simple but very deep book that causes a new level of understanding about how to talk to people with each reading."

  • More Secrets of Consulting: The Consultant's Tool Kit
Ultimately, a tester's job is like a consultant's: advising developers on how to improve their products. Like all consultants, testers need tools that help get their advice used productively.
Here's how a reviewer described the book: The "Consultant's Tool Kit" of the subtitle is actually a complex metaphor. Each component of the toolkit is a metaphor for a certain aspect of your personality and personal capabilities. For example, the wishing wand is a metaphor for understanding, and being able to ask for, what you want from a professional relationship. The chapter around this metaphor first explores why most people either don't know what they want or are unable to express it, and suggests ways to make your wishes clearer. It places this in a professional context, contract negotiation, and emphasizes how the personal ability to express and value your wishes will help you negotiate more successfully. 
In a similar way other chapters focus on developing wisdom and new knowledge, managing time and information, being courageous with your decisions, learning how to say yes and no, understanding why you and others are in the current situation, and keeping yourself in balance, avoiding burnout and other self-destructive conditions. 
These are all important not only to consultants, but to anyone trying to establish a more satisfying professional or personal life by managing problems, by self-improvement and by better handling their relationships to other people.

  • Becoming a Technical Leader
Ultimately, the best testers are leaders, guiding their organizations to better quality software products. Becoming a Technical Leader is a personalized guide to developing the qualities that make a successful technical leader. We all possess the ingredients for leadership, some better developed than others.
The book focuses on a problem-solving style of leadership, a unique blend of skills in three main areas: innovation, motivation, and organization. It offers ways to analyze your own leadership skills, with practical steps for developing them.
From one tester's review: "It is most difficult for a technical expert to transition from an individual contributor to a leader. This book tells you exactly how to do that! Brilliant, witty, and extremely enjoyable. One of the all-time classics on leadership. If you have only one book to read on leadership, then this is it."

  • The Aremac Project
This intriguing fictional story, based on true events, shows how software testing, done well or done poorly, makes all the difference in the outcomes.

    Thank you for reading. If you'd like, please use the green button below to share this news with your Tester Friends.

    Monday, June 23, 2014

    The History of an eBook

    I'm a bit late with this latest excerpt from one of my books. I was a bit busy last week, doing the grunt work of formatting an e-book (the fourth edition of The Handbook of Technical Reviews), correcting typos in another (What Did You Say?), updating software and hardware, and having my eyes replaced (corneal transplants).

    Speaking of new eyes, that's pretty much what this week's post is about: General Systems Design Principles: Passive Regulation, which Dani and I co-authored. This week, I'm printing excerpts from the preface to that book, which should give you some sense of what it's about and how it came about. Next week, I hope to publish an excerpt from the companion volume, General Systems Design Principles: Active Regulation, which Dani and I also wrote together. This Preface actually introduces both books, which form volumes 2 and 3 of my General Systems series. But more of that later.
    Preface to the E-book Edition
    With books, as with children, you never know what you've produced until they've grown up. When Jerry's book An Introduction to General Systems Thinking was first published, we imagined it would be used by practical systems designers in a variety of disciplines, not as a university text. To be sure, Dani used it in a graduate seminar in Anthropology, but she was biased. Then, after a few years, Jerry was surprised yet delighted to discover how many other courses were using it.

    About the time of that discovery, we first published the parent of this volume, under the title On the Design of Stable Systems. Looking at the original preface, it's easy to see that―perhaps influenced by the earlier companion volume―we thought we were producing an academic work. That impression undoubtedly influenced the title, leading us to the rather abstract and distant sounding On the Design of Stable Systems. (We actually received purchases from some horse owners who wanted to design stables for their mounts.) It also influenced the original publisher to promote the book solely as a text, where it was only moderately successful.

    On the other hand, some very practical people managed to discover the book despite the publisher's promotion. We received many kind letters testifying to its practical usefulness—and complaining about the title. So, when the opportunity came to reprint the book, we sought to make two changes. First, we found a publisher who was more attuned to the people who design and build real systems in the real world. Second, we changed the title.

    The new title, we hope, gives just the right impression of the book's contents and usefulness. It is a practical book about systems design―not just computer systems, but information systems in the larger sense: human organizations of all kinds, and systems in nature, like organisms, species, or forests.

    With such a general scope, it obviously cannot be a "nuts and bolts" or "recipe" book, yet it has many extremely practical applications to the daily work of people who design, for example, information processing systems, training programs, business organizations, parks, or cities.

    The field of software development has recently become enamored of "methodologies"―integrated step-by-step approaches to developing and maintaining systems. Ours is not another methodology book―though the general principles we describe have an ancestral relationship to some of the popular methodologies. We believe they will make learning any methodology easier, because they are general principles―the sorts of things you need to know regardless of what methodology you use, or what kind of system you're designing or deciphering.

    We considered using the title The Secrets of Systems Design, after Jerry's book The Secrets of Consulting, but we thought that would be confusing. It is about "secrets," though―the deep thinking processes we use in our consulting and practice. These general ways of thinking about systems enable us to get quickly to the heart of existing organizations and information systems, to visualize new systems, and to design training and other interventions to catalyze the transformation from one to the other. Although we don't promise that such deep insights are easy, we've seen how much they can empower you. In the end, that's the only good reason for publishing a book, whatever the title.

    Now, as we enter the e-book era, we've made some further changes to make the book more accessible. Based on reader feedback, we've come to realize that we had written two books in one. Both books are about stability and design, but the first half of the original book was about passive stability—through aggregates. We have separated out those first eight chapters for this volume: Passive System Regulation: General Design Principles. We've done this to make each book more focused and less expensive. We hope they now better fit your needs as a reader.

    Original Preface
    This book is the result of an 18-year collaboration between two people, in two different disciplines, who share a fascination and love for the human animal. Whether from the vantage point of computers or anthropology, we are excited by the capacities of the human mind and alarmed by some of its products.

    Our disciplines seem to begin at opposite poles―machine systems and social systems―but they converge as soon as people enter the picture. Thus the social scientist is concerned with the cultural meaning of "math anxiety," while the computer scientist teaches programmers to overcome their "people anxiety."

    Both our disciplines daily come to grips with the subtle interplay between system and environment. Cultures and computers both exhibit the effects of adaptation to a constantly changing environment. And anthropologists and computer scientists equally balk at the difficulties of studying conservation and persistence.

    This collaboration has been nourished, too, on another level. Our professional activities oscillate between the abstract and the concrete, the theoretical and the practical. The computer scientist designs program logic and enables people to work productively in teams. The anthropologist wrestles with concepts in the classroom and drinks wine with natives in the field. We are accustomed to switching modes, shifting communication styles, all the time. It is perhaps this instability in our professional lives that has taught us to value uncertainty and to make a virtue of indeterminacy.

    General systems thinking has been, for us, a way of understanding the complexity of our own lives. We sincerely hope it will be the same for you.

    Links to All Jerry's Books
    Links to Jerry's non-fiction books can be found at

    Links to his fiction, which incorporates general systems laws and principles in stories of smart people using general systems thinking to solve tough, exciting problems:

    Sunday, June 08, 2014

    Some General Systems Laws

     Continuing my weekly series of excerpts from my books, this week I picked a short section from An Introduction to General Systems Thinking that describes and derives a couple of important general systems laws.

    Although what we observe depends on our characteristics as observers, it does not depend entirely on those characteristics. There are two extreme views on this subject—the "realist" and the "solipsist." The "solipsist" imagines there is no reality outside of his own head, while the "realist" flatters himself that what is in his head is all real. Both suffer from the same disease.

    There are two components to any observation: a duality long known and often forgotten. Galileo
    ... distinguished between primary qualities of matter and secondary qualities—the former inherent in matter itself, the latter the product of the interaction of the body possessing certain primary qualities with the sense organs of a human or animal observer.
    Many of Galileo's intellectual heirs have forgotten this distinction, making it appear prudent for us to frame a law, which we shall call The Generalized Thermodynamic Law:

    More probable states are more likely to be observed than less probable states, unless specific constraints exist to keep them from occurring.
    In spite of the risk of antagonizing physicists, we call this law the Generalized Thermodynamic Law because it has two important parts that correspond to very general cases of the First and Second Laws of Thermodynamics. The First Law, we recall, concerns the conservation of something called "energy," which seems to conform to a rather severe constraint "out there." The Second Law, however, is of a different type, being concerned with the limited powers of observers when viewing systems of large numbers of particles. By analogy with these laws and with Galileo's primary and secondary qualities, we may reframe our law:

    The things we see more frequently are more frequent:

    1. because there is some physical reason to favor certain states (the First Law)
    2. because there is some mental reason (the Second Law).
    Because of the predominance of realist beliefs, it is hardly necessary to elaborate on reason 1, though there is more to it than meets the eye. Our purpose in propounding this law is to correct the tendency to overdo realist thinking—to the point of stifling thought. We shall therefore illustrate the Second Law with an extended example.

    Which of the two bridge hands in Figure 4.9 is more likely to be seen in a normal bridge game? (You really need know nothing about bridge—we are speaking essentially of dealing 13 cards in an honest deal.)

    Hand 1

    Hand 2

    Figure 4.9. Two bridge hands.
    Most bridge players readily answer that Hand 2 is the more likely, but statisticians tell us that the two likelihoods are the same. Why? In an honest deal, any precisely specified hand of 13 cards has to be as likely as any other hand so specified. Indeed, that is what statisticians mean by an "honest deal," and it also agrees with our general systems intuition based on the Principle of Indifference. Why should the deck care what names are painted on its faces?

    But the intuition of bridge players is different. We want to know why they instinctively pick Hand 2 to be more likely than Hand 1. The reason must lie in the structure of the game of bridge, the arbitrary rules that attach significance to certain otherwise indifferent card combinations.

    When we learn a card game, we learn to ignore certain features which for that game are not usually important. Games such as War and Old Maid where suits are not important may be played without taking any notice of suits whatsoever.

    In bridge, high cards are usually important, and cards below 10 are usually not important. That's why bridge players are always impressed when a bridge expert makes a play involving the "insignificant" five of hearts. Bridge books implicitly recognize the unimportance of low cards by printing them as anonymous x's, just as the government implicitly recognizes our unimportance by reducing us to unnamed statistics. In a typical bridge lesson, we might see Hand 3 of Figure 4.10. Seeing this hand, we may understand the bridge player's difficulty by asking the analogous question:

    Which is more likely, Hand 1 or Hand 3?

    Hand 3

    Figure 4.10. A "hand" that is not a hand.
    Most bridge players are only dimly aware that Hand 3 is not a "hand" at all, but symbolizes a set of hands. Hand 2 happens to be one member of the set we call Hand 3. When a bridge player looks at a hand such as Hand 2, he unconsciously performs various lumping operations, such as counting "points," counting "distribution," or ignoring the "small" cards. Therefore, when we show him Figure 4.9 he thinks we are asking

    Which is more probable, a hand like Hand 1 or a hand like Hand 2?

    There is, in the bridge player's mind, only one hand like Hand 1, since it is the only guaranteed unbeatable grand slam in spades, the highest suit. Hand 2, however, is quite "ordinary," and any hand in the set "Hand 3" will be more or less like it. Since the set we call Hand 3 contains more than a million individual hands, the odds are at least a million to one in favor of getting a hand like Hand 2! Because of his past experience, the bridge player has translated our question to another—one to which his answer is perfectly correct.
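    The arithmetic behind this lumping can be sketched in Python. As an assumption for illustration, take "hands with a 4-3-3-3 suit distribution" as the lumped class (the book's Hand 3 is a different set, defined by its figure):

```python
import math

# Total number of distinct 13-card bridge hands dealt from 52 cards.
total_hands = math.comb(52, 13)  # 635,013,559,600

# Any precisely specified hand (e.g., Hand 1, the 13 spades) is just
# one of these, so its chance in an honest deal is 1/total_hands.
p_hand_1 = 1 / total_hands

# A lumped class of hands: every hand with a 4-3-3-3 suit distribution.
# Pick which suit holds 4 cards, then choose the cards in each suit.
class_4333 = 4 * math.comb(13, 4) * math.comb(13, 3) ** 3

print(f"distinct hands:     {total_hands:,}")
print(f"hands in the class: {class_4333:,}")
```

    Each member of the lumped class is exactly as likely as Hand 1, so a hand "like" one in the class is tens of billions of times more likely to be dealt than any one fully specified hand.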

    We habitually translate questions in such ways. Consider what the statistician did with our original question, which asked

    Which is more likely to be seen ...
    Because we ordinarily speak rather casually, he translates "be seen" into "occur," which leads him to an important mistake. In actual bridge games, Hand 1 will be seen much more often than Hand 3! Why? Because although Hand 3 may occur more often, it will rarely be seen—that is, specifically noticed by any of the players. Although Hand 1 may not occur very often, if it ever does occur it is sure to make the morning papers, because it is such a sensational hand in the game of bridge.

    The importance of hands and the way they are lumped are obvious mental reasons for seeing some "hands" more frequently than others. These arguments are based on the assumption of an "honest deal," which is another way of saying that there are no physical reasons to favor any particular hand. Those who have played much bridge, though, know that there are such physical reasons. If a player leaves the table for a few minutes during a friendly game, he may come back to find he has been dealt 13 spades. If he is not too gullible, he will look up and laugh, because he will know that his friends have rigged the hand as a practical joke.

    But how does he know they have rigged the hand? Well, because he knows that in an honest deal, such a hand is vastly improbable. Yet so is any other hand, but we do not think that every hand comes from a dishonest deal. Whenever we observe a state both conspicuous and improbable, we are faced with a quandary. Do we believe our observation or do we invoke some special hypothesis? Do we call the newspaper, or do we accuse our friends of rigging? Only a fool would call the newspaper. If he did, no sensible person would believe him.

    In the same manner, conservatism is introduced into scientific investigation by the very assumption that observations must be consonant with present theories. An observation is more likely to be simply discarded as "erroneous" if it is out of consonance with theory. If the observation is nonrepeatable, it is therefore lost forever, leading to a selection for "positive" results.

    Such selection can be seen most vividly when researchers dig up historical data, and especially in connection with old astronomical observations needed to give a time perspective not obtainable in the present. Robert Newton calls these procedures for assigning dates and localities to questionable observations the "identification game," and nicely shows the way the procedures can lead to the "classical" results even if a table of random numbers is used as the "raw" data to be identified.

    The complete substitution of theory for observation is, of course, not scientific. Even worse is going through the motions of observing, but discarding as "spurious" every observation that does not fit theory—like the Viennese ladies who weigh themselves before entering Demel's Tea Room. If they're down a kilo, they have an extra mochatorte, and if they're up a kilo they pronounce the scale "in error" and have an extra mochatorte anyway.

    This, then, is the problem. Raw, detailed observation of the world is just too rich a diet for science. No two situations are exactly alike unless we make them so. Every license plate we see is a miracle. Every human being born is a much greater miracle, being a genetic combination which has less than 1 chance in 10^100 of existing among all possible genetic combinations. Yet the same is true for any particular state—in the superobserver sense—of any complex system.

    "A state is a situation which can be recognized if it occurs again." But no state will ever occur again if we don't lump many states into one "state." Thus, in order to learn at all, we must forego some potential discrimination of states, some possibility of learning everything. Or, codified as The Lump Law:

    If we want to learn anything, we mustn't try to learn everything.
    Examples? Wherever we turn they are at hand. We have a category of things called "books" and another called "stepladders." If we could not tell one from the other, we would waste a lot of time in libraries. But suppose we want a book off the top shelf and no stepladder is at hand. If we can relax our lumping a bit, we may think to stack up some books and stand on them. When psychologists try this problem on people, some take hours to figure out how to get the top-shelf book, and some never do.

    It's the same in any field of study. If psychologists saw every white rat as a miracle, there would be no psychology. If historians saw every war as a miracle, there would be no history. And if theologians saw every miracle as a miracle, there would be no religion, because every miracle belongs to the set of all miracles, and thus is not entirely unique.

    Science does not, and cannot, deal with miracles. Science deals only with repetitive events. Each science has to have characteristic ways of lumping the states of the systems it observes, in order to generate repetition. How does it lump? Not in arbitrary ways, but in ways determined by its past experience—ways that "work" for that science. Gradually, as the science matures, the "brain" is traded for the "eye" until it becomes almost impossible to break a scientific paradigm (a traditional way of lumping) with mere empirical observations.
    I hope you're enjoying these weekly excerpts. You might also enjoy some similar lessons cast in the format of fictional accounts of smart people using such principles to solve interesting life problems or crack tough mysteries. If you think you might like a slightly different approach, take a look at my fiction website:

    Sunday, June 01, 2014

    Facts and Fantasies about Feedback

    Continuing my weekly posting of excerpts from all my books, this week I offer the opening of the book on feedback I wrote with Charlie and Edie Seashore: What Did You Say?

    * * *

    Life is one man gettin' hugged for sneakin' a kiss 'n another gettin' slapped.
    Most people buy books on subjects they know about, but want to know more about–not on subjects they know nothing about. If that's true of you, then you may already know quite a bit about feedback–that's why you've picked up this book.
    It's a good thing when a reader begins a book with a head start on the subject–yet it can also create problems. In this instance, there are so many different meanings and connotations to the word "feedback"–depending on the discipline or specialized field in which it is being used–that each reader may know something different about feedback.
    Chapter 1. What is Feedback?
    Here is the idea about feedback upon which we are basing this book. Feedback can be defined as:
      information about past behavior
      delivered in the present
      which may influence future behavior.
    Examples of Feedback at Work
    To clarify this definition, here are three examples of feedback:
    Example #1.  Alice, an accounting supervisor in a construction company, had an outstanding record for high quality work carried out in a timely manner. She often wondered why other people got promoted to jobs for which she was better qualified. Brenda, who worked for her, also noticed that Alice always came in second. One day, at lunch, she remarked, "You know, Alice, I think a big reason you don't get promoted is that you lack visibility, professionally and in the community." Alice gave a speech on the new tax code at the local chapter of Administrative Management Society, and became chair of the public library board's financial committee. In five months, she was promoted.
    Example #2.  Arthur, an assistant manager in a branch bank, also had an outstanding record for high quality work carried out in a timely manner. He, too, often wondered why other people got promoted to jobs for which he was better qualified.  Brent, his manager, also noticed that Arthur always came in second. One day, on the golf course, he remarked, "You know, Arthur, I think a big reason you don't get promoted is that you lack visibility, professionally and in the community." Arthur was a panel member in a debate on auditing at the local chapter of Auditors' Club, and became fund raising chairman of the zoological society. His opinions on the panel offended a great many people, and his stand on refrigeration for the polar bear cages irritated even more. In five months, he lost his job.
    Example #3.  Amy, a sales team leader in a recreational equipment company, also had an outstanding record for high quality work carried out in a timely manner. She, too, often wondered why other people got promoted to jobs for which she was better qualified. Her colleague, Bert, also noticed that Amy always came in second. One day, in the elevator, he remarked, "You know, Amy, I think a big reason you don't get promoted is that you lack visibility, professionally and in the community." Amy said, "Thank you for telling me that," but as she wasn't interested in promotions, she didn't do anything about it. In five months, she was still in the same job, doing high quality work carried out in a timely manner–and quite happy about it. Two years later, she got promoted to sales manager, and was very happy about that, too.
    Feedback May Influence Future Behavior
    Alice, Arthur, and Amy each received feedback about their approach to their work, including things they did and didn't do. Indeed, they each received the same feedback: "A big reason you don't get promoted is that you lack visibility, professionally and in the community." But since the influence of feedback depends on who receives it, they each experienced a different outcome.
    Example #1.  Alice used the information about her visibility to try some new approaches, and succeeded in reaching her objective, a promotion.
    Example #2.  Arthur used the information about his visibility to try some new approaches, and succeeded in offending a lot of important people. His reward was getting fired.
    Example #3.  Amy heard the information, appreciated it for the way it was intended, but regarded it as irrelevant to her career. She did essentially nothing, but life went on, anyway.
    The concept of "feedback" comes from cybernetics, the theory of control. We can see from these three examples, however, that although feedback may influence future behavior, it doesn't necessarily control anything.
    What The Process of Feedback Looks Like
    Feedback in cybernetics emphasizes the concept of a closed loop in a system providing a control function. The thermostat controlling the temperature in a room; the automatic pilot controlling the motions of an airplane; both are classic examples of cybernetic feedback systems.
    In other words, cybernetics tells us that feedback is a relationship between two systems, which can be visualized in a simple diagram (Figure 1.1):
    1. A does something 

    2. B notices what A does or doesn't do

    3. B responds to what A did or didn't do

    4. A notices that B responds

    5. A decides what, if anything, to do about B's response

    6. A does something (which takes us back to 1.)
    Figure 1.1. The simple feedback diagram.

    Applied to our three human examples, what this diagram says is that:
      A (Alice, Arthur, or Amy) does something.
      B (Brenda, Brent, or Bert) notices what A does, or doesn't do.
      B responds to what A does, or doesn't do.
      A notices that B responds.
      A decides what, if anything, to do about B's response.
    Although the diagram may seem simple, notice that, in the language of cybernetics, it forms a "closed loop." Once A responds to B responding to A's response, then the whole cycle reverses and repeats.
    A loop represents a relationship between two systems, in this case, two people. It may go on indefinitely, or it may come to a stop. Once it gets going, the closed loop resembles the chicken-and-egg problem. It may start with the most trivial action by A, but the final interaction can explode all out of proportion to the beginning.
    In such a situation, it's not always reasonable to say that A's opening action is the cause of the interaction. This breakdown of simple cause and effect is one of the reasons that feedback is sometimes so confusing. On the other hand, without the concept of a closed feedback loop, we may never be able to understand human interactions at all.
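    To make the closed loop concrete for readers who think in code, here is a minimal sketch of the thermostat example mentioned above. This is purely illustrative: the function names, temperatures, and heating rates are all invented for the sketch, not taken from the book or from any real thermostat.

```python
# Minimal sketch of a thermostat as a closed feedback loop.
# The thermostat (B) notices what the room (A) does, responds,
# and the room then changes in response to the thermostat.

def thermostat_step(room_temp, setpoint, heater_on):
    """One pass around the loop: sense, compare, act."""
    if room_temp < setpoint - 0.5:      # B notices A is too cold
        heater_on = True                # B responds: turn heat on
    elif room_temp > setpoint + 0.5:    # B notices A is too warm
        heater_on = False               # B responds: turn heat off
    return heater_on                    # otherwise, no change

def simulate(hours=8, setpoint=20.0):
    room_temp, heater_on = 15.0, False
    history = []
    for _ in range(hours):
        heater_on = thermostat_step(room_temp, setpoint, heater_on)
        # A (the room) changes in response to B (the thermostat)
        room_temp += 1.0 if heater_on else -0.5
        history.append(round(room_temp, 1))
    return history

print(simulate())  # temperature settles near the setpoint
```

    Notice that the loop never "finishes": like the human examples, each response provokes another response, and the behavior of the whole comes from the relationship, not from either party alone.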
    Why is Feedback Important?
    If we want to build, maintain, or test our relationships, feedback is our only source of information. Without feedback, how could we test the reality of our perceptions, reactions, observations, or intentions? If we want to share our feelings, what other way do we have but feedback? If we want to influence someone to start, stop, or modify their behavior, how else but feedback? In short, feedback is critical every time you interact with anybody, about anything.
    Carl Rogers, the psychologist, observed that one of our most powerful needs is to be heard and understood. Without feedback, what would keep us from inventing our own reality? Without feedback, how could we distinguish between what's going on inside us and what's happening in the rest of the world?
    At work, we see many examples of feedback, because feedback is fundamental to helping anyone who wishes to improve their performance, reach an objective, or avoid unpleasant reactions to their efforts. Feedback enables people to join with other people to achieve more than any one could achieve alone. Feedback also lets us avoid people who will obstruct our efforts.
    Feedback is also important for keeping performance the same when the environment changes. For instance, now that she has a new job, Alice will need feedback to adjust her outside activities to her new situation.
    Under some conditions, feedback may become critical to our very survival. Without response to stimulation, people withdraw, hallucinate, and eventually die. In the work situation, the worst punishment you can inflict on a person is to isolate them from all co-workers, with nothing whatsoever to do.
    Why is feedback so universally important? Our environment is constantly changing, so we can't survive unless we adapt, grow, and achieve with others. But, unless we can do magic, we need information about how we performed in the past in order to improve our performance in the future.
    The Secrets of Interaction
    Which brings us to the subject of you. If you want to change your behavior in some way, or to preserve your behavior in a changing environment, it's not likely to happen by magic. You will need some kind of feedback.
    Why? Feedback is a systems concept. We are used to thinking of airplanes or computers or businesses or governments as systems. You may not think of yourself that way, but you're a system, too. Of course, you're a much more complex system than, say, an airplane. Airplanes need very expensive feedback control systems; you, being far more complex, need the finest feedback that money can buy.
    In fact, if you want to make significant changes in your life, you'll probably need something better than money can buy. You'll probably need information from other people about what impact you have on them, and that can only be obtained by a process of give and take.
    We call that kind of information interpersonal feedback. This book is about how to give interpersonal feedback, how to take it, and how to make the most of what you give and take. It's about sifting the important from the irrelevant, distinguishing the information from the distortions, and seeing the patterns in a series of specific instances. It's about adding new information without becoming confused or incoherent. And, finally, it's about how to maximize the conditions in which you can share your thoughts and feelings with others.
    Examples of Interpersonal Feedback at Work
    Although interpersonal feedback is essential in all aspects of your life, we're going to focus on your work life, where you already have an invaluable collection of experiences. If something you learn at work happens to influence other aspects of your life, consider it a bonus.
    Let's look at one series of examples of feedback that one person might receive at work. Sally is an accountant, and all these things happened to her on one Friday. Are any of them familiar in your own typical day at work?
    During a morning meeting with clients, Sally corrects an error in one of Jack's figures. The client is angry with Jack. After the meeting, Jack screams at Sally and tells her that she did the same thing two years ago. Then the boss calls her into his office, where he congratulates her on her sharp vision.
    At lunch in the cafeteria, Sally coughs without covering her mouth. Willard turns the other way, Sylvia raises her eyebrow, and Jack makes no visible response.
    After lunch, Sally interviews a potential new client. The client compliments her on her suit, but later she hears that he has decided to engage a competing firm. Sally is sorry to lose an account, but also pleased because she is sick and tired of getting feedback from clients on her clothes instead of her work. She was not looking forward to giving yet another client that feedback.
    At coffee break, Sylvia tells Sally that Willard is upset because Sally is too friendly with the boss.
    Her daughter's teacher calls to say that her daughter is very upset about the small amount of time she spends with her mother. The teacher wants to know when they can schedule a conference to figure out how Sally can take time from her work to help her daughter make a smoother transition into first grade.
    In the elevator at the end of the day, Sally tells a joke. Jack laughs, Sylvia snorts, and Willard pinches Sally's shoulder.
    These few examples illustrate the wide range of interaction that comes under the title of "interpersonal feedback at work," yet these are only a few of the thousands of instances of feedback that Sally receives in a single day.
    All this information may be "better than money can buy," but if you are like Sally, you'll see how easily she could be confused when she tries to use this feedback to improve or maintain her performance at work. Not all the feedback is about her and her impact. Some of it is about what someone else thinks she is doing, or wants her to be doing, or even who Sally reminds them of.
    Not only that, but the timing of the feedback is often unfortunate. The feedback often arrives days, months, or years late. It comes in different ways from many people at the same time, or from the same person at different times. And, when it comes, it often finds Sally off balance and vulnerable. Is this familiar, too?
    It's not surprising that Sally doesn't always receive the feedback as intended, or act on it even if she does. In most cases, after an hour or so, she won't even remember what she heard.
    Or maybe you also find yourself on the sending end of feedback. Perhaps your job requires it, or perhaps you just like to give more than you like to receive. We'll also be writing about why some people's feedback is heard pretty much as they intend, while others can't seem to get their message across no matter how hard they try.

    If you're interested in how to use feedback more effectively within an organization, then the place to start is right here–learning how to use feedback more effectively with the individuals who make up that organization.

    Feedback in Fiction

    For another view of the consequences of feedback, take a look at my novel, The Aremac Project. When one of the protagonists suffers an accident that renders her unable to communicate, she cannot provide feedback about a murder she witnessed. Without normal feedback, how can she save her life and the lives of the ones she loves?

    Friday, May 23, 2014

    Overconstraint: Some Effects on S/W Design

    Continuing my weekly excerpt series, the following sections are taken from Exploring Requirements, Volume 2: First Steps to Design.

    16.5 Overconstraint
    Relaxing a constraint increases the size of the solution space, and therefore may increase the number of potential solutions. Conversely, tightening a constraint decreases the size of the solution space and may eventually result in an overconstrained set of requirements. In this event, finding any acceptable solution would be very difficult, or even impossible.
    It will certainly be impossible to find a solution if there are conflicting constraints, as when one constraint says Superchalk must be longer than four inches and another says it must be shorter than three inches. If you attempt to sketch this solution space, you'll find it has zero area.
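    If we model each constraint on Superchalk's length as an interval, the "zero area" of conflicting constraints can be checked mechanically. A minimal sketch (the `intersect` helper and the specific numbers are illustrative inventions, not from the book):

```python
# Illustrative sketch: each constraint restricts Superchalk's length
# to a (low, high) interval; the solution space is their intersection.

def intersect(constraints):
    """Intersect (low, high) intervals; return None if overconstrained."""
    low = max(lo for lo, hi in constraints)
    high = min(hi for lo, hi in constraints)
    return (low, high) if low < high else None

# "Longer than four inches" conflicts with "shorter than three inches":
print(intersect([(4, float("inf")), (0, 3)]))  # None: empty solution space

# Relaxing the second constraint to "shorter than six inches" restores it:
print(intersect([(4, float("inf")), (0, 6)]))  # (4, 6)
```

    Relaxing a constraint widens one of the intervals, enlarging the intersection; tightening it may shrink the intersection to nothing.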
    Even when the solution space doesn't have zero area, it may still be impossible to find any real solution falling within it. Of course, this unfortunate condition may not become apparent until design. Experienced designers, though, can often guess what is happening, especially as they watch the solution space shrinking.
    If the solution space is actually void of possible solutions, you will have to negotiate one constraint against another. If you can't do that, you'll have to drop the project. There are worse things than dropping a project in the requirements stage, such as dropping it in the post-implementation stage. So be on the lookout for an empty solution space, and don't be afraid to suggest termination if it's clear the project is overconstrained.
    When and how you suggest termination is extremely important. Sometimes people are implicitly pre-negotiating constraints because they have an image of the system becoming overconstrained. For example, they won't mention something they really want, or they'll start trying to suppress someone else's idea.
    Einstein once said, "A theory should be as simple as possible, but no simpler." We can paraphrase Einstein and say,
    Requirements should be as constrained as possible, but no more constrained.
    Applying this principle is equivalent to saying,
    The solution space should be as large as possible, but no larger.
    That's why it's a much better idea to deal with constraints explicitly.
    16.6 Psychology of Constraints
    Mathematically, we know excessive constraints limit the size of the solution space, but even worse, they limit us psychologically.
    16.6.1 The tilt concept
    If you watch people play pinball, you'll notice some of them tilt the machine occasionally, while others never tilt. The ones who tilt too much are not good players, because they are unable to restrain themselves. But the ones who never tilt are terrible players, because they restrain themselves excessively. Why? Because tilting shows the players just how far they can safely bump the machine. If you never exceed your limits, you never learn what your limits are. So, this is the tilt concept:
    If you never tilt, you don't know if you're using your full resources.
    As designers, we are often intimidated by constraints. We toss out an idea, then:
    • Someone says, "That information is confidential," so we stop pursuing an idea which may be essential to a superior design.
    • Someone points to a seven-foot shelf of standards manuals and declares, "Doing what you propose violates our standards," so we drop a promising idea.
    • Someone frowns and whispers, "The boss would never approve of what you're proposing." Even though it's merely an idea for a brainstorm, and not a proposal, we shrivel into a ball and change the subject, never daring to check it out.
    Much of the time, most of us tend to operate at some safe distance from the constraint boundaries, because we're afraid of tilting something. Thus, none of us search the entire solution space, but only a reduced solution space, as suggested in Figure 16-8.

    Figure 16-8. Most designers don't search the entire solution space, but merely a reduced solution space determined by their fear of constraints.
    16.6.2 Breaking constraints
    You can always overcome this limitation by probing politely but firmly. At one of Jerry's clients, the requirements team indicated the product had to be shipped by November 1. As this seemed an impossible overconstraint, Jerry asked, "Where does that date come from?" Everybody in the room got a shocked and frightened look, and one man lowered his voice and said, "The boss told us."
    If Jerry hadn't understood the tilt concept, he would have become just as frightened as they were, and dropped the subject. But he was an outsider anyway, so he asked for a break and went down the hall to talk to the boss. "There seems to be some uncertainty in the other room," he said. "What deadline did you set on this project, anyway?"
    "November 1."
    Jerry might have stopped there, too, but the tilt concept kept him going: "That date may turn out to be an overconstraint. Can you explain where it comes from?"
    The boss looked a little puzzled. "Constraint? I didn't mean it to be a constraint. They asked me when I wanted it, so I said it would be nice to have it by November 1. I didn't want it after November 1 because it would interfere with our year-end processing, but after January 15 would be just fine. Heck, as long as I get it by June there won't be any problem."
    In other words, there was a hole in the solution space, between November 1 and January 15, but there was plenty of solution space on the other side of the hole.
    When Jerry returned to the requirements room, the team was shocked to hear the news. "You must be a super negotiator," they said, their eyes sparkling with admiration.
    "No," Jerry said, "but I'm a terrific pinball player."
    16.6.3 The self-esteem-bad-design cycle

    Now we can see one reason the quality of the requirements work depends both on the personality of the requirements team and on the perceived safety of the working environment. A team with low self-esteem will be afraid to probe overconstraints, and consequently will create a more difficult design task. Then, because the design task is extra difficult, they won't do as good a job as they might have. As a result, their self-esteem will drop, and they'll be even worse on the next project. Only a skilled intervention from outside can break such a vicious cycle of low self-esteem and bad design.

    Many of my Women of Power novels capture principles of requirements gathering in exciting stories as object lessons. Check out The Women of Power.

    Even better, buy one of the novels, read it, and if you don't learn something from it, let me know and I'll refund your purchase price along with a bonus book. - Jerry