Friday, May 23, 2014

Overconstraint: Some Effects on S/W Design

Continuing my weekly excerpt series, the following sections are taken from Exploring Requirements, Volume 2: First Steps to Design



16.5 Overconstraint
Relaxing a constraint increases the size of the solution space, and therefore may increase the number of potential solutions. Conversely, tightening a constraint decreases the size of the solution space and may eventually result in an overconstrained set of requirements. In this event, finding any acceptable solution would be very difficult, or even impossible.
It will certainly be impossible to find a solution if there are conflicting constraints, as when one constraint says Superchalk must be longer than four inches and another says it must be shorter than three inches. If you attempt to sketch this solution space, you'll find it has zero area.
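To make the idea concrete, here is a minimal sketch in Python (not from the book; the numbers and the extra "shorter than six inches" constraint are invented for illustration) that treats each constraint as an allowed range of lengths and intersects them:

```python
# Minimal sketch: each constraint on Superchalk's length is an allowed
# range (in inches); intersecting them shows whether any solution remains.

def intersect(ranges):
    """Combine (low, high) constraints; return None if they conflict."""
    low = max(lo for lo, hi in ranges)
    high = min(hi for lo, hi in ranges)
    return (low, high) if low < high else None

# "Longer than four inches" plus an invented "shorter than six" still
# leaves a solution space:
print(intersect([(4.0, float("inf")), (0.0, 6.0)]))              # (4.0, 6.0)

# Adding "shorter than three inches" overconstrains the requirements:
print(intersect([(4.0, float("inf")), (0.0, 6.0), (0.0, 3.0)]))  # None
```

The point is not the code, of course: each tightened constraint shrinks the feasible range, and once the ranges stop overlapping, no amount of clever design will find a solution.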
Even when the solution space doesn't have zero area, it may still be impossible to find any real solution possibility falling within it. Of course, this unfortunate condition won't become apparent until design. Experienced designers, though, can often guess what is happening, especially as they watch the solution space shrinking.
If the solution space is actually void of possible solutions, you will have to negotiate one constraint against another. If you can't do that, you'll have to drop the project. There are worse things than dropping a project in the requirements stage, such as dropping it in the post-implementation stage. So be on the lookout for an empty solution space, and don't be afraid to suggest termination if it's clear the project is overconstrained.
When and how you suggest termination is extremely important. Sometimes people implicitly pre-negotiate constraints because they have an image of the system becoming overconstrained. For example, they won't mention something they really want, or they'll start trying to suppress someone else's idea.
Einstein once said, "A theory should be as simple as possible, but no simpler." We can paraphrase Einstein and say,
Requirements should be as constrained as possible but no more constrained.
Applying this principle is equivalent to saying,
The solution space should be as large as possible, but no larger.
That's why it's a much better idea to deal with constraints explicitly.
16.6 Psychology of Constraints
Mathematically, we know excessive constraints limit the size of the solution space, but even worse, they limit us psychologically.
16.6.1 The tilt concept
If you watch people play pinball, you'll notice some of them tilt the machine occasionally, while others never tilt. The ones who tilt too much are not good players, because they are unable to restrain themselves. But the ones who never tilt are terrible players, because they restrain themselves excessively. Why? Because tilting shows the players just how far they can safely bump the machine. If you never exceed your limits, you never learn what your limits are. So, this is the tilt concept:
If you never tilt, you don't know if you're using your full resources.
As designers, we are often intimidated by constraints. We toss out an idea, then:
• Someone says, "That information is confidential," so we stop pursuing an idea which may be essential to a superior design.
• Someone points to a seven-foot shelf of standards manuals and declares, "Doing what you propose violates our standards," so we drop a promising idea.
• Someone frowns and whispers, "The boss would never approve of what you're proposing." Even though it's merely an idea for a brainstorm, and not a proposal, we shrivel into a ball and change the subject, never daring to check it out.
Much of the time, most of us tend to operate at some safe distance from the constraint boundaries, because we're afraid of tilting something. Thus, none of us search the entire solution space, but only a reduced solution space, as suggested in Figure 16-8.


Figure 16-8. Most designers don't search the entire solution space, but merely a reduced solution space determined by their fear of constraints.
16.6.2 Breaking constraints
You can always overcome this limitation by probing politely, but firmly. At one of Jerry's clients, the requirements team indicated the product had to be shipped by November 1. As this seemed an impossible overconstraint, Jerry asked, "Where does that date come from?" Everybody in the room got a shocked and frightened look, and one man lowered his voice and said, "The boss told us."
If Jerry hadn't understood the tilt concept, he would have become just as frightened as they were—and dropped the subject. But he was an outsider anyway, so he asked for a break and went down the hall to talk to the boss. "There seems to be some uncertainty in the other room," he said. "What deadline did you set on this project, anyway?"
"November 1."
Jerry might have stopped there, too, but the tilt concept kept him going: "That date may turn out to be an overconstraint. Can you explain where it comes from?"
The boss looked a little puzzled. "Constraint? I didn't mean it to be a constraint. They asked me when I wanted it, so I said it would be nice to have it by November 1. I didn't want it after November 1 because it would interfere with our year-end processing, but after January 15 would be just fine. Heck, as long as I get it by June there won't be any problem."
In other words, there was a hole in the solution space, between November 1 and January 15, but there was plenty of solution space on the other side of the hole.
When Jerry returned to the requirements room, the team was shocked to hear the news. "You must be a super negotiator," they said, their eyes sparkling with admiration.
"No," Jerry said, "but I'm a terrific pinball player."
16.6.3 The self-esteem-bad-design cycle

Now we can see one reason the quality of the requirements work depends both on the personality of the requirements team and on the perceived safety of the working environment. A team with low self-esteem will be afraid to test overconstraints, and consequently will create a more difficult design task. Then, because the design task is extra difficult, they won't do as good a job as they might have. As a result, their self-esteem will drop, and they'll be even worse on the next project. Only a skilled intervention from outside can break such a vicious cycle of low self-esteem and bad design.

Postscript
Many of my Women of Power novels capture principles of requirements gathering in exciting stories as object lessons. Check out The Women of Power.

Even better, buy one of the novels, read it, and if you don't learn something from it, let me know and I'll refund your purchase price along with a bonus book. - Jerry

Sunday, May 18, 2014

Methodologies Aren't Enough

This week we continue with sample essays from my various books. The topic this week is taken from the beginning of Exploring Requirements, Volume 1.

Some of our readers would readily agree with the need to resolve the ambiguities in requirements, but they would argue the problem is not with the techniques, but with the technicians. In order to do a better requirements job, they would claim, remove the people from the process and instead use a methodology. Indeed, their preference would be a totally automated methodology, with no people in the development process whatsoever.
When we started teaching software development in 1958, there were few organized development methods. Today, however, packaged methodologies flood the market and almost everyone who develops software has some sort of organized process. For many years, computer-aided software engineering (CASE) and computer-aided design (CAD) dominated the news, both promising to eliminate people from the process. More recently, the Agile movement seems to have rediscovered people, and its practitioners are seeing the need for a book about "people-oriented" tools to support their approach. But regardless of the approach, it must ensure everyone gets the right requirements, and gets the requirements right. Won't automated tools do the job without people? We think not, and the story of the Guaranteed Cockroach Killer will tell you why.
1.1 CASE, CAD, and the Cockroach Killer
For many years, a man in New York made a living selling his Guaranteed Cockroach Killer through the classified ads. When you sent your check for five dollars, he cashed it and sent you the kit shown in Figure 1-1.



Figure 1-1. The Guaranteed Cockroach Killer kit. All you have to do is place the roach on block A and then strike it with block B.
The instructions read:
1. Place cockroach on block A.
2. Hit cockroach with block B.
If you follow these instructions, you are guaranteed to kill the cockroach. By rights, there should have been a third instruction:
3. Clean up mess. 
Quite likely, nobody ever needed the third instruction—which also meant nobody could collect on the iron-clad guarantee.
Now, what has the Guaranteed Cockroach Killer to do with automated tools? One CASE document told us CASE tools comprise three basic application development technologies:
1. analysis and design workstations to facilitate the capture of specifications
2. dictionaries to manage and control the specification information
3. generators to transform specifications into code and documentation
In our experience, "analysis and design workstations" resemble block A of the cockroach killer. "Dictionaries" resemble block B. If you can somehow figure out the specifications, the workstations will "capture" them and the dictionaries will "manage" them. Once these two steps are done, the generators will clean up the mess and provide your product.
These automated tools are guaranteed to do their job if you simply do your part. This book is about doing your part: getting the roach on the block, and convincing it to stand still long enough for you to deliver the crushing blow.

Whether it's roaches or requirements, the hard part is catching them and getting them to stand still. Generating the code and cleaning up the squashed roach are messy jobs, but in a different way than the first two steps. That's why we think you'll still need the tools described in this book. Not only that, but the better your automated tools, the more you'll need our people-oriented tools. You can see why in the book sample at Smashwords.

Saturday, May 10, 2014

Excessive Realism in Teaching Simulations

Continuing My Weekly Series of Book Excerpts

(from Experiential Learning: Simulation)

A simulation doesn’t have to be “real” to be successful as an experiential learning tool. What has to be real are the feelings it stimulates in the participants, for feelings are what drive learning. Indeed, too much realism can be harmful to the goal of learning. If the simulation is too close to the participants’ real situation, they may be unable to be objective about what they did and what resulted from what they did.

The following sections offer some other effects you may experience if your simulation hits too close to home.

Too accurate detail
Scaling difficulties aren’t the only reason we don’t attempt direct simulations to obtain meaningful measurements, but they’re a more than adequate one. We’re not looking for numbers. It’s not answers we seek, but insights. Consequently, when you design a direct simulation, don’t waste your time trying to find “exact” information about the factors in the simulation.

There are other reasons, as well, for not attempting to use a more realistic project. It’s possible to do that, and we have done it often for software projects. Usually, though, it takes more time than people are willing to devote to “a class,” but even more important, it usually generates far more data than we can analyze effectively in hours or even days. But we have done such simulations, for example, in a semester-long college class in software development. Teams in such classes have developed useful software tools that became successful commercial products. One team even developed a hardware computer that executed a high-level language as its machine language.

Too embarrassing
When teaching within a real organization, there may be a much stronger reason to avoid too much accuracy in a direct simulation. If the simulation reveals something embarrassing, anyone who might look bad will try to discredit the entire learning cycle.

A case in point: We were engaged to conduct a week-long workshop for a university’s IT department, to help them improve the quality and timeliness of their in-house-developed software. On Tuesday, we conducted a four-hour simulation of their development process. One of the inventions showed the bad effects of management’s unwillingness to follow some simple suggestions by the testing team.

Both sides–the testers and the developers–were excited to learn how these management actions, or inactions, affected quality and schedules. The participants were so excited, we were sure the final three days of the workshop would result in significant productive changes in the way they developed software. But nothing ever happened.

On Wednesday morning, half the students–the development team–simply didn’t show up. When we investigated, we learned that the development manager had forbidden his employees to attend the remainder of the class. When we confronted him about his decision, he said, “It’s dangerous to let developers and testers get to know one another. If they become friends, the testers will not want to embarrass the developers by finding their bugs.”

Unnecessary detail
Paradoxically, realism often interferes directly with learning from a simulation. For some years, as part of our consulting, we used a simulation of a software development project in which the class was to organize and produce an actual software tool. We were hoping the participants would discover many project pitfalls–information they could apply directly on their jobs.

To add realism to the project, we added a number of the pitfalls we were hoping they would learn to recognize and perhaps avoid. For instance, one of the learning leaders acted as the top executive above their project, and every fifteen minutes or so, he would call a “progress meeting.” We were hoping they would learn to appreciate how much such frequent meetings could disrupt a project. We were pleased to see how well these meetings actually disrupted their project.

But after a few classes, we began to see a pattern in the invention phase. Yes, the participants recognized how much frequent status meetings disrupted the project, but they said, “If management didn’t keep interrupting us to check on progress, we would have done fine. You prevented us from succeeding.”

In other words, they were saying, “We don’t have to learn something or change our behavior. We need our management to change.” If the class involved their executive management, that might have been a good result, but that was not the result we were seeking.

We decided to remove that detail from the simulation. The learning leader who played the role of executive management called no meetings at all. But guess what? The project was still messed up: late and out of control. In the invention phase, they said, “If management didn’t keep interrupting us to check on progress, we would have done fine.”

But this time, they couldn’t blame their learning leaders. It was their own teammates who kept checking status, so now they had to look inward to discover what it was about status meetings that needed to change. By not trying to insert so much detail in the simulation, we allowed the pitfalls to arise from the participants’ own behavior–behavior they could actually do something about.

In realistic simulations, less is more. It’s more because it’s more realistic to let their failures (or successes) arise from their own behaviors, not from intervention by the learning leaders. Once we learned this principle, we began pruning our simulations to the lowest level of leader participation. For instance, participants argued that they would have been more successful without the specification changes we were introducing randomly. Such changes are a familiar obstacle to anyone who has ever developed software, but again, by adding this detail, we simply gave them another excuse–another way to avoid responsibility for their own unproductive behavior. When we stopped introducing such spec changes, their own developers kept thinking of “nifty features” to introduce into their own product.

Almost but not quite
But realism isn’t the only problem with detail. Just providing detail itself may mess up the learning value of a simulation. The more details you supply, the more you seem to invite the participants to blame slight flaws in some detail you do supply. It’s as if the participants say, “It all seemed so real that we were thrown off course when this detail (pick one) wasn’t quite right.”

Even if that detail was essentially unrelated to the learning goals of the simulation, it seems to provide an excuse for failure–and thus an excuse for not learning.

Sunday, May 04, 2014

Tinkering With Toys

Continuing our weekly series of excerpts from Jerry's books, today's post is taken from the Experiential Learning series, book 2, Inventing.

Tinkering with Toys is an exercise we designed to explore the relationships among design criteria, documentation difficulty, and system maintenance. It involves one team building and then documenting a device made from Tinkertoys, then passing that documentation to another team that must rebuild the device. Rather than describe the exercise itself, which is covered in the book, we'll give you an idea of its richness by listing some of the lessons participants have learned.

Each outcome is different, but over the years certain lessons have emerged as common to many of the experiences with this exercise. Some of these lessons are important observations about experiential learning in general, so we’ll start the chapter with that handful, saving the exercise-specific lessons for the end.

Can learning be fun?
Learning from structured experiences can be great fun–and that’s a problem. Many people believe learning and fun are incompatible. They believe that to learn, you must suffer. Looking at an experiential session with that attitude, you are quite likely to believe that people are wasting time, and learning is not happening.

What follows are four essential lessons people need to learn if they’re to overcome the anti-fun myth and become successful participants in an experiential learning session.

Just because you’re having fun doesn’t mean you aren’t learning.
Someone coming in from outside and seeing you at work with your Tinkertoys might not appreciate that you are, in fact, doing something useful. We tend, in our society, to distinguish between “work” and “play”. Part of this tendency is a kind of envy or nostalgia on the part of managers, who are no longer allowed the style of “play” we call “technical work”. (Of course, managers “play” in executive lunches and other activities which some technical people cannot recognize as useful work).

Just because you’re learning doesn’t mean you feel right about having fun.
Most of us have so internalized the work-play dichotomy that when we’re having fun, we look around to see if someone is watching. And, even if nobody is watching, we feel guilty, somehow, about what we’re doing. If only there were some way to measure what we’re accomplishing, we could free ourselves from these guilt feelings, but many of us oppose being measured, too. (And with good reason, if the measurements are not soundly based).

Just because you’re not having fun doesn’t mean you’re not learning.
When the fun is over in the first part of the Tinkertoy exercise and someone has to “pay” for it by documenting the creative mess, a lot of people think that’s the “lesson.” They’ve seen the “point” of the exercise, so why go on to the bitter end? Well, our exercises–unlike schools but much like “life”–don’t have one single “point.” There’s much to be learned in working through to the “bitter end.” For instance, sometimes we have to learn that the apparent end isn’t the end at all.

Just because you’re having fun or not having fun doesn’t mean you are learning anything.
More than anything else, what you learn from experiences depends upon the attitude with which you approach them. In some situations, we just keep repeating our old behaviors, even though they weren’t very successful in the past. In other situations, we repeat our old behavior because it was fun, not really caring if it was accomplishing anything. That view, at least, has some inner logic: you may not get the job done, but you have a good time.

The other way round has no logic at all. If you find yourself having a miserable time, then turn on your learning faculties full blast so you will learn to avoid the situation in the future.