Saturday, March 24, 2018

How do I fix a really difficult bug in programming?

Here was the question:

"How do I fix a really difficult bug in programming?"

Here was my first answer:

There is no such thing as a “difficult bug.”

I suspect my answer requires further explanation. First of all, I doubt that you have experienced actual bugs in your computer, the kind with 8 legs that bite and swarm. I have, a couple of times, but they are rare, and usually not difficult to eradicate.

Perhaps you are talking about errors, but using inaccurate language. In that case, I will assert “there is no such thing as a difficult error.” The same error might be handled easily by a different person. I have seen that circumstance often. For instance, I once spent a month trying to pinpoint a coding error. When I finally asked for the help of a colleague, she found it in less than two minutes.

No, there are no difficult errors, but there are people who have difficulty with an error. We have all been there, and we tend to want to blame the error rather than ourselves.

So, the first thing you need to do to handle a “difficult bug” is to ask yourself,

“What is it about me that is making this error so difficult to handle?”

Perhaps you are having difficulty because you are impatient, or think failure to handle the error will make you look bad to your boss or colleagues.

Perhaps pressure to handle the error is throwing you off your center, distorting your thinking.

Perhaps you do not know enough about the system with the error, or the language in which the program is written.

Perhaps your mind is on other things in your life, things distracting you because they are more important to you than this darn “bug.”

Maybe you should discuss this error with a colleague or two. What is it about you that is keeping you from doing that?


Anyway, good luck in your quest for resolution.

Tuesday, February 13, 2018

My most insidious bug

I was asked, "As a coder, what is the most insidious bug you have ever come across, and how did you find it?"

It’s really hard to pick one error out of hundreds I’ve encountered in my long career, but some of the toughest have been:

  • compiler errors, where the compiler has created object code incorrectly. We usually found these by hacking around, changing the source code to express the program in different ways, or by examining the object code the compiler had produced.
  • hardware errors, both from the failure of a component and an actual design error in the hardware. Such errors are not as frequent today as they were in the old days of vacuum tubes (or relays), but in a way that infrequency makes them all the more difficult when they do occur, because we have so little experience with them.
  • requirements errors, where the program has actually solved the wrong problem. These errors can usually be found only after users have been in contact with the code for some time, and only when there is some communication channel between the users and the programmers.
So, what were your most insidious errors?

You can read more about errors and their consequences in

Errors: Bugs, Boo-Boos, and Blunders (https://leanpub.com/errors)

Saturday, January 06, 2018

New: #System #Design #Heuristics

You'd think that after publishing books for half a century, I'd know how to write a book. If that's what you think, you'd be wrong.

Sure, I've even written a book on writing books (Weinberg on Writing, the Fieldstone Method), and I've applied those methods to dozens of successful books. But way back around 1960, I started collecting notes on the process of design, thinking I would shortly gather them into a book. Back then, I didn't call these bits and pieces "fieldstones," but that's what they turned out to be: the pieces that, when assembled properly, would ultimately become my design book.

Ultimately? Assembled properly? Aye, there's the rub!

Building walls from randomly found fieldstones requires patience. So does writing books by the Fieldstone Method. My Introduction to General Systems Thinking took fourteen years to write. But a writer only lives one lifetime, so there's a limit to patience. I'm growing old, and I'm beginning to think that fifty years is as close to "ultimately" as I'm going to get.

So, I've begun to tackle the task of properly assembling my collection of design fieldstones. Unfortunately, it's a much larger collection than I'd ever tackled before. My Mac tells me I have more than 36,000,000 digitized bytes of notes. My filing cabinets told me I had more than twenty-five pounds of paper notes, but I've managed to digitize some of them and discard others, so there's only a bit more than ten pounds left to consider.

For the past couple of years, I've periodically perused these fieldstones and tried to assemble them "properly." I just can't seem to do it. I'm stuck.

Some writers would say I am suffering from "writer's block," but I believe "writer's block" is a myth. I've published three other books in these frustrating years, so I can't be "blocked" as a writer; I'm just stuck on this one specific design book. You can hear me talk more about the Writer's Block myth on YouTube

[https://www.youtube.com/watch?v=77xrdj9YH3M&t=7s]

but the short version is that "blocking" is simply a lack of ideas about how to write. I finally decided to take my own advice and conjure up some new ideas about how to write this design book.

Why I Was Stuck

To properly assemble a fieldstone pile, I always need an "organizing principle." For instance, my recent book, Do You Want to Be a (Better) Manager? is organized around the principle of better management. Or, for my book, Errors, the principle is actually the title. So, I had been thinking the organizing principle for a book on design ought to be Design.

Well, that seemed simple enough, but there was a problem. Everybody seemed to know what design is, but nobody seemed able to give a clear, consistent definition that covered all my notes. I finally came to the conclusion that's because "design" is not one thing, but many, many different things.

In the past, I ran a forum (SHAPE: Software as a Human Activity Practiced Effectively) whose members were among the most skilled software professionals in the world. We held a number of threads on the subject, "What is Design?" The result was several hundred pages of brilliant thoughts about design, all of which were correct in some context. But many of them were contradictory.

Some said design was a bottom-up process, but others asserted it was top-down. Still others talked about some kind of sideways process, and there were several of these. Some argued for an intuitive process, but others laid out an algorithmic, step-by-step process. There were many other variations: designs as imagined (intentional designs), designs as implemented, and designs as evolved in the world. All in all, there were simply too many organizing principles—certainly too many to compress into a title, let alone organize an entire book.

After two years of fumbling, I finally came up with an idea that couldn't have been implemented fifty years ago: the book will be composed of a variety of those consulting ideas that have been most helpful to my clients' designers. I will make no attempt (or very little) to organize them, but release them incrementally in an ever-growing ebook titled System Design Heuristics.

How to Buy System Design Heuristics

My plan for offering the book is actually an old one, using a new technology. More than a century ago Charles Dickens released many of his immortal novels one chapter at a time in the weekly newspaper. Today, using the internet, I will release System Design Heuristics a single element at a time to subscribing readers.

To subscribe to the book, including all future additions, a reader will make a one-time payment. The price will be quite low when the collection is small, but will grow as the collection grows. That way, early subscribers will receive a bargain in compensation for the risk of an unknown future. Hopefully, however, even the small first collection will be worth the price. (If not, there will be a full money-back guarantee.)

Good designs tend to have unexpected benefits. When I first thought of this design, I didn't realize that it would allow readers to contribute ideas that I might incorporate in each new release. Now I am aware of that potential benefit, and look forward to it.

Before I upload the first increment of System Design Heuristics, I'll wait a short while for feedback on this idea from my readers. If you'd like to tell me something about the plan, email me or write a comment on this blog.

Thanks for listening. Tell me what you think.

Sunday, December 31, 2017

What is Software?


It's a new year, so let's start out with something fundamental, cleaning up something that's bothered me for many years.

The other day I was lunching with a computer-naive friend who asked, "What is software?"

Seems like it would be an easy question for those of us who make and break software for a living, but I had to think carefully to come up with an explanation that she could understand:

Software is that part of a computer system that adapts the machinery to various different uses. For instance, with the same computer, but different software, you could play a game, compute your taxes, write a letter or a book, or obtain answers to your questions about dating.

I then explained to her that it’s unfortunate that early in the history of computers this function was given the name “software,” in contrast to “hardware.” What it should have been called was “flexibleware.”

Unfortunately the term “soft” has been interpreted by many to mean “easy,” which is exactly wrong. Don't be fooled. 
What we call “hardware” should have been called “easyware,” and what we call “software” could then have been appropriately called “difficultware.”

Monday, December 25, 2017

Unnecessary Code

We were asked, "How can I tell if my code does extra unnecessary work?"
To answer this question well, I’d need to know what you mean by “unnecessary.” Not knowing your meaning, I’ll just mention one kind of code I would consider unnecessary: code that makes your program run slower than necessary but can be replaced with faster code.

To rid your program of such unnecessary code, start by timing the program’s operations. If it’s fast enough, then you’re done. You have no unnecessary code of this type.

If it’s not fast enough, then you’ll want to run a profiler that shows where the time is being spent. Then you search those areas (there can be only one that consumes more than half the time) and work them over, looking first at the design.

There’s one situation I’ve encountered where this approach can bring you trouble. Code that’s fast enough with some sets of data may be unreasonably slow with other sets. The most frequent case of this kind is when the algorithm’s time grows non-linearly with the size of the data. To prevent this kind of unnecessary code, you must do your performance testing with (possibly artificially) large data sets.
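
If a concrete illustration helps, here is a minimal sketch in Python (my choice of language here, and the process() stand-in, are just for illustration): first profile to see where the time is actually going, then time the same operation at growing data sizes to expose any non-linear growth.

    import cProfile
    import random
    import time

    def process(data):
        # hypothetical stand-in for whatever operation you are tuning
        return sorted(data)

    # Step 1: profile once to see where the time is actually being spent.
    cProfile.run("process([random.random() for _ in range(100_000)])")

    # Step 2: time the same operation at growing sizes; if the times grow
    # much faster than the sizes, the algorithm's cost is non-linear.
    for n in (1_000, 10_000, 100_000, 1_000_000):
        data = [random.random() for _ in range(n)]
        start = time.perf_counter()
        process(data)
        print(n, time.perf_counter() - start)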

Paradoxically, though, some algorithms are faster with large data sets than small ones.

Here’s a striking example: My wife, Dani, wanted to generate tests for her large Anthropology class. She wanted to give all students the same test, but she wanted the questions for each student to be given in a random order, to prevent cheating by peeking. She gave 20 questions to a programmer who said he already had a program that would do that job. The program, however, seemed to fall into an unending loop. Closer examination eventually showed that it wasn't an infinite loop, but would have finally ended about the same time the Sun ran out of hydrogen to burn.

Here’s what happened: The program was originally built to select random test questions from a large (500+ questions) data base. The algorithm would construct a test candidate by choosing, say, twenty questions at random, then checking the twenty to see if there were any duplicates among those chosen. If there were duplicates, the program would discard that test candidate and construct another.

With a 500 question data base, there was very little chance that twenty questions chosen at random would contain a duplicate. It could happen, but throwing out a few test candidates didn’t materially affect performance. But, when the data base had only twenty questions, and all Dani wanted was to randomize the order of the questions, the situation was quite different.

Choosing twenty from twenty at random (with replacement) was VERY likely to produce duplicates, so virtually every candidate was discarded, but the program just ground away, trying to find that rare set of twenty without duplicates.
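
To make the contrast concrete, here is a rough sketch in Python (the original program was not in Python, and these names are mine) of the scheme described above, next to the one-line fix the job actually needed:

    import random

    QUESTIONS = list(range(1, 21))   # twenty question IDs, standing in for the real test bank

    def rejection_test(questions, k):
        # The original scheme: draw k questions at random WITH replacement,
        # and discard the whole candidate if it contains any duplicate.
        # Fine when the bank is far larger than k; hopeless when k equals
        # the size of the bank.
        while True:
            candidate = [random.choice(questions) for _ in range(k)]
            if len(set(candidate)) == k:    # no duplicates: accept
                return candidate

    def shuffled_test(questions):
        # What Dani's job actually called for: the same twenty questions
        # in a random order. One shuffle, no rejection loop at all.
        order = list(questions)
        random.shuffle(order)
        return order

With a 500-question bank, rejection_test almost always accepts its first candidate; with a 20-question bank it grinds away practically forever, while shuffled_test(QUESTIONS) finishes instantly.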

As an exercise, you might want to figure out the probability of a non-duplicate set of twenty. Indeed, that’s an outstanding way to eliminate unnecessary code: by analyzing your algorithm before coding it.
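
If you would rather check your answer than work it out: the chance that twenty draws with replacement from a bank of twenty are all distinct is 20!/20^20, which two lines of Python confirm is roughly 2.3 in a hundred million, so the program would expect to discard something like 43 million candidates before accepting one.

    from math import factorial

    p = factorial(20) / 20**20   # probability that all twenty draws are distinct
    print(p)                     # about 2.3e-08
    print(round(1 / p))          # about 43 million candidates expected per accepted test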

Over the years, I’ve seen many other things you might consider unnecessary, but which do no harm except to the reputation of the programmer. For example:
* Setting a value that’s already set.
* Sorting a table that’s already sorted.
* Testing a variable that can have only one value.

These redundancies are found by reading the program, and may be important for another reason besides performance. Such idiotic pieces of code may be indications that the code was written carelessly, or perhaps modified by someone without full understanding. In such cases, there’s quite likely to be an error nearby, so don’t just ignore them.


Wednesday, December 20, 2017

Which code is more readable?

We were asked, "Which code is more readable, one that uses longer variable names or short ones?" 

Maybe some historical perspective will help answer this question.

In the very early days of computing (I was there), we used short variable names because:

  • Programs were fairly short and simple, so scope wasn’t much of a problem.

  • Memories were small, so programmers didn’t want to waste memory with long names.

  • Compilers and assemblers were slow, and long names made them slower.

  • Many compilers and assemblers wouldn’t allow names longer than a few characters, because of speed and memory limitations.

  • We didn’t think much, if at all, about who would maintain a program once it left the hands of the original programmer.

As programs grew larger, one result of short naming was difficult maintenance, so the movement toward longer names grew stronger. It wasn’t helped by COBOL, whose promoters asserted that executives should be able to read code. Lots of COBOL code was littered with super-long names, but that didn't help executives read it.

The COBOL argument proved to be nonsense. Still, the maintenance argument for longer, more descriptive names made sense.

Unfortunately, like many movements, the long-name movement went too far, at least for my taste. It wasn’t that long names were harder to write. After all, a typical program is written once, but read for modification and testing many, many times. So, if long names really made reading easier and more reliable, that was good.

But the length of a name is not really the issue. I’ve seen many programs with long, long names that were so similar that they were easily confused, one with another. For instance, we once wasted many days trying to find an error when the name radar_data_station_#46395_azimuth_reading was mistaken for radar_data_station_#46895_azimuth_reading. Psychologists and writers know well that items in the middle of long lists are frequently glossed over.

So, like lots of other things in software development, long versus short names becomes a tradeoff, a design decision for a programmer for which there is no “right” answer. Programmers must design their name-sets with the same kind of engineering thought they put into all their design decisions.

And, as maintainers modify a program, they must maintain the name-set, so as to avoid building up design debt as the program ages.

So, sorry, there’s no easy answer to this question, nothing a programmer can apply  mindlessly. Just as it’s always been, programmers who think will do a better job than those who blindly follow simplistic rules.



Saturday, December 16, 2017

My First Week in a Software Job

We were asked, "What was your first week like at your first software engineering job?"

In June, 1955, I went to work for IBM in San Francisco. Of course, at that time there was no such thing as "software engineering." In fact, there was no such thing as a "programmer." My title was "Applied Science Representative." I was supposed to apply science to the sale of IBM computers.

I was told that in two weeks I was to teach a course in programming the IBM 650.

That presented a few problems.

  • I had never programmed any computer before.

  • Nobody in the IBM office had ever programmed a computer before.

  • Nobody in the IBM office had ever seen a computer before.

  • There was no computer in the office—just a bunch of punch card machines.

  • In fact, as far as we knew, there was no computer in San Francisco.

I spent the next two weeks in a closet in the IBM office studying all the IBM manuals that were stored there, preparing myself to teach this course. I was pretty much a lone ranger, without the horse or any faithful Indian companion. Actually, no companion at all.

That was over 60 years ago, and now I have a multitude of companions. Even so, it was a special time and an unforgettable first two weeks, so thank you for asking this question.

If you want to know more about what it was like in those thrilling days of yesteryear, you should follow Danny Faught's blog. Back then, we used to listen to the Lone Ranger on radio (there wasn't much, if any, television).

"Hi-Yo, Silver! A fiery horse with the speed of light, a cloud of dust and a hearty ‘Hi-Yo Silver'... The Lone Ranger! With his faithful Indian companion, Tonto, the daring and resourceful masked rider of the plains led the fight for law and order in the early Western United States. Nowhere in the pages of history can one find a greater champion of justice. Return with us now to those thrilling days of yesteryear. From out of the past come the thundering hoof-beats of the great horse Silver. The Lone Ranger rides again!"


http://www.geraldmweinberg.com (Formerly The Lone Programmer)

Sunday, December 10, 2017

Do programmers really know how to program?

I was asked, "Do programmers really know how to program?"

I believe this question is unproductive and vague. What do we mean by “program”?

The person who asked this question seemed to think programmers were not really programming when all they did was copy some existing program, using it whole or perhaps pasting it in as part of a shell.

To me, programming a computer means instructing it to do something you want done, and to continue doing it as desired.

If that’s what we’re asking about, then yes, of course, some of us out here know how to program. (Some do not, of course.)

It is irrelevant how we do that. Do we use genetic algorithms, cut-and-paste, or divine inspiration? Do we use Scrum or Agile or Waterfall? How about the programming language: C++, or Java, or Lisp, or Python, or APL? Well, none of those choices matters.

Then what does matter? How about, "Can we satisfy someone’s desires?" In other words, can we provide something that someone wants enough to pay what it costs, in time or money? That’s what counts, and we certainly know how to do that—sometimes.

Sure, we fail at times, and probably too often. But no profession succeeds in satisfying its customers all the time. Did your teachers always succeed in teaching you something you wanted to know? Do surgeons know how to do surgery?

So what about using existing programs? To my mind, the first and foremost job of a programmer is knowing when not to write a program at all—either because the needed program already exists or because no program was needed in the first place.

In other words, not writing a program when no program is needed is the highest form of programming, and one of the marks of a true expert.





Wednesday, December 06, 2017

What is the simplest, most amazing code you have ever written or witnessed?

We were asked to describe the simplest, most amazing code we had ever written or witnessed.

My answer should probably be some esoteric APL code that I personally wrote, like inverting a matrix with a single character program, but many of my readers wouldn’t understand it. In any case, modesty prevents me from choosing my own code.

So, instead, let me tell the story that took place long ago when we were installing an IBM 709 in Bermuda, as part of the NASA space-tracking network. The 709 was a “naked” installation, with little surrounding peripheral equipment, and nothing like it in Bermuda to help us.

In particular, we didn’t have an off-line printer or a punch-card duplicator, so we needed to use the 709 itself to do these jobs—but we had no utilities because we were probably the only naked 709 in the world.

My colleague, Marilyn, who was by far the best programmer I ever knew, went to our keypunch (the only unattached peripheral we had), inserted a blank card, and proceeded to punch (in row binary) a card-to-card duplicator program for the 709. She did it as I watched, in a single pass through the keypunch. 

You’d have to understand row-binary format to appreciate what a feat this was—multiple punched columns of alternate instructions in binary. To top it off, she actually punched in (in the same pass) the self-loading program AND the parity check row for her entire card.

She then loaded this card into the 709’s card reader, picked it up and reentered it as input to itself, and so punched a duplicate. She took the duplicate to the keypunch and added one punch to one of the rows. She now had a 709-to-printer program—two incredible error-free programs for the price of one.

I’ve never seen anything like it before or since. Until that time, I thought I was a pretty good programmer. After Marilyn’s feat, I realized that the best I could ever hope to be was Number Two.

How about you? Any amazing code stories to share?



Sunday, November 26, 2017

How Do I Decide Between appX and appY?

Hardly a day goes by without some developer or tester asking me about some tools or applications. These could be any tools or apps, so let's call them X and Y.

Usually, the question is simple, but asked with heart-stopping urgency:

"Is X better than Y?"

Rather than provide an answer, I tell them they would be better off not asking such "better than?" questions.

Software apps and tools are complex systems. Consequently any X-Y pair will differ on a number of dimensions. X will be better on some; Y will be better on others. Or both will be useless or poor for your needs.

If you're choosing a tool or an app, start with assessing your needs. Then, instead of asking which is better, ask

"Which fits my needs better, X or Y?"

If neither one fits your needs, then look for a third alternative, or a fourth.

In the rare case when both X and Y fit your needs, you might meaningfully ask, "Which is better—for me, at this moment?"

If X and Y still seem equal, then flip a coin. Heads, take X. Tails, take Y.

Then, while the coin is in the air, your mind will usually make the decision, not willing to allow the coin drop to make the decision for you.

But, if your mind doesn't decide, then let the coin drop decide. At that point, it shouldn't matter.

But if you reach this point, wait a moment before you choose X or Y. During that moment, consider the following two questions:

Can I take both X and Y?


What about Z? Is there some third alternative I haven't considered?


Indeed, instead of asking "which is better" questions, ask, "What is the problem I'm trying to solve?"

Are Your Lights On? How to Figure Out What the Problem Really Is

Sunday, November 19, 2017

Technical Reviews and Organizational Renewal

We know that success can breed failure and doesn't automatically renew itself. I would like to offer some ideas on how this self-defeating tendency can be resisted.

One way toward renewal is through new perspectives gained from outside technical audits, but audits suffer from several serious disabilities. For one thing, audits are done intermittently. In between one audit and the next, the programmers don't stop programming, the analysts don't stop analyzing, and the operators don't stop operating. Sometimes, though, the managers stop managing. And there's the rub.

A comparable situation exists when a firm has a system of personnel reviews mandated, say, every six months for each employee. Under such a system, managers tend to postpone difficult interactions with an employee until the next appraisal is forced upon them. A huge dossier may be accumulated, but then ignored in favor of the last, most conspicuous blunder or triumph. Good managers realize that the scheduled personnel review is, at best, a backup device—to catch those situations in which day-to-day management is breaking down.

In the same way, the outside technical audit merely backs up the day-to-day technical renewal processes within the shop. It may prevent utter disasters, but it's much better if we can establish technical renewal on a more continuous and continuing basis. One way to do this is through a technical team, such as an Agile team. For now, though, I want to introduce, or reintroduce, the concept of formal and informal technical reviews as tools for renewing the technical organization.

The Agile Manifesto requires "technical excellence" and "simplicity," and states: 

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly. 

To achieve these and other goals, Agile teams conduct "walkthroughs" and "inspections," but these are only two very specific examples of technical review procedures used by Agile teams. In my reading and consulting, I've uncovered dozens of major variants of these two types of review, plus literally hundreds of minor variants that shops have introduced to make the review more suitable to their environments, whether or not they claim to be "Agile."

A more general definition of a technical review could be 

1. a review of technical material on the basis of content (this is what makes it a "technical" review, rather than, say, a financial or personnel review)

2. which is done openly by at least two persons (this is what distinguishes it from "desk checking")

3. who take full responsibility for the quality of the review (not, please note, for the quality of the product)

Then we distinguish between the informal and formal review. A formal review produces a written report to management. That report is the formal part.

Informal reviews (that is, reviews which need not terminate with a written report to management) are also excellent devices for an organization's self-renewal. Informal reviews take all the forms that technical reviews can take, and are practiced everywhere almost all the time. Indeed, they are an essential part of the real world of programming work.

For instance, a team member passes a diagram to a teammate for an opinion on how to represent a particular design. Someone asks help from someone else in finding an error. A set of test cases is passed around to see if anyone can think of something that's been overlooked. One person reads another's user document to see if it's understandable.

Without a constant flow of such informal reviewing of one another's work, programming any meaningful project would be impossible. Formal reviewing, on the other hand, is not essential to programming. Many small projects are accomplished without formal reviewing, which is why some programmers don't appreciate the need for formal reviewing. 

As projects grow larger and more complex—as they are inclined to do in successful shops—the work of many people must be coordinated over a long period of time. Such coordination requires management—though not necessarily managers—and such management requires reliable information. Formal reviews, essentially, are designed to provide reliable information about technical matters—particularly to non-technical people.

Clearly, a successful system of formal technical reviews—that is, a system that provides management with reliable information on technical matters—is essential to management participation in the organizational-renewal process. For the large shop, formal reviews provide information that the small shop manager gets "through the seat of the pants." Many, many failures of previously successful programming organizations can be traced directly to the breakdown of the earlier informal mechanisms for communicating reliable information about technical matters.

There may, of course, be other ways of getting this kind of information, and many smaller shops do an excellent job without any system of formal reviews. Even those shops, however, may benefit from an explicit system of reviews to supplement their implicit, or informal, existing system. 

Principally, the formal technical review provides reliable self-monitoring. An informal system may work just as well as a formal one, and if so, there are many reasons to prefer to keep the reviewing informal. In any case, there will always be an informal system at least supplementing the formal one, but we should really view the formal system as supplementing the informal. Its formality guards against creeping deterioration of the organization.

Regular reviews of specifications, designs, code, test plans, documentation, training materials, and other technical matter have many beneficial "side effects" on the health and success of an installation. Reviews have a very marked effect on maintenance—that quicksand in which a thousand successful shops have met an untimely end. A typical program that has been thoroughly reviewed during its development will

1. require less maintenance work per change

2. require fewer changes caused by poor specification, design, coding, or testing.

Instituting a program of technical reviews will not, of course, have any immediate effect on the existing burden of maintenance carried like an albatross from a sinful programming past. Indeed, when maintenance programmers participate in reviews of newer code, they may be further discouraged by the poor quality of the code with which they are burdened. But, the situation can improve rather quickly, for a variety of reasons:

1. Some new ideas can be applied even to patches to old programs, though naturally limited by poor existing structure and style.

2. Through mutual participation in reviews, the entire shop quickly obtains a realistic and sympathetic picture of the true dimensions of the maintenance situation.

3. The worst maintenance problems will, through frequent reviews, become exposed to fresh air and sunlight.

Quite frequently, installation of a review system is quickly followed by replacement of the most cantankerous old beasts left over from the Stone Age of Programming. The effect of even one such replacement on maintenance morale is a wonder to behold.

Other activities besides maintenance are similarly affected. In the long run, certainly, reviews have a remarkable effect on staff competence, perhaps the most important element in continuing success. We also see increased morale and professional attitude, reduced turnover, more reliable estimating and scheduling, and better appreciation of the management role in project success. (Usually, the technical staff has had no difficulty in appreciating the management role in project failure.)

At the same time, reviews provide management with a better appreciation for staff competence, both individually and collectively. The unsung hero who quietly saved a dozen shaky projects is now sung far and wide. The "genius programmer" who was always the darling of the executives has been exposed for the empty and obsolete shell the technical staff always knew, but couldn't expose to management.

No other factor is more depressing to a technical organization than misappraisal of technical competence by non-technical management. The openness of technical reviews marks an end to that misappraisal era. No longer will we all feel cheated by charlatans and incompetents.

As a consultant, I visited dozens of installations every year. The best way to summarize the effects of properly instituted reviews is to report that after five minutes in an installation, I can tell—without asking directly—to what extent there is an effective review-practice, formal or informal.

How do I tell? Metaphorically, I tell in the same way a government health inspector tells about a food processing plant—by the way it smells. It's hard to describe, but when you smell the difference, you know it!

* * * * * *

Looking back over this essay, I sense its inadequacy to the task at hand. Those programmers and analysts who have experienced a shop with a properly functioning system of reviews will know all this without my giving any details. They've experienced it, and if they are professionals, they'll never agree to work in any other kind of environment.

But those who have never experienced such an environment of a self-renewing organization will not understand, or will misunderstand, what I've written. Some of them will have experienced a misguided attempt to institute reviews. Perhaps the attempt was in the form of a punitive management mandate. Perhaps it was merely a case of another programmer who read one article and blundered ahead with 99% enthusiasm and 1% information—and zero human feeling. To these people, the experience of "reviews" will have left a bitter taste, or a painful memory. They will not recognize their experience in what I've written.

In many ways, technical reviewing is like bicycling. Up until the time you first discover your balance in one miraculous instant, the whole idea seems impossible, contrary to nature, and a good way to break a leg. Indeed, if you'd never seen a cyclist before, and had the process described to you, you'd most certainly declare the whole procedure impossible. And, without instruction and encouragement from experienced cyclists, along with reliable "equipment," the most likely outcome would certainly be skinned knees, torn pants, and a few lumps on the head. And so it has been with technical reviews—until now—so don't go off and try to learn the whole process on your own.

If you want to get started renewing the success of your own shop through a system of technical reviews, find an experienced shop, or a person from an experienced shop, to guide you. Listen to them. Read the literature. Then try a few simple reviews on an experimental basis, without much fanfare.

Adapt the "rules" to your own environment. Be forgiving and humane. Your rewards  will not be long in coming.

