
Monday, December 25, 2017

Unnecessary Code

We were asked, "How can I tell if my code does extra unnecessary work?"
To answer this question well, I’d need to know what you mean by “unnecessary.” Not knowing your meaning, I’ll just mention one kind of code I would consider unnecessary: code that makes your program run slower than necessary but can be replaced with faster code.

To rid your program of such unnecessary code, start by timing the program’s operations. If it’s fast enough, then you’re done. You have no unnecessary code of this type.

If it’s not fast enough, then you’ll want to run a profiler that shows where the time is being spent. Then examine the most expensive areas (there can be only one that consumes more than half the time) and work them over, looking first at the design.
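
For instance, if you work in Python, the standard library's cProfile and pstats modules will show you where the time goes. A minimal sketch, in which slow_report is an invented stand-in for whatever operation you're measuring:

import cProfile
import pstats

def slow_report(n=100_000):
    # Deliberately wasteful: builds a string by repeated concatenation.
    out = ""
    for i in range(n):
        out += str(i)
    return out

cProfile.run("slow_report()", "profile.out")
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(10)  # show the ten most expensive entries

The profile won't tell you what to change, but it will tell you where a change could possibly matter.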

There’s one situation I’ve encountered where this approach can bring you trouble. Code that’s fast enough with some sets of data may be unreasonably slow with other sets. The most frequent case of this kind is when the algorithm’s time grows non-linearly with the size of the data. To prevent this kind of unnecessary code, you must do your performance testing with (possibly artificially) large data sets.
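
One rough way to run that check, sketched in Python: time the same operation at a few doubling sizes and watch the ratios. Here, work is a hypothetical stand-in for the operation under test:

import time

def work(data):
    # Hypothetical stand-in for the operation being checked.
    return sorted(data)

for size in (10_000, 20_000, 40_000, 80_000):
    data = list(range(size, 0, -1))  # reversed input, a worst-ish case for many algorithms
    start = time.perf_counter()
    work(data)
    elapsed = time.perf_counter() - start
    print(f"n={size:>6}: {elapsed:.4f} s")

If each doubling of the data roughly doubles the time, growth is near-linear. If each doubling quadruples the time, you have quadratic growth, and sufficiently large inputs will hurt.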

Paradoxically, though, some algorithms are faster with large data sets than small ones.

Here’s a striking example: My wife, Dani, wanted to generate tests in her large Anthropology class. She wanted to give all students the same test, but she wanted the questions for each student to be given in a random order, to prevent cheating by peeking. She gave 20 questions to a programmer who said he already had a program that would do that job. The program, however, seemed to fall into an unending loop. Closer examination eventually showed that it wasn't an infinite loop, but would have finally ended about the same time the Sun ran out of hydrogen to burn.

Here’s what happened: The program was originally built to select random test questions from a large (500+ questions) data base. The algorithm would construct a test candidate by choosing, say, twenty questions at random, then checking the twenty to see if there were any duplicates among those chosen. If there were duplicates, the program would discard that test candidate and construct another.

With a 500 question data base, there was very little chance that twenty questions chosen at random would contain a duplicate. It could happen, but throwing out a few test candidates didn’t materially affect performance. But, when the data base had only twenty questions, and all Dani wanted was to randomize the order of the questions, the situation was quite different.

Choosing twenty from twenty at random (with replacement) was VERY likely to produce duplicates, so virtually every candidate was discarded, but the program just ground away, trying to find that rare set of twenty without duplicates.

As an exercise, you might want to figure out the probability of a non-duplicate set of twenty. Indeed, that’s an outstanding way to eliminate unnecessary code: by analyzing your algorithm before coding it.
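
If you'd rather let a machine do the arithmetic: the chance that twenty draws with replacement from twenty questions are all distinct is 20!/20^20, about 2.3 in a hundred million, so the program would grind through tens of millions of candidates for each acceptable one. A small Python sketch of both the arithmetic and one way to do the whole job directly:

import math
import random

n = 20
p_all_distinct = math.factorial(n) / n**n
print(f"P(no duplicates) = {p_all_distinct:.2e}")   # about 2.32e-08
print(f"Expected candidates per success: {1 / p_all_distinct:,.0f}")  # roughly 43 million

# Randomizing the order directly: random.shuffle performs a
# Fisher-Yates shuffle, so no rejection and no duplicates are possible.
questions = list(range(1, n + 1))
random.shuffle(questions)
print(questions)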

Over the years, I’ve seen many other things you might consider unnecessary, but which do no harm except to the reputation of the programmer. For example:
* Setting a value that’s already set.
* Sorting a table that’s already sorted.
* Testing a variable that can have only one value.

These redundancies are found by reading the program, and may be important for another reason besides performance. Such idiotic pieces of code may be indications that the code was written carelessly, or perhaps modified by someone without full understanding. In such cases, there’s quite likely to be an error nearby, so don’t just ignore them.
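
For concreteness, here's what those three redundancies might look like in a contrived Python sketch (not taken from any real program):

DEBUG = True  # a "variable" that can have only one value in this program

def process(records):
    count = 0
    count = 0        # setting a value that's already set

    records.sort()
    records.sort()   # sorting a table that's already sorted

    if DEBUG:        # testing a variable that can have only one value
        print(f"processing {len(records)} records")
    return records

print(process([3, 1, 2]))

Each line is harmless by itself, but lines like these are often droppings left by a careless or confused modification, and the real error may be sitting right next to them.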


Sunday, December 10, 2017

Do programmers really know how to program?

I was asked, "Do programmers really know how to program?"

I believe this question is unproductive and vague. What do we mean by “program”?

The person who asked this question seemed to think programmers were not really programming when all they did was copy some existing program, using it whole or perhaps pasting it in as part of a shell.

To me, programming a computer means instructing it to do something you want done, and to continue doing it as desired.

If that’s what we’re asking about, then yes, of course, some of us out here know how to program. (Some do not, of course.)

It is irrelevant how we do that. Do we use genetic algorithms, cut-and-paste, or divine inspiration? Scrum, Agile, or Waterfall? What about the programming language: C++, Java, Lisp, Python, or APL? Well, none of those choices matters.

Then what does matter? How about, "Can we satisfy someone’s desires?" In other words, can we provide something that someone wants enough to pay what it costs, in time or money? That’s what counts, and we certainly know how to do that—sometimes.

Sure, we fail at times, and probably too often. But no profession succeeds in satisfying its customers all the time. Did your teachers always succeed in teaching you something you wanted to know? Do surgeons know how to do surgery?

So what about using existing programs? To my mind, the first and foremost job of a programmer is knowing when not to write a program at all—either because the needed program already exists or because no program was needed in the first place.

In other words, not writing a program when no program is needed is the highest form of programming, and one of the marks of a true expert.





Sunday, November 19, 2017

Technical Reviews and Organizational Renewal

We know that success can breed failure and doesn't automatically renew itself. I would like to offer some ideas on how this self-defeating tendency can be resisted.

One way toward renewal is through new perspectives gained from outside technical audits, but audits suffer from several serious disabilities. For one thing, audits are done intermittently. In between one audit and the next, the programmers don't stop programming, the analysts don't stop analyzing, and the operators don't stop operating. Sometimes, though, the managers stop managing. And there's the rub.

A comparable situation exists when a firm has a system of personnel reviews mandated, say, every six months for each employee. Under such a system, managers tend to postpone difficult interactions with an employee until the next appraisal is forced upon them. A huge dossier may be accumulated, but then ignored in favor of the last, most conspicuous blunder or triumph. Good managers realize that the scheduled personnel review is, at best, a backup device—to catch those situations in which day-to-day management is breaking down.

In the same way, the outside technical audit merely backs up the day-to-day technical renewal processes within the shop. It may prevent utter disasters, but it's much better if we can establish technical renewal on a more continuous and continuing basis. One way to do this is through a technical team, such as an Agile team. For now, though, I want to introduce, or reintroduce, the concept of formal and informal technical reviews as tools for renewing the technical organization.

The Agile Manifesto requires "technical excellence" and "simplicity," and states: 

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly. 

To achieve these and other goals, Agile teams conduct "walkthroughs" and "inspections," but these are only two very specific examples of technical review procedures used by Agile teams. In my reading and consulting, I've uncovered dozens of major variants of these two types of review, plus literally hundreds of minor variants that shops have introduced to make the review more suitable to their environments, whether or not they claim to be "Agile."

A more general definition of a technical review could be 

1. a review of technical material on the basis of content (this is what makes it a "technical" review, rather than, say, a financial or personnel review)

2. which is done openly by at least two persons (this is what distinguishes it from "desk checking")

3. who take full responsibility for the quality of the review (not, please note, for the quality of the product)

Then we distinguish between the informal and formal review. A formal review produces a written report to management. That report is the formal part.

Informal reviews (that is, reviews which need not terminate with a written report to management) are also excellent devices for an organization's self-renewal. They take all the forms of technical reviews, and are practiced everywhere, almost all the time. Indeed, they are an essential part of the real world of programming work.

For instance, a team member passes a diagram to a teammate for an opinion on how to represent a particular design. Someone asks for help from someone else in finding an error. A set of test cases is passed around to see if anyone can think of something that's been overlooked. One person reads another's user document to see if it's understandable.

Without a constant flow of such informal reviewing of one another's work, programming any meaningful project would be impossible. Formal reviewing, on the other hand, is not essential to programming. Many small projects are accomplished without formal reviewing, which is why some programmers don't appreciate the need for formal reviewing. 

As projects grow larger and more complex—as they are inclined to do in successful shops—the work of many people must be coordinated over a long period of time. Such coordination requires management—though not necessarily managers—and such management requires reliable information. Formal reviews, essentially, are designed to provide reliable information about technical matters—particularly to non-technical people.

Clearly, a successful system of formal technical reviews—that is, a system that provides management with reliable information on technical matters—is essential to management participation in the organizational-renewal process. For the large shop, formal reviews provide information that the small shop manager gets "through the seat of the pants." Many, many failures of previously successful programming organizations can be traced directly to the breakdown of the earlier informal mechanisms for communicating reliable information about technical matters.

There may, of course, be other ways of getting this kind of information, and many smaller shops do an excellent job without any system of formal reviews. Even those shops, however, may benefit from an explicit system of reviews to supplement their implicit, or informal, existing system. 

Principally, the formal technical review provides reliable self-monitoring. An informal system may work just as well as a formal one, and if so, there are many reasons to prefer to keep the reviewing informal. In any case, there will always be an informal system at least supplementing the formal one, but we should really view the formal system as supplementing the informal. Its formality guards against creeping deterioration of the organization.

Regular reviews of specifications, designs, code, test plans, documentation, training materials, and other technical matter have many beneficial "side effects" on the health and success of an installation. Reviews have a very marked effect on maintenance—that quicksand in which a thousand successful shops have met an untimely end. A typical program that has been thoroughly reviewed during its development will

1. require less maintenance work per change

2. require fewer changes caused by poor specification, design, coding, or testing.

Instituting a program of technical reviews will not, of course, have any immediate effect on the existing burden of maintenance carried like an albatross from a sinful programming past. Indeed, when maintenance programmers participate in reviews of newer code, they may be further discouraged by the poor quality of the code with which they are burdened. But, the situation can improve rather quickly, for a variety of reasons:

1. Some new ideas can be applied even to patches to old programs, though naturally limited by poor existing structure and style.

2. Through mutual participation in reviews, the entire shop quickly obtains a realistic and sympathetic picture of the true dimensions of the maintenance situation.

3. The worst maintenance problems will, through frequent reviews, become exposed to fresh air and sunlight.

Quite frequently, installation of a review system is quickly followed by replacement of the most cantankerous old beasts left over from the Stone Age of Programming. The effect of even one such replacement on maintenance morale is a wonder to behold.

Other activities besides maintenance are similarly affected. In the long run, certainly, reviews have a remarkable effect on staff competence, perhaps the most important element in continuing success. We also see increased morale and professional attitude, reduced turnover, more reliable estimating and scheduling, and better appreciation of the management role in project success. (Usually, the technical staff has had no difficulty in appreciating the management role in project failure.)

At the same time, reviews provide management with a better appreciation for staff competence, both individually and collectively. The unsung hero who quietly saved a dozen shaky projects is now sung far and wide. The "genius programmer" who was always the darling of the executives has been exposed for the empty and obsolete shell the technical staff always knew, but couldn't expose to management.

No other factor is more depressing to a technical organization than misappraisal of technical competence by non-technical management. The openness of technical reviews marks an end to that misappraisal era. No longer will we all feel cheated by charlatans and incompetents.

As a consultant, I visited dozens of installations every year. The best way to summarize the effects of properly instituted reviews is to report that after five minutes in an installation, I could tell—without asking directly—to what extent there was an effective review practice, formal or informal.

How could I tell? Metaphorically, the same way a government health inspector tells about a food processing plant—by the way it smells. It's hard to describe, but when you smell the difference, you know it!

* * * * * *

Looking back over this essay, I sense its inadequacy to the task at hand. Those programmers and analysts who have experienced a shop with a properly functioning system of reviews will know all this without my giving any details. They've experienced it, and if they are professionals, they'll never agree to work in any other kind of environment.

But those who have never experienced such an environment of a self-renewing organization will not understand, or will misunderstand, what I've written. Some of them will have experienced a misguided attempt to institute reviews. Perhaps the attempt was in the form of a punitive management mandate. Perhaps it was merely a case of another programmer who read one article and blundered ahead with 99% enthusiasm and 1% information—and zero human feeling. To these people, the experience of "reviews" will have left a bitter taste, or a painful memory. They will not recognize their experience in what I've written.

In many ways, technical reviewing is like bicycling. Up until the time you first discover your balance in one miraculous instant, the whole idea seems impossible, contrary to nature, and a good way to break a leg. Indeed, if you'd never seen a cyclist before, and had the process described to you, you'd most certainly declare the whole procedure impossible. And, without instruction and encouragement from experienced cyclists, along with reliable "equipment," the most likely outcome would certainly be skinned knees, torn pants, and a few lumps on the head. And so it has been with technical reviews—until now—so don't go off and try to learn the whole process on your own.

If you want to get started renewing the success of your own shop through a system of technical reviews, find an experienced shop, or a person from an experienced shop, to guide you. Listen to them. Read the literature. Then try a few simple reviews on an experimental basis, without much fanfare.

Adapt the "rules" to your own environment. Be forgiving and humane. Your rewards will not be long in coming.



Thursday, November 02, 2017

How do I become a programmer who gets stuff done?

The young man wanted to know, "How do I become a programmer who gets stuff done?" He received a number of good answers, like how to organize his work and schedule his working hours. Yet I had a different view of things, so I gave a different sort of answer, as follows.

Some good advice here, yet even if you do all these suggested things, you may be having a different problem. When you speak of getting stuff done, you may be speaking of “finishing” things.

Defining what “done” means has been a classic programmer problem for more than 50 years. Part of the problem is that programmers with different personalities have different ideas about what “done” means. For instance, look at the situation from the point of view of MBTI personality temperaments:

NTs tend to think a project is done when they have a clear description of the problem and a general approach to solving it. Someone else can work out the details.

SJs tend to think a project is done when it’s “code complete”—though research at Microsoft and other places seems to indicate that only about two-thirds of the eventual code has been written by that supposed benchmark. Perhaps the remaining work isn't thought of as real programming, but only "clean up."

NFs, such as me, tend to think that a project is done when everyone involved is satisfied that it’s done. Of course, because these "others" are of various personality types, our NF estimate of "done" isn't very reliable.

SPs tend to think a project is done when they are bored with doing it. They share this feeling with lots of other temperaments, too. Programmers in general don't tolerate boredom very well.

Not taking that last step to "clean up" is a curse of our profession, leaving many programs in a less-than-wholesome state. Unclean, unfinished programs are the source of many maintenance problems, and of many errors shipped to customers.

Of course, this classification is only a rough model of non-finishers. Many good programmers of all temperaments are excellent finishers, having taught themselves to be aware of their tendencies and counter them with a variety of tactics. Primary among those tactics is the technical review by peers (which is an integral part of the Agile approach but is by no means confined to Agile).

So, you offer your “finished” project to your peers for review, and if they agree that it’s finished, you’ve truly gotten something “done.”

If they don’t agree that your work is done, they will give you a list of issues that you need to address before you’re “done.” You then go back to work and address these issues, then resubmit the newest version to another technical review.

You iterate this reviewing process until your work passes the review. Then you know you’ve gotten something done.
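
The shape of that process is nothing more than a loop. A toy Python sketch, where submit_for_review and address are invented names standing in for the human work:

def finish(work, submit_for_review, address):
    # Iterate until the reviewers return an empty list of issues.
    while True:
        issues = submit_for_review(work)
        if not issues:
            return work  # the reviewers agree: truly "done"
        for issue in issues:
            work = address(issue, work)

The only point of the sketch is the termination condition: you are not the one who decides when the loop ends; your reviewers are.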

For more detail on the review process, see, 


Sunday, June 25, 2017

How do I get better at writing code?

Nobody writes perfect code. Anyone, no matter how experienced, can improve. So, you ask, how do I get better at writing code?

Of course, to get better at writing code, you must practice writing code. That much is obvious. Still, just writing the same poor code over and over, you're not likely to improve by much.

Writing is a different skill from reading, but reading code is necessary if you want to improve your writing. As with writing natural language, you build up your skill and confidence by reading—and not just reading your own output. So, find yourself some examples of good, clear code and read, read, read until you understand each piece.

Be careful, though. There’s lots of terrible code around. You can read terrible code, of course, and learn to analyze why it’s terrible, but your first attention should be on good code. Excellent code, if possible.

Where can you find good code? Textbooks are an easy choice, but be wary of textbooks. Kernighan and Plauger, in their book, The Elements of Programming Style, showed us how awful textbook code can be. Their little book can teach you a lot about recognizing bad code.

But bad code isn't enough. Knowing what's bad doesn't necessarily teach you what's good. Some open source code is rather good, and it’s easy to obtain, though it may be too complex for a beginner. Complex code can easily be bad code.

Hopefully, you will participate in code reviews, where you can see lots of code and hear various opinions on what’s good and what’s less than good.

Definitely ask your fellow programmers to share code with you, though beware: not all of it will be good examples. Be sure the partners you choose are able to listen objectively to feedback about any smelly code they show you.

If you work alone, use the internet to find some programming pen pals.

As you learn to discern the difference between good and poor code, you can use this discernment in your reading. After a while, you’ll be ready to start writing simple code, then work your way up to more complex tasks—all good.

And date and save all your code-writing examples, so you can review your progress from time to time.


Good luck, and happy learning!

Sunday, March 12, 2017

Classifying Errors

I received an email the other day from Giorgio Valoti in Italy, but when I wrote a response, it bounced back with "recipient unknown." It may have been a transient error, but it made me think that others besides Giorgio might be interested in discussing the issue of classifying errors, so I'll put my answer here and hope Giorgio will see it.

Here's the letter: 

Dear Mr. Weinberg,
My problem is that I’m looking for a good way — maybe a standard, more likely a set of guidelines — to classify and put some kind of label on software defects.

Is there such a thing? Does it even make sense trying to classify software defects?
And here's my reply:

Hello, Giorgio

It can certainly make sense to classify errors/defects, but there are many ways to classify, depending on what you're trying to accomplish. So, that's where you start, by answering "What's my purpose in classifying?"

For instance, here are a few ways my clients have classified errors, and for what purposes:


- cost to fix: to estimate future costs

- costs to customers: to estimate impact on product sales, or product market penetration

- place of origin in the development cycle: to decide where to concentrate quality efforts

- type of activity that led to the error: to improve the training of developers

- type of activity that led to detecting the error: to improve the training of testers

- number of places that had to be fixed to correct the error: to estimate the quality of the design

- and so on and on

I hope this helps ... and thanks for asking.

--------------end of letter-----------

As the letter says, there are numerous ways to classify errors, so I think my readers would love to read about some other ways from other readers. Care to comment?
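
If your shop tracks defects in code rather than on paper, each purpose suggests a field. Here's one minimal Python sketch of such a record; every field name is an assumption, invented to mirror the purposes in the letter:

from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    # Place of origin in the development cycle.
    REQUIREMENTS = "requirements"
    DESIGN = "design"
    CODING = "coding"
    TESTING = "testing"

@dataclass
class Defect:
    description: str
    cost_to_fix_hours: float  # to estimate future costs
    customer_impact: str      # to estimate impact on sales or market penetration
    origin: Origin            # to decide where to concentrate quality efforts
    found_during: str         # to improve the training of testers
    places_fixed: int         # to estimate the quality of the design

bug = Defect(
    description="report totals wrong at month boundaries",
    cost_to_fix_hours=6.0,
    customer_impact="moderate",
    origin=Origin.DESIGN,
    found_during="system test",
    places_fixed=3,
)
print(bug)

The particular fields matter less than deciding, before you collect anything, which purpose each field serves.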

Wednesday, January 11, 2017

Foreword and Introduction to ERRORS book

Foreword


Ever since this book came out, people have been asking me how I came to write on such an unusual topic. I've pondered their question and decided to add this foreword as an answer.

As far as I can remember, I've always been interested in errors. I was a smart kid, but didn't understand why I made mistakes. And why other people made more.

I yearned to understand how the brain, my brain, worked, so I studied everything I could find about brains. And then I heard about computers.

Way back then, computers were called "Giant Brains." Edmund Berkeley wrote a book by that title, which I read voraciously.

Those giant brains were "machines that think" and "didn't make errors." Neither turned out to be true, but back then, I believed them. I knew right away, deep down—at age eleven—that I would spend my life with computers.

Much later, I learned that computers didn't make many errors, but their programs sure did.

I realized when I worked on this book that it more or less summarizes my life's work, trying to understand all about errors. That's where it all started.

I think I was upset when I finally figured out that I wasn't going to find a way to perfectly eliminate all errors, but I got over it. How? I think it was my training in physics, where I learned that perfection simply violates the laws of thermodynamics.

Then I was upset when I realized that when a computer program had a fault, the machine could turn out errors millions of times faster than any human or group of humans.

I could actually program a machine to make more errors in a day than all human beings had made in the last 10,000 years. Not many people seemed to understand the consequences of this fact, so I decided to write this book as my contribution to a more perfect world.

Not perfect, of course, but more perfect. I hope it helps.


Introduction


For more than a half-century, I’ve written about errors: what they are, their importance, how we think about them, our attempts to prevent them, and how we deal with them when those attempts fail. People tell me how helpful some of these writings have been, so I felt it would be useful to make them more widely known. Unfortunately, the half-century has left them scattered among several dozen books, so I decided to consolidate some of the more helpful ones in this book.

I’m going to start, though, where it all started, with my first book where Herb Leeds and I made our first public mention of error. Back in those days, Herb and I both worked for IBM. As employees we were not allowed to write about computers making mistakes, but we knew how important the subject was. So, we wrote our book and didn’t ask IBM’s permission.

Computer errors are far more important today than they were back in 1960, but many of the issues haven’t changed. That’s why I’m introducing this book with some historical perspective: reprinting some of that old text about errors along with some notes with the perspective of more than half a century.

1960’s Forbidden Mention of Errors
From: Chapter 10, “Program Testing”
Leeds and Weinberg,
Computer Programming Fundamentals
When we approach the subject of program testing, we might almost conclude the whole subject immediately with the anecdote about the mathematics professor who, when asked to look at a student’s problem, replied, “If you haven’t made any mistakes, you have the right answer.” He was, of course, being only slightly facetious. We have already stressed this philosophy in programming, where the major problem is knowing when a program is “right.”

In order to be sure that a program is right, a simple and systematic approach is undoubtedly best. However, no approach can assure correctness without adequate testing for verification. We smile when we read the professor’s reply because we know that human beings seldom know immediately when they have made errors—although we know they will at some time make them. The programmer must not have the view that, because he cannot think of any error, there must not be one. On the contrary, extreme skepticism is the only proper attitude. Obviously, if we can recognize an error, it ceases to be an error.

If we had to rely on our own judgment as to the correctness of our programs, we would be in a difficult position. Fortunately the computer usually provides the proof of the pudding. It is such a proper combination of programmer and computer that will ultimately determine the means of judging the program. We hope to provide some insight into the proper mixture of these ingredients. An immediate problem that we must cope with is the somewhat disheartening fact that, even after carefully eliminating clerical errors, experienced programmers will still make an average of approximately one error for every thirty instructions written.

We make errors quite regularly
This statement is still true after half a century—unless it’s actually worse nowadays. (I have some data from Capers Jones suggesting one error in fewer than ten instructions may be typical for very large, complex projects.) It will probably be true after ten centuries, unless by then we’ve made substantial modifications to the human brain. It’s a characteristic of humans that would have been true a hundred centuries ago—if we’d had computers then.

1960’s Cost of errors
These errors range from minor misunderstandings of instructions to major errors of logic or problem interpretation. Strangely enough, the trivial errors often lead to spectacular results, while the major errors initially are usually the most difficult to detect.

“Trivial” errors can have great consequences
We knew about large errors way back then, but I suspect we didn’t imagine just how much errors could cost. For examples of some billion dollar errors along with explanations, read the chapter “Some Very Expensive Software Errors.”

Back to 1960 again
Of course, it is possible to write a program without errors, but this fact does not obviate the need for testing. Whether or not a program is working is a matter not to be decided by intuition. Quite often it is obvious when a program is not working. However, situations have occurred where a program which has been apparently successful for years has been exposed as erroneous in some part of its operation.

Errors can escape detection for years
With the wisdom of time, we now have quite specific examples of errors lurking in the background for thirty years or more. For example, read the chapter on “predicting the number of errors.”

How was it tested in 1960
Consequently, when we use a program, we want to know how it was tested in order to give us confidence in—or warning about—its applicability. Woe unto the programmer with “beginner’s luck” whose first program happens to have no errors. If he takes success in the wrong way, many rude shocks may be needed to jar his unfounded confidence into the shape of proper skepticism.

Many people are discouraged by what to them seems the inordinate amount of effort spent on program testing. They rightly indicate that a human being can often be trained to do a job much more easily than a computer can be programmed to do it. The rebuttal to this observation may be one or more of the following statements:
  1. All problems are not suitable for computers. (We must never forget this one.)
  2. The computer, once properly programmed, will give a higher level of performance, if, indeed, the problem is suited to a computer approach.
  3. All the human errors are removed from the system in advance, instead of distributing them throughout the work like bits of shell in a nutcake. In such instances, unfortunately, the human errors will not necessarily repeat in identical manner. Thus, anticipating and catching such errors may be exceedingly difficult. Often in these cases the tendency is to overcompensate for such errors, resulting in expense and time loss.
  4. The computer is often doing a different job than the man is doing, for there is a tendency—usually a good one—to enlarge the scope of a problem at the same time it is first programmed for a computer. People are often tempted to “compare apples with houses” in this case.
  5. The computer is probably a more steadfast employee, whereas human beings tend to move on to other responsibilities and must be replaced by other human beings who must, in turn, be trained.
In other words, if a job is worth doing, it is worth doing right.

Sometimes the error is creating a program at all.
Unfortunately, the cost of developing, supporting, and maintaining a program frequently exceeds the value it produces. In any case, no amount of fixing small program errors can eliminate the big error of writing the program in the first place. For examples and explanations, read the chapter on “it shouldn’t even be done.”

The full process, 1960
If a job is a computer job, it should be handled as such without hesitation. Of course, we are obligated to include the cost of programming and testing in any justification of a new computer application. Furthermore we must not be tempted to cut costs at the end by skimping on the testing effort. An incorrect program is indeed worth less than no program at all because the false conclusions it may inspire can lead to many expensive errors.

We must not confuse cost and value.
Even after all this time, some managers still believe they can get away with skimping on the testing effort. For examples and explanations, read the section on “What Do Errors Cost?”

Coding is not the end, even in 1960
A greater danger than false economy is ennui. Sometimes a programmer, upon finishing the coding phase of a problem, feels that all the interesting work is done. He yearns to move on to the next problem.

Programs can become erroneous without changing a bit.
You may have noticed the consistent use of “he” and “his” in this quoted passage from an ancient book. These days, this would be identified as “sexist writing,” but it wasn’t called “sexist” way back then. This is an example of how something that wasn’t an error in the past becomes an error with changing culture, changing language, changing hardware, or perhaps new laws. We don’t have to do anything to make an error, but we have to do a whole lot not to make an error.

We keep learning, but is it enough?
Thus as soon as the program looks correct—or, rather, does not look incorrect—he convinces himself it is finished and abandons it. Programmers at this time are much more fickle than young lovers.

Such actions are, of course, foolish. In the first place, we cannot so easily abandon our programs and relieve ourselves of further obligation to them. It is very possible under such circumstances that in the middle of a new problem we shall be called upon to finish our previous shoddy work—which will then seem even more dry and dull, as well as being much less familiar. Such unfamiliarity is no small problem. Much grief can occur before the programmer regains the level of thought activity he achieved in originally writing the program. We have emphasized flow diagramming and its most important assistance to understanding a program but no flow diagram guarantees easy reading of a program. The proper flow diagram does guarantee the correct logical guide through the program and a shorter path to correct understanding.

It is amazing how one goes about developing a coding structure. Often the programmer will review his coding with astonishment. He will ask incredulously, “How was it possible for me to construct this coding logic? I never could have developed this logic initially.” This statement is well-founded. It is a rare case where the programmer can immediately develop the final logical construction. Normally programming is a series of attempts, of two steps forward and one step backward. As experience is gained in understanding the problem and applying techniques—as the programmer becomes more immersed in the program’s intricacies—his logic improves. We could almost relate this logical building to a pyramid. In testing out the problem we must climb the same pyramid as in coding. In this case, however, we must take care to root out all misconstructed blocks, being careful not to lose our footing on the slippery sides. Thus, if we are really bored with a problem, the smartest approach is to finish it as correctly as possible so we shall never see it again.

In the second place, the testing of a program, properly approached, is by far the most intriguing part of programming. Truly the mettle of the programmer is tested along with the program. No puzzle addict could experience the miraculous intricacies and subtleties of the trail left by a program gone wrong. In the past, these interesting aspects of program testing have been dampened by the difficulty in rigorously extracting just the information wanted about the performance of a program. Now, however, sophisticated systems are available to relieve the programmer of much of this burden.

Testing for errors grows more difficult every year.
The previous sentence was an optimistic statement a half-century ago, but not because it was wrong. Over all these years, hundreds of tools have been built attempting to simplify the testing burden. Some of them have actually succeeded. At the same time, however, we’ve never satisfied our hunger for more sophisticated applications. So, though our testing tools have improved, our testing tasks have outpaced them. For examples and explanations, read about “preventing testing from growing more difficult.” 

If you're as interested in errors as I am, you can obtain a copy of Errors here:

ERRORS, bugs, boo-boos, blunders