
Monday, December 25, 2017

Unnecessary Code

We were asked, "How can I tell if my code does extra unnecessary work?"
To answer this question well, I’d need to know what you mean by “unnecessary.” Not knowing your meaning, I’ll just mention one kind of code I would consider unnecessary: code that makes your program run slower than necessary but can be replaced with faster code.

To rid your program of such unnecessary code, start by timing the program’s operations. If it’s fast enough, then you’re done. You have no unnecessary code of this type.

If it’s not fast enough, then you’ll want to run a profiler that shows where the time is being spent. Then examine those areas (there can be only one that consumes more than half the time) and work them over, looking first at the design.
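
If a concrete sketch helps, here's what that time-then-profile sequence might look like in Python, using only the standard library. The function names are invented stand-ins for whatever your program actually does:

```python
# A minimal sketch of time-then-profile, using only the standard library.
# "slow_part" and "program" are invented stand-ins for your real code.
import cProfile
import pstats
import time

def slow_part(n):
    return sum(i * i for i in range(n))

def program():
    return slow_part(1_000_000)

start = time.perf_counter()
program()
print(f"total: {time.perf_counter() - start:.3f}s")  # fast enough? then stop here

# Not fast enough? Profile before touching any code.
cProfile.run("program()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
```

The point of the profile listing is to keep you honest: you rework only what the numbers say is slow, starting with its design, not what you suspect is slow.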

There’s one situation I’ve encountered where this approach can bring you trouble. Code that’s fast enough with some sets of data may be unreasonably slow with other sets. The most frequent case of this kind is when the algorithm’s time grows non-linearly with the size of the data. To prevent this kind of unnecessary code, you must do your performance testing with (possibly artificially) large data sets.
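
One way to catch that kind of growth before your users do is to time the same routine at several sizes and watch the ratios. A small sketch, with a deliberately naive quadratic routine invented for illustration:

```python
# Invented example: a naive O(n^2) duplicate check that looks fine
# on small data and buckles on large data.
import time

def has_duplicates(data):
    return any(x == y for i, x in enumerate(data) for y in data[i + 1:])

for n in (500, 1_000, 2_000, 4_000):
    start = time.perf_counter()
    has_duplicates(list(range(n)))
    print(f"n={n:5d}  {time.perf_counter() - start:.3f}s")
    # linear code would double each step; this roughly quadruples
```

The absolute times will vary from machine to machine; it's the growth ratio between steps that tells the story.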

Paradoxically, though, some algorithms are faster with large data sets than small ones.

Here’s a striking example: My wife, Dani, wanted to generate tests in her large Anthropology class. She wanted to give all students the same test, but she wanted the questions for each student to be given in a random order, to prevent cheating by peeking. She gave 20 questions to a programmer who said he already had a program that would do that job. The program, however, seemed to fall into an unending loop. Closer examination eventually showed that it wasn't an infinite loop, but would have finally ended about the same time the Sun ran out of hydrogen to burn.

Here’s what happened: The program was originally built to select random test questions from a large (500+ questions) data base. The algorithm would construct a test candidate by choosing, say, twenty questions at random, then checking the twenty to see if there were any duplicates among those chosen. If there were duplicates, the program would discard that test candidate and construct another.

With a 500 question data base, there was very little chance that twenty questions chosen at random would contain a duplicate. It could happen, but throwing out a few test candidates didn’t materially affect performance. But, when the data base had only twenty questions, and all Dani wanted was to randomize the order of the questions, the situation was quite different.

Choosing twenty from twenty at random (with replacement) was VERY likely to produce duplicates, so virtually every candidate was discarded, but the program just ground away, trying to find that rare set of twenty without duplicates.

As an exercise, you might want to figure out the probability of a non-duplicate set of twenty. Indeed, that’s an outstanding way to eliminate unnecessary code: by analyzing your algorithm before coding it.
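
If you'd rather let the machine do that arithmetic (fair warning: this sketch gives away the answer to the exercise), here it is in Python, along with the one-line algorithm the program should have used instead:

```python
import math
import random

# Chance that 20 draws WITH replacement from 20 questions are all
# distinct: 20!/20**20, roughly 2.3e-8, about 1 in 43 million.
p = math.factorial(20) / 20**20
print(p)      # ~2.3e-08
print(1 / p)  # ~4.3e+07 candidates generated, on average, per success

# What Dani actually needed: a random permutation, not rejection sampling.
questions = list(range(1, 21))
random.shuffle(questions)  # Fisher-Yates shuffle: O(n), never discards
print(questions)
```

(The same arithmetic for the original 500-question data base gives about a two-in-three chance of no duplicates, so on average only a candidate or so was discarded, which is why nobody had noticed the design flaw.)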

Over the years, I’ve seen many other things you might consider unnecessary, but which do no harm except to the reputation of the programmer. For example:
* Setting a value that’s already set.
* Sorting a table that’s already sorted.
* Testing a variable that can have only one value.

These redundancies are found by reading the program, and may be important for another reason besides performance. Such idiotic pieces of code may be indications that the code was written carelessly, or perhaps modified by someone without full understanding. In such cases, there’s quite likely to be an error nearby, so don’t just ignore them.
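
To make the three redundancies listed above concrete, here are some invented Python fragments, one per bullet:

```python
# Invented fragments, one per redundancy above.

is_valid = True
# ... code that never changes is_valid ...
is_valid = True             # setting a value that's already set

records = sorted([3, 1, 2])
records.sort()              # sorting a table that's already sorted

DEBUG = False               # never reassigned anywhere in the program
if DEBUG:                   # testing a variable that can have only one value
    print("debug details")
```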


Sunday, November 19, 2017

Technical Reviews and Organizational Renewal

We know that success can breed failure, and that success doesn't automatically renew itself. I would like to offer some ideas on how this self-defeating tendency can be resisted.

One way toward renewal is through new perspectives gained from outside technical audits, but audits suffer from several serious disabilities. For one thing, audits are done intermittently. In between one audit and the next, the programmers don't stop programming, the analysts don't stop analyzing, and the operators don't stop operating. Sometimes, though, the managers stop managing. And there's the rub.

A comparable situation exists when a firm has a system of personnel reviews mandated, say, every six months for each employee. Under such a system, managers tend to postpone difficult interactions with an employee until the next appraisal is forced upon them. A huge dossier may be accumulated, but then ignored in favor of the last, most conspicuous blunder or triumph. Good managers realize that the scheduled personnel review is, at best, a backup device—to catch those situations in which day-to-day management is breaking down.

In the same way, the outside technical audit merely backs up the day-to-day technical renewal processes within the shop. It may prevent utter disasters, but it's much better if we can establish technical renewal on a more continuous and continuing basis. One way to do this is through a technical team, such as an Agile team. For now, though, I want to introduce, or reintroduce, the concept of formal and informal technical reviews as tools for renewing the technical organization.

The Agile Manifesto requires "technical excellence" and "simplicity," and states: 

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly. 

To achieve these and other goals, Agile teams conduct "walkthroughs" and "inspections," but these are only two very specific examples of technical review procedures used by Agile teams. In my reading and consulting, I've uncovered dozens of major variants of these two types of review, plus literally hundreds of minor variants that shops have introduced to make the review more suitable to their environments, whether or not they claim to be "Agile."

A more general definition of a technical review could be 

1. a review of technical material on the basis of content (this is what makes it a "technical" review, rather than, say, a financial or personnel review)

2. which is done openly by at least two persons (this is what distinguishes it from "desk checking")

3. who take full responsibility for the quality of the review (not, please note, for the quality of the product)

Then we distinguish between the informal and formal review. A formal review produces a written report to management. That report is the formal part.

Informal reviews (that is, reviews which need not terminate with a written report to management) are also excellent devices for an organization's self-renewal. Informal reviews take all the forms that technical reviews can take, and are practiced everywhere almost all the time. Indeed, they are an essential part of the real world of programming work.

For instance, a team member passes a diagram to a teammate for an opinion on how to represent a particular design. Someone asks help from someone else in finding an error. A set of test cases is passed around to see if anyone can think of something that's been overlooked. One person reads another's user document to see if it's understandable.

Without a constant flow of such informal reviewing of one another's work, programming any meaningful project would be impossible. Formal reviewing, on the other hand, is not essential to programming. Many small projects are accomplished without formal reviewing, which is why some programmers don't appreciate the need for formal reviewing. 

As projects grow larger and more complex—as they are inclined to do in successful shops—the work of many people must be coordinated over a long period of time. Such coordination requires management—though not necessarily managers—and such management requires reliable information. Formal reviews, essentially, are designed to provide reliable information about technical matters—particularly to non-technical people.

Clearly, a successful system of formal technical reviews—that is, a system that provides management with reliable information on technical matters—is essential to management participation in the organizational-renewal process. For the large shop, formal reviews provide information that the small shop manager gets "through the seat of the pants." Many, many failures of previously successful programming organizations can be traced directly to the breakdown of the earlier informal mechanisms for communicating reliable information about technical matters.

There may, of course, be other ways of getting this kind of information, and many smaller shops do an excellent job without any system of formal reviews. Even those shops, however, may benefit from an explicit system of reviews to supplement their implicit, or informal, existing system. 

Principally, the formal technical review provides reliable self-monitoring. An informal system may work just as well as a formal one, and if so, there are many reasons to prefer to keep the reviewing informal. In any case, there will always be an informal system at least supplementing the formal one, but we should really view the formal system as supplementing the informal. Its formality guards against creeping deterioration of the organization.

Regular reviews of specifications, designs, code, test plans, documentation, training materials, and other technical matter have many beneficial "side effects" on the health and success of an installation. Reviews have a very marked effect on maintenance—that quicksand in which a thousand successful shops have met an untimely end. A typical program that has been thoroughly reviewed during its development will

1. require less maintenance work per change

2. require fewer changes caused by poor specification, design, coding, or testing.

Instituting a program of technical reviews will not, of course, have any immediate effect on the existing burden of maintenance carried like an albatross from a sinful programming past. Indeed, when maintenance programmers participate in reviews of newer code, they may be further discouraged by the poor quality of the code with which they are burdened. But, the situation can improve rather quickly, for a variety of reasons:

1. Some new ideas can be applied even to patches to old programs, though naturally limited by poor existing structure and style.

2. Through mutual participation in reviews, the entire shop quickly obtains a realistic and sympathetic picture of the true dimensions of the maintenance situation.

3. The worst maintenance problems will, through frequent reviews, become exposed to fresh air and sunlight.

Quite frequently, installation of a review system is quickly followed by replacement of the most cantankerous old beasts left over from the Stone Age of Programming. The effect of even one such replacement on maintenance morale is a wonder to behold.

Other activities besides maintenance are similarly affected. In the long run, certainly, reviews have a remarkable effect on staff competence, perhaps the most important element in continuing success. We also see increased morale and professional attitude, reduced turnover, more reliable estimating and scheduling, and better appreciation of the management role in project success. (Usually, the technical staff has had no difficulty in appreciating the management role in project failure.)

At the same time, reviews provide management with a better appreciation for staff competence, both individually and collectively. The unsung hero who quietly saved a dozen shaky projects is now sung far and wide. The "genius programmer" who was always the darling of the executives has been exposed for the empty and obsolete shell the technical staff always knew, but couldn't expose to management.

No other factor is more depressing to a technical organization than misappraisal of technical competence by non-technical management. The openness of technical reviews marks an end to that misappraisal era. No longer will we all feel cheated by charlatans and incompetents.

As a consultant, I visited dozens of installations every year. The best way to summarize the effects of properly instituted reviews is to report that after five minutes in an installation, I could tell—without asking directly—to what extent there was an effective review practice, formal or informal.

How did I tell? Metaphorically, the same way a government health inspector tells about a food processing plant—by the way it smells. It's hard to describe, but when you smell the difference, you know it!

* * * * * *

Looking back over this essay, I sense its inadequacy to the task at hand. Those programmers and analysts who have experienced a shop with a properly functioning system of reviews will know all this without my giving any details. They've experienced it, and if they are professionals, they'll never agree to work in any other kind of environment.

But those who have never experienced such an environment of a self-renewing organization will not understand, or will misunderstand, what I've written. Some of them will have experienced a misguided attempt to institute reviews. Perhaps the attempt was in the form of a punitive management mandate. Perhaps it was merely a case of another programmer who read one article and blundered ahead with 99% enthusiasm and 1% information—and zero human feeling. To these people, the experience of "reviews" will have left a bitter taste, or a painful memory. They will not recognize their experience in what I've written.

In many ways, technical reviewing is like bicycling. Up until the time you first discover your balance in one miraculous instant, the whole idea seems impossible, contrary to nature, and a good way to break a leg. Indeed, if you'd never seen a cyclist before, and had the process described to you, you'd most certainly declare the whole procedure impossible. And, without instruction and encouragement from experienced cyclists, along with reliable "equipment," the most likely outcome would certainly be skinned knees, torn pants, and a few lumps on the head. And so it has been with technical reviews—until now—so don't go off and try to learn the whole process on your own.

If you want to get started renewing the success of your own shop through a system of technical reviews, find an experienced shop, or a person from an experienced shop, to guide you. Listen to them. Read the literature. Then try a few simple reviews on an experimental basis, without much fanfare.

Adapt the "rules" to your own environment. Be forgiving and humane. Your rewards  will not be long in coming.



Monday, November 28, 2016

How do I find cheap freelance hardware and software developers?

The question was, "How do I find cheap freelance hardware and software developers?"

I warned the questioner to be very careful about what he was asking for:

First of all, you don’t want “cheap” developers; you want inexpensive developers.

Second, the expense of developers is not their hourly or daily rate. It’s the total cost of building and delivering the software and hardware you want.

In my experience, the least expensive developers have much higher rates than the more costly ones. They deliver what you want, the first time, in less time, with less trouble.


However, a high hourly rate doesn’t guarantee an inexpensive product. Freelance developers can charge anything they want, so price doesn’t necessarily indicate value.

Instead, speak to references about any developer you’re considering. Find out first hand what you’re going to get for what you’re paying.

And, by the way, don't think you'll save money by hiring individual developers. Your best bet will generally be to choose a team, perhaps an Agile team, but in any case, a team that has a history of working well together.

Monday, September 12, 2016

What are the most important software quality assurance techniques?

When this question was posed on a public forum, the answers were, predictably, things like estimation, project management, and testing. All those things are important, but not the most important.

a) By far the most important Q/A technique is telling the truth about relevant Q/A issues in a way that can be understood by your audience.

b) Of course, you must first be able to know what is the truth, and that requires observation skills of a high degree. (See also An Introduction to General Systems Thinking.)

c) And, too, you must be able to know which truths are relevant. For that you must understand the processes practiced to produce the products whose quality you are supposed to be assuring. (See the Quality Software Series.)


All specific techniques are dependent on these underlying techniques. If you lack a, b, or c, you will not be able to perform other techniques well.

If you wish to learn more about any of these techniques, or others, I've written many books on these subjects, not just the ones mentioned here. For a list of these books, see my author page on Leanpub.com.

Tuesday, August 02, 2016

Better Estimating Can Lead to Worse Performance


This blog post is a new chapter, just added to my book, Do You Want To Be a (Better) Manager.

One of the (few) advantages of growing old is gaining historical perspective, something sorely lacking in the computing business. Almost a lifetime ago, I wrote an article about the future dangers of better estimating. I wondered recently if any of my predictions came to pass.

Back then, Tom DeMarco sent me a free copy of his book, Controlling Software Projects: Management, Measurement and Estimation. Unfortunately, I packed it in my bike bag with some takeout barbecue, and I had a little accident. Tom, being a generous man, gave me a second copy to replace the barbecued one.

Because Tom was so generous, I felt obliged to read the book, which proved quite palatable even without sauce. In the book, Tom was quite careful to point out that software development was a long way from maturity, so I was surprised to see an article of his entitled "Software Development—A Coming of Age." Had something happened in less than a year to bring our industry to full growth?

As it turned out, the title was apparently a headline writer's imprecision, based on the following statement in the article:

"In order for the business of software to come of age, we shall have to make some major improvements in our quantitative skills. In the last two years, the beginnings of a coherent quantitative discipline have begun to emerge…"

The article was not about the coming of age of software development, but a survey of the state of software project estimation. After reviewing the work of Barry Boehm, Victor Basili, Capers Jones, and Lawrence Putnam, DeMarco stated that this work

"…provides a framework for analysis of the quantitative parameters of software projects. But none of the four authors addresses entirely the problem of synthesizing this framework into an acceptable answer to the practical question: How do I structure my organization and run my projects in order to maintain reasonable quantitative control?"

As I said before, Tom is a generous person. He's also smart. If he held such reservations about the progress of software development, I'd believe him, and not the headline writer. Back then, software development had a long way to go before coming of age.

Anyway, what does it mean to "come of age"? When you come of age, you stop spilling barbecue sauce on books. You also stop making extravagant claims about your abilities. In fact, if someone keeps bragging about how they've come of age, you know they haven't. We could apply that criterion to software development, which has been bragging about its impending maturity now for over forty years.

Estimates can become standards

One part of coming of age is learning to appraise your own abilities accurately—in other words, to estimate. When we learn to estimate software projects accurately, we'll certainly be a step closer to maturity—but not, by any means, the whole way. For instance, I know that I'm a klutz, and I can measure my klutziness with high reliability. To at least two decimal places, I can estimate the likelihood that I'll spill barbecue sauce on a book—but that hardly qualifies me as grown up.

The mature person can not only estimate performance, but also perform at some reasonably high level. Estimating is a means mature people use to help gain high performance, but sometimes we make the mistake of substituting means for ends. When I was in high school, my gym teacher estimated that only one out of a hundred boys in the school would be able to run a mile in under ten minutes. When he actually tested us, only 13 out of 1,200 boys were able to do this well. One percent was an accurate estimate, but was it an acceptable goal for the fitness of high school boys? (Back in those days, the majority of us boys were smokers.)

This was a problem I was beginning to see among my clients. Once they learned to estimate their software projects reasonably well, there was a tendency to set these estimating parameters as standards. They said, in effect: "As long as we do this well, we have no cause to worry about doing any better." This might be acceptable if there was a single estimating model for all organizations, but there wasn't. DeMarco found that no two of his clients came up with the same estimating model, and mine were just as diverse.

Take the quality measure of "defects per K lines of code." Capers Jones had cited organizations that ranged on this measure from 50 to 0.2. This 250-to-1 range was compatible with what I found among my own clients who measured such things. What I found peculiar was that both the 50-defect clients and the 0.2-defect clients had established their own level as an acceptable standard.

Soon after noticing this pattern, I visited a company that was in the 150-defect range. I was fascinated with their manager's reaction when I told him about the 0.2-defect clients. First he simply denied that this could be true. When I documented it, he said: "Those people must be doing very simple projects, as you can see from their low defect rates."

When I showed that they were actually doing objectively harder projects, he reacted: "Well, it must cost them a lot more than we're able to pay for software." 

When I pointed out that it actually costs them less to produce higher quality, he politely decided not to contract for my services, saying: "Evidently, you don't understand our problem." 

Of course, I understood his problem only too well—and he was it. He believed he knew how to develop software, and he did—at an incredibly high cost to his users.

His belief closed his mind to the possibility of learning anything else about the subject. Nowadays, lots of managers know how to develop software—but they each know different things. One of the signs of immaturity is how insular we are, and how insecure we are with the idea of learning from other people.

Another sign of immaturity is the inability to transfer theoretical knowledge into practice. When I spill barbecue sauce on books, it's not because I think it will improve the books. I know perfectly well what I should do. But I can't seem to carry it out. When I was a teenage driver, I knew perfectly well I shouldn't have accidents, but on the way home from earning my driver's license, I smashed into a parked car. (I had been distracted by a teenage girl on the sidewalk.) I was immature because even though I knew better than to gawk at pretty girls while driving, I had an accident anyway. 

The simple fact was that we already knew hundreds of things about software development, but we were not putting those ideas into practice. Forty years later, we're still not putting many of them into practice. Why not? The principal reason is that our managers are often not very good at what they are supposed to do—managing. In Barry Boehm's studies, the one factor that stood above all the others as a cause of costly software was "poor management." Yet neither Boehm nor any of the other writers on estimating had anything to say on how to make good managers—or get rid of bad ones.

Better estimating of software development could give us a standard for detecting the terrible managers. At the same time, however, it may give us a standard behind which the merely mediocre managers can hide.

Using estimates well

So, if you want to be a better manager, how do you use estimates effectively?

Any estimate is based on a model of what you're estimating. The model will have a number of parameters that characterize the situation. In the case of estimating a software project, parameters of the estimate might include

• number of programming languages to be used

• experience level of development staff

• use of formal code reviews

• characteristic error rate per function point

• many other factors

Suppose, for example, the project has six teams, each of which prefers to use a different programming language. Up to now, you've tolerated this mixture of languages because you don't want to offend any of the teams. Your estimating model, however, tells you that reducing the number of languages will reduce risk and speed up the project. On the other hand, if you try to eliminate one of the languages, your model tells you that a team with less experience in a new language will increase risk. By exploring different values of these parameters, you can learn whether it's worthwhile to convert some of the teams to a common language.

To take another example, you've avoided using formal code reviews because you believe they will add time to the schedule. Studying your estimating tool, however, shows you that use of formal reviews will reduce the rate of errors reaching your testing team. The estimating model can then show you how the rate of errors influences the time spent testing to find and correct those errors.
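
Here's a toy illustration of that kind of what-if exploration. Every coefficient below is invented for the sake of the example; a real model would be calibrated from your own shop's project history:

```python
# A toy estimating model. Every coefficient here is invented for
# illustration; a real model would be calibrated from project history.
def estimated_months(base, languages, experience, formal_reviews):
    effort = base
    effort *= 1.0 + 0.10 * (languages - 1)  # each extra language adds time/risk
    effort *= 2.0 - experience              # experience: 0.0 novice .. 1.0 expert
    if formal_reviews:
        effort *= 1.05 * 0.80               # small review cost, larger testing saving
    return effort

# What-if: consolidate six languages down to two, accepting less experience.
print(estimated_months(12, languages=6, experience=0.8, formal_reviews=False))
print(estimated_months(12, languages=2, experience=0.6, formal_reviews=False))
# What-if: then add formal reviews to the consolidated plan.
print(estimated_months(12, languages=2, experience=0.6, formal_reviews=True))
```

Neither number is to be trusted in absolute terms; the point is that the model lets you compare alternatives before committing the project to one of them.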


Many poor managers believe an estimating model is a tool for clubbing their workers over the head to make them work faster. Instead, it's a tool for clubbing yourself over the head: a guide to making wise, large-scale management decisions.

Tuesday, December 06, 2011

Disposable Programs (Part 2)

There are many reasons why a program brought out of hibernation could incur costs:

1. The hardware environment has changed.

2. The system software environment has changed.

3. The size or format of the data has changed.

4. The human environment has changed.

5. Some part of the program or its supporting material has been lost or damaged.

So it does cost to rerun an "unchanged" program, and the longer the period of hibernation, the greater the cost. But you already knew this—we all know this. Then why, oh why, do we keep tumbling into the same trap?

Part 2

I believe the answer lies in our unwillingness or inability to feed back the true costs of programming and program maintenance to our users. Among our service bureau clients, the problem seems to have been brought to manageable proportions by the following steps:

1. When a program is commissioned, the lifespan and the number of executions must be specified.

2. If there is uncertainty about either of these figures, contingent prices are given, reflecting the differing costs.

3. The contract is written stating that the program will be destroyed after a certain time and/or number of runs, whichever comes first.

4. The program remains the property of the service bureau, unless the customer takes ownership—in which case a much higher cost is placed on the job, in order to pay for preparing the program to be taken over by someone other than the original programmers.

5. The customer is notified when the program is about to be destroyed, and is given the option (at a substantial and realistic price) of having the program rebuilt for further use.

6. If the program is a "one-time" program, no notification is given, but the program is destroyed—literally—as soon as the customer agrees to accept the results.

When working with inexperienced users, it is not difficult to get these terms accepted. Neither is it difficult with very experienced users, who know quite well the realities of "one-time" programs that turn out to be "N-time" programs. Only the in-between users have difficulty accepting these conditions, for they believe they understand about programming, but actually have no solid basis for understanding.

After a few costly lessons, they are more than willing to sit down in advance and decide whether they want to invest in an N-time program or merely in a disposable program that will actually be disposed of.

In internal data processing situations, especially where there is no true chargeback for programming or program maintenance, these lessons are difficult to teach. There is no cost to the users of specifying a one-time program and then asking that it be run N times. Without cost, there is no motivation to learn.

Where there is chargeback, it is possible to do what good, professional service bureaus do. Without chargeback, you can sometimes achieve some relief by manipulating the one parameter you have available—time. You request the user to specify a one-time or N-time program and then give different time estimates for each. The one-time estimate is shorter, but carefully spells out the procedure that will be followed in destroying the program after its first use.

At first, users will not believe this procedure will be enforced. After a few lessons, they will begin to understand and devote some energy to the decision. Of course, some users will simply attack the computing center manager, or the programmer, with an axe, literal or figurative. Such are the perils of our profession. Besides, even an axe in the forehead is better than the pain in some lower anatomy caused by an immortal one-time program.

Friday, May 27, 2011

The Assumption of Fixed Requirements


Note 1: This post is extracted from Chapter 9 of my book, CHANGE: Planned and Unplanned.

Note 2: This book was written for project leaders in high-tech industries, but writers are also project leaders, and writing certainly requires great skill and precision. For writers, requirements may originate from publishers, agents, and for-hire customers—any of whom can cause unending grief by changing those requirements for a writer who has assumed they were fixed.

Until recently, the computing industry seems to have avoided the subject of requirements the way a debutante might avoid the subject of indigestion. We knew such things existed, but if we didn't think about them, perhaps they would simply take care of themselves.

Many of the classic early papers in software engineering were based on the position:

This is how we would design and build software (if we had unchanging requirements).

[For writers: each of us has her/his own process for writing, but for many of us, that process is also based on having unchanging requirements. If someone changes our task, we may be thrown off our game, and into write-stopping turmoil.]

For instance, many of the early papers on structured programming were based on the Eight Queens Problem, a problem of fixed definition with no input whatsoever. Many papers on recursive programming were based on the Towers of Hanoi problem, another problem of fixed definition with no input whatsoever. The more recent Cleanroom methodology has the same basis: "The starting point for Cleanroom development is a document that states the user requirements." The following quotation from Parnas and Clements shows how deeply this assumption runs, even in the most sophisticated process designers.


Usually, a requirements document is produced before coding starts and is never used again. However, that has not been the case for [the software requirements for the A-7E Aircraft]. The currently operational version of the software, which satisfies the requirements document, is still undergoing revision. The organization that has to test the software uses our document extensively to choose the tests that they do. When new changes are needed, the requirements document is used in describing what must be changed and what cannot be changed. Here we see that a document produced at the start of the ideal process is still in use many years after the software went into service. The clear message is that if documentation is produced with care, it will be useful for a long time. Conversely, if it is going to be extensively used, it is worth doing right.

Parnas and Clements describe the benefits of returning after design to create a requirements document as if it had been present from the beginning.

In the light of all this literature, it's easy to understand why so many software engineering managers have made the mistake of believing they should have unchanging requirements before starting any project. This model or belief is what I call the Assumption of Fixed Requirements, an assumption that is a misreading of these classical works. These classics were not addressing the entire process of software engineering, but only selected parts of that process. What they are teaching us is how to translate requirements into code, once we have reliable requirements.

Translating requirements into code is an essential part of software engineering, and it is the part that has received the most research attention over the past decades. Because of that attention, however, it is no longer the most difficult part of the process. Many organizations know how to do this part quite well, but the quality of their products does not adequately reflect their coding prowess.

In recent internal studies of serious quality problems, three different clients of mine arrived at quite similar conclusions. They divided the sources of the most serious problems into a number of categories, including coding and gathering requirements. In all cases, coding contributed least of all categories to quality problems, and my clients would perhaps do better to work on the less glamorous job of improving their logistics processes. Perhaps these rather advanced organizations are not typical of all software engineering organizations. They still have a lot to learn about coding and especially design, but in each case, the majority of their serious problems stem from requirements.

Software engineers and their customers perceive quality differently, and this distribution of problem sources accounts in large part for that discrepancy in perception. Over the past decades, engineers have seen coding faults drop dramatically as a result of their quality improvement efforts. The customers, however, have not seen a comparable decrease in the number of requirements problems, and so do not perceive the same increase in quality as the engineers, who are paying attention to their own priorities—which don't happen to coincide with their customers' priorities. The engineers need to learn that they will never become an Anticipating organization by getting better and better at coding—even though that was the improvement process that brought them as far as they've come.