Tuesday, August 02, 2016

Better Estimating Can Lead to Worse Performance

This blog post is a new chapter, just added to my book, Do You Want To Be a (Better) Manager.

One of the (few) advantages of growing old is gaining historical perspective, something sorely lacking in the computing business. Almost a lifetime ago, I wrote an article about the future dangers of better estimating. I wondered recently if any of my predictions came to pass.

Back then, Tom DeMarco sent me a free copy of his book, Controlling Software Projects: Management, Measurement and Estimation. Unfortunately, I packed it in my bike bag with some takeout barbecue, and I had a little accident. Tom, being a generous man, gave me a second copy to replace the barbecued one.

Because Tom was so generous, I felt obliged to read the book, which proved quite palatable even without sauce. In the book, Tom was quite careful to point out that software development was a long way from maturity, so I was surprised to see an article of his entitled "Software Development—A Coming of Age." Had something happened in less than a year to bring our industry to full growth?

As it turned out, the title was apparently a headline writer's imprecision, based on the following statement in the article:

"In order for the business of software to come of age, we shall have to make some major improvements in our quantitative skills. In the last two years, the beginnings of a coherent quantitative discipline have begun to emerge…"

The article was not about the coming of age of software development, but a survey of the state of software project estimation. After reviewing the work of Barry Boehm, Victor Basili, Capers Jones, and Lawrence Putnam, DeMarco stated that this work

"…provides a framework for analysis of the quantitative parameters of software projects. But none of the four authors addresses entirely the problem of synthesizing this framework into an acceptable answer to the practical question: How do I structure my organization and run my projects in order to maintain reasonable quantitative control?"

As I said before, Tom is a generous person. He's also smart. If he held such reservations about the progress of software development, I'd believe him, and not the headline writer. Back then, software development had a long way to go before coming of age.

Anyway, what does it mean to "come of age"? When you come of age, you stop spilling barbecue sauce on books. You also stop making extravagant claims about your abilities. In fact, if someone keeps bragging about how they've come of age, you know they haven't. We could apply that criterion to software development, which has been bragging about its impending maturity now for over forty years.

Estimates can become standards

One part of coming of age is learning to appraise your own abilities accurately—in other words, to estimate. When we learn to estimate software projects accurately, we'll certainly be a step closer to maturity—but not, by any means, the whole way. For instance, I know that I'm a klutz, and I can measure my klutziness with high reliability. To at least two decimal places, I can estimate the likelihood that I'll spill barbecue sauce on a book—but that hardly qualifies me as grown up.

The mature person can not only estimate performance, but also perform at some reasonably high level. Estimating is a means mature people use to help gain high performance, but sometimes we make the mistake of substituting means for ends. When I was in high school, my gym teacher estimated that only one out of a hundred boys in the school would be able to run a mile in under ten minutes. When he actually tested us, only 13 out of 1,200 boys were able to do this well. One percent was an accurate estimate, but was it an acceptable goal for the fitness of high school boys? (Back in those days, the majority of us boys were smokers.)

This was a problem I was beginning to see among my clients. Once they learned to estimate their software projects reasonably well, there was a tendency to adopt these estimating parameters as standards. They said, in effect: "As long as we do this well, we have no cause to worry about doing any better." This might be acceptable if there were a single estimating model for all organizations, but there wasn't. DeMarco found that no two of his clients came up with the same estimating model, and mine were just as diverse.

Take the quality measure of "defects per K lines of code." Capers Jones had cited organizations that ranged on this measure from 50 down to 0.2. This range of 250 to 1 was compatible with what I found among my own clients who measured such things. What I found peculiar was that both the 50-defect clients and the 0.2-defect clients had established their own level as an acceptable standard.

Soon after noticing this pattern, I visited a company that was in the 150-defect range. I was fascinated with their manager's reaction when I told him about the 0.2-defect clients. First he simply denied that this could be true. When I documented it, he said: "Those people must be doing very simple projects, as you can see from their low defect rates."

When I showed that they were actually doing objectively harder projects, he reacted: "Well, it must cost them a lot more than we're able to pay for software." 

When I pointed out that it actually costs them less to produce higher quality, he politely decided not to contract for my services, saying: "Evidently, you don't understand our problem." 

Of course, I understood his problem only too well—and he was it. He believed he knew how to develop software, and he did—at an incredibly high cost to his users.

His belief closed his mind to the possibility of learning anything else about the subject. Nowadays, lots of managers know how to develop software—but they each know different things. One of the signs of immaturity is how insular we are, and how insecure we are with the idea of learning from other people.

Another sign of immaturity is the inability to transfer theoretical knowledge into practice. When I spill barbecue sauce on books, it's not because I think it will improve the books. I know perfectly well what I should do. But I can't seem to carry it out. When I was a teenage driver, I knew perfectly well I shouldn't have accidents, but on the way home from earning my driver's license, I smashed into a parked car. (I had been distracted by a teenage girl on the sidewalk.) I was immature because even though I knew better than to gawk at pretty girls while driving, I had an accident anyway. 

The simple fact was that we already knew hundreds of things about software development, but we were not putting those ideas into practice. Forty years later, we're still not putting many of them into practice. Why not? The principal reason is that our managers are often not very good at what they are supposed to do—managing. In Barry Boehm's studies, the one factor that stood above all the others as a cause of costly software was "poor management." Yet neither Boehm nor any of the other writers on estimating had anything to say on how to make good managers—or get rid of bad ones.

Better estimating of software development could give us a standard for detecting the terrible managers. At the same time, however, it may give us a standard behind which the merely mediocre managers can hide.

Using estimates well

So, if you want to be a better manager, how do you use estimates effectively?

Any estimate is based on a model of what you're estimating. The model will have a number of parameters that characterize the situation. In the case of estimating a software project, parameters of the estimate might include

• number of programming languages to be used

• experience level of development staff

• use of formal code reviews

• characteristic error rate per function point

• many other factors

Suppose, for example, the project has six teams, each of which prefers to use a different programming language. Up to now, you've tolerated this mixture of languages because you don't want to offend any of the teams. Your estimating model, however, tells you that reducing the number of languages will reduce risk and speed up the project. On the other hand, if you try to eliminate one of the languages, your model tells you that a team with less experience in a new language will increase risk. By exploring different values of these parameters, you can learn whether it's worthwhile to convert some of the teams to a common language.
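The kind of parameter exploration described above can be sketched in a few lines of code. The model and all its coefficients below are invented for illustration only; a real estimating model would use parameters calibrated from your own organization's project history.

```python
# Toy estimating model: is it worth consolidating programming languages?
# Every coefficient here is an assumption for illustration, not calibrated data.

def estimate_schedule_months(base_months, num_languages, inexperienced_teams):
    """Crude schedule estimate: each extra language adds integration
    overhead (assumed 5% each); each team forced onto an unfamiliar
    language adds ramp-up risk (assumed 10% each)."""
    language_overhead = 0.05 * (num_languages - 1)
    ramp_up_overhead = 0.10 * inexperienced_teams
    return base_months * (1 + language_overhead + ramp_up_overhead)

# Status quo: six teams, six languages, every team experienced.
status_quo = estimate_schedule_months(12, num_languages=6, inexperienced_teams=0)

# Consolidate to two languages, forcing four teams onto an unfamiliar one.
consolidated = estimate_schedule_months(12, num_languages=2, inexperienced_teams=4)

print(f"status quo: {status_quo:.1f} months, consolidated: {consolidated:.1f} months")
```

With these made-up coefficients, consolidation actually comes out worse; vary the overhead and ramp-up parameters and the answer flips. That is the point: the model's value is in letting you explore the trade-off before committing, not in any single number it prints.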

To take another example, you've avoided using formal code reviews because you believe they will add time to the schedule. Studying your estimating tool, however, shows you that use of formal reviews will reduce the rate of errors reaching your testing team. The estimating model can then show you how the rate of errors influences the time spent testing to find and correct those errors.
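This trade-off, too, can be made concrete with a toy model. The figures below (defect counts, review effectiveness, hours per escaped defect) are assumptions chosen to illustrate the shape of the calculation, not measured values.

```python
# Toy model: formal code reviews trade review time against testing time.
# All numbers are illustrative assumptions.

def total_test_phase_hours(defects_injected, review_hours=0.0,
                           review_removal_rate=0.0,
                           hours_per_escaped_defect=4.0):
    """Defects the reviews don't remove escape to the testing team,
    where each one costs (assumed) hours to find and correct."""
    escaped = defects_injected * (1 - review_removal_rate)
    return review_hours + escaped * hours_per_escaped_defect

# No reviews: all 200 injected defects reach testing.
no_reviews = total_test_phase_hours(200)

# Formal reviews: 120 hours of review effort removes 60% of defects first.
with_reviews = total_test_phase_hours(200, review_hours=120,
                                      review_removal_rate=0.6)

print(f"no reviews: {no_reviews:.0f} hours, with reviews: {with_reviews:.0f} hours")
```

Under these assumptions the review hours pay for themselves several times over in reduced testing, which is the pattern the estimating model makes visible before you bet the schedule on it.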

Many poor managers believe an estimating model is a tool for clubbing their workers over the head to make them work faster. Instead, it's a tool for clubbing yourself over the head—a guide to making wise, large-scale management decisions.
