Showing posts with label planning. Show all posts

Monday, August 29, 2016

A small problem-solving tip for your TO DO list

Here's a little principle for all TO DO and NOT TO DO lists. I suspect most of us do this instinctively, but it's worth making it an explicit part of your planning process:

Break the tasks into small tasks for scheduling purposes.

If you have large tasks to do, they may keep falling to the bottom of your stack because they look too imposing or because you rarely have large blocks of empty time.

Also, smaller tasks have fewer conditions needed to start. A large task may have so many pre-conditions that even when you have the time, one of the conditions may be missing, so you don't start. 

One way to break up a large task is to create smaller tasks, each one of which removes a pre-condition for the larger task. For example, if you need to call a source, but you do most of your work at a time when the source may not be available, make the call into one small task so it can be removed from the constraints on the large task.

Small tasks are also motivating because you receive frequent feelings of accomplishment.


Tuesday, August 02, 2016

Better Estimating Can Lead to Worse Performance


This blog post is a new chapter, just added to my book, Do You Want To Be a (Better) Manager.

One of the (few) advantages of growing old is gaining historical perspective, something sorely lacking in the computing business. Almost a lifetime ago, I wrote an article about the future dangers of better estimating. I wondered recently if any of my predictions came to pass.

Back then, Tom DeMarco sent me a free copy of his book, Controlling Software Projects: Management, Measurement and Estimation. Unfortunately, I packed it in my bike bag with some takeout barbecue, and I had a little accident. Tom, being a generous man, gave me a second copy to replace the barbecued one.

Because Tom was so generous, I felt obliged to read the book, which proved quite palatable even without sauce. In the book, Tom was quite careful to point out that software development was a long way from maturity, so I was surprised to see an article of his entitled "Software Development—A Coming of Age." Had something happened in less than a year to bring our industry to full growth?

As it turned out, the title was apparently a headline writer's imprecision, based on the following statement in the article:

"In order for the business of software to come of age, we shall have to make some major improvements in our quantitative skills. In the last two years, the beginnings of a coherent quantitative discipline have begun to emerge…"

The article was not about the coming of age of software development, but a survey of the state of software project estimation. After reviewing the work of Barry Boehm, Victor Basili, Capers Jones, and Lawrence Putnam, DeMarco stated that this work

"…provides a framework for analysis of the quantitative parameters of software projects. But none of the four authors addresses entirely the problem of synthesizing this framework into an acceptable answer to the practical question: How do I structure my organization and run my projects in order to maintain reasonable quantitative control?"

As I said before, Tom is a generous person. He's also smart. If he held such reservations about the progress of software development, I'd believe him, and not the headline writer. Back then, software development had a long way to go before coming of age.

Anyway, what does it mean to "come of age"? When you come of age, you stop spilling barbecue sauce on books. You also stop making extravagant claims about your abilities. In fact, if someone keeps bragging about how they've come of age, you know they haven't. We could apply that criterion to software development, which has been bragging about its impending maturity now for over forty years.

Estimates can become standards

One part of coming of age is learning to appraise your own abilities accurately—in other words, to estimate. When we learn to estimate software projects accurately, we'll certainly be a step closer to maturity—but not, by any means, the whole way. For instance, I know that I'm a klutz, and I can measure my klutziness with high reliability. To at least two decimal places, I can estimate the likelihood that I'll spill barbecue sauce on a book—but that hardly qualifies me as grown up.

The mature person can not only estimate performance, but also perform at some reasonably high level. Estimating is a means mature people use to help gain high performance, but sometimes we make the mistake of substituting means for ends. When I was in high school, my gym teacher estimated that only one out of a hundred boys in the school would be able to run a mile in under ten minutes. When he actually tested us, only 13 out of 1,200 boys were able to do this well. One percent was an accurate estimate, but was it an acceptable goal for the fitness of high school boys? (Back in those days, the majority of us boys were smokers.)

This was a problem I was beginning to see among my clients. Once they learned to estimate their software projects reasonably well, there was a tendency to set these estimating parameters as standards. They said, in effect: "As long as we do this well, we have no cause to worry about doing any better." This might be acceptable if there was a single estimating model for all organizations, but there wasn't. DeMarco found that no two of his clients came up with the same estimating model, and mine were just as diverse.

Take the quality measure of "defects per K lines of code." Capers Jones had cited organizations that ranged on this measure from 50 to 0.2. This 250-to-1 range was compatible with what I found among my own clients who measured such things. What I found peculiar was that both the 50-defect clients and the 0.2-defect clients had established their own level as an acceptable standard.

Soon after noticing this pattern, I visited a company that was in the 150-defect range. I was fascinated with their manager's reaction when I told him about the 0.2-defect clients. First he simply denied that this could be true. When I documented it, he said: "Those people must be doing very simple projects, as you can see from their low defect rates."

When I showed that they were actually doing objectively harder projects, he reacted: "Well, it must cost them a lot more than we're able to pay for software." 

When I pointed out that it actually costs them less to produce higher quality, he politely decided not to contract for my services, saying: "Evidently, you don't understand our problem." 

Of course, I understood his problem only too well—and he was it. He believed he knew how to develop software, and he did—at an incredibly high cost to his users.

His belief closed his mind to the possibility of learning anything else about the subject. Nowadays, lots of managers know how to develop software—but they each know different things. One of the signs of immaturity is how insular we are, and how insecure we are with the idea of learning from other people.

Another sign of immaturity is the inability to transfer theoretical knowledge into practice. When I spill barbecue sauce on books, it's not because I think it will improve the books. I know perfectly well what I should do. But I can't seem to carry it out. When I was a teenage driver, I knew perfectly well I shouldn't have accidents, but on the way home from earning my driver's license, I smashed into a parked car. (I had been distracted by a teenage girl on the sidewalk.) I was immature because even though I knew better than to gawk at pretty girls while driving, I had an accident anyway. 

The simple fact was that we already knew hundreds of things about software development, but we were not putting those ideas into practice. Forty years later, we're still not putting many of them into practice. Why not? The principal reason is that our managers are often not very good at what they are supposed to do—managing. In Barry Boehm's studies, the one factor that stood above all the others as a cause of costly software was "poor management." Yet neither Boehm nor any of the other writers on estimating had anything to say on how to make good managers—or get rid of bad ones.

Better estimating of software development could give us a standard for detecting the terrible managers. At the same time, however, it may give us a standard behind which the merely mediocre managers can hide.

Using estimates well

So, if you want to be a better manager, how do you use estimates effectively?

Any estimate is based on a model of what you're estimating. The model will have a number of parameters that characterize the situation. In the case of estimating a software project, parameters of the estimate might include

• number of programming languages to be used

• experience level of development staff

• use of formal code reviews

• characteristic error rate per function point

• many other factors

Suppose, for example, the project has six teams, each of which prefers to use a different programming language. Up to now, you've tolerated this mixture of languages because you don't want to offend any of the teams. Your estimating model, however, tells you that reducing the number of languages will reduce risk and speed up the project. On the other hand, if you try to eliminate one of the languages, your model tells you that a team with less experience in a new language will increase risk. By exploring different values of these parameters, you can learn whether it's worthwhile to convert some of the teams to a common language.

To take another example, you've avoided using formal code reviews because you believe they will add time to the schedule. Studying your estimating tool, however, shows you that use of formal reviews will reduce the rate of errors reaching your testing team. The estimating model can then show you how the rate of errors influences the time spent testing to find and correct those errors.
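The kind of what-if exploration described in these two examples can be sketched in a few lines of code. Everything in this sketch is invented for illustration: the function, its parameters, and every coefficient are assumptions, not a real calibrated estimating model. A usable model would be fitted to your own organization's project history.

```python
# Toy estimating model -- illustrative assumptions only, not calibrated data.
# Parameters loosely follow the list above: number of languages,
# staff experience, and use of formal code reviews.

def estimate_months(kloc, languages, experience, formal_reviews):
    """Return (development_months, testing_months) for a project.

    kloc           -- estimated size in thousands of lines of code
    languages      -- number of programming languages in use
    experience     -- average staff experience with their language, in years
    formal_reviews -- whether formal code reviews are held
    """
    # Base productivity, degraded by language mixture and inexperience.
    base_rate = 2.0                        # KLOC per team-month (assumed)
    language_penalty = 1.0 + 0.1 * (languages - 1)
    experience_factor = max(0.5, min(1.0, experience / 5.0))
    dev_months = kloc * language_penalty / (base_rate * experience_factor)

    # Defects reaching the test team drive the testing schedule.
    defects_per_kloc = 20.0 * language_penalty
    if formal_reviews:
        dev_months *= 1.05                 # reviews add a little dev time...
        defects_per_kloc *= 0.4            # ...but catch defects early
    test_months = kloc * defects_per_kloc / 100.0   # 100 defects fixed/month

    return dev_months, test_months

# Explore the two trade-offs described above: fewer languages, and
# formal reviews versus no reviews.
for languages in (6, 2):
    for reviews in (False, True):
        dev, test = estimate_months(kloc=50, languages=languages,
                                    experience=4, formal_reviews=reviews)
        print(f"languages={languages} reviews={reviews}: "
              f"dev={dev:.1f} test={test:.1f} total={dev + test:.1f} months")
```

Running a grid of parameter values like this, even with a crude model, makes the trade-off visible: reviews lengthen development slightly but shrink testing, and the model shows whether the net effect is a shorter project.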


Many poor managers believe an estimating model is a tool for clubbing their workers over the head to make them work faster. Instead, it's a tool for clubbing yourself over the head, a guide to making wise large-scale management decisions.

Friday, January 20, 2012


WIGGLE Charts—A Sketching Tool for Designers
There's no sense being precise about something when you don't even know what you're talking about. - John von Neumann

For systems designers, it is the best of times and the worst of times. For years we muddled through with a few simple graphic tools for design and documentation—flowcharts, block diagrams, and perhaps decision tables. Then came the diagram explosion, with HIPO, HIPO/DB, Warnier-Orr diagrams, Softech's SADT, Nassi-Shneiderman charts, Petri nets, Constantine structure charts and data flow diagrams, Jackson data structure diagrams, and coding schemes. And for each of these diagrams, you need only bend a line or add a symbol to become known as the inventor of yet another graphic design tool.

Although the choice is large, it is really not very wide. Each of these diagrammatic schemes shares the characteristic of precision—wonderful when you know what you're talking about, but time-consuming and thought-stifling when you don't. And, since most design work is spent thinking roughly, few of these diagrams are of much help through large parts of the design process.

In other design fields, such as architecture, the rough sketch is the most frequently used graphic device, and precise detailed drawings are rarely used at all until the creative part of the design work is finished. The rough sketch has several advantages over the precise drawing:

1. It can be drawn much faster, thus using less time.

2. It represents less investment of time, so we're not afraid to throw it away and try something else.

3. Its very roughness conveys important information about where we are in the design process.

In information processing, rough sketches have always existed, but have never been glorified by a name or by favorable publicity. Schools of architecture offer courses in sketching. The student architect who makes clear quick sketches is much admired by faculty and peers alike. It's time we learned from more mature disciplines and put sketching up on a pedestal.

For many years, I've taught a method of sketching usable with most of the diagrammatic techniques now used in information processing. Although it's been received with enthusiasm, it's never received much publicity, perhaps because:

1. It doesn't require a template.

2. It doesn't have a name.

Although I'll continue to resist the template forces, I've decided to bring the baby to life with a catchy acronym, WIGGLE Charts, for Weinberg's Ideogram for Generating Graphics Lacking Exactitude.
A WIGGLE is merely a box, or block, or line, with one or more rough edges. The rough edges indicate what parts represented by the box or line are imprecisely known. For instance, the following figure is a sketch of a system using a block diagram form:

A WIGGLE block diagram
Each box represents input coming from the left, processing inside, and output going to the right. Box 1 has a straight line at its left side, indicating the input to Box 1 is clearly defined somewhere. The right side, however, is rough, indicating we haven't decided what its output will be. As indicated in the diagram, some output will be passed to a second box, but we don't know exactly what. The top and bottom of Box 1 are rough lines, indicating we don't know exactly what this process will be.

Box 2 has undefined input and output, but its process is well known to us, and clearly delimited in scope. Perhaps we have decided to use an off-the-shelf sort, though we don't know which one, so we haven't decided upon a record format.

Box 3 takes the unknown output of Box 2 as its unknown input. By a process that's not yet well defined, it produces two outputs, one well defined and one known only roughly. Perhaps the first report is defined by legal requirements, or by input needs of another system, while the second output is an error report whose format is left open at this stage of the design process. The rough arrows between the boxes indicate we haven't yet decided how control will pass from one box to another. They could be subroutines of the same master routine, or steps in the same job, or separate steps manually coordinated.
Taken together, these three WIGGLE boxes and their arrows give a sketch of the overall design we have in mind. Perhaps more important is what they don't do:

1. They don't give us or any reader an unjustified feeling of precision.

2. They don't intimidate anyone who has an idea about changing something to improve the design.

3. They haven't wasted a lot of time drawing with templates.


Perhaps the nicest feature of WIGGLE charts is the way they can be used with just about anybody's diagrammatic technique. In the second half of this blog post, we'll look at a few more examples of how WIGGLE charts can be used.
(to be continued)

Source
This material on WIGGLE charts is adapted from my book, Rethinking Systems Analysis and Design.

Saturday, July 23, 2011

Change Artist Challenge #5: Being The Catalyst

Look abroad thro' Nature's range.
Nature's mighty law is change.
- Robert Burns

Although change artists often work as prime movers, they more often work through understanding natural forces and creating slight perturbations of Nature. In this challenge, you will practice facilitating the change projects of others, using various ways of empowering from the position of catalyst. In chemistry, a catalyst is a substance that, added to a reaction, accelerates that reaction by its presence, without itself being changed by the reaction.

A human catalyst is someone who rouses the mind or spirits or incites others to activity with a minimum of self-involvement—in other words, by empowering others. For people to be empowered to change their organization, the MOI model tells us that the following ingredients are required:


Motivation
• self-esteem
• a value system and a vision held in common
• a sense of difference between perceived and desired

Organization
• mutuality of support, based on personal uniqueness
• a plan for reducing the perceived-desired difference
• a diversity of resources relevant to the plan


Information
• a systems understanding of what keeps things from changing
• an understanding of empowerment versus powerlessness
• continuing education appropriate to the tasks

Often, only a single ingredient is missing, but the person who doesn't know which one it is can feel completely disempowered. The recipe suggests which ingredient might be missing. A change artist who supplies that missing ingredient can catalyze change with minimal effort.


The Challenge

Your challenge is to facilitate other people's change projects, approximately one per week, for at least two weeks. You should attempt to be a catalyst for change, not the prime mover for change. To be a catalyst, you should involve yourself

• as effectively as possible

• in the smallest possible way

• without depleting your capacity to catalyze other changes

If possible, use each ingredient of this recipe for empowerment at least once. Keep notes in your journal and be prepared to share learnings with the group you are catalyzing.

Experiences
1. A group in the shipping department asked me to help them run their planning meetings. I said I would do it if they enrolled two people in our facilitation class, and that after taking the class, they would work alongside me. After one meeting, they are now facilitating their own.

2. I led a technical review of the design of a very controversial project, and apparently I did a good job because I got three other invitations to lead difficult reviews. I did lead two of them, but I decided to try being a catalyst on the third. I told them I wouldn't lead the meeting, but I would play shadow to a leader of their choice and we would switch roles if their leader got in trouble. She didn't.

3. One of my groups wasn't using—or even attempting to use—the new configuration control system. Ordinarily, I would have ordered them to use it, with threats of reprisals. I thought about the minimum thing I could do—with no force and no blaming—to get them moving. I decided to call them in for a meeting and hand the problem back to them. They told me they just didn't have time to switch their partially developed project to the new system. I asked them how much time they would need. They huddled and came up with a two-week extension to their schedule. (I had been afraid they would say two months.) Since they were off the critical path, I said they could have the two weeks, but only if they switched to the new system. They actually did the job in one week, and in the end, they made up four days of that—partly, at least, because of using the better tool. I've now used this consultation method several more times. "What would you need to give me what I need?" turns out to be a great catalyst. I like being a catalyst much more than being a dictator.

Source
These challenges are adapted from my ebook, Becoming a Change Artist, which can be obtained from most of the popular ebook vendors. See my website <http://www.geraldmweinberg.com> for links to all of my books at the major vendors.