Friday, May 18, 2018

Why I'm a native American

I don’t know if I have any particular kind of ancestry, but I often claim to be “native American.”

Why? I do so when some institution is “surveying” so-called “race,” which is a bogus concept to begin with. See 

Man's Most Dangerous Myth: The Fallacy of Race, by Ashley Montagu.


People say I should choose “White,” but I’m definitely not white. My skin is pinkish yellow, or yellowish pink, and it grows red and brown when I’m out in the sun. I can’t imagine why its color would be important to anyone, except maybe a fashion consultant.

So, when surveyed, I choose “native American (small n)” because I was definitely born in America, so I’m a native. It’s a protest. I would be proud to be a Native American (capital N), but as far as I know, I have no such ancestry.

An interesting sidelight. Years ago when the university insisted I make a “race” choice, they assured me that the information was completely confidential. A year later, when I returned from a trip out of town, I found a note on my desk from Russell Means, a prominent Native American who had visited the university.

I wondered why he would write a personal note to me, until I found out that the administration had sent him to see me, their token “Native American Professor.”

So much for confidentiality. So much for the trustworthiness of bureaucrats.


In such a world, I shall remain “native American,” and I hope Elizabeth Warren and other smart people follow my example. 

Saturday, May 05, 2018

What is the difference between a good manager and a bad manager?

A previous blog of mine asked about good and bad managers, so the question naturally came up: what's the difference?

There are, of course, many ways to be a bad manager, or a good manager. But if I’m asked for a single difference (and the question did use the singular), I’d say that the First Law of Bad Management is this:

If what you’re doing isn’t working, do more of it, faster and louder.


For more on good vs. bad management, take a look at 

Saturday, April 28, 2018

Stumbles for New Leaders and Managers


We were asked, "What were your greatest stumbling blocks as a new manager?"

Paige’s article is a terrific introduction to this subject. 

These are the four rookie manager mistakes described in the article:

Rookie mistake #1: Creating a blanket policy for one bad apple

Rookie mistake #2: Embracing the mantra, “do as I say, not as I do”

Rookie mistake #3: Fixing things that aren’t really broken

Rookie mistake #4: Not taking an interest in your employees’ futures

In my career, I’ve made all four of those mistakes, and lots of others. But the one I most remember, and most regret, is micromanaging.

Somehow, I couldn’t believe that other people could solve problems as effectively as I (thought I) could. My mantra was something like “for your own good,” or “for the organization’s good.”

It took me far too long to learn that other people’s solutions were simply different solutions from mine. Some might be worse than mine. Some might even be better. But most of all, they usually solved whatever problems we were dealing with. There was no need for me to push in with my approach.

I’ve gradually learned to reduce this micromanaging behavior. (I’ve never learned to stop completely.) As I’ve succeeded, I’ve noticed:

* people learn faster when allowed to make their own mistakes

* people listen to me more attentively on those few occasions when I do intervene

* I have more time for doing my own job

I strongly suggest that you loosen your grip on your own ideas and allow your employees and co-workers to implement theirs.


Wednesday, April 25, 2018

Leaders: Smart-but-evil versus Dull-but-good? 

Who's better in a leadership position, a smart but evil person or an unintelligent but good person?

There are different kinds of unintelligent people. For one thing, some not-so-smart folks know they’re not so smart and have learned some simple tactics to cope with their inadequate intelligence.

For instance, in managing software, such managers will refrain from micromanaging their programmers, whereas the smart-evil person is quite likely to interfere with the development and testing work.

What you want in a manager is a person who knows how and when to delegate, understands their own limitations, and cares about improving the environment for all the people on the staff. You don’t have to be all that smart to do that.

But if you are an evil person, your intelligence may be serving the wrong master. It may happen that your intelligent moves help your employees, but that’s not what you’re attempting to do, so it’s hit or miss.




Thursday, April 19, 2018

Life hacks for introverts?

I was asked, "What are the best life hacks for extremely introverted people?"

Call it a hack if you like, but what helped me most in being an introvert is realizing that we (in the USA, at least) live in a society that (mistakenly) believes there’s something wrong with being an introvert.

Here’s a typical example. In Olympic diving, Greg Louganis won 4 gold medals. As he emerged from the pool after winning his fourth, the TV announcer said something like, “Isn’t that amazing. And he’s an introvert!” As if introversion somehow cripples a person so he cannot dive.

Electroencephalographic studies have shown that introversion is a physical characteristic, so introverts can be identified by their brain waves. So, if you’re an introvert, you might as well accept it. If you don't accept your introversion as being just fine, then you're like a person who thinks there’s something wrong with their genetic skin color, just because they live in a prejudiced culture.

You do live in a prejudiced culture—prejudiced against introversion—but you don’t have to internalize that (or any) societal prejudice.

Don’t listen to people who advise you how to change, unless you want to change. And, then, if you want to change your behavior in some way, try some of their “hacks.”

Take me, for example. I'm a strong, strong introvert, but my training and consulting business required me to be out interacting with lots and lots of people. Few of them suspected my introvert side because I concentrated on my people-loving side whenever I was with others. My introversion was temporarily put on the back-burner, to be brought out again when I found the opportunity to be alone.

So, if you're like me, you could try my "hack" and see how it works for you. Just don’t be down on yourself if someone else's hacks don’t work the way you wanted.


You can do whatever you want, whether you’re an introvert or an extravert. Remember, personality is not destiny.


Monday, April 09, 2018

How can I become a software developer who only designs?

A young programmer asked me, "How can I become a software developer who only designs the whole software architecture and gives instructions to other developers rather than actually coding by myself?"

I told him he could do that right away, as long as he didn’t care if the other developers listened to his instructions and followed them. And if he didn’t care whether he was paid.

In general, software developers will not follow the lead of someone for whose designs they have no respect. And why would they respect your designs unless you had previously proved yourself by what you’ve built?

So, build things, and build more things, until you demonstrate that others have some reason to follow your lead.


And, at the same time, work on your people skills, because even if you’re the greatest designer in the world, if you’re a self-centered jerk, nobody will follow you or your designs.

For more on designing, see

Monday, March 26, 2018

March Madness or December Dementia

Every March, the USA goes wild with something called "March Madness," a pair of college basketball tournaments. The format of the tournaments is called "single elimination," which means that candidate teams are dropped out of the tournament when they lose. Eventually, one team remains, and they are the "winners."

So, what is the overall effect of this type of tournament? The men's tournament starts with 64 of the best teams in the nation, and when all is done, 63 of them have become "losers." No matter how good their season's record may have been, they ended that season with a loss—something to remain on their minds until next year.

I enjoy watching March Madness, but I think it could be improved. At least, there could be another tournament with a much more satisfactory result. Here's how it would go:

First, we choose the 64 worst teams of the season. Then we pair them off to play one another. The winner of each game is dropped out of the tournament, and the losers are paired for the next round. 

This elimination of winners continues until only one team remains. They are the winners of the tournament. And notice that every other team has ended their season with a victory. Doesn't that feel much better?
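Purely as an illustration, here's a toy Python sketch of that losers-advance bracket; the team names and the coin-flip "games" are, of course, made up.

```python
import random

def december_dementia(teams):
    # Losers advance; winners are eliminated. The last team standing
    # "wins" the tournament, and every other team ends on a victory.
    remaining = list(teams)
    while len(remaining) > 1:
        random.shuffle(remaining)
        next_round = []
        for a, b in zip(remaining[0::2], remaining[1::2]):
            loser = a if random.random() < 0.5 else b  # coin-flip "game"
            next_round.append(loser)  # only the loser moves on
        remaining = next_round
    return remaining[0]

print(december_dementia([f"Team {i}" for i in range(1, 65)]))
```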

Now perhaps this sounds like a stupid idea, but in fact it's extremely popular in the business world. Managers devise award systems that select one or a few individuals (rarely teams) as "winners." In doing so, they have managed to make everyone else feel like "losers."

I guess the theory is that these "losers" will be motivated to work harder or smarter for the next award cycle, and I suppose that sometimes that happens. What I've seen, however, is quite the opposite. Most people respond to "losing" by losing their motivation in the next round.


I've noticed that many of these management awards are given at the end of each year. Maybe a few smart managers could come up with not a March Madness but a December Dementia system that would positively motivate all their employees.

www.geraldmweinberg.com

Saturday, March 24, 2018

How do I fix a really difficult bug in programming?

Here was the question:

"How do I fix a really difficult bug in programming?"

Here was my first answer:

There is no such thing as a “difficult bug.”

I suspect my answer requires further explanation. First of all, I doubt that you have experienced actual bugs in your computer, the kind with 8 legs that bite and swarm. I have, a couple of times, but they are rare, and usually not difficult to eradicate.

Perhaps you are talking about errors, but using inaccurate language. In that case, I will assert “there is no such thing as a difficult error.” The same error might be handled easily by a different person. I have seen that circumstance often. For instance, I once spent a month trying to pinpoint a coding error. When I finally asked the help of a colleague, she found it in less than two minutes.

No, there are no difficult errors, but there are people who have difficulty with an error. We have all been there, and we tend to want to blame the error rather than ourselves.

So, the first thing you need to do to handle a “difficult bug” is to ask yourself,

“What is it about me that is making this error so difficult to handle?”

Perhaps you are having difficulty because you are impatient, or think failure to handle the error will make you look bad to your boss or colleagues.

Perhaps pressure to handle the error is throwing you off your center, distorting your thinking.

Perhaps you do not know enough about the system with the error, or the language in which the program is written.

Perhaps your mind is on other things in your life, things distracting you because they are more important to you than this darn “bug.”

Maybe you should discuss this error with a colleague or two. What is it about you that is keeping you from doing that?


Anyway, good luck in your quest for resolution.

Tuesday, February 13, 2018

My most insidious bug

I was asked, "As a coder, what is the most insidious bug you have ever come across, and how did you find it?"

It’s really hard to pick one error out of hundreds I’ve encountered in my long career, but some of the toughest have been:

  • compiler errors, where the compiler has created object code incorrectly. We usually found these by hacking around, changing the source code to express the program in different ways, or by examining the object code the compiler had produced.
  • hardware errors, both from the failure of a component and an actual design error in the hardware. Such errors are not as frequent today as they were in the old days of vacuum tubes (or relays), but in a way that infrequency makes them all the more difficult when they do occur, because we have so little experience with them.
  • requirements errors, where the program has actually solved the wrong problem. These errors can usually be found only after users have been in contact with the code for some time, and only when there is some communication channel between the users and the programmers.
So, what were your most insidious errors?

You can read more about errors and their consequences in

Errors: Bugs, Boo-Boos, and Blunders (https://leanpub.com/errors)

Saturday, January 06, 2018

New: #System #Design #Heuristics

You'd think that after publishing books for half a century, I'd know how to write a book. If that's what you think, you'd be wrong.

Sure, I've even written a book on writing books (Weinberg on Writing, the Fieldstone Method), and I've applied those methods to dozens of successful books. But way back around 1960, I started collecting notes on the process of design, thinking I would shortly gather them into a book. Back then, I didn't call these bits and pieces "fieldstones," but that's what they turned out to be: the pieces that, when assembled properly, would ultimately become my design book.

Ultimately? Assembled properly? Aye, there's the rub!

Building walls from randomly found fieldstones requires patience. So does writing books by the Fieldstone Method. My Introduction to General Systems Thinking took fourteen years to write. But a writer only lives one lifetime, so there's a limit to patience. I'm growing old, and I'm beginning to think that fifty years is as close to "ultimately" as I'm going to get.

So, I've begun to tackle the task of properly assembling my collection of design fieldstones. Unfortunately, it's a much larger collection than I'd ever tackled before. My Mac tells me I have more than 36,000,000 digitized bytes of notes. My filing cabinets told me I had more than twenty-five pounds of paper notes, but I've managed to digitize some of them and discard others, so there's only a bit more than ten pounds left to consider.

For the past couple of years, I've periodically perused these fieldstones and tried to assemble them "properly." I just can't seem to do it. I'm stuck.

Some writers would say I am suffering from "writer's block," but I believe "writer's block" is a myth. I've published three other books in these frustrating years, so I can't be "blocked" as a writer, but just over this specific design book. You can hear me talk more about the Writer's Block myth on YouTube 

[https://www.youtube.com/watch?v=77xrdj9YH3M&t=7s]

but the short version is that "blocking" is simply a lack of ideas about how to write. I finally decided to take my own advice and conjure up some new ideas about how to write this design book.

Why I Was Stuck

To properly assemble a fieldstone pile, I always need an "organizing principle." For instance, my recent book, Do You Want to Be a (Better) Manager? is organized around the principle of better management. Or, for my book, Errors, the principle is actually the title. So, I had been thinking the organizing principle for a book on design ought to be Design.

Well, that seemed simple enough, but there was a problem. Everybody seemed to know what design is, but nobody seemed able to give a clear, consistent definition that covered all my notes. I finally came to the conclusion that's because "design" is not one thing, but many, many different things.

In the past, I ran a forum (SHAPE: Software as a Human Activity Practiced Effectively) whose members were among the most skilled software professionals in the world. We held a number of threads on the subject, "What is Design?" The result was several hundred pages of brilliant thoughts about design, all of which were correct in some context. But many of them were contradictory.

Some said design was a bottom-up process, but others asserted it was top-down. Still others talked about some kind of sideways process, and there were several of these. Some argued for an intuitive process, but others laid out an algorithmic, step-by-step process. There were many other variations: designs as imagined (intentional designs), designs as implemented, and designs as evolved in the world. All in all, there were simply too many organizing principles—certainly too many to compress into a title, let alone organize an entire book.

After two years of fumbling, I finally came up with an idea that couldn't have been implemented fifty years ago: the book will be composed of a variety of those consulting ideas that have been most helpful to my clients' designers. I will make no attempt (or very little) to organize them, but release them incrementally in an ever-growing ebook titled Design Heuristics.

How to Buy System Design Heuristics

My plan for offering the book is actually an old one, using a new technology. More than a century ago, Charles Dickens released many of his immortal novels one chapter at a time in weekly newspaper installments. Today, using the internet, I will release System Design Heuristics a single element at a time to subscribing readers.

To subscribe to the book, including all future additions, a reader will make a one-time payment. The price will be quite low when the collection is small, but will grow as the collection grows. That way, early subscribers will receive a bargain in compensation for the risk of an unknown future. Hopefully, however, even the small first collection will be worth the price. (If not, there will be a full money-back guarantee.)

Good designs tend to have unexpected benefits. When I first thought of this design, I didn't realize that it would allow readers to contribute ideas that I might incorporate in each new release. Now I'm aware of that potential benefit, and I look forward to it.

Before I upload the first increment of System Design Heuristics, I'll wait a short while for feedback on this idea from my readers. If you'd like to tell me something about the plan, email me or write a comment on this blog.

Thanks for listening. Tell me what you think.

Sunday, December 31, 2017

What is Software?


It's a new year, so let's start out with something fundamental: cleaning up something that's bothered me for many years.

The other day I was lunching with a computer-naive friend who asked, "What is software?"

Seems like it would be an easy question for those of us who make and break software for a living, but I had to think carefully to come up with an explanation that she could understand:

Software is that part of a computer system that adapts the machinery to various different uses. For instance, with the same computer, but different software, you could play a game, compute your taxes, write a letter or a book, or obtain answers to your questions about dating.

I then explained to her that it’s unfortunate that early in the history of computers this function was given the name “software,” in contrast to “hardware.” What it should have been called was “flexibleware.”

Unfortunately the term “soft” has been interpreted by many to mean “easy,” which is exactly wrong. Don't be fooled. 
What we call “hardware” should have been called “easyware,” and what we call “software” could then have been appropriately called “difficultware.”

Monday, December 25, 2017

Unnecessary Code

We were asked, "How can I tell if my code does extra unnecessary work?"
To answer this question well, I’d need to know what you mean by “unnecessary.” Not knowing your meaning, I’ll just mention one kind of code I would consider unnecessary: code that makes your program run slower than necessary but can be replaced with faster code.

To rid your program of such unnecessary code, start by timing the program’s operations. If it’s fast enough, then you’re done. You have no unnecessary code of this type.

If it’s not fast enough, then you’ll want to run a profiler that shows where the time is being spent. Then you examine those hot spots (only one of them can consume more than half the total time) and work them over, looking first at the design.
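If it helps, here's a minimal sketch of that workflow using Python's built-in cProfile; the slow_report function is a hypothetical stand-in for whatever operation you're timing.

```python
import cProfile
import pstats

def slow_report(data):
    # Hypothetical stand-in for the operation being timed.
    return sorted(str(x) for x in data)

profiler = cProfile.Profile()
profiler.enable()
slow_report(range(100_000))
profiler.disable()

# List the functions that consumed the most cumulative time;
# the design work starts wherever the big numbers are.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```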

There’s one situation I’ve encountered where this approach can bring you trouble. Code that’s fast enough with some sets of data may be unreasonably slow with other sets. The most frequent case of this kind is when the algorithm’s time grows non-linearly with the size of the data. To prevent this kind of unnecessary code, you must do your performance testing with (possibly artificially) large data sets.
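As a sketch of that kind of check, you might time the same routine at several data sizes and watch how the cost grows; the process function here is a made-up example with quadratic behavior that hides at small sizes.

```python
import time

def process(data):
    # Made-up example: quadratic work that looks harmless on small inputs.
    return [x for x in data for y in data if x == y]

for n in (500, 1_000, 2_000, 4_000):
    sample = list(range(n))
    start = time.perf_counter()
    process(sample)
    print(f"n={n:>5}  {time.perf_counter() - start:.3f}s")

# If doubling n roughly quadruples the time, the growth is quadratic,
# and "fast enough" on small test data tells you very little.
```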

Paradoxically, though, some algorithms are faster with large data sets than small ones.

Here’s a striking example: My wife, Dani, wanted to generate tests in her large Anthropology class. She wanted to give all students the same test, but she wanted the questions for each student to be given in a random order, to prevent cheating by peeking. She gave 20 questions to a programmer who said he already had a program that would do that job. The program, however, seemed to fall into an unending loop. Closer examination eventually showed that it wasn't an infinite loop, but would have finally ended about the same time the Sun ran out of hydrogen to burn.

Here’s what happened: The program was originally built to select random test questions from a large (500+ questions) data base. The algorithm would construct a test candidate by choosing, say, twenty questions at random, then checking the twenty to see if there were any duplicates among those chosen. If there were duplicates, the program would discard that test candidate and construct another.

With a 500 question data base, there was very little chance that twenty questions chosen at random would contain a duplicate. It could happen, but throwing out a few test candidates didn’t materially affect performance. But, when the data base had only twenty questions, and all Dani wanted was to randomize the order of the questions, the situation was quite different.

Choosing twenty from twenty at random (with replacement) was VERY likely to produce duplicates, so virtually every candidate was discarded, but the program just ground away, trying to find that rare set of twenty without duplicates.
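Here, in Python, is my reconstruction of roughly what the two approaches look like; the original program is long gone, so treat this as an illustration, not the actual code.

```python
import random

def random_test_by_rejection(questions, k):
    # The original scheme, as I understand it: draw k questions with
    # replacement, then reject any candidate that contains duplicates.
    # Fine when drawing 20 from 500+; hopeless when drawing 20 from 20.
    while True:
        candidate = [random.choice(questions) for _ in range(k)]
        if len(set(candidate)) == k:
            return candidate

def random_order_test(questions):
    # What Dani actually needed: the same questions in a random order.
    ordering = list(questions)
    random.shuffle(ordering)  # one pass, no rejection, no waiting
    return ordering
```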

As an exercise, you might want to figure out the probability of a non-duplicate set of twenty. Indeed, that’s an outstanding way to eliminate unnecessary code: by analyzing your algorithm before coding it.
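If you'd like to check your answer to that exercise, a couple of lines of arithmetic will do it:

```python
from math import factorial

n = 20
# Probability that 20 draws with replacement from 20 questions are all distinct.
p = factorial(n) / n**n
print(f"{p:.2e}")       # roughly 2.3e-08
print(f"{1/p:,.0f}")    # roughly 43 million candidates expected before one succeeds
```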

Over the years, I’ve seen many other things you might consider unnecessary, but which do no harm except to the reputation of the programmer. For example:
* Setting a value that’s already set.
* Sorting a table that’s already sorted.
* Testing a variable that can have only one value.
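A few toy lines of Python, just to make those three patterns concrete:

```python
status = "ready"
status = "ready"               # setting a value that's already set

names = ["Lee", "Ada", "Kim"]
names.sort()
names.sort()                   # sorting a table that's already sorted

DEBUG = False                  # never reassigned anywhere in the program
if DEBUG:                      # testing a variable that can have only one value
    print("extra tracing")
```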

These redundancies are found by reading the program, and may be important for another reason besides performance. Such idiotic pieces of code may be indications that the code was written carelessly, or perhaps modified by someone without full understanding. In such cases, there’s quite likely to be an error nearby, so don’t just ignore them.


Wednesday, December 20, 2017

Which code is more readable?

We were asked, "Which code is more readable, one that uses longer variable names or short ones?" 

Maybe some historical perspective will help answer this question.

In the very early days of computing (I was there), we used short variable names because:

* Programs were fairly short and simple, so scope wasn’t much of a problem.

* Memories were small, so programmers didn’t want to waste memory with long names.

* Compilers and assemblers were slow, and long names made them slower.

* Many compilers and assemblers wouldn’t allow names longer than a few characters, because of speed and memory limitations.

* We didn’t think much, if at all, about who would maintain a program once it left the hands of the original programmer.

As programs grew larger, one result of short naming was difficult maintenance, so the movement toward longer names grew stronger. It wasn’t helped by COBOL, whose designers asserted that executives should be able to read code. Lots of COBOL code was littered with super-long names, but that didn't help executives read it.

The COBOL argument proved to be nonsense. Still, the maintenance argument for longer, more descriptive names made sense.

Unfortunately, like many movements, the long-name movement went too far, at least for my taste. The problem wasn’t that long names were harder to write. After all, a typical program is written once, but read for modification and testing many, many times. So, if long names really made reading easier and more reliable, the extra writing was a good trade.

But the length of a name is not really the issue. I’ve seen many programs with long, long names that were so similar that they were easily confused, one with another. For instance, we once wasted many days trying to find an error when the name radar_data_station_#46395_azimuth_reading was mistaken for radar_data_station_#46895_azimuth_reading. Psychologists and writers know well that items in the middle of long lists are frequently glossed over.
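As a contrived Python sketch of the trap (the "#" in the original names isn't legal in most identifiers, so I've dropped it): the distinguishing digits hide in the middle of the name, exactly where readers gloss over them, and one alternative is to let a data structure carry the repeated part.

```python
# Near-twin names: the only difference is buried in the middle.
radar_data_station_46395_azimuth_reading = 118.4
radar_data_station_46895_azimuth_reading = 241.7

# One alternative: write the descriptive part once and make the
# distinguishing part impossible to miss.
azimuth_reading_by_station = {
    46395: 118.4,
    46895: 241.7,
}
```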

So, like lots of other things in software development, long versus short names becomes a tradeoff, a design decision for a programmer for which there is no “right” answer. Programmers must design their name-sets with the same kind of engineering thought they put into all their design decisions.

And, as maintainers modify a program, they must maintain the name-set, so as to avoid building up design debt as the program ages.

So, sorry, there’s no easy answer to this question, nothing a programmer can apply mindlessly. Just as it’s always been, programmers who think will do a better job than those who blindly follow simplistic rules.



Saturday, December 16, 2017

My First Week in a Software Job

We were asked, "What was your first week like at your first software engineering job?"

In June, 1955, I went to work for IBM in San Francisco. Of course, at that time there was no such thing as "software engineering." In fact, there was no such thing as a "programmer." My title was "Applied Science Representative." I was supposed to apply science to the sale of IBM computers.

I was told that in two weeks I was to teach a course in programming the IBM 650.

That presented a few problems.

  • I had never programmed any computer before.

  • Nobody in the IBM office had ever programmed a computer before.

  • Nobody in the IBM office had ever seen a computer before.

  • There was no computer in the office—just a bunch of punch card machines.

  • In fact, as far as we knew, there was no computer in San Francisco.

I spent the next two weeks in a closet in the IBM office studying all the IBM manuals that were stored there, preparing myself to teach this course. I was pretty much a lone ranger, without the horse or any faithful Indian companion. Actually, no companion at all.

That was over 60 years ago, and now I have a multitude of companions. Even so, it was a special time and an unforgettable first two weeks, so thank you for asking this question.

If you want to know more about what it was like in those thrilling days of yesteryear, you should follow Danny Faught's blog. Back then, we used to listen to the Lone Ranger on radio (there wasn't much, if any, television).

"Hi-Yo, Silver! A fiery horse with the speed of light, a cloud of dust and a hearty ‘Hi-Yo Silver'... The Lone Ranger! With his faithful Indian companion, Tonto, the daring and resourceful masked rider of the plains led the fight for law and order in the early Western United States. Nowhere in the pages of history can one find a greater champion of justice. Return with us now to those thrilling days of yesteryear. From out of the past come the thundering hoof-beats of the great horse Silver. The Lone Ranger rides again!"


http://www.geraldmweinberg.com (Formerly The Lone Programmer)