
Sunday, July 15, 2018

What motivated you to learn to code?

I received an interesting question, today: "What motivated you to learn to code?"

Maybe it's old age, but my memory for ancient events seems to have improved. I remember quite clearly those days, back in the 1950s, when I worked as a "computer." That was my job title, computer. I did physics calculations with pencil and paper—and an eraser.

At that time, I had never seen a computing machine, nor had I ever met another person who had seen one.

When I left graduate school, I went to work for IBM in San Francisco. Nobody else in the IBM office had ever seen a computing machine, at least not a stored program one. We had a machine with 10 wireable instructions (IBM 604) and one 10-digit word of data storage. I was motivated to learn to code that machine by a one-dollar bet that I could turn on all the lights on the console. I won the bet.            

The first stored program machine (IBM 650, with 1,000 words of drum memory) was due in the IBM office two weeks after I started there. I was given the assignment of learning how to program it, as nobody else in the office had a clue. I learned to code by reading the machine manuals for two weeks.

Then, two weeks after the machine arrived, I had to teach a programming class to three other new hires, so that assignment motivated me, too. When the machine arrived, I was the only one who dared touch it for two weeks, so I wrote programs for the sort of calculations I had been doing in my job as a computer.


It was the thrill of a lifetime.


Saturday, July 07, 2018

What were some jobs that existed 50 years ago but have largely disappeared today?

We often hear that we're in a time of change, but this observation isn't really news. We've been in a time of change for my whole lifetime, and well before that. Many jobs that once existed are no longer available, and many have even disappeared from memory.

We were challenged recently to recall some jobs that have disappeared in the past 50 years, and it was great fun reading all the answers, many of which described jobs I once held back in my youth. I go back a bit more than 50 years, though, so I have a few more to add.

The first, most obvious omission that popped into my mind was the iceman. In the 1930s, my family had an icebox (not a refrigerator, but an actual box that held a block of ice). The iceman’s horse-drawn wagon would come around and be surrounded by us kids, hoping to get the free shards of ice created when he cut up little blocks to fit our iceboxes.

Another job only briefly mentioned was typesetting. I never held that job, but I was trained for manual typesetting for a semester in high school. At least I know where terms like upper-case and lower-case come from.

Someone also mentioned keypunch operator, a task (not a job) that was often done by prisoners who were literally chained to their keypunch machines. What wasn't mentioned, however, were key verifier operators. Not many people today have ever seen a verifier machine, let alone know what one was.

Even before my time, there were jobs that disappeared, but which I read about in a nineteenth century book about jobs for women. The final two chapters in the book were about a couple of sure-fire women’s jobs for the future (1900 was then the future).

First chapter was about telegraph operators. The chapter “proved” that there was a great future for women because they could operate a telegraph key at least as fast as men (and the telephone had yet to be invented).

Second chapter was about picture tinters. There was, of course, no color photography, and it wasn’t really even conceived of. Women were supposedly much better at coloring photos because of their “artistic bent” and their more delicate hands. Though there are a few photo tinters still around today for special jobs, it’s not a career with a great future.

It's fun to think about these forgotten jobs, but they're also a source of important knowledge, or perhaps even wisdom. Job disappearance is not some new phenomenon caused by computers. It's always gone on throughout history. True, some jobs lasted a long time, so long that they were passed down from generation to generation, even becoming family names, such as Smith, Turner, Eisenhower, Baker, and Miller. (See, for example, <Meaning of Surnames> for hundreds of examples.)

Some of those jobs still exist, though often modified by new technology. Do you still recognize Fuller, Chandler, or Ackerman? And many others have largely disappeared, remaining only in some special niche, like photo tinters. Do you know anybody named Armbruster who still makes crossbows? Well, you probably know a few Coopers, but how many of them still make barrels?

So, what's the lesson for your own future? If you're as old as I am, you probably don't have to worry about your job disappearing, but even my "job" as a writer is changing rapidly with new technology. Even if your type of job doesn't disappear entirely, you will be faced with changes.

I think your preparation for job changes will be the same as your preparation for changed jobs: increased adaptability. Today's market tends to reward specialization, but when you become totally specialized, you become the victim of change. Think what's happened to all those COBOL experts from a few years ago.

I'd suggest that you take advantage of the rewards of specialization but invest a small percentage of your time in learning something new. Always. Keep your mind flexible for a future none of us can predict.

p.s. Minutes after I posted this blog, several readers wrote:

Your first job, "computer," also disappeared. How long was that job around? (Kind of surprised you did not mention it in the blog post.)
----
Well, that shows I'm a human being. What's that saying about shoemakers' children going barefoot? It never occurred to me to consider my 'computer' job as one that disappeared, but of course it has been largely taken over by machines. Thanks, readers.

Oh, and some more, including switchboard operator, another job I had.

Maybe you folks could add more via comments here.

Sunday, December 31, 2017

What is Software?


It's a new year, so let's start out with something fundamental, cleaning up something that's bothered me for many years.

The other day I was lunching with a computer-naive friend who asked, "What is software?"

Seems like it would be an easy question for those of us who make and break software for a living, but I had to think carefully to come up with an explanation that she could understand:

Software is that part of a computer system that adapts the machinery to many different uses. For instance, with the same computer, but different software, you could play a game, compute your taxes, write a letter or a book, or obtain answers to your questions about dating.

I then explained to her that it’s unfortunate that early in the history of computers this function was given the name “software,” in contrast to “hardware.” What it should have been called was “flexibleware.”

Unfortunately, the term “soft” has been interpreted by many to mean “easy,” which is exactly wrong. Don't be fooled. What we call “hardware” should have been called “easyware,” and what we call “software” could then have been appropriately called “difficultware.”

Wednesday, December 20, 2017

Which code is more readable?

We were asked, "Which code is more readable, one that uses longer variable names or short ones?" 

Maybe some historical perspective will help answer this question.

In the very early days of computing (I was there), we used short variable names because:

  • Programs were fairly short and simple, so scope wasn’t much of a problem.

  • Memories were small, so programmers didn’t want to waste memory with long names.

  • Compilers and assemblers were slow, and long names made them slower.

  • Many compilers and assemblers wouldn’t allow names longer than a few characters, because of speed and memory limitations.

  • We didn’t think much, if at all, about who would maintain a program once it left the hands of the original programmer.

As programs grew larger, one result of short naming was difficult maintenance, so the movement toward longer names grew stronger. It wasn’t helped by COBOL, whose promoters asserted that executives should be able to read code. Lots of COBOL code was littered with super-long names, but that didn't help executives read it.

The COBOL argument proved to be nonsense. Still, the maintenance argument for longer, more descriptive names made sense.

Unfortunately, like many movements, the long-name movement went too far, at least for my taste. It wasn’t because long names were harder to write. After all, a typical program is written once but read for modification and testing many, many times. So, if long names really made reading easier and more reliable, that was good.

But the length of a name is not really the issue. I’ve seen many programs with long, long names that were so similar that they were easily confused, one with another. For instance, we once wasted many days trying to find an error when the name radar_data_station_#46395_azimuth_reading was mistaken for radar_data_station_#46895_azimuth_reading. Psychologists and writers know well that items in the middle of long lists are frequently glossed over.
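To make the hazard concrete, here's a minimal, hypothetical sketch in Python (the station numbers and names are invented, and the "#" from the anecdote's names is dropped because it isn't legal in identifiers). Both names are perfectly legal, so no tool flags the mix-up; only the wrong output does:

    # Hypothetical illustration of two easily confused long names.
    radar_data_station_46395_azimuth_reading = 127.4   # the station we want
    radar_data_station_46895_azimuth_reading = 301.9   # a different station

    def corrected_azimuth(azimuth_degrees):
        """Apply a made-up calibration offset to an azimuth reading."""
        return (azimuth_degrees + 2.5) % 360

    # The bug: 46895 typed where 46395 was intended. Readers gloss over
    # the middle of long names, so reviews rarely catch this.
    print(corrected_azimuth(radar_data_station_46895_azimuth_reading))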

So, like lots of other things in software development, long versus short names becomes a tradeoff, a design decision for a programmer for which there is no “right” answer. Programmers must design their name-sets with the same kind of engineering thought they put into all their design decisions.

And, as maintainers modify a program, they must maintain the name-set, so as to avoid building up design debt as the program ages.

So, sorry, there’s no easy answer to this question, nothing a programmer can apply mindlessly. Just as it’s always been, programmers who think will do a better job than those who blindly follow simplistic rules.



Saturday, December 16, 2017

My First Week in a Software Job

We were asked, "What was your first week like at your first software engineering job?"

In June, 1955, I went to work for IBM in San Francisco. Of course, at that time there was no such thing as "software engineering." In fact, there was no such thing as a "programmer." My title was "Applied Science Representative." I was supposed to apply science to the sale of IBM computers.

I was told that in two weeks I was to teach a course in programming the IBM 650.

That presented a few problems.

  • I had never programmed any computer before.

  • Nobody in the IBM office had ever programmed a computer before.

  • Nobody in the IBM office had ever seen a computer before.

  • There was no computer in the office—just a bunch of punch card machines.

  • In fact, as far as we knew, there was no computer in San Francisco.

I spent the next two weeks in a closet in the IBM office studying all the IBM manuals that were stored there, preparing myself to teach this course. I was pretty much a lone ranger, without the horse or any faithful Indian companion. Actually, no companion at all.

That was over 60 years ago, and now I have a multitude of companions. Even so, it was a special time and an unforgettable first two weeks, so thank you for asking this question.

If you want to know more about what it was like in those thrilling days of yesteryear, you should follow Danny Faught's blog. Back then, we used to listen to the Lone Ranger on radio (there wasn't much, if any, television).

"Hi-Yo, Silver! A fiery horse with the speed of light, a cloud of dust and a hearty ‘Hi-Yo Silver'... The Lone Ranger! With his faithful Indian companion, Tonto, the daring and resourceful masked rider of the plains led the fight for law and order in the early Western United States. Nowhere in the pages of history can one find a greater champion of justice. Return with us now to those thrilling days of yesteryear. From out of the past come the thundering hoof-beats of the great horse Silver. The Lone Ranger rides again!"


http://www.geraldmweinberg.com (Formerly The Lone Programmer)

Monday, October 23, 2017

Where do old programmers go?

As far as I can tell, I’m the oldest old programmer to answer this question so far. I’m so old that the title “programmer” didn’t even exist when I started.

I celebrate my 84th birthday this week, and as far as I know, most of the programmers who were around under various titles when I started (in 1956, maybe 20 of us in the USA) are now dead. I hope they’ve gone to heaven (the cloud?).

Myself, I gradually ceased writing code for money and transitioned to training younger people to be outstanding professional programmers. I still write lots of code for my own use and amusement and learning, but it’s been at least 40 years since I could tolerate writing code for a boss who didn’t understand what programming was all about.

I’ve earned multiple livings as consultant, teacher, and writer. Always about programming, but more about design rather than coding details as the years went by. If you’re good, you can do any of these things even at advanced age, but you can’t just sit around waiting for someone to find you.

If you’re not good, than either get good (it’s never too late) or retire. We don’t need mediocre programmers, and we never did.


Monday, February 20, 2017

How Long Can I Remain a [Ruby, Java, C++, Python, …] Programmer?

Several respondents to an earlier post have asked me about the future prospects for workers in one programming language or another. Here's my best answer.

As others have said, "I can predict anything but the future." But also others have said that the only things we know about the future are what we know from the past. Therefore, you might get some idea of your future as a [Ruby …] programmer from the answers to a recent Quora question, "What were some jobs which existed 50 years ago but have largely disappeared today?"

It was great fun reading all these answers, many of which described jobs I held back then. I go back a bit more than 50 years, though, so I have a few more to add. The most obvious omission was the iceman. In the 1930s, we had an icebox (not a refrigerator, but an actual box that held a block of ice). The iceman’s horse-drawn wagon would come around regularly and be surrounded by us kids, hoping to get the free shards of ice created when he cut up little blocks to fit our iceboxes.

Another job only briefly mentioned was typesetting. I never held that job, but I was trained for manual typesetting for a semester in high school. At least I know where terms like upper-case and lower-case come from.

Someone also mentioned keypunch operator, a task (not a job) that was often done by prisoners who were literally chained to their machines. What wasn't mentioned, however, were key verifier operators. Not many people today have ever seen a verifier, let alone know what one was.

Even before my time, there were jobs that disappeared, but which I read about—for instance, in a nineteenth century book about jobs for women. The final two chapters in the book were about a couple of sure-fire women’s jobs for the future (1900 was then the future).

First chapter was about telegraph operators. The chapter “proved” that there was a great future for women because they could operate a telegraph key at least as fast as men (and the telephone had yet to be invented).

Second chapter was about picture tinters. There was, of course, no color photography, and it wasn’t really even conceived of. Women were supposedly much better at coloring photos because of their “artistic bent” and their more delicate hands. Though there are a few photo tinters still around today for special jobs, it’s not a career with a great future.

By the way, one future job for women that wasn't even mentioned in the book was typist (or amanuensis), in spite of the then-recent exciting invention of the typewriter. Other sources explained that women would never be typists because everyone knew that women were not good with machines.

It's fun to think about these forgotten jobs, but they're also a source of important knowledge, or perhaps even wisdom. Job disappearance is not some new phenomenon caused by computers. It's always gone on throughout history. True, some jobs lasted a long time, so long that they were passed down from generation to generation, even becoming family names, such as Smith, Turner, Eisenhower, Baker, and Miller. (See, for example, <surnames.behindthename.com> for hundreds of examples.)

Some of those jobs still exist, though often modified by new technology. Do you still recognize Fuller, Chandler, or Ackerman? And many others have largely disappeared, remaining only in some special niche, like photo tinters. Do you know anybody named Armbruster who still makes crossbows? Well, you probably know a few Coopers, but how many of them still make barrels?

No job is guaranteed. Nothing entitles you to hold the same job for life, let alone pass the job down to your children. So, for example, if you think of yourself as a "[Ruby…] Programmer," perhaps you'd better prepare yourself for the future with a more general job description, such as "programmer" or "problem-solver."

In fact, there's a whole lot of people out there who think (or hope) the job of "programmer" will disappear one of these days. Some of them have been building apps intended to "eliminate programmers" since the time of COBOL. I've mocked these overblown efforts for half a century, but history has tried to teach me to be a bit more humble. Whether or not they succeed in your lifetime, you might want to hedge your bets and keep learning additional skills. Perhaps in your lifetime we'll still need problem-solvers and leaders long after we've forgotten the need for Chamberlains and Stringers.

Additional reading: 

Wednesday, January 11, 2017

Foreword and Introduction to ERRORS book

Foreword


Ever since this book came out, people have been asking me how I came to write on such an unusual topic. I've pondered their question and decided to add this foreword as an answer.

As far as I can remember, I've always been interested in errors. I was a smart kid, but I didn't understand why I made mistakes. And why other people made more.

I yearned to understand how the brain, my brain, worked, so I studied everything I could find about brains. And then I heard about computers.

Way back then, computers were called "Giant Brains." Edmund Berkeley wrote a book by that title, which I read voraciously.

Those giant brains were "machines that think" and "didn't make errors." Neither turned out to be true, but back then, I believed them. I knew right away, deep down—at age eleven—that I would spend my life with computers.

Much later, I learned that computers didn't make many errors, but their programs sure did.

I realized when I worked on this book that it more or less summarizes my life's work, trying to understand all about errors. That's where it all started.

I think I was upset when I finally figured out that I wasn't going to find a way to perfectly eliminate all errors, but I got over it. How? I think it was my training in physics, where I learned that perfection simply violates the laws of thermodynamics.

Then I was upset when I realized that when a computer program had a fault, the machine could turn out errors millions of times faster than any human or group of humans.

I could actually program a machine to make more errors in a day than all human beings had made in the last 10,000 years. Not many people seemed to understand the consequences of this fact, so I decided to write this book as my contribution to a more perfect world.

Not perfect, of course, but more perfect. I hope it helps.


Introduction


For more than a half-century, I’ve written about errors: what they are, their importance, how we think about them, our attempts to prevent them, and how we deal with them when those attempts fail. People tell me how helpful some of these writings have been, so I felt it would be useful to make them more widely known. Unfortunately, the half-century has left them scattered among several dozen books, so I decided to consolidate some of the more helpful ones in this book.

I’m going to start, though, where it all started, with my first book where Herb Leeds and I made our first public mention of error. Back in those days, Herb and I both worked for IBM. As employees we were not allowed to write about computers making mistakes, but we knew how important the subject was. So, we wrote our book and didn’t ask IBM’s permission.

Computer errors are far more important today than they were back in 1960, but many of the issues haven’t changed. That’s why I’m introducing this book with some historical perspective: reprinting some of that old text about errors along with some notes with the perspective of more than half a century.

1960’s Forbidden Mention of Errors
From: Leeds and Weinberg, Computer Programming Fundamentals, Chapter 10, “Program Testing”

When we approach the subject of program testing, we might almost conclude the whole subject immediately with the anecdote about the mathematics professor who, when asked to look at a student’s problem, replied, “If you haven’t made any mistakes, you have the right answer.” He was, of course, being only slightly facetious. We have already stressed this philosophy in programming, where the major problem is knowing when a program is “right.”

In order to be sure that a program is right, a simple and systematic approach is undoubtedly best. However, no approach can assure correctness without adequate testing for verification. We smile when we read the professor’s reply because we know that human beings seldom know immediately when they have made errors—although we know they will at some time make them. The programmer must not have the view that, because he cannot think of any error, there must not be one. On the contrary, extreme skepticism is the only proper attitude. Obviously, if we can recognize an error, it ceases to be an error.

If we had to rely on our own judgment as to the correctness of our programs, we would be in a difficult position. Fortunately the computer usually provides the proof of the pudding. It is such a proper combination of programmer and computer that will ultimately determine the means of judging the program. We hope to provide some insight into the proper mixture of these ingredients. An immediate problem that we must cope with is the somewhat disheartening fact that, even after carefully eliminating clerical errors, experienced programmers will still make an average of approximately one error for every thirty instructions written.

We make errors quite regularly
This statement is still true after half a century—unless it’s actually worse nowadays. (I have some data from Capers Jones suggesting one error in fewer than ten instructions may be typical for very large, complex projects.) It will probably be true after ten centuries, unless by then we’ve made substantial modifications to the human brain. It’s a characteristic of humans that would have been true a hundred centuries ago—if we’d had computers then.
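As a back-of-the-envelope illustration only (the rates are the ones quoted above, not measurements of any particular project), here is what those error densities imply as programs grow:

    # Rough arithmetic on the error rates quoted above (illustrative only).
    rate_1960 = 1 / 30    # one error per thirty instructions (Leeds & Weinberg, 1960)
    rate_large = 1 / 10   # one per ten, Capers Jones's large-project figure

    for instructions in (1_000, 100_000, 10_000_000):
        print(f"{instructions:>12,} instructions: "
              f"~{instructions * rate_1960:,.0f} errors at the 1960 rate, "
              f"~{instructions * rate_large:,.0f} at the large-project rate")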

1960’s Cost of errors
These errors range from minor misunderstandings of instructions to major errors of logic or problem interpretation. Strangely enough, the trivial errors often lead to spectacular results, while the major errors initially are usually the most difficult to detect.

“Trivial” errors can have great consequences
We knew about large errors way back then, but I suspect we didn’t imagine just how much errors could cost. For examples of some billion dollar errors along with explanations, read the chapter “Some Very Expensive Software Errors.”

Back to 1960 again
Of course, it is possible to write a program without errors, but this fact does not obviate the need for testing. Whether or not a program is working is a matter not to be decided by intuition. Quite often it is obvious when a program is not working. However, situations have occurred where a program which has been apparently successful for years has been exposed as erroneous in some part of its operation.

Errors can escape detection for years
With the wisdom of time, we now have quite specific examples of errors lurking in the background for thirty years or more. For example, read the chapter on “predicting the number of errors.”

How was it tested in 1960
Consequently, when we use a program, we want to know how it was tested in order to give us confidence in—or warning about—its applicability. Woe unto the programmer with “beginner’s luck” whose first program happens to have no errors. If he takes success in the wrong way, many rude shocks may be needed to jar his unfounded confidence into the shape of proper skepticism.

Many people are discouraged by what to them seems the inordinate amount of effort spent on program testing. They rightly indicate that a human being can often be trained to do a job much more easily than a computer can be programmed to do it. The rebuttal to this observation may be one or more of the following statements:
  1. All problems are not suitable for computers. (We must never forget this one.)
  2. The computer, once properly programmed, will give a higher level of performance, if, indeed,
    the problem is suited to a computer approach.
  3. All the human errors are removed from the system in advance, instead of distributing them
    throughout the work like bits of shell in a nutcake. In such instances, unfortunately, the human errors will not necessarily repeat in identical manner. Thus, anticipating and catching such errors may be exceedingly difficult. Often in these cases the tendency is to overcompensate for such errors, resulting in expense and time loss.
  4. The computer is often doing a different job than the man is doing, for there is a tendency—usually a good one—to enlarge the scope of a problem at the same time it is first programmed for a computer. People are often tempted to “compare apples with houses” in this case.
  5. The computer is probably a more steadfast employee, whereas human beings tend to move on to other responsibilities and must be replaced by other human beings who must, in turn, be trained.
In other words, if a job is worth doing, it is worth doing right.

Sometimes the error is creating a program at all.
Unfortunately, the cost of developing, supporting, and maintaining a program frequently exceeds the value it produces. In any case, no amount of fixing small program errors can eliminate the big error of writing the program in the first place. For examples and explanations, read the chapter on “it shouldn’t even be done.”

The full process, 1960
If a job is a computer job, it should be handled as such without hesitation. Of course, we are obligated to include the cost of programming and testing in any justification of a new computer application. Furthermore we must not be tempted to cut costs at the end by skimping on the testing effort. An incorrect program is indeed worth less than no program at all because the false conclusions it may inspire can lead to many expensive errors.

We must not confuse cost and value.
Even after all this time, some managers still believe they can get away with skimping on the testing effort. For examples and explanations, read the section on “What Do Errors Cost?”

Coding is not the end, even in 1960
A greater danger than false economy is ennui. Sometimes a programmer, upon finishing the coding phase of a problem, feels that all the interesting work is done. He yearns to move on to the next problem.

Programs can become erroneous without changing a bit.
You may have noticed the consistent use of “he” and “his” in this quoted passage from an ancient book. These days, this would be identified as “sexist writing,” but it wasn’t called “sexist” way back then. This is an example of how something that wasn’t an error in the past becomes an error with changing culture, changing language, changing hardware, or perhaps new laws. We don’t have to do anything to make an error, but we have to do a whole lot not to make an error.

We keep learning, but is it enough?
Thus as soon as the program looks correct—or, rather, does not look incorrect—he convinces himself it is finished and abandons it. Programmers at this time are much more fickle than young lovers.
Such actions are, of course, foolish. In the first place, we cannot so easily abandon our programs and relieve ourselves of further obligation to them. It is very possible under such circumstances that in the middle of a new problem we shall be called upon to finish our previous shoddy work—which will then seem even more dry and dull, as well as being much less familiar. Such unfamiliarity is no small problem. Much grief can occur before the programmer regains the level of thought activity he achieved in originally writing the program. We have emphasized flow diagramming and its most important assistance to understanding a program but no flow diagram guarantees easy reading of a program. The proper flow diagram does guarantee the correct logical guide through the program and a shorter path to correct understanding.

It is amazing how one goes about developing a coding structure. Often the programmer will review his coding with astonishment. He will ask incredulously, “How was it possible for me to construct this coding logic? I never could have developed this logic initially.” This statement is well-founded. It is a rare case where the programmer can immediately develop the final logical construction. Normally programming is a series of attempts, of two steps forward and one step backward. As experience is gained in understanding the problem and applying techniques—as the programmer becomes more immersed in the program’s intricacies—his logic improves. We could almost relate this logical building to a pyramid. In testing out the problem we must climb the same pyramid as in coding. In this case, however, we must take care to root out all misconstructed blocks, being careful not to lose our footing on the slippery sides. Thus, if we are really bored with a problem, the smartest approach is to finish it as correctly as possible so we shall never see it again.

In the second place, the testing of a program, properly approached, is by far the most intriguing part of programming. Truly the mettle of the programmer is tested along with the program. No puzzle addict could experience the miraculous intricacies and subtleties of the trail left by a program gone wrong. In the past, these interesting aspects of program testing have been dampened by the difficulty in rigorously extracting just the information wanted about the performance of a program. Now, however, sophisticated systems are available to relieve the programmer of much of this burden.

Testing for errors grows more difficult every year.
The previous sentence was an optimistic statement a half-century ago, but not because it was wrong. Over all these years, hundreds of tools have been built attempting to simplify the testing burden. Some of them have actually succeeded. At the same time, however, we’ve never satisfied our hunger for more sophisticated applications. So, though our testing tools have improved, our testing tasks have outpaced them. For examples and explanations, read about “preventing testing from growing more difficult.” 

If you're as interested in errors as I am, you can obtain a copy of Errors here:

ERRORS, bugs, boo-boos, blunders

Wednesday, May 04, 2016

Understanding Testing

Last week, Joe Colantonio interviewed me for his milestone 100th Test Talk. In case you haven't heard it yet, Joe extracted a few quotes and insights from this Test Talk. I've put some of them here, followed by a fascinating supporting story from one of my listeners.

Secrets
Joe asked me something about the secrets of how I managed to do so many things, and I gave a couple of long-winded answers:

Maybe the secret, at a sort of middle level, is to stop looking for secrets and just figure out one little improvement at a time. Get rid of things that are using your time that are not productive and that you don’t like to do. Part of it is you have to love what you’re doing, and if you don’t love it, then it’s pretty hard to bring yourself back to it.

I guess the secret of being productive, if there is a secret, is to adapt to what is as opposed to what should be. Of course, another way to describe testing is that it’s finding out what actually is as opposed to what’s supposed to be. A program is supposed to work in a certain way, and the tester finds out it doesn’t work in that way. When you report that, then somebody does something about it. If you live your life the same way, you’ll be pretty productive.

One Thing Testers Should Do
Joe asked about what testers should be doing that they may not be doing, and one of my answers was this: 

I think that you need to highlight certain things. Like, I need to just sit down and talk with the developers and ask them what went on, what happened, what was interesting, and so on, in an informal way. This gives you a lot of clues about where you might be having trouble.

History of Testing
On the history of test groups, I had this to say:

We made the first separate testing group that I know of historically (I’ve never found another) for that Mercury project, because we knew astronauts could die if we had errors. We took our best developers and made them into a group. Its job was to see that astronauts didn’t die. They built test tools and all kinds of procedures, went through all kinds of thinking, and so on. The record over half a century shows that they were able to achieve a higher level of perfection than had ever been achieved before or since (but it wasn’t quite perfect).

Automation in Testing
Joe's sponsor, Sauce Labs, specializes in automatic testing, so we had an interesting back and forth about test automation. Among other things, I said, 

You can automate partial tasks that are involved in testing, and maybe many tests, and save yourself a lot of effort and do things very reliably, and that’s great. It doesn’t do the whole job.

In response, Joe taught me the expression he uses: not "test automation" but "automation in testing." We were in violent agreement.

How to Improve as a Tester
Joe then asked how I'd recommend someone improve themselves as a tester:

The little trick I give people is that when you find yourself saying, “Well, that’s one thing I don’t need to know about,” then stop, catch yourself, and go and learn about that. It’s exactly like the finger-pointing we talked about before: when the developer says, “Well, that’s a module you don’t need to look at,” that’s the one you look at first. Do the same thing with yourself. When you say, “That’s a skill I don’t need, it has nothing to do with testing,” then it does, and you’d better work on it.



How Testing is Misunderstood
There are a lot more lumps like these in the podcast, but here's one straightforward example that supports most of the things I had to say. Albert Gareev sent me this story after listening to the podcast:

"We are not NASA,"—said a Director of Development.—"My testers just need to verify the requirements to make sure that our products have quality."

What she wanted to "fix" in the first place was that "the testing takes too long." There was a "Dev testing" phase, then a "QA testing" phase, then an "internal business acceptance testing" phase, then a "client business acceptance testing" phase—sometimes even two. If bugs were caught at any point, the updated version would go through the same whole pipeline of testing. The product was indeed doing what was intended for the customers and for the company, with a very low number of incidents, all of which were successfully taken care of.

During my initial analysis, I found out that those phases were highly compartmentalized. Certain bugs had a very long life span before the Dev team would get to know of them. Certain problems kept recurring. Testing was performed as a kind of clueless Easter egg hunt; the Dev team didn't feel a need to share which code modules they had updated.
It appeared to me that the product was in good shape only thanks to these lengthy testing phases, which made the chances of "stumbling upon" the errors high enough to catch the majority of bugs.

So I brought my findings back to the Director and made a few suggestions—around collaboration, feedback loops, and especially training testers to model risks and to test for them.

But she didn't buy it. She still thought that "they're doing too much testing."

I was puzzled. Frustrated. Even slightly offended, being a tester devoted to his profession.

And then I took a project manager for a cup of coffee.

The PM shared that the Director was in her first year of being in charge of software development. The previous decade she had spent as a director of customer support.
"Huh," I said. "All these years she saw products that had already undergone a thorough testing-and-fixing process before her team could try them."

"Exactly."—replied the PM,—"She has no idea that at first it's always messy."

More on Testing and Quality

If you want to learn more about testing and quality, take a look at some of my books, such as,