Wednesday, May 04, 2016

Understanding Testing

Last week, Joe Colantonio interviewed me for his milestone 100th Test Talk. In case you haven't heard it yet, Joe extracted a few quotes and insights from the interview. I've put some of them here, followed by a fascinating supporting story from one of my listeners.

Secrets
Joe asked me something about the secrets of how I managed to do so many things, and I gave a couple of long-winded answers:

Maybe the secret, at a sort of middle level, is to stop looking for secrets and just figure out one little improvement at a time. Get rid of things that are using your time that are not productive and that you don’t like to do. Part of it is you have to love what you’re doing, and if you don’t love it, then it’s pretty hard to bring yourself back to it.

I guess the secret of being productive, if there is a secret, is to adapt to what is as opposed to what should be. Of course, another way to describe testing is that it’s finding out what actually is as opposed to what’s supposed to be. A program is supposed to work in a certain way, and the tester finds out it doesn’t work in that way. When you report that, then somebody does something about it. If you live your life the same way, you’ll be pretty productive.

One Thing Testers Should Do
Joe asked about what testers should be doing that they may not be doing, and one of my answers was this: 

I think that you need to highlight certain things, like: I need to just sit down and talk with the developers and ask them what went on, what happened, what was interesting, and so on, in an informal way. This gives you a lot of clues about where you might be having trouble.

History of Testing
On the history of test groups, I had this to say:

We made the first separate testing group that I know of historically (I’ve never found another) for that Mercury project, because we knew astronauts could die if we had errors. We took our best developers and we made them into a group. Its job was to see that astronauts didn’t die. They built test tools and all kinds of procedures, and went through all kinds of thinking and so on. The record over half a century shows that they were able to achieve a higher level of perfection (though it wasn’t quite perfect) than has ever been achieved before or since.

Automation in Testing
Joe's sponsor, Sauce Labs, specializes in automated testing, so we had an interesting back and forth about test automation. Among other things, I said,

You can automate partial tasks that are involved in testing, and maybe many tests, and save yourself a lot of effort and do things very reliably, and that’s great. It doesn’t do the whole job.

In response, Joe taught me that the expression he uses is not "test automation" but "automation in testing." We were in violent agreement.

How to Improve as a Tester
Joe then asked me how I'd recommend someone improve as a tester:

The little trick I give people is that when you find yourself saying, “Well, that’s one thing I don’t need to know about,” then stop, catch yourself, and go and know about that, because it’s exactly like the finger pointing that we talked about before: when the developer says, “Well, that’s a module you don’t need to look at,” that’s the one you look at first. You do the same thing yourself: when you say, “That’s a skill I don’t need, it has nothing to do with testing,” then it does, and you’d better work on it.

How Testing is Misunderstood
There are a lot more lumps like these in the podcast, but here's one straightforward example that supports most of the things I had to say. Albert Gareev sent me this story after listening to the podcast:

"We are not NASA,"—said a Director of Development.—"My testers just need to verify the requirements to make sure that our products have quality."

What she wanted to "fix" in the first place was that "the testing takes too long." There was a "Dev testing" phase, then a "QA testing" phase, then an "internal business acceptance testing" phase, then a "client business acceptance testing" phase, sometimes even two. If bugs were caught at any point, the updated version would go through the same whole pipeline of testing again. The product was indeed doing what was intended for the customers and for the company, with a very low number of incidents, which were successfully taken care of.

During my initial analysis I found out that those phases were highly compartmentalized. Certain bugs had a very long life span before the Dev team would get to know of them. Certain problems kept recurring. Testing was performed as a kind of clueless Easter egg hunt; the Dev team didn't feel a need to share which code modules they had updated.
It appeared to me that the product was in good shape only thanks to these lengthy testing phases, which made the chances of "stumbling upon" the errors high enough to catch the majority of bugs.

So I brought my findings back to the Director and made a few suggestions: around collaboration, feedback loops, and especially about training testers to model risks and to test for them.

But she didn't buy it. She still thought that "they're doing too much testing."

I was puzzled. Frustrated. Even slightly offended, being a tester devoted to his profession.

And then I took a project manager for a cup of coffee.

The PM shared that the Director was in her first year of being in charge of software development. She had spent the previous decade as a director of customer support.
"Huh."—I said.—"All these years she saw the products that already undergone a thorough testing-and-fixing process before her team could try them."

"Exactly."—replied the PM,—"She has no idea that at first it's always messy."

More on Testing and Quality

If you want to learn more about testing and quality, take a look at some of my books.




