Sunday, March 12, 2017

Classifying Errors

I received an email the other day from Giorgio Valoti in Italy, but when I wrote a response, it bounced back with "recipient unknown." It may have been a transient error, but it made me think that others besides Giorgio might be interested in discussing the issue of classifying errors, so I'll put my answer here and hope Giorgio will see it.

Here's the letter: 

Dear Mr. Weinberg,
My problem is that I’m looking for a good way — maybe a standard, more likely a set of guidelines — to classify and put some kind of label on software defects.

Is there such a thing? Does it even make sense to try to classify software defects?
And here's my reply:

Hello, Giorgio

It can certainly make sense to classify errors/defects, but there are many ways to classify, depending on what you're trying to accomplish. So, that's where you start, by answering "What's my purpose in classifying?"

For instance, here are a few ways my clients have classified errors, and for what purposes:

- cost to fix: to estimate future costs

- costs to customers: to estimate impact on product sales, or product market penetration

- place of origin in the development cycle: to decide where to concentrate quality efforts

- type of activity that led to the error: to improve the training of developers

- type of activity that led to detecting the error: to improve the training of testers

- number of places that had to be fixed to correct the error: to estimate the quality of the design

- and so on and on

I hope this helps ... and thanks for asking.

--------------end of letter-----------

As the letter says, there are numerous ways to classify errors, so I think my readers would love to read about some other ways from other readers. Care to comment?
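To make the dimensions in my list a bit more concrete, here is one way they might be captured in practice. This is only an illustrative sketch (the field names, phases, and sample defects are my own invention, not from any client's scheme): record each defect with a few classification attributes, then tally them to answer a purpose-driven question such as "where should we concentrate quality efforts?"

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Defect:
    summary: str
    origin_phase: str        # where it entered the cycle, e.g. "requirements", "design", "coding"
    detection_activity: str  # what found it, e.g. "unit test", "system test", "field report"
    cost_to_fix_hours: float # effort to fix, for estimating future costs

# Hypothetical sample data
defects = [
    Defect("off-by-one in pager", "coding", "unit test", 2.0),
    Defect("missing audit requirement", "requirements", "field report", 40.0),
    Defect("race in cache flush", "design", "system test", 16.0),
]

# Place of origin: tells us where to concentrate quality efforts
by_origin = Counter(d.origin_phase for d in defects)

# Cost to fix, grouped by origin: tells us what each phase's escapes cost
cost_by_origin: dict[str, float] = {}
for d in defects:
    cost_by_origin[d.origin_phase] = cost_by_origin.get(d.origin_phase, 0.0) + d.cost_to_fix_hours

print(by_origin.most_common())
print(cost_by_origin)
```

The point is not the code but the discipline: each attribute exists only because some question (cost estimation, training, design quality) will be asked of it later.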


Christian M. Mann said...

When I was working as a QA Lead, I had the ambition to implement parts of IEEE 1044 (Standard Classification for Software Anomalies) at the company. The idea was to figure out the weak points in our development and testing work and find ways to get better. We used Jira as our bug-tracking system, so some new attributes and list-box items were introduced. I still like the idea of gathering data on anomalies, analyzing it, and working out opportunities for improvement.
In retrospect, I must say that in my case the data gathering worked (one should keep in mind that this is extra work the testers must do), but to really reap the fruits of classification, the programmers must support the approach. It doesn't work if it's only a QA thing, which it was in my former company. Still, you might draw some interesting facts from anomaly attributes that you can use to improve your testing work.

- Christian

Ajay Balamurugadas said...

Some thoughts:
- Based on the impact of the error: to decide whether it needs to be fixed NOW or can go in the next release
- Based on the frequency of the error: apart from knowing when to fix it, to check which customers might be affected
- Unknown errors: those that surprise you about the quality of the product and project, and give you an idea of your confidence in the product and project

Ajay Balamurugadas

Giorgio Valoti said...

Yes, to further clarify my request: I’d like to classify my errors so that I can be more effective at improving my skills. That’s the hope, at least. So the ultimate purpose of this classification would be to uncover some kind of regularity in my errors.

I guess the purpose closest to this is the type of activity that led to the error? However, most of the time it would simply be something like "development," I guess?

Thank you for getting in touch with me.

Dan Panachyda said...

Two more ways come immediately to mind: the fundamental classification "Is it a defect or not?" and Cem Kaner's hierarchy. I think it's also critical to remember that any attempt at classification cannot be assumed to be valid for all stakeholders for all time. Since the classes are abstract, they will be potentially debatable and mutable, so don't spend too much time agonizing over which bucket each error falls in.