Does the business understand the value of the testing that you do for them?

Well, does it?

It’s easy to calculate the cost of testing: we should know how much time people spend doing it and how much we pay for their recruitment, their training and the tools and infrastructure that they use.  It’s also easy for others to look at those numbers and suggest that they are too big.  Most of us who have been in positions of test leadership know what it feels like when a project manager, IT manager or business sponsor asks – even demands – that we reduce these costs ... although not, of course, at the expense of product quality!

If we don’t agree that the testing we do is too costly, and maybe even want a little more money to do more of it, how do we win the ‘battle of the bean-counters’?  We can’t deny that testing is expensive.  Anything that requires well-educated and well-trained people using sophisticated hardware and software is bound to be expensive.  When somebody says that it’s too expensive, however, that can be the start of a discussion.

Too expensive, eh?  Compared to what alternative?  Not doing any testing, perhaps – how much would that cost?

            “Well, nothing, obviously”.

Really?  So would you just ignore any bugs that went live?

            “Oh, no, we’d have to fix those, of course”.

Ahh ... and how much would that cost?

            “Er, well, I’m not sure actually ...”.

And now we’ve reached the tricky part.  It’s difficult to know the exact cost of a bug that goes live.  That’s why few organisations are good at measuring it, and many don’t even try.  However, Tom DeMarco’s often [mis-]quoted “You can’t control what you can’t measure” is true more often than not, so can we measure this?  Yes, we can …

With a good enough time-recording system it’s possible to isolate all the time that is spent on investigation, fixing, confirmation / regression testing and deploying the fix.  That should, of course, include time spent liaising with the user or customer, repairing their broken database, updating their training materials and whatever else needs to be done.  But what about the subtler, indirect costs?

  • If the sales web site was down for an hour, how much business was lost?

  • How much future business has been lost from customers who found an alternative source of supply during that hour and will never return?

  • How much damage did the resulting bad publicity do to the share price of our publicly quoted company, and how much actual financial harm did that fall in share price cause by reducing investment opportunities?

  • If the bug made us late in paying our suppliers and some of them then refused to continue supplying us, how much did it cost to find alternatives?

The good news is that, somewhere in the organisation, there will be someone – a business analyst, marketing analyst, financial accountant, management accountant or director / VP of something-or-other – who will have some idea about whatever the missing cost element is.  Even better, they don’t have to commit to an accurate answer (which they might be reluctant to do) because an estimate, to whatever degree of accuracy they are prepared to give, is good enough.  Good enough for what?  Good enough for us to say “Every time we find a bug like this before it goes live, this is approximately how much money we save you”.  Given the choice between approximate knowledge and no knowledge at all, I’ll take the approximation every time.

It’s not quite that simple, of course.  To get an honest figure for the saving we must deduct the cost of finding and fixing the bug before going live.  And we’ll have to be clear about what ‘a bug like this’ looks like, because there are many different types of bug, each with its own cost profile, so we’ll need some kind of estimating scheme to handle the differences.  These, however, are relatively minor details that can be worked out.  We’ve already taken the biggest step on the way to demonstrating the value of testing to the business in one (or both) of two ways: Return on Investment and Cost of Quality.
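
To see how that arithmetic might look in practice, here is a minimal sketch in Python.  Every figure in it is invented purely for illustration; in reality the cost elements would come from your time records and from the estimates gathered from colleagues as described above.

    # A minimal sketch of the saving and ROI arithmetic.
    # Every figure below is hypothetical and exists only to illustrate the method.

    # Estimated cost if one bug of this type escapes to live:
    direct_fix_cost = 4_000     # investigation, fix, retest, deployment, liaison
    indirect_cost = 26_000      # lost sales, lost customers, reputational damage
    cost_if_live = direct_fix_cost + indirect_cost

    # Cost of finding and fixing the same type of bug before release:
    cost_pre_release = 1_500

    # Honest saving for each bug of this type caught by testing:
    saving_per_bug = cost_if_live - cost_pre_release    # 28,500

    # A simple return-on-investment figure for the testing effort as a whole;
    # in practice you would sum the savings over every bug type in your
    # estimating scheme rather than use a single type as shown here.
    bugs_of_this_type_caught = 20
    annual_testing_cost = 250_000
    total_saving = saving_per_bug * bugs_of_this_type_caught
    roi = (total_saving - annual_testing_cost) / annual_testing_cost

    print(f"Saving per bug: {saving_per_bug}")   # 28500
    print(f"Testing ROI:    {roi:.0%}")          # 128%

The point is not the precision of the numbers but that even rough estimates, plugged into this simple calculation, give the business something concrete to weigh against the cost of testing.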

This article is part of a series.  You can read the next one right now.

Author: Richard Taylor

Richard has dedicated more than 40 years of his professional career to the IT business. He has been involved in programming, systems analysis and business analysis, and since 1992 he has specialised in test management. He was one of the first members of ISEB (Information Systems Examination Board). At present he is actively involved in the activities of the ISTQB (International Software Testing Qualifications Board), where he mainly contributes numerous improvements to training materials. Richard is also a very popular lecturer at our training sessions and a regular speaker at international conferences.