Monday, March 18, 2024

Management-friendly test data


Metrics are trendy, and rightfully so: You can’t manage what you can’t measure. The problem is, almost all of us have slaved over collecting, formatting, and presenting vast storehouses of information that no one ever reads or comprehends, let alone acts upon. In our zeal to educate management about what we are doing, we end up inundating them with reams of reports that often simply confuse them.

The problem is not the quantity of information; it’s providing the right information. The solution is to know what managers want to know and why they want to know it.

What’s important?


A 1997 industry survey of software test organizations revealed an interesting paradox. When respondents–the majority of whom were in management–were asked to rank their most important objectives, first place went to meeting schedule deadlines and second place to producing a quality product. In a later question, when asked what their highest risks and costs had been, based on past experience, first place went to high maintenance and support costs and second place to cost and schedule overruns.

Let’s analyze this. Management’s first priority is making the schedule, and their second is delivering a good product. Sometimes, this translates into sacrificing quality to make the schedule. On the other hand, experience shows (but apparently doesn’t teach) that they encounter higher risks and costs from maintenance and support of low-quality products than they do from exceeding the schedule or budget.

Get it? Neither do I.

Why is it important?

The one clear thing is that management cares about time and money, and that makes sense. Most managers are measured on how well they meet their delivery schedules and their budgets. What seems to escape us, consistently, is the interplay between these two: If you meet the schedule but deliver a defective product, you spend more money trying to support and maintain that product.

Why is this painfully evident relationship so obviously ignored? Because of the imagined differentiation between development and maintenance. Few, if any, companies actually associate their maintenance and support costs with the sacrifices made to meet the production schedule. If you ship the product as scheduled, you “made it,” regardless of whether that product boomerangs into a maintenance nightmare.

The most eloquent–and dramatic–example of this was when the new CEO of a software company asked me to review its operations to help him discover why costs were rising while revenues were falling. After a couple of days of interviews, the mystery was solved. The company’s budget for customer support was more than the budget for development, testing, training, and documentation combined.

Why was support so expensive? As the harried customer support manager explained it, the company had a huge backlog of bugs, some of which were actually years old, and these generated thousands of phone calls which her team was obliged to field.


Why so many bugs? Because there weren’t enough developers to maintain the products. Why not enough developers? Because there wasn’t enough money to hire more. Why not enough money? Because support was so expensive. Why not increase revenues? Because they didn’t have enough developers to create new products.

Get it? So did the CEO.

How do you say it?

It all comes down to this: How do you measure this phenomenon and communicate it in such a way as to have management understand and–most importantly–care?

Unfortunately, I cannot point to a magic answer. If you have one, send me an e-mail and I’ll tell the world.

Until then, do the only thing you can do: Correlate all of your metrics into time and money. At a minimum, track the issues that arise after shipment and correlate them to the ones you either knew you shipped or could have predicted because you didn’t finish testing. Remember, the number of reported problems is the number of defects multiplied by the number of customers who find them. So, shipping a single known defect can cause hundreds of problem reports.
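To make that multiplication concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it–the defect counts, the customer base, and the fraction of customers assumed to hit any given defect–is a hypothetical placeholder, not data from this column.

```python
# Back-of-the-envelope estimate of post-ship problem reports.
# Every figure below is a hypothetical assumption, not data from the article.

known_shipped_defects = 12   # defects you knowingly shipped
predicted_defects = 8        # defects predicted because testing didn't finish
customers = 500              # installed base
find_rate = 0.10             # assumed fraction of customers who hit any given defect

shipped_defects = known_shipped_defects + predicted_defects
expected_reports = shipped_defects * customers * find_rate

print(f"Shipped or predicted defects: {shipped_defects}")
print(f"Expected problem reports:     {expected_reports:.0f}")
```

Even with these made-up numbers, the point stands: a handful of known defects multiplied across an installed base turns into hundreds or thousands of calls.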

The next trick is to convert this information into time and money. Money can be determined if you know the following:

The budget for customer support and maintenance;

How many problem reports are fielded; and

How many fixes are made.

Time is harder to convert, of course, because you have to know how long it takes to field a call, make a fix, test it, and ship it. If you have a robust problem-tracking system you may have this information. If you don’t, add up the manpower spent in maintenance and support and convert that into time.
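To see how those three numbers become a per-report and per-fix cost, and how headcount converts into time, here is a minimal sketch. All of the figures are hypothetical placeholders; substitute your own budget, report counts, and staffing.

```python
# Minimal sketch: convert support/maintenance activity into money and time.
# Every number below is a hypothetical placeholder, not from the article.

support_budget = 2_000_000     # annual customer support + maintenance budget ($)
problem_reports = 8_000        # problem reports fielded in the same period
fixes_shipped = 300            # fixes made and shipped in the same period

cost_per_report = support_budget / problem_reports
cost_per_fix = support_budget / fixes_shipped

# Time: if you lack per-call and per-fix timings, fall back on headcount.
support_headcount = 15         # people working maintenance and support
hours_per_person_year = 1_800  # rough working hours per person per year
support_hours = support_headcount * hours_per_person_year

print(f"Cost per problem report: ${cost_per_report:,.2f}")
print(f"Cost per fix:            ${cost_per_fix:,.2f}")
print(f"Time spent on support:   {support_hours:,} person-hours per year")
```

Numbers like these are what finally register with management: a single line item they can compare against the cost of testing the product properly in the first place.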

Be sure to make the point that the time and money wasted in support and maintenance on poor quality software could be reinvested in more resources to deliver new, high-quality products on time.

The point

The real point, of course, is to understand your audience. Management cares about time and money, in that order. Present your metrics in such a way that you can correlate what you are measuring to what it costs–or saves–in time and money. All of the graphs, charts, and tables in the world won’t matter without metrics that make sense.

Linda Hayes is CEO of WorkSoft Inc. She was one of the founders of AutoTester. She can be reached at linda@worksoft.com.

