Archive: October 2009



Testing does not pat your back

October 30, 2009 | Author: Antti Niittyviita

Business consultants do not pat their clients on the back and tell them the firm is doing well. A consultant assesses the firm mercilessly, looks for things to improve, and brings up the problems.

Doesn’t this sound like testing?

The purpose of testing is not to produce evidence of how great the software is at a given point. Its purpose is exactly the same as that of the aforementioned consultant: we bring up the problem points and look at where and how we should start improving the software. Naturally, this does not happen in every project.

I say that test results have a knack for getting embellished when the testers work as part of a development team that holds the responsibility for delivering results. If I remember correctly, Qentinel's Esko Hannula posed a question about the construction industry in an interview some time ago:

Would you let the builder carry out the building inspection on your property?

I sure wouldn’t. It would feel ridiculous to carry out a large information system project by ordering all the services, from development to testing, from the same supplier. Do such projects actually succeed, or is there simply a lack of collected experience?

You get better results in your information system project by separating testing from product responsibility.

Up the Feature Manager’s Alley

October 23, 2009 | Author: Antti Niittyviita

Large software development projects often have a large organization. The people responsible for the software development have interesting-sounding titles, such as ‘feature manager’. How are these fellows related to testing?

A typical software development organization has executives who carry responsibility for the product, executives who carry responsibility for individual projects and features, and then the makers. The actual progress shown up the hierarchy often consists of cool charts and fancy diagrams: metrics that measure the quality of the work.

I say that the further up the information goes, the more narrowly the metrics portray the actual reality. Not unlike the children’s game of ‘Chinese whispers’.

It so often happens in projects that get shot down that the developers know whether the product will fly long before the executive tier has even made the decision to abort. To me this is a strong indicator that the flow of information is lagging, that communication is not working. One need not think long about where information on a project’s quality comes from these days. What are the metrics that always go up the organizational ladder?

I was once in a project where my tester team was responsible for a group of individual features in a product project. Following the traditional testing process, we had huge test specifications with detailed scenarios. Bugs were found and actively reported. The test reports listed newly found bugs and issues along with the basic testing metrics: the pass percentage relative to all planned cases and relative to all executed runs, plus the number of skipped test cases and the reasons for skipping them.
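
To make the arithmetic concrete, here is a minimal sketch of those basic metrics in Python. Everything in it is hypothetical: the test names, statuses, and counts are invented for illustration, not taken from the actual project.

    from collections import Counter

    # Hypothetical test results; each case ends in pass, fail, or skip.
    results = {
        "login_basic": "pass",
        "login_timeout": "fail",
        "search_filters": "pass",
        "search_unicode": "skip",  # skipped: environment lacked the locale
        "export_csv": "fail",
    }

    planned_total = 6  # one planned case was never executed at all

    counts = Counter(results.values())
    executed = counts["pass"] + counts["fail"]

    # Pass percentage relative to all planned cases and to all executed runs.
    print(f"pass rate vs. planned:  {100 * counts['pass'] / planned_total:.0f}%")
    print(f"pass rate vs. executed: {100 * counts['pass'] / executed:.0f}%")
    print(f"skipped cases: {counts['skip']}")

Note how the same results already yield two quite different percentages, 33% and 50% here, depending on which denominator is chosen.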

Higher up in the organization, the product’s progress was measured largely by test pass percentages, i.e. maturity. Throughout the project, the pass percentages remained weak relative to the 95% target. The actual results ranged between 40% and 80% and did not improve as the project went on, as they typically would in a healthy project. During the project it became customary to review the test results with the ‘feature manager’.

As a result of these reviews, the fail status of a test case was very often changed to pass with a comment along the lines of: ‘Bug noted. Fix for the next release.’ The other option was to reword the test case so that the run could not possibly fail.
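
The effect on the headline number is easy to demonstrate. Continuing the hypothetical sketch above, with the two failing cases relabeled in the review:

    from collections import Counter

    # The same invented results as above, after the review round.
    results = {
        "login_basic": "pass",
        "login_timeout": "pass",   # was fail: ‘Bug noted. Fix for the next release.’
        "search_filters": "pass",
        "search_unicode": "skip",
        "export_csv": "pass",      # was fail: case reworded so it cannot fail
    }

    counts = Counter(results.values())
    executed = counts["pass"] + counts["fail"]
    print(f"pass rate vs. executed: {100 * counts['pass'] / executed:.0f}%")
    # Prints 100% where the pre-review figure was 50%,
    # with no change to the software itself.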

This is where the first distortion of the test results took place. Do we then have any good reason to presume that such distortions do not happen again further up, as the information travels on?

Evaluating product maturity by pass percentages does not work in a traditional hierarchical organization. The pass percentage is exactly as good as people agree it is. And everyone surely knows the steps that are needed to fix things.

Accept the test results as they are. Communicate them openly across the entire organization. Be open and do not polish the results. Use them as a tool for improving the project and its ways of working.