Arkisto: February 2010



Do Testspecs The New Way

26 February 2010 | Author: Antti Niittyviita

Jarkko’s new project had started with the principle of continuous integration. It was the first project of its kind in the organization where Jarkko acted as test manager. At the beginning of the project, people wanted testing to function quite normally: the testing team planned smoke tests and a large set of functionality tests, and wrapped them into pre-planned test sets.

Jarkko’s testing team worked with roughly 4000 specified test cases, divided into test sets according to their area of the software. Running one test set took the team three days. The problem was that the build automation produced a new release every night. By the time one test cycle completed, the results were two days stale and two new releases had already come out. The test results were past their expiration date!

“This does not actually sound like a problem!” remarked the project manager. Jarkko calculated that in the worst case, a serious integration bug discovered during the final phase of the test cycle could force three days of work to be reverted in version control. That could mean that as many as 8 developers and 3 testers had accomplished nothing in three days, equating to a loss of 33 days of work by a single person! “That would be expensive,” Jarkko replied.
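Jarkko’s worst-case arithmetic checks out; as a quick sketch, using the headcounts and durations given above:

```python
# Back-of-the-envelope check of the worst case described above: a serious
# integration bug found at the end of the test cycle forces a three-day
# revert, and everyone's work over those days is effectively lost.
developers = 8
testers = 3
days_reverted = 3

person_days_lost = (developers + testers) * days_reverted
print(person_days_lost)  # 33 days of work by a single person
```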

Luckily, Jarkko and the testing team had acknowledged the risk early on and had actively sought a solution even before project management understood the risk. The team had been equipped from the start of the project with a handy test management system, which gave them the flexibility of not having to build test sets beforehand: the sets could be formed dynamically, according to need. When the bottlenecks in the project were finally understood and more resources were allocated to finding a solution, the team was quick to act:

The test management system was integrated with the build automation so that whenever a new build completed, the test system automatically created a new test target (release). Every test case carried the information of which release it had last been run against, which allowed an ‘expiration date’ to be derived for each test case. From then on, the team dynamically built a test plan for the day, and the corresponding test sets, each morning. The criteria for building the test sets were as follows:

  1. The top-10 test cases, meaning the test cases that had discovered the most bugs according to the system’s up-to-date information.
  2. Previously failed test cases, so that the team could see whether the newer releases had improved on them.
  3. Test cases covering parts that had changed in the release, meaning risk-based test planning.
  4. The test cases that were longest past their ‘expiration date’, and finally
  5. Exploratory testing.
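As a rough illustration, the five criteria above could be combined into a daily selection routine along these lines. This is a minimal sketch: the record fields, the priority order of the criteria and the day’s budget are assumptions for illustration, not details from Jarkko’s actual system.

```python
from datetime import date, timedelta

# Hypothetical test-case record; the field names are illustrative.
class TestCase:
    def __init__(self, name, bugs_found, last_failed, area, last_run):
        self.name = name
        self.bugs_found = bugs_found    # bugs this case has discovered so far
        self.last_failed = last_failed  # did it fail on the previous release?
        self.area = area                # software area the case covers
        self.last_run = last_run        # date the case was last executed

def build_daily_set(cases, changed_areas, today, max_age_days=3, budget=50):
    """Pick today's test set using the five criteria from the post."""
    picked = []
    # 1. Top-10 cases by number of bugs discovered.
    picked += sorted(cases, key=lambda c: c.bugs_found, reverse=True)[:10]
    # 2. Cases that failed on a previous release.
    picked += [c for c in cases if c.last_failed]
    # 3. Cases covering areas changed in this release (risk-based).
    picked += [c for c in cases if c.area in changed_areas]
    # 4. Cases longest past their 'expiration date', oldest first.
    expired = [c for c in cases if (today - c.last_run).days > max_age_days]
    picked += sorted(expired, key=lambda c: c.last_run)
    # Deduplicate while preserving priority order, then cut to the day's
    # budget; any remaining time goes to criterion 5, exploratory testing.
    seen, plan = set(), []
    for c in picked:
        if c.name not in seen:
            seen.add(c.name)
            plan.append(c)
    return plan[:budget]
```

The key design point is that nothing here is pre-planned: the plan is recomputed every morning from whatever the test management system currently knows about each case.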

Working this way, the testing team continuously had a more up-to-date view of the end product’s maturity and of which parts needed the most fixes. On average, the most critical bugs were caught a day earlier than before, because the testers were no longer forced to run pre-planned test sets in the same fixed order. The changes were so successful that the organization decided to steer its testing practices in other projects in the same direction.

How often has your organization spent a day running the beginning of the same regression test set, only to find a blocking error during the last third of the run? That is pretty annoying, I say. People often stick with the old model, mostly because someone wants the older test metrics to remain comparable with the new test cycles. This is why changing a test set is sometimes a pretty damn arduous process. Forget tradition:

You inevitably get more out of your testing when you open up the field of action and the test specs to change. First make sure that the testing catches bugs earlier and more efficiently, and only after that worry about metrics. Your end product will be grateful!

Tester Is Also A Salesman

11 February 2010 | Author: Antti Niittyviita

Our employee Heimo (name changed) works on an advanced software project with three testers for as many as ten developers. The project’s quality assurance follows the principles of exploratory testing, and Heimo has not had to deal with test specs even once during the project. That is an interesting change of pace for an experienced tester guru. The newest software release has been undergoing finishing touches for a few weeks now, and it is due to be released next week.

Heimo knows that the software has a few severe flaws, which he reported already at the beginning of the release cycle. For some reason, the flaws have not yet been taken up for fixing. Heimo, being a smart guy, has gotten to know the developer in charge of the flawed section in the coffee room and at his desk, simply by being present and sitting next to him: “So, here is where the flaw can be found.”

Even though the flaw has not yet been fixed, Heimo has not escalated the matter up the ladder to project management. He can explain the consequences the flaw has for the product, the client and the end user. In addition, he can remove the obstacles to the fix one at a time by offering enough information about the flaw, and possibly even debug information on it.

Because Heimo is a good guy and well liked in the project, his words are taken seriously, and the flaw finally gets fixed. Heimo got what he wanted, product development did not get into a fight with the testing team, and the developer in charge did not have to explain to management why the flaw had not been fixed. All this thanks to a smart approach. But does this not sound like a sales job?

Teemu Vesala of the Softanmurskausta blog wrote on the subject in ‘Laatukonsultti – mikä se on?’ (‘Quality consultant – what is it?’). The list was long and full of facts. I would add that a testing guru is also a member of the developer’s support staff and a good salesman. A high-class tester can sell his perspective on the product’s benefit to both the developers and the management, and still keep his ‘client relationship’ with the developers warm and fuzzy.

Tester: justify the need for a bug fix well, and remove the obstacles to the fix one at a time. This way you act as a salesman who advances the product’s benefit, and you make your own life easier.

Intelligence Of A Community Always Takes Priority

3 February 2010 | Author: Antti Niittyviita

Once upon a time there was a convention. The main attraction of the convention was a lottery stand with amazing prizes. At the centre of it all was a big jar filled with French pastilles; whoever guessed closest to their exact number won a prize. Most attendees considered it pure chance to guess the right amount.

During the convention, over a thousand people visited the stand and made their guesses, each based on their own best insight. One visitor, Jaakko, was a remarkably smart fellow. He did not make his guess right away, but paid attention to what other people were guessing, and ultimately won the grand prize.

Jaakko had read James Surowiecki’s book The Wisdom of Crowds. He followed the entire community’s guesses and formulated his own guess from them. The community’s collective guess came closer than any guess made by an individual guesser. How often have you encountered Jaakko’s approach?

From a tester’s perspective, the first things that spring to mind are the projects’ bug triage meetings, where a few wise men of the project go through the week’s list of bugs, estimating, among other things, their severity, their effect on the end user and the work required to fix them. The estimates are made by experts, but could the developer community be put to use if there were an easy enough way to do so?

Asking for opinions feels like too much work, and it is not worth inviting everyone to the meetings to vote. Voicing an opinion can still be made easy, much like Taloussanomat does in its comment section, where a reader can mark a comment good or bad by clicking one of the thumbs:

[screenshot: thumbs up/down comment score]

A similar feature would be perfect for ranking bug reports when the newest reports are listed on, for example, the project management front page or on the dashboard screen of the test management system: a system shared and actively used by the entire project staff. The same feature is easily expanded from rating bug reports to test case reports, test sets and even the project’s requirements specifications.
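At its core, such a feature is just a running thumbs up/down tally per report, sorted for the dashboard. A minimal sketch, where the class, its API and the report IDs are all invented for illustration:

```python
from collections import defaultdict

# Minimal sketch of thumbs up/down voting for bug reports, as proposed
# above; the ReportVotes API and the report IDs are hypothetical.
class ReportVotes:
    def __init__(self):
        self.scores = defaultdict(int)  # report id -> net score

    def vote(self, report_id, up):
        # One click on a thumb: +1 for up, -1 for down.
        self.scores[report_id] += 1 if up else -1

    def ranked(self):
        # Highest-scored reports first, e.g. for a dashboard front page.
        return sorted(self.scores, key=self.scores.get, reverse=True)

votes = ReportVotes()
votes.vote("BUG-101", up=True)
votes.vote("BUG-101", up=True)
votes.vote("BUG-202", up=False)
votes.vote("BUG-303", up=True)
print(votes.ranked())  # ['BUG-101', 'BUG-303', 'BUG-202']
```

Extending the same tally from bug reports to test case reports, test sets or requirements is then just a matter of what kind of ID you vote on.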

How does this sound? Good or bad?

[voting widget]