Step closer to exploratory testing

Many testers cannot imagine testing without using testcases.  Over the last 10 to 15 years, testing has been articulated clearly by a group of testers who identify themselves as the context-driven testing school.  Many of them are also associated with the Association for Software Testing.

When traditional testers are introduced to the concepts of context-driven testing (CDT), especially the idea of exploratory testing, it can take a long time to cross the chasm between the traditional approach (writing testcases, getting them approved and then executing them) and exploratory testing.

I understood exploratory testing quite late.  Before I knew what exploratory testing was, I worked with some good testers.

Although we didn't know what exploratory testing was, I think we intuitively followed some of its principles.  I have described what we did so that it can serve as a stepping stone for testers who want to move towards exploratory testing.  This is a description of how the testers I worked with (in a previous job) tested.

How we tested

We hired testers who were domain experts (the domain required strong technical knowledge).  Some testers had used competing products in their previous jobs.  Developers were strong programmers; some also had knowledge of the domain.
We had very strong product designers who created great functional specifications.

QA got involved as early as they could.  If there was even the slightest scent of a discussion (well before the actual release), QA would join in.  A lot of time was spent understanding the specs and asking for changes.

Developers would start writing code as soon as they could.  They would create detailed design docs/diagrams.  The feature team would discuss these designs.

QA created test plans in Excel.  Testcases were one-liners.  These were reviewed by the team and corrections were made to the doc.  It was difficult to review each other's testcases since each feature was quite complex.  However, there would be some good suggestions.

The minute a build was available, QA would start using it.  They didn't have to refer to the testcases; they could find defects just by using the software.  When they needed ideas, they would refer to the testcases.

After an initial frenzy of logging defects (say a month), developers started fixing defects.  QA would continue to verify fixes and re-test functionality.

We would have bug bashes with the entire team testing each feature.

Testers would review all defect fixes for their feature.  Their testing was based on what was fixed.

The senior manager would sometimes use his judgement about the quality of testing of a particular feature.  In some cases we would assign another tester to ‘help find more defects’.

The managers would view defect trends.  However, not many people read too much into them.

A month or two before the release, a few experienced team members formed a committee which would control what fixes went into the product.

For the good testers, most of the defects found late in the cycle were due to regression (as a result of defect fixes).  If a defect was found by someone other than the feature tester, good testers knew whether it was something they had tested or not.  In some cases they had to refer to their notes to trace when they had tested something.

Final Notes

  • Testcases – really didn't mean much.
  • Domain knowledge – this made a huge impact in this team.  Note that for many kinds of software there may not be much domain knowledge involved, e.g., a simple banking application or an instant messenger product.
  • Defect trends – nothing profound about the trends.  I can summarize them as: log many defects, at some random point you find fewer than before, and during the end game you force the graph down (by not allowing fixes).
  • Release criteria – we made sure all important defects were fixed.  What this means is that the criteria didn't really matter.  Important defects were discussed and triaged.  What made this work is that we controlled what was fixed a month or two prior to release.

The most important factor in testing was the knowledge and skill of the tester.  Testers spent more than 95% of their time working with the software or 'studying' information such as developer documents and defect fixes.

What we didn't do, and what could have bitten us

Looking back, if we had understood exploratory testing, we would (and should) have done things differently.

  • We focused mainly on functionality.  We did spend some time on performance.  However, we should have thought about other non-functional tests.
  • We should have spent much more time asking what-if questions.  Some of this was done implicitly.
  • We should have designed more complex tests.
  • We should have spent more time discussing tests (semi-formally).
  • We should have done more brainstorming.
  • We should have used more formal techniques such as scenarios.
  • We should have spent more time working with the developers to understand what was behind the functionality.

Despite what we didn't do, what worked for us was being able to hire super smart testers and allowing them to do their best.
