Posts in Exploratory Testing
Discipline in Exploratory Testing
When you say you do exploratory testing, some people immediately think of ad-hoc testing. I suppose that's because there is less emphasis on obvious structure, and at the end there is little tangible evidence of the testing performed.

But in my view, there's a lot more to exploratory testing than wandering aimlessly through an application looking for bugs. As well as being mentally challenging, it requires a lot of self-discipline.

Here's why you need self-discipline:

1) You need self-discipline to test the parts that are not as interesting to you, or not as fun.  It's easy to overlook and 'forget' them when other parts are more appealing.

2) You need self-discipline to give each bug the time it deserves before racing off to find new ones: time to analyze, examine, and understand it. Only then can you go and look for new bugs.

3) You need self-discipline to write up bugs when they are found, instead of leaving them until later or when you feel like it.

In my view, in exploratory testing, as in many other approaches to testing, it's the mission and the stakeholder that count, and their needs must come first.

What's different is that instead of relying on documents and reports, you need discipline to make sure you meet those goals.
Questions to help clarify test status
Stealing from a post I did on SearchSoftwareQuality.com, here are some questions I use to clarify testing status when doing debriefs (a small sketch of how the answers might be captured follows the list):

  • What was your mission for this session?

  • What did you test and what did you find?

  • What did you not test (and why)?

  • How does your testing affect the remaining testing for the project? Do we need to add new charters or re-prioritize the remaining work?

  • Is there anything you could have had that would have made your testing go faster or might have made your job easier?

  • How do you feel about your testing?
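If you keep your debrief notes electronically, these questions map naturally onto a simple per-session record. Here's a minimal sketch in Python of one way that could look; the structure and field names are my own illustration, not part of any particular session-based test management tool:

    # Minimal sketch of a per-session debrief record. Field names are
    # hypothetical; adapt them to whatever your team actually tracks.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SessionDebrief:
        mission: str                                          # mission for this session
        tested: List[str] = field(default_factory=list)       # what was tested
        findings: List[str] = field(default_factory=list)     # what was found
        not_tested: List[str] = field(default_factory=list)   # what wasn't tested, and why
        follow_up: List[str] = field(default_factory=list)    # new charters or re-prioritized work
        would_have_helped: List[str] = field(default_factory=list)  # what would have made it faster or easier
        how_it_felt: str = ""                                 # the tester's own read on the testing

    # Example of filling one in during a debrief (the details are invented):
    debrief = SessionDebrief(
        mission="Check report X for rounding errors",
        tested=["Report X with daily and weekly date ranges"],
        findings=["Totals drift when half-cent values are rounded"],
        not_tested=["Monthly ranges - ran out of session time"],
        follow_up=["New charter: report X with monthly ranges"],
        would_have_helped=["A data set with known-correct totals"],
        how_it_felt="Confident about daily/weekly; monthly still open",
    )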

Understand how you're going to approach your testing
I do a lot of exploratory testing. So when I'm doing test planning, you'd think the "Approach" section of my test plan would be the shortest, right? Wrong...

I see a lot of value in thinking about, describing, and writing out how I'll approach my testing. So much so, that when I'm getting ready to execute what I think will be a particularly challenging charter, I'll take a few minutes to outline how I'm going to approach my testing. For some tasks, I might even write a short procedure so I don't mess something up if there's a factor or variable I want to control for while I'm testing.

My rule is that I should always be able to articulate how I'm approaching the problem. If I can't do that, I've got no business getting started with my testing; it means I've got some additional research to do before I'm ready. If I can outline what I'm going to do, then I'm ready.
Clarifying your charter
When I'm teaching exploratory testing, I find that one of the most difficult things to learn can be chartering. If you're not practiced at moving from the abstract to the specific, it can be very difficult. It becomes even more difficult to figure out what will actually fit in your session time box.

Here are some tips for clarifying the purpose of your testing:

  • Don't feel like you need to develop all your charters in one sitting or all of them upfront. Be comfortable with charters emerging slowly over time. While you'll need some charters defined upfront so you can get started, often you'll find that your charter base will fill in as you go.

  • Take the time to state the mission (or purpose) of the charter as clearly as possible. Don't say "Test the portal for reporting accuracy" when you can instead say "Test reports X, Y, and Z for errors related to start and end time selection criteria, summing/totaling, and rounding." In my experience, the more specific you are, the better your testing will be. If you need to take an extra 30 to 120 seconds to get more specific, take them. (See the sketch after this list.)

  • Similar to the last tip, if you can't tell one mission from another, you've not done a good enough job defining them. If you have charters to "Test feature X," "Stress test feature X," and "Performance test feature X," can you tell me what the differences are between the tests? Couldn't some stress tests be a subset of simple feature testing? Couldn't some performance tests be a subset of stress testing? If you can't compare two missions side by side and have clear and distinct test cases come to mind, then you might benefit from spending a little more time refining your missions.

  • Finally, while you're testing, go back and make sure your mission is still correct. There are two goals here. First, you want to make sure you're on mission. If you need to test for rounding errors in reporting, but you find you just can't stop testing filters and sorting, then create a charter for testing filters and sorting and execute that charter instead. You can always go back to the charter for testing rounding errors. Second, if you find as you test that you can better clarify your original mission, add that clarity as you go. It will help you when you go back to write new charters. The clearer you can make it, the easier it will be to recall what you actually tested three days later, when you're looking back and trying to remember what work you still have in front of you.
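To make the vague-versus-specific point concrete, here's a rough sketch of how a charter could be written down as a small record, so that two missions can be compared side by side. The format and field names are my own illustration, not a prescribed one:

    # Rough sketch of a charter record; the fields are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Charter:
        mission: str           # one clear, specific sentence
        in_scope: List[str]    # features, reports, or risks covered
        timebox_minutes: int   # what should fit in a single session
        notes: List[str] = field(default_factory=list)  # clarifications added while testing

    # Too vague: hard to say what "done" means, or how this differs from
    # any other charter against the same portal.
    vague = Charter(
        mission="Test the portal for reporting accuracy",
        in_scope=["reporting"],
        timebox_minutes=90,
    )

    # Specific: distinct from neighboring charters, and easier to recall
    # three days later what was and wasn't covered.
    specific = Charter(
        mission="Test reports X, Y, and Z for errors related to start and "
                "end time selection criteria, summing/totaling, and rounding",
        in_scope=["report X", "report Y", "report Z"],
        timebox_minutes=90,
    )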

Offsite Exploratory Testing Guidelines for Bug Reporting
Because it's such a good post, I'm stealing Anne-Marie Charrett's post from yesterday on "Do your bugs only glow when its dark?" In the post she outlines some guidelines she uses to report bugs when doing offsite exploratory testing:

  • Include in your estimate the time it takes you to write up defects

  • Get agreement on the level of detail for defect reports

  • Write up bugs as you find them; don't let them queue up

  • Encourage customers to use a defect tracking tool


Read her full post for the details!