In my day job as a software testing engineer, I'm trying to complete a stretch goal for my job performance review. We've reached the end of the release cycle as December approaches, and thanks to proper planning my tasks have become mostly easy. One of my plans for 2024 was to write some "best practices" documents for training testers and managers of test operations. The one I've targeted first is a best-practices document for our exploratory testing, or e-test, events. In writing this document up, I'm finding some useful parallels with quiz bowl practices, which I'm contemplating including in the book.
- What does an e-test look like?
An e-test is a roughly two-hour testing session involving testers, developers, documentation writers, and program team leads. It is intended as a test of a nearly completed project or feature, a final check before release. Each participant has the latest version of the software on their desktop machine, and everyone is connected in a videoconference. The e-test organizer has prepared a spreadsheet containing information about the feature to be tested, the aspects of it to be tested, and what is to be avoided (either because it is not yet implemented or because it is specifically forbidden). As the session goes on, the spreadsheet's second page fills with every issue anyone in the session finds.
When a bug is found in the software, the person who found it shares their screen with the group, and the team discusses the bug. The developers consider whether the bug is familiar from the development process, and the tester who first tested the feature relates whether it resembles something they saw previously and whether it deviates from the feature's specifications. After the group triages the bug, it is categorized and prioritized so a defect can be filed after the session. Once the session ends, the organizer polls the participants to gauge their confidence in the feature, and files the defects.
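To make the setup concrete, here's a rough sketch of that spreadsheet's structure rendered as data. Everything here is hypothetical: the field names and severity buckets are my own invention for illustration, not our actual template.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    """Triage buckets the group assigns during the discussion."""
    BLOCKER = 1
    MAJOR = 2
    MINOR = 3
    COSMETIC = 4


@dataclass
class Issue:
    """One row on the spreadsheet's second page."""
    found_by: str
    steps_to_reproduce: str
    severity: Severity | None = None   # assigned when the group triages it
    filed_as_defect: bool = False      # flipped after the session


@dataclass
class ETestCharter:
    """The first page: what to test and what to stay away from."""
    feature: str
    aspects_to_test: list[str]
    out_of_scope: list[str]            # not yet implemented, or forbidden
    issues: list[Issue] = field(default_factory=list)
```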
- What purpose does it serve?
The goal of an e-test is to get as many eyes as possible on a feature of the software, with differing methods of testing and levels of experience with the product. The developer and the tester who worked on the project are there to see whether the less experienced participants need guidance to use the feature. The testers with less experience with this specific feature can nonetheless test its user interface, its interoperability and integration with other features, or how the feature translates to other operating systems. And each tester and developer can apply their own favorite tricks to confirm functionality (e.g., a particularly involved project archive, or a sequence of error conditions known to have broken previous features in their development).
There's an idea in interactive testing of taking a tour of the software, and there are different types of tours: a UI tour, where you use only the user interface to navigate; a landmark tour, where you mark only two or three important steps and use whatever method you can to get from one to the next; or a physics tour, where you check that each step of the problem you're solving calculates correctly and matches your hand calculation. I even invented a couple for internal use, such as the International Airport tour: change your computer's language to one you can't read, and force yourself to navigate the software by the pictograms provided and the position of UI elements onscreen. (This is something an AI can't do well, because it will perform the comparatively easy task of translation for itself.) A tour is a good testing methodology for making your way through an e-test because it forces you to consider other aspects of the testing that must be done to make the software useful to people who aren't you.
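The physics tour is mechanical enough to sketch in code. Here's a minimal version, assuming hypothetical step names and invented numbers; in practice the expected values come from your own hand calculation.

```python
import math

# Hand-calculated expected values for each step of the worked problem.
# These numbers are invented for illustration.
hand_calculation = {
    "beam_load_kn": 12.5,
    "deflection_mm": 3.72,
}

def check_step(step: str, program_value: float, rel_tol: float = 1e-3) -> None:
    """Compare one step of the program's output to the hand calculation."""
    expected = hand_calculation[step]
    if math.isclose(program_value, expected, rel_tol=rel_tol):
        print(f"ok: {step}")
    else:
        print(f"MISMATCH at {step}: program={program_value}, hand={expected}")

# During the tour, feed in what the software actually displays:
check_step("beam_load_kn", 12.5)
check_step("deflection_mm", 3.80)   # flags a mismatch worth triaging
```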
- What lessons do you need to be taught upon entering an e-test?
You need to have an introduction to whatever feature you will be testing during the session: you need to know where to find it and what setup is required to reach it. It's actually important that you not know everything about the feature, because part of the exploration requires you to figure out how to operate it from your past experience with the product in general. If your past experience doesn't lead you to proper usage, there's a problem with how the feature is designed for the user.
But mostly you need to approach it with your mental hardware, your testing methodologies, intact, and your mental software, your experience with the program you will be testing, turned off.
- What lessons do you need to know when running it?
Running an e-test requires a lot of equipment setup, and you need to know how to set up the program yourself so that you can teach the participants to set it up on their machines. You also need to learn the art of providing the information the participants will need to operate the feature, but not so much information that they all follow the same path through it.
At the end of the document, I gave readers four guidelines for how they should approach the e-test session.
Explore! The name "e-test" stands for exploratory testing: you are asking the question "if I do this, what does the program do?" before you ask "is this the correct behavior?" You want to go over the same territory multiple times, taking different paths through that territory. Don't be afraid to retrace your steps, try a different tack, or insert a necessary task you didn't know you needed in order to complete the overall one.
Don't worry about what you don't know; ask questions! You are here because you aren't familiar with the feature being tested, so you shouldn't expect to know everything that's going on with it. An e-test is often the first time someone unfamiliar with the feature tests it. That is key, because every user of the feature will be at that level of experience when they use it in the field. To get beyond that level in the session, you have to ask people how they did it. Otherwise you're going to get stopped before you start.
Observe! Watch when other participants are asking questions. They may highlight something you can use in future testing.
Remember and Record! When you run into an issue, the steps leading to the issue are important for diagnosing the problem. Many of the features we will test are in products that record their own journals. Know where those are stored so you can go back to them. If you are doing a series of steps, write down your actions so you can see where something happened and what preceded it. Documenting the discoveries of the session, even when everything goes right, is key to fixing the problems and being confident in the product.
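Here's a minimal sketch of what "write down your actions" can look like when you automate it, assuming nothing about our products' own journals (the logged steps below are hypothetical):

```python
import datetime

def log_step(action: str, logfile: str = "etest_session.log") -> None:
    """Append a timestamped action so the trail survives a crash."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(f"{stamp}  {action}\n")

log_step("Opened sample project archive")
log_step("Toggled units to metric; dialog failed to refresh")
```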
The two quiz bowl activities which model an e-test most closely are the process of playtesting your questions and the process of preparing a practice for multiple levels of experience. These two scenarios influenced my friendly advice to the e-test participants.
The playtesting of sets before they are used is commonplace online, building on a tradition of in-person testing. It does require assembling a base of participants who won't expose the questions to those who will compete on them. But it's a strong method of getting things polished, and it shares a lot of pieces with an e-test. When we ran guerrilla tournaments at Michigan on Memorial Day weekend, we spent the rest of the weekend playtesting the HSNCT, which was scheduled for the following weekend. You can see analogous structures in the e-test corresponding to the best use of a playtest.
- For best results, the experience level of playtest participants must be varied. This is really the key, because you want to polish the questions to benefit not only the most experienced players but also the novices. We often forget that the average player who encounters a question is a novice in at least some of the categories they will face.
- Players must be free with their feedback, and must document it for the people who will change the feature (developers for software, editors for questions).
- Since playtesting can now be done over videoconference, there's an opportunity to bring in larger circles of playtesters; the larger the circle, the more types of analysis are applied to each question. Some playtesters may focus on possible alternate answers, some on the need for pronunciation guides, or on whether the set is balanced across categories within packets. ("Did we put all the mentions of Argentina in this one round?")
As your team builds, you are going to have practices where the players who have been around the team for years are playing with the new recruits.
If you can teach the new players to explore the environment of questions, they will always have an answer to what they should do next, because they will be inclined to answer that question themselves before asking you. That doesn't mean they won't need your guidance, but it means their default position in life is to learn something.
If you can teach new players to observe, and to remember and record their results, they will always be improving and will understand how to improve themselves.
If you can teach your new players to ask questions about where they went wrong on an answer, and teach your experienced players to answer those questions, you'll have achieved a team dynamic with self-sufficiency built into it.