This past week at practice I didn't have an obvious immediate task revealed by the team's performance. That's ordinarily a good thing. I've got a stack of tasks of intermediate urgency I need to get to for them, but I don't have a good way of prioritizing what I present to the team: which task will deliver the biggest bang for the buck? I have my guesses, but I don't have a good way of figuring out what they don't need to be taught, other than watching what they ace the first time a subject is read in practice. I found myself wishing for quick diagnostic tests.
You may have noticed there's a pattern to my selections of what I've deemed 'must do' in this period between our first attempt to go to a tournament and the second. We'll take this for a ride in the final section of this post, so you can guess at what I'm doing and why. It's not really a big mystery, but I noticed I was doing something subconsciously that I wanted to do consciously, and this is a good way to have you look at it. The relevant pieces are the book I've been working through over the past few weeks, The Originals; the Philosophy I sheet I printed here; and the focus on mythology in questions. See if you can guess why they're related while I give you a little background on diagnostics.
The gold standard of diagnostic tests 25 years ago was the Carleton Diagnostics, developed by Eric Hilleman and his players at Carleton College. They wouldn't work for this situation: they're aimed at college players, they haven't been updated in maybe 20 years, and at 500 questions per test, they're simply too long to give a high school freshman facing a high school distribution.
My second idea was NAQT's Power Ups and the pre-tests freely available on their site. This would work, but I'm trying to do this on a zero budget, and it doesn't have enough variety of possible subjects. It may be an option in the future, but right now the team's not ready for it.
I want diagnostics because they answer the question “What do you not know?” efficiently and quickly. A diagnostic test of five or six questions can cover a subcategory and indicate knowledge or unfamiliarity. Reading packets covers all subjects but can't focus on one subject easily. Reading questions from a database offers that focus, but doesn't let coach and player identify the issue together, and the player largely needs to know there's a problem before they can pursue a solution through databases.
Here's the problem as I want to attack it:
I want a diagnostic test for whatever subjects I can pair with a study guide: a small series of questions. From those questions I want to determine two things: first, whether they know the answer, and second, how confident they are in their knowledge or ignorance of the answer.
Ideally, I'd like a parallel question to be asked alongside each question:
- Did you know this was correct from the question being asked?
- Did you guess this was the correct answer?
- Could you have identified the correct answer if the question were multiple choice?
- Have you heard of the answer before you were asked this question?
- Have you heard of any of the figures included in the question as clues to the answer?
If you can see their answers and get this level of granularity as to their confidence or doubt in their answer, you can better prioritize what you need to cover in practice.
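To make this concrete, here's a minimal sketch in Python of how those parallel responses could be turned into a numeric score for prioritization. The response labels and weights are my own illustrative assumptions, not part of any existing tool:

```python
# Hypothetical sketch: map a diagnostic response pair (was the answer
# correct, what confidence level did the player report) to a knowledge
# score. The labels and weights below are illustrative assumptions.

CONFIDENCE_WEIGHTS = {
    "knew_it": 1.0,            # knew the answer from the question alone
    "guessed_right": 0.6,      # guessed, happened to be right
    "knew_from_choices": 0.4,  # recognized the answer among the choices
    "heard_of_clues": 0.2,     # clues familiar, answer unknown
    "never_heard": 0.0,        # clues and answer both unfamiliar
}

def knowledge_score(correct: bool, confidence: str) -> float:
    """Score one diagnostic item: 0 means teach this, 1 means skip it."""
    weight = CONFIDENCE_WEIGHTS[confidence]
    # A wrong answer caps the credit: self-reported familiarity without
    # accuracy still marks a topic worth covering in practice.
    return weight if correct else min(weight, 0.2)

print(knowledge_score(True, "knew_it"))         # 1.0
print(knowledge_score(False, "guessed_right"))  # 0.2
```

The exact weights matter less than the ordering: a confident correct answer should outrank a lucky guess, which should outrank mere familiarity with the clues.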
So a diagnostic to examine alloys and metallurgy would read like this: simple questions, not meant to be read out loud, with only one to three clues each, and below each question a measure of the player's confidence in their answer. On the first page the question comes without multiple-choice answers; on the second page there are choices:
1. Copper and zinc combine in what alloy known for its use in musical instruments?
- I knew the answer
- I guessed the answer
- I didn't have a guess at this point

Second page:
1. Copper and zinc combine in what alloy known for its use in musical instruments?
- Bronze
- Brass
- Steel
- Invar
- Amalgam

- I knew the answer once the options were presented
- I've heard of these clues and answers but don't know the answer
- I haven't heard of these clues and answers
This could also be done in Google Forms with multiple pages, requiring the player to step through the process in order.
Once you have the data a pre-test provides, you can apply it to choosing which questions and study guides to give the team for practice. And by combining several pre-tests into a single diagnostic, you could evaluate a whole set of study guides or single-subject packets and place them in order for your practices or for the players' independent study.
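As a sketch of that ordering step, assuming per-item results have already been reduced to numeric scores (as in the earlier confidence-weighting idea), the aggregation might look like the following. The subject names and scores are made-up sample data:

```python
# Hypothetical sketch: aggregate per-question scores by subject, then
# rank subjects weakest-first to order practice coverage. Sample data,
# not real diagnostic results.
from collections import defaultdict
from statistics import mean

# (subject, score) pairs, one per diagnostic question answered.
results = [
    ("alloys", 0.2), ("alloys", 0.0), ("alloys", 0.6),
    ("mythology", 1.0), ("mythology", 0.6),
    ("philosophy", 0.4), ("philosophy", 0.2),
]

def practice_priority(results):
    """Sort subjects by mean score, lowest first: teach the weakest first."""
    by_subject = defaultdict(list)
    for subject, score in results:
        by_subject[subject].append(score)
    return sorted(by_subject, key=lambda s: mean(by_subject[s]))

print(practice_priority(results))  # ['alloys', 'philosophy', 'mythology']
```

A mean is the simplest choice here; with more data you might instead weight recent results more heavily, or break ties by how often the team actually sees the subject in packets.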
I don’t know if I want to proceed with this, but it’s a way to target subjects you’re worried about, and a way to figure out how to apply your resources.
Tuesday morning I went into the office and, finding I had an hour before my run of regression tests would complete, typed out the last of my notes from The Originals. I put the book into the pile to be given away at the next tournament. I pulled the top twelve or so notes into a document, formatted them into short paragraphs, and made the result pretty-printable on a single page. That was the handout for this week's practice. I'm not putting it out the way I did the Philosophy sheet because I'm not satisfied with the first print run. It will work fine for the team's first tournament, but it doesn't feel complete. I do believe that once it's polished a little better, I should submit it for You Gotta Know. But I will also be converting those notes into practice questions, which I will salt into future practices.
As I mentioned two weeks ago, I'm salting the packets with additional mythology questions because it seems to be the hot topic for the local television program. That continued this week, both on the show and in practice. I also did something in this salting process that touches on a minor point I need to include when giving a team practice before a tournament: your team needs to understand all the major answer directives that can preface a question in whatever format they're going to compete in. The directives most common in quiz bowl are:
- Two answers required.
- Full name required.
- Description acceptable.
- Pencil and paper ready. (for computation)
- Name's the same.
Televised quiz bowl has the occasional directive of "Look at your monitor." or "Listen to the following." but it definitely doesn't use "Description acceptable." It's a good idea to familiarize your team with whichever of these are common, or even uncommon, in your area. And unlike familiarizing your team with prompts of "more specific" or anti-prompts, these are ones you can control for by appropriately salting the set you're reading them. In the questions I read for this practice, I stuffed in a bonus where each part had two answers required, since I hadn't seen that yet in the practice rounds I had chosen to read.
It's a minor thing, but I subscribe to one of the many Murphy's laws that "any instruction that can be misunderstood, will be misunderstood." In the case of the above directives, I have proof of this. One question on the multiple people who were governors of Illinois named Adlai Stevenson was prefaced with "First and last name's the same." and resulted in a far too quick buzz from a rookie team of "Humbert Humbert."
A further small note when it comes to salting: there's plenty you can replace or remove from archived packets, and you don't need to read every question from a packet in practice. Stale current events, things that are too hard for the level of the tournaments you're facing, or things that appear to be meta humor for the participants at the time can all be safely removed, and you can paste in a question that covers something you specifically want to cover.
I'm now realizing I'm at the point where I'm creating luxury bespoke goods for the purposes of using in a single practice, and that's a weird place to be.
So back to the challenge to the reader. What do The Originals, the Philosophy I study sheet, and the emphasis on mythology have in common?
The answer I have in mind: all three of these efforts cut across the category boundaries put in place as part of circuit quiz bowl.
I specifically chose several of the mentions in the first column of the Philosophy sheet because they weren't what an established veteran of the circuit would name as philosophers, but they were considered so in their time, or were adjacent to philosophical issues.
I specifically chose to create a study sheet from The Originals because those literary characters described span lots of different disciplines. There are authors who were recast into characters, but there are also painters, politicians, scientists, and generals who were used by authors as source material.
I specifically chose mythology, which on the surface is a single category in quiz bowl, because it's one of the richest pools of answers for common name quiz bowl. Constellations, planets, chemical elements, literary and artistic works, businesses, and projects all take their names from mythology, and for every one of those they can use the mythology behind the name as a final clue of the question.
So all of these are easy ways during practice to get a variety of disciplines across to a team without much experience. It's a way to cover the territory of certain categories fast; because we're never NOT pressed for time. We've only had nine hours of practice, and will only have three more to go. It's also a way for me to test categories against the team: what do they like hearing about, what could I give them as extra study that will interest them, and what should be delayed in presentation because they won't take it seriously until they get burned.
Since I don't have a satisfactory set of diagnostic tests, I can't apply them to the problem at hand. But if I apply these cross-category efforts instead, I at least have some way of figuring out what will best help the team prepare for their first tournament. And that first tournament, and how they react when I ask them "So how was it, and how can we improve?", is going to give me better data to work with.