Here's a basic sketch of what we did:
Each writer committed to a strict schedule for writing 20 tossups. Those who knew that they would have scheduling conflicts negotiated alternate schedules or writing plans. Each writer declared at the outset how their portion of the 20/20 would be broken down.
This schedule had four deadlines for submitting four batches of rough drafts of five tossups each. Each mentor was also on a deadline to return feedback on those questions within a certain number of days, and the revisions based on that feedback were due a certain number of days after that.
Long before each drafting deadline, there was a separate deadline for proposing answers. Each mentor was in charge of keeping track of the subdistribution within their categories, and if too few questions were being produced to create a balanced subdistribution, they could tell each writer "we're done writing questions in sub-category X; we need questions in sub-category Y."
Because we wanted the padawans to playtest each other's questions, we did not share with them the answers that others were writing on. If they proposed a repeat, we simply told them that there was an overlap and asked them to write on something else.
For me, literature was so over-subscribed with writers that I never needed to worry about subdistributional problems; the writers were producing many more questions than could be used in the tournament. For my other categories, I committed myself or other writers to producing a certain number of editors' questions to fill out the subdistributions; these answers were chosen after all of the padawans had chosen theirs.
We did not use a shared Google Doc system, again because we wanted to be able to playtest each other's questions. This caused some minor logistical headaches when it came time to packetize, as we had trouble keeping track of who had which questions. But frankly, I preferred this system: the playtesting was, at least in my categories, extremely beneficial, and I would gladly pay the cost of a minor scramble in order to receive more practical feedback.
However, there were some major problems that I personally experienced in working on this project.
The biggest problem is that writing detailed feedback, in which you patiently explain every change that needs to be made to a question and why, is a massive time-sink. Now, this is obviously the very nature of this project, and the idea is that the time-sink pays off when your detailed feedback produces better questions down the line and teaches the padawans to be better writers. In many cases, the system worked precisely as it was supposed to: my writers demonstrably improved over time and/or consistently responded well to feedback. In other cases, though, this massive time-sink had no rewards.
In a couple of cases, this was because some writers majorly screwed me over by never sending me rewrites of their questions, dropping entirely out of contact in the final weeks. This meant that I poured hours of time down the drain giving feedback that was never acted upon, and that what I was left with, a couple of weeks before the tournament, was a pile of unusably poor questions that I then had to rewrite nearly from scratch, acting on my own feedback, as it were. I'm not going to name names publicly, but I would not support these writers working on future projects of this kind. Unfortunately, there is no way to predict in advance who these writers might be.
In other cases, the problem was that although certain writers nominally acted on feedback, they were simply incapable of writing the kinds of clues I was asking them to write. Matt Weiner has often said that you can't teach people how to be good quizbowl writers. I don't think this is quite true. I'd phrase it like this: being a good quizbowl writer involves a large set of skills, some of which can be taught and some of which cannot. The success stories in this process were those who lacked only the skills that can be taught; the problem cases were those who lacked the skills that cannot.
To give a broad example, so as not to single anyone out: I had a much easier time with the music questions than I did with the literature, philosophy, or social science questions. This surprised me, as we traditionally treat music as the hardest of these categories to write well. The reason was that my music writers all happened to be people capable of understanding the material they were researching. Most of them wrote clues that demonstrated a good command of music theory, history, etc. My job with them was primarily to teach them how to harness that knowledge by phrasing clues well and creating structurally sound tossups. Some of my literature, philosophy, and social science writers also understood what they were writing about, and underwent a similar process. But many of them either could not understand the theoretical concepts or literary scenes they were trying to describe, or could not translate their understanding into clear and detailed descriptions. And there is only so much one can do in these situations. I cannot, in the span of a few e-mails, teach someone how to read a novel or how to process a philosophy article, and my attempts to do so, or to redirect writers to sources that I thought made things easier, were not always successful.
The only solution I can offer is that future leaders of these projects either need to screen their potential writers ahead of time, to make sure they're working with a team that is prepared to write at this level, or need to prepare themselves for the fact that they will spend time teaching lessons that some of their mentees aren't capable of learning, and that in those cases they will end up rewriting the questions themselves.
I don't want to sound glum about this. I would say that about half of my writers visibly improved over the course of this project, and while they are not yet ready to subject-edit a tournament, they are certainly people who should keep writing for housewrites under the guidance of experienced editors. I haven't yet received feedback on any of my categories in this set from the people who played it, so I shouldn't make grand proclamations about the strength of the final product. But on the whole, I felt that my team produced an above-average-quality set of questions, which is no mean feat for writers who previously had no serious writing experience. So, I hope that projects like this continue in the future, and that writers like these have an opportunity to do more of this kind of good work.
I think some of the logistics might be improved if this sort of project were attempted with fewer writers, and if there weren't so many categories in which editors depended on receiving questions from freelancers to fill out the distribution.
Yale University '12
King's College London '13
University of Chicago '19
“I am not absentminded. It is the presence of mind that makes me unaware of everything else.” - G.K. Chesterton