Ike wrote: On the other hand, however, I think John's list suffers from blind spots. For example, science is routinely not even considered for tournaments on this list, presumably because John isn't a science player. In my opinion, MO 2010 had more consistent science than 2011: the latter had questions on Emil Post, metallicity, and the FT theorem that don't make for the best tossups. As another example, I personally thought the science at CO 2013 was much better than at CO 2011, even if it wasn't perfect. I personally liked ACF Fall 2008 over any of the other ACF Falls as well, but that's also because in recent years the science at ACF Fall has been laced with so many errors, ambiguous clues, and downright lies that they cause a player not to buzz.
I'll cop to the complete blind spot on science. For that aspect, I tried to incorporate the community's reactions to the science in coming up with my overall assessments.
I also think hard tournaments have different agendas, and John's list doesn't seem to account for that. MO and CO are two entirely different beasts. John, I read through your post about CO 2013. While I probably agree that Marivaux and Trevor are too hard to come up in one packet, a lot of the reason I didn't knock the tournament for that clumping of questions is that I played on a team that was able to get them: we didn't pick up Arden of Faversham or Marivaux, but we did get Trevor, Cavalcanti, Swamplandia!, Kramer, and many of the other hardest lit tossups in the set. To me, one of the great things about playing CO is that my teammates would always surprise me with some obscure thing they knew that I would have no chance of getting, while I did the same on different questions. I don't know how many of those questions you were able to get, but I think that if you had played with a team that did get them, you wouldn't have noticed them as much. That leads me to believe that calling one tournament better than another depends largely on your tournament experience and on what the tournament is trying to achieve.
In all tournament discussion posts, I try to step back from my experience when assessing what's good. I regularly praise good questions that I failed to convert, and I criticize overly difficult questions even when I happen to convert them. I'm not saying that I successfully transform myself into an objective critic, but please do give me some credit. You will notice that this list contains tournaments I personally performed terribly at (ACF Nationals 2010) and tournaments I didn't attend myself (VCU Open 2011), and it omits my personal greatest victories (CO 2012 & 2013, Nats 2011 & 2012). I included CO 2011 in the fourth slot even though I did not enjoy the music questions, and even though my loss in the finals felt anticlimactic because the two finals packets were not of equivalent difficulty. I recognize that both of those circumstances disproportionately affected my personal experience vis-a-vis an average player's general experience of the tournament, and I have consequently reduced how much I count them against the tournament as a whole.
I don't regard the fact that, with different teammates, my team would have converted certain tossups at CO 2013 as a strong argument against my criticism of those questions; and I assure you that I would be voicing the same criticisms even had I converted them. Likewise, my reaction to people complaining about some of the difficult questions at Nats 2011 has not been to say, "Well, I converted those at the end, and they helped me win the tournament, so they must be fine…". Also, as I tried to make clear in my post on CO 2013, the methodological points I made were constructive criticisms of a largely solid tournament that I regard as a success and a credit to Bollinger and the editing team. My Top 10 list is not a list of "the only 10 good tournaments of the past five years".
Lastly, I think these lists are predicated too much on what one person says; everyone else then follows suit rather than thinking for themselves. I think MAGNI needs to be ranked on this list. John, it seems to me that you object because of some of the painting criticism, which I think is mostly invalid. Take the tossup on Masaccio that Ted claimed had a worthless lead-in: I actually buzzed on that clue (or let it bleed into the next clue), because it described a Madonna with an ugly baby. In class we talked about that exact painting and how Masaccio was notoriously bad at drawing babies, so it was buzzable to me. Many of the other painting tossups that I recall were fine, and ultimately the painting was very good in my eyes.
Also, there are so many things in science that MAGNI got right. In the physics, all of the clues were things that people actually study, rarely wrong, and easy to parse. Speaking as a generalist for bio and chem, Auroni did a fantastic job. Very few tossups clobbered you with clues that are only useful to an expert, and I was actively engaged on almost every tossup. I can't think of any tournament in recent memory that has that quality. The computer science was really well done, as was the rest of the science as far as I can tell. If you look back, the complaints about MAGNI's science amounted to Will Butler erroneously claiming that no one studies quantum computers in class and Eric saying some clues in his topics were too easy. Even if you accept Eric's argument that the particle-in-a-box lead-in was too easy, this set's science got so many things right, and it's a shame that no one holds it up more.
I mean, as the writer of a considerable portion of MAGNI, I personally agree with you. Matt Jackson and I were both disappointed at what we perceived as the lukewarm reception to MAGNI, because we both thought going in that we had written a great set.
I was really proud of the distribution of my categories (as you surely know by now, sub-distribution is one of my big hang-ups). I think I avoided the quizbowl traps of either writing only my pet categories or writing only on things I don't know well in order to improve. For literature, I purposely explored works that might reward the different reasons people read (by including both works read primarily in an academic environment and works read primarily outside the classroom) and the different kinds of hard facts that are valuable to us (supporting characters and plot details from major works, minor works of major authors, culturally or historically significant less-read works and authors, literary criticism, etc.). And I was very conscientious with my clue selection for literature, reading every work carefully to find good clues. For visual art, I sub-distributed by genre as well as by geography and chronology, to make sure portraiture, landscape, still life, etc. were all represented. I know Matt Jackson put a lot of care into balancing the "real"ness and accessibility of the social science, especially the linguistics.
I don't know if other people feel the way you do about it, though. [EDIT: Apparently Matt Bollinger at least partially does! Yay!] Perhaps Matt Jackson and I took the criticism too much to heart. This line on the QB Wiki page for MAGNI: "Though the set was largely well-received among the 104 teams that played it at ten sites, major critiques of the set included a reticence with pronouns that confused players, frequent grammatical errors, and hard bonus parts that were systematically very hard" seems to be a rueful, self-critical acknowledgment of community opinion by Matt Jackson. I don't think either of us feels comfortable belligerently saying, "Why didn't you love our tournament? You should really value all these secondary concerns that we put a lot of thought into!", and I don't feel the final assessment of that tournament is up to me.
But I think I learned a lot from it. (For example, since Andrew Hart's denunciation of my pronoun use, I have been very careful about that aspect in all subsequent editing work; I kept Ted's criticisms of the painting in mind that year when editing Fine Arts for ACF Regionals 2012; etc.) My portions of ACF Regionals 2013 were my attempt to retain everything I personally valued most about MAGNI while responding to the criticisms of my work on it (variability of hard parts, insufficiently accessible music, etc.). Because this was very difficult to do in a packet-submission tournament, I am very proud of that work. Cane Ridge Revival will be my attempt to apply these values in a high-difficulty set, and across the whole set (which I hopefully can control somewhat as head editor).
Last thought: Most of us are incapable of fully judging a set for its full value, and ultimately, what a set offers to you determines what metrics you use to evaluate it. I personally don't think there is value in ranking one tournament over another, except as an intellectual exercise for social discussion. I will always treasure what I learned from Sun N Funs, and the notes I took on various things I learned at the various Andrew Yaphe-edited ICTs, more than whatever is objectively good about any of the tournaments on the list.
I don't know what you mean by "judging a set for its full value". But yes, I agree that many not-overall-great sets offer valuable contributions at the level of individual questions or categories. I tried to recognize that in my Honorable Mentions column. I should emphasize that I posted my Top 10 in response to Matt Jackson's call for just such a post and in the hopes of inviting further discussion, not as a proclamation of any sort. And it's precisely re-evaluations of undervalued parts of previously dismissed tournaments, like the one in your post, that make this kind of discussion thread a valuable exercise.