For those unfamiliar with the workings of the contest, each book is scored by five judges who rate it from 1 to 9, with 9 being a perfect score and 1 meaning (not that RWA actually says this in its judging guidelines) that the novel was written by a moron, edited by a chimpanzee, and printed by an inebriated individual on a broken press that left out every third page. Books that score in the top ten percent of their categories--up to a maximum of eight books in each category--move on to the final round of judging.
Yesterday I received my RITA scores in the mail. Here's how the five judges scored my book (bear with me--this will get really interesting in a minute):
As you can see, my book made a good showing. Three judges thought it was pretty darn amazing, one thought it was fairly amazing, and one thought it was better than average. Because there were eight finalists in my category (Inspirational Romance), we know there were at least 80 books entered. According to my score sheet, the distribution of total scores for the books entered in my category was as follows:
Lower half (about 40 entrants): total score of 35.7 or below.
Second quarter (about 20 entrants): 35.8 - 38.6.
Top quarter (about 12 entrants plus the 8 who finaled): 38.7 and higher.
My final score was 41.3. RWA doesn't reveal what I'd have needed to displace the lowest-scoring finalist, but if the judge who awarded my book 6.5 points had instead given it a 1 or a 2 (or possibly even a 3--I stink at probability and statistics), there's an excellent chance I would now be one of the happy eight.
See? I told you this was going to get interesting.
Here's what the RITA score sheet says:
RWA applies a standard deviation method for determining finalists, which means that if the lowest score for this work was found to be outside the range limits, the lowest score was replaced by the average score to determine the final score for this manuscript.
Let's imagine that my 6.5 was a 1 or a 2. Because that score would be found outside the range limits, it would be tossed out and the other four scores would be averaged. The low score would then be replaced by a tidy 8.7, bringing my total score up to 43.5--which, again, might very well have tossed me into the finalists' circle.
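If you'd like to check my arithmetic, here's a little sketch of how a rule like that might work. Fair warning: RWA doesn't publish its actual "range limits," so the cutoff here (a score more than 1.7 standard deviations below the mean) is purely my guess, chosen so the example matches what actually happened to my scores. The four high scores are guesses too--all we know is that they sum to 34.8.

```python
import statistics

# My guess at the cutoff: a low score is "outside the range limits"
# when it falls more than K sample standard deviations below the mean.
# RWA doesn't publish the real rule; K = 1.7 is chosen so this sketch
# reproduces the actual outcome (my 6.5 was NOT thrown out).
K = 1.7

def final_score(scores):
    """Return the adjusted total for five judges' scores,
    replacing an outlying low score with the average of the rest."""
    scores = sorted(scores)
    low, rest = scores[0], scores[1:]
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    if low < mean - K * sd:
        # Outlier: swap the low score for the average of the other four.
        low = statistics.mean(rest)
    return round(low + sum(rest), 1)

# My actual book (individual high scores are hypothetical; they sum to 34.8):
print(final_score([9, 9, 9, 7.8, 6.5]))  # 6.5 survives -> 41.3
# The same book, had the fifth judge given it a 1:
print(final_score([9, 9, 9, 7.8, 1]))    # 1 replaced by 8.7 -> 43.5
```

Notice the punch line: the book with the spiteful 1 ends up with the *higher* total, because the 1 gets replaced by the 8.7 average while the honest 6.5 stays put.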
I've been thinking about this since exchanging e-mails yesterday with an author friend who mentioned that one RITA judge had given her book a 1. Because my friend's other scores were much higher, the 1 was thrown out and replaced, as described above.
I'd like to believe that giving the lowest possible score to a book by an author of my friend's calibre was some kind of mistake, but my friend thinks that judge hoped to scuttle her chances of becoming a RITA finalist. If that's true, the judge wasn't terribly clever. Rather than dragging down my friend's total score, the nasty judge boosted it.
I know a lot of very good writers who have found ones and twos on their books' score sheets. It has happened to me, as well, although my first book didn't receive high enough scores from the other four judges to have finaled even after the "nasty" score was dropped. But in a scoring system of 1-9, shouldn't a 1 be reserved for those rare cases when a book is so amazingly awful that you wonder who in the world made the unfortunate decision to publish it? Sure, judging books is a subjective business, but there's a difference between a boring or disappointing book and one that shocks us because it is so poorly written.
Why are so many RITA entrants receiving these low scores? Could some RITA judges be attempting to covertly punish authors they hate? I'd like to think we're all behaving like professionals, but the judges remain anonymous, so who knows?