Our First Training Exercise, and What We Found

by Greg Forrester

This past week, we decided to carry out a training exercise with our submission readers. It’s the first time we’ve done anything like this, and the first time we’ve had guidelines for reading (and scoring) submissions, so we wanted to share the results with you. Hopefully you’ll be able to draw some of the same conclusions and notice the same trends we did, but if you notice anything that we didn’t spot, please let us know.

But first.

How Do We Score Submissions?

The way we score submissions, ultimately deciding whether they’re right for us to publish or not, is painfully absolute and impersonal. As a writer myself, something about the process sits a little uneasily with me, but having now worked with Bandit for over three years, I can say with some confidence that it’s a fair method, and one that accounts for a variety of tastes and reading preferences.

All submissions we receive are assigned randomly to two readers. Each reader (obviously) reads the submission, and then scores it between 1 and 10. Technically, I guess they could score it between 0 and 10, but I don’t think we’ve had any 0 scores. The average of these two scores is then taken, and all submissions (poetry and prose) have to score above a certain threshold to be accepted. Anything that falls below that threshold, however narrowly, is rejected.
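For anyone who likes to see the mechanics spelled out, here is a minimal sketch of that scoring logic in Python. The threshold and the example scores are invented purely for illustration; we don’t publish the real threshold.

    # A minimal sketch of the scoring logic described above.
    # PUBLISH_THRESHOLD is a hypothetical value, not our real one.
    PUBLISH_THRESHOLD = 6.5

    def decide(reader_scores):
        """Average the readers' scores (each 0-10) and compare to the threshold."""
        average = sum(reader_scores) / len(reader_scores)
        decision = "accept" if average > PUBLISH_THRESHOLD else "reject"
        return decision, average

    # Example: two readers score a submission 7 and 5 -> average 6.0 -> rejected.
    print(decide([7, 5]))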

Guidelines for how to read and score submissions have been minimal prior to this training exercise. Each reader, when they come on board as part of the team, has been given the simple instruction – score it as a reader. Imagine you’ve come across the story online, or as part of a paperback collection; imagine you’re in a reading group or university class and this is the story you have to read. How would you rate it?

The Training Exercise

Over the last few months, after we recruited a large number of new volunteers, we started noticing an emerging trend: the disparity in scores was increasing. In the past, serious differences in scores for the same piece were uncommon, but they were becoming more frequent, and we wanted to understand why. To that end, Alisdair (our Editor-in-Chief) and Tom (newly appointed Deputy Editor-in-Chief for Prose) created a short guide on reading and scoring submissions, covering aspects such as quality of writing style, originality, and mastery of grammar, so that readers could have some direction through the thought processes required for scoring. But we wanted to do more than that. We wanted an idea of how people scored, and the thought processes behind their scoring, and that’s where the training exercise came in.

The exercise itself was simple. All prose readers were given the same three stories to read. I won’t name the stories, because that would clearly be unfair, but they were stories which had previously been submitted to us. When first submitted, one had scored very highly, one had narrowly fallen below our publishing threshold, and the other had received middling scores (though we told our readers none of this). Readers were asked to pay particular attention to our new guide when scoring, and to send their scores directly to me (so as not to be influenced by the scores of others).

What We Found

We found some interesting similarities and patterns between the scores (see the images below for the full breakdown), as well as some trends we hadn’t anticipated. Alongside the images, which simply show the scores for each piece, some of our readers gave the reasons behind their scores. This in itself was very useful for us; it provided some context for scores which initially appeared to be outliers. But now to the fun bit: bullet points and statistics.

Story A: the story which had received middling scores
Story B: the story which we accepted for publication
Story C: the story which narrowly missed out on publication

  • Overall, taking an average of all the scores from our readers, Story B would still have been published, while Story A and Story C would have still missed out on publication.
  • Interestingly, the majority of readers ranked the stories in the same order of quality as was initially the case: Story B, then Story C, then Story A.
  • Where Story B was scored lower than its average score, the feedback we received from our readers was that this related to content rather than quality – it was a story with potentially divisive content, and this showed in some of our readers’ scores.

What We’ve Learned

For us, this task was about getting an indication of how our readers score submissions, and in that sense, we’ve emerged with very few concerns. Some readers scored pieces higher on average than we expected, and some lower, but none of it was too far outside the range we anticipated – after all, reading is subjective, and we have a varied team of readers for exactly that reason. It was reassuring to see that the ultimate decisions made on all three pieces didn’t change, as it was to see that readers could accurately identify and quantify the signs of quality writing.

Ultimately, what I’d like anyone reading this to take away from it is this: writing is subjective, and people’s opinions on stories will differ based on factors a writer couldn’t possibly anticipate, but signs of quality and narrative strength do shine through.


About the Writer

Greg (he/him) is Managing Director and a Co-Founder of Bandit Fiction. He has a BA and an MA in Creative Writing, from the University of Sunderland and Newcastle University respectively, and is currently working towards a PhD in Creative Writing. He writes mostly magical realism, and has been published by Fairlight Books. He offers writing development and editing services through his website – www.gregforrester.co.uk – as well as a few free quizzes, because who doesn’t like quizzes?
