We’ve talked before about how the Flesch–Kincaid readability tests provide unique, data-based feedback for writers. These tests are precise, but they aren’t terribly versatile. If we want to know more about our writing, and we’re tired of the same vague accolades we get from Reddit alpha-readers, we might want to stash a few more tools in our belts.
The readability tools we’ll discuss today do things Flesch–Kincaid tests don’t, rounding out what readability data can tell you. Use them in conjunction to get a broader assessment of your writing.
Cloze tests

Cloze tests are unique among readability metrics because they involve third-party reader feedback. A sample of roughly 250 words is taken from the text in question, and words are strategically deleted and replaced with blank spaces. Readers are asked to fill in the blanks. Traditionally, every fifth word is deleted, and a score is calculated by dividing the number of correct answers by the number of blanks in the sample. An ideal score is 60% or above, 40–59% is deemed potentially difficult, and 39% or below should – according to the original formula – be rewritten. The principle behind this readability test is the Gestalt theory of closure: the idea that the brain tries to fill in missing pieces. The brain is pretty good at this, as it turns out. If a word ‘belongs’ in a text, the average human brain should be able to recognize it as belonging there.
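If you’d rather not count blanks by hand, the traditional procedure is easy to automate. Here is a minimal Python sketch of my own (the function names and exact-match scoring are illustrative assumptions, not any official tool): it blanks every fifth word, keeps an answer key, and scores responses as the percentage of blanks filled with the exact deleted word.

```python
import re

def make_cloze(text, nth=5):
    """Blank out every nth word; return the gapped text and an answer key
    mapping word position to the (lowercased, punctuation-stripped) answer."""
    words = text.split()
    answers = {}
    for i in range(nth - 1, len(words), nth):
        answers[i] = re.sub(r"\W", "", words[i]).lower()
        words[i] = "_____"
    return " ".join(words), answers

def score_cloze(answers, responses):
    """Percentage of blanks filled with the exact deleted word."""
    correct = sum(1 for i, word in answers.items()
                  if responses.get(i, "").strip().lower() == word)
    return 100 * correct / len(answers)
```

A score of 60% or above from `score_cloze` corresponds to the ‘ideal’ band described above; 40–59% is the potentially-difficult band.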
Use cloze tests in their traditional form to gauge the basic readability of your work. There is a certain comfort in reading something that has a familiar cadence to it, like it’s written the way it’s ‘supposed to be.’ It’s like listening to a song for the first time, humming along, and soon being able to anticipate the next notes. The plot and characters in a story probably shouldn’t be predictable, but to some extent, the language should. Readers will feel at home, like you are speaking their language.
However, you can tweak cloze tests to serve your purposes. Perhaps eliminate prepositions and pronouns as a way of testing whether your readers’ sense of direction aligns with yours. Or remove adjectives, nouns, and verbs (not all at once!) to see whether your descriptions are perhaps too predictable.
In these variations, you’ll be looking less at the percentage of correct answers and more at which specific blanks readers were able to fill in. To paint a powerful scene, strong verbs are important. So you might aim for a familiar rhythm through most of the scene, but contrive it so that test readers cannot guess the key verbs. In fact, test readers’ answers might generate the verbs you need, especially if a suggestion arises that fits better than what you had intended. I’d advise caution if you use cloze tests in this way: resist the temptation to reach for the thesaurus to replace words that are deemed too commonplace. Only use a thesaurus to help you remember the word you were trying to think of anyway. Whether you’re going for comfortable or unpredictable, cloze tests can help you determine if you’ve hit the mark.
There are online sources for implementing a traditional cloze test, but a better option for genre writers might be to recruit friends, family members, or beta readers who would be among your target readers. Online tests are inherently generic and could work well for books intended to be accessible to anyone. For many books, though, choosing your own readers will ensure more relevant feedback.
The Fry readability formula
If you’re a visual person, you’re going to love this one. The collection of data is similar to other readability tests: randomly select three 100-word passages. Count the syllables in each 100-word passage (total, not per word) and the number of sentences in each sample passage. Add up the three totals and divide each sum by three to get the average number of syllables per 100 words and the average number of sentences per 100 words. Plot the results on a Fry readability graph, readily accessible online through a quick search, for an idea of how your writing stacks up to the norm.
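The counting is the tedious part, and it too can be scripted. The sketch below is a rough Python approximation of my own: the vowel-group syllable counter and the punctuation-based sentence splitter are heuristics, not part of Fry’s published method, so treat the numbers as estimates to plot rather than gospel.

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups, subtracting one common silent 'e'."""
    word = word.lower()
    groups = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and groups > 1 and not word.endswith(("le", "ee")):
        groups -= 1
    return max(groups, 1)

def fry_point(passages):
    """Average syllables and sentences per 100-word passage,
    ready to plot as (x, y) on a Fry readability graph."""
    syl_totals, sent_totals = [], []
    for passage in passages:
        words = re.findall(r"[A-Za-z']+", passage)[:100]  # Fry uses exactly 100 words
        syl_totals.append(sum(count_syllables(w) for w in words))
        sent_totals.append(len(re.findall(r"[.!?]+", passage)))
    n = len(passages)
    return sum(syl_totals) / n, sum(sent_totals) / n
```

Feed `fry_point` your three sample passages (or many more, per the variation below) and plot the pair it returns on the graph.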
As with other tools, this process will give you a grade-level assessment of readability. As with the cloze test, however, a little creative tweaking can give you more advanced feedback. Choosing passages from certain types of scenes would be one way to do this. When you have a high-action scene, you want your words and sentences to be, on average, crisp and quick. The efficient, clipped pace will aid the intensity of the scene. Plotting an action scene – or, conversely, a leisurely, setting-driven scene – on a Fry graph will yield more precise feedback.
Another option is to find an online syllable and sentence calculator (to save yourself a lot of counting) and put far more than three sample passages on the graph. The results will give you an immediate visual on the linguistic coherence of your project. If your results are all over the graph, you’ll want to iron out some of the inconsistencies. If there’s very little variety, try the above scene-specific version of the test and try to tighten the language in fast-paced scenes and embellish it in the slower scenes. Graphs are especially useful across a series: the books are often written at wide intervals but read close together, so consistency is preferable. Gradually increasing complexity over the course of a series can be effective as well, as in the Harry Potter series, where readers grow in maturity in tandem with the books’ release.
Other readability tests

There are a number of other useful readability tests for authors, all similar in nature, and some quite easy to implement on your own or online.
A SMOG grade can be found without even closing your word processor. Copy-paste thirty sentences taken from throughout the manuscript into a new document. Count every word with three or more syllables. Take the square root of the total (okay, you might need to open your calculator) and round it to the nearest whole number. Add three. The result is the grade level corresponding to your text.
The Gunning fog index works the same way, but eliminates proper nouns and compound words from its calculation, so it’s probably easier to use an online version (unless you love counting your own words).
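For the curious, the fog index itself is 0.4 × (average words per sentence + percentage of ‘complex’ words). The sketch below is a loose approximation: it skips capitalized words as a crude stand-in for the formula’s proper-noun exclusion and ignores the compound-word and common-suffix exclusions entirely, which is exactly why the online versions are handier.

```python
import re

def count_syllables(word):
    """Crude heuristic: one syllable per vowel group."""
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def fog_index(text):
    """Approximate Gunning fog: 0.4 * (words/sentence + 100 * complex/words).
    Capitalized words are skipped as a rough proper-noun exclusion."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words
                     if count_syllables(w) >= 3 and not w[0].isupper()]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))
```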
The automated readability index (ARI) and Coleman–Liau tests operate on a similar principle, but they count characters instead of syllables. These are easy to perform through a quick Google search for automated versions.
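Because they count characters rather than syllables, these two are the easiest to compute exactly. The formulas below are the standard published ones; my punctuation-based sentence splitter is the only approximation.

```python
import re

def ari(text):
    """Automated Readability Index:
    4.71*(chars/words) + 0.5*(words/sentences) - 21.43"""
    sentences = len([s for s in re.split(r"[.!?]+", text) if s.strip()])
    words = re.findall(r"[A-Za-z0-9']+", text)
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / sentences - 21.43

def coleman_liau(text):
    """Coleman-Liau index: 0.0588*L - 0.296*S - 15.8,
    where L = letters per 100 words and S = sentences per 100 words."""
    sentences = len([s for s in re.split(r"[.!?]+", text) if s.strip()])
    words = re.findall(r"[A-Za-z0-9']+", text)
    letters = sum(len(w) for w in words)
    L = 100 * letters / len(words)
    S = 100 * sentences / len(words)
    return 0.0588 * L - 0.296 * S - 15.8
```

Both return an approximate US grade level; very short samples (like the toy sentence in a quick test) can legitimately come out negative.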
There are also a few stand-alone principles you can bear in mind that will add another, more subjective layer to your readability data. First, don’t use words in your writing that you wouldn’t use in real life. This is a general principle, so there will be exceptions. For the most part, though, creative writing is less about using unique words and more about using common words in unique ways. There might be a place in your manuscript for ‘viscous’ and ‘gossamer,’ but chances are ‘scarlet-hued’ and ‘vainglorious’ are going to feel contrived.
Then, with the exception of flashbacks and stories that take place outside the space-time continuum, write chronologically. Sentences that relate events out of order are more difficult to follow, and constant forays into the past are distracting.
Finally, sentence variety is not only important to reader interest, it also aids in readability. A slew of sentences all following the same structure will be difficult to digest and, worse, irritating to the reader. Take a sample page or chapter and scrutinize your sentence structure on that page. If you notice a heavy pattern – always starting with a gerund phrase, for instance, or tagging every instance of dialogue with s/he-said-plus-adjective – you’ll want to watch for ways to restructure some of the sentences on that page and throughout the manuscript.
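A pattern like the always-the-same opener is easy to surface mechanically. This small sketch (my own illustration, with a naive punctuation-based sentence splitter) tallies the first word of each sentence in a sample; a lopsided tally is a cue to restructure.

```python
import re
from collections import Counter

def opener_counts(text):
    """Tally the first word of each sentence to expose repetitive openings."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return Counter(s.split()[0].lower() for s in sentences)
```

If one word accounts for most of the tally, that page leans on a single structure.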
To that end, MS Word’s word-count feature provides simple data with which site-specific or book-long calculations can be made. How many paragraphs do I have per page? How many sentences per paragraph? How many words in this sentence? In this chapter? Checking sentence and paragraph length for variety and readability (run-on sentences are a thing of the past) is one way to collect your own data. Consider a simple table or graph to record your findings.
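If you’d rather collect those numbers in one pass than click through Word’s dialog repeatedly, a short script can do it. This is a sketch under simple assumptions: paragraphs are separated by blank lines and sentences end in `.`, `!`, or `?`, which won’t survive every manuscript convention.

```python
import re

def length_stats(text):
    """Per-paragraph sentence counts and per-sentence word counts,
    for spotting monotony or runaway run-ons."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sents_per_para = []
    words_per_sent = []
    for para in paragraphs:
        sentences = [s for s in re.split(r"[.!?]+", para) if s.strip()]
        sents_per_para.append(len(sentences))
        words_per_sent.extend(len(s.split()) for s in sentences)
    return sents_per_para, words_per_sent
```

The two lists it returns drop straight into the simple table or graph suggested above.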
Where to go from here
If you’re interested in using readability data to get a different angle on your work, I’d suggest starting with Flesch–Kincaid, cloze, and the Fry formula. The feedback you receive will dictate next steps. As you self-edit, watch for the pitfalls outlined in the section immediately preceding this one. When you’ve made some revisions, run your passages through the tests again to see if you’ve achieved what you set out to, then report back here with your findings!
If you’ve tried readability tests, or would like to, I’d love to hear from you in the comments and, for more great advice, make sure to check out Here’s How To Vary Your Sentence Structure, 5 Ways Your Paragraphs Are Broken (That You Can Fix), and What Are The Flesch–Kincaid Readability Tests?