Testing, testing! Don’t send a survey without pretesting

Kim here, writing once again with my survey design BFF Sheila, of Custom Professional Learning, LLC.

Every book published contains a typo. OK, maybe not every book, but we would wager that most probably do. Every song recorded probably has a mistake or two as well. But you wouldn’t imagine that an author’s first draft was published with no editing or that the recording artist recorded their song without practicing it first. We’re not perfect beings, and we don’t (often) produce perfect work, but we make an effort to eliminate errors or problems before we hit send. 

So why would we send a survey out into the world the minute the ink is dry on that last question without taking steps to ensure that it is actually ready for responses?

What is pretesting?

Pretesting is essentially getting eyes and brains on our surveys before they're administered. Pretesting can take many forms, ranging from quick, low-cost checks to efforts involving a great deal of time, cost, and work.

Why pretest?

Pretesting allows us to catch not only typos but much more. Pretesting provides a window into whether we're asking the right questions, whether questions make sense to respondents, whether they interpret each question in the way we meant it, whether we've offered the right response options, and more. Pretesting strategies that engage a sample of our desired respondents can also help us better understand and empathize with them in ways that will help us communicate and encourage responses. Pretesting strategies in which we capture actual survey responses can also help us plan how we will analyze survey results, or catch potential analysis challenges while there is still time to adjust.

We share advice for pretesting in our book, Designing Quality Survey Questions:

Several rounds of pretesting may be needed to revise and finalize a survey, depending on the nature of the survey. A survey with more complex questions that will necessitate deeper thinking and be much more difficult for respondents to answer may require more robust pretesting than one that is simpler and more straightforward. Researchers should always anticipate the possibility that employing pretesting methods could shift their thinking about how a question is fundamentally framed or deepen their understanding of what information is knowable in such a way as to require that some draft questions are tossed out entirely in favor of new, better questions. This is not necessarily a failure of the question drafting process but an argument for considering pretesting strategies an integral part of the survey design process. (Robinson & Leonard, 2019, p. 161)

A few types of pretesting

Below are just a few of the possible ways to pretest draft surveys; these can be used alone or in combination, depending on needs, budget, timeline, and other contextual factors.

  • Peer or expert feedback: We can ask our peers or those with more expertise (either in survey methods or about our desired respondents) for their feedback on our survey draft. This is by far the easiest pretesting method because of its convenience. Often, it can happen quickly and can provide a very helpful, fresh perspective, especially if we've spent so much time with our draft that we can no longer see what might be obvious to others. Its convenience makes it one of the most common strategies we use, which means we can't overlook its potential flaws: our peers, and the experts we have access to, are humans too, and they may overlook the same things respondents will struggle with. 
  • Think aloud: We can ask a few potential respondents (or people closely matched with the respondent population) to work through the survey and share their thoughts as they go. They would read the question aloud and then continue sharing their thoughts as they proceed through the response options and select one, or formulate their thoughts in preparation for answering an open-ended question. Along the way, we can ask questions about the survey items, such as “What does ‘a field’ mean to you in this question?” or “How did you interpret the phrase ‘read a magazine’?” We can ask what strategy they used to determine how many times they shopped for groceries in the last month. Doing this in person also allows us to glean information from their faces and voices—their emotional reactions to questions. 
  • Pilot testing: Finally, we can engage a group of people—ideally a strategic sample of our desired survey respondents—to test the survey for us by taking it before we finalize and launch it to the full population. Pilot testing is especially helpful in giving us insight into how respondents will actually answer our questions and whether their answers surprise us in ways that indicate something has gone awry with our questions. Pilot data can also help us anticipate analysis challenges, as we have ‘real’ data to work with to try out or refine our plans. When we pilot test a survey, we offer incentives or seek other ways to express gratitude to our testers (as appropriate given the context). We also often include a few questions at the end, or via follow-up, to ask explicitly for testers’ feedback. Questions like “Were any of the questions particularly confusing?” and “Is there anything you expected us to ask but we didn’t?” or “What else should we be asking to understand this topic?” can help us understand respondent perspectives and edit our survey accordingly. 

Getting “concrete” with language in think aloud interviews

Sheila had a survey design client—a concrete association—whose business she knew nothing about. The survey was to be focused on job satisfaction, drilling down into what specific aspects of their jobs were most and least appealing to cement mixer truck operators. After having discussed the survey purpose and goals with the client, Sheila created a draft and sought feedback from the client. But before considering launching the survey with the mixer truck operators, she engaged a few of them via phone to run through the survey items (an informal “think aloud” interview) and refine them.

Through these interactions, Sheila learned that some questions likely wouldn’t sit well with operators and that other questions were irrelevant to them and unlikely to yield useful data. She learned about the more colloquial language operators used to describe certain aspects of the job—different from the technical names for certain tasks. All of this feedback informed a much stronger instrument that yielded more meaningful data for the client than would have been achieved without pretesting. 

There’s no perfect survey, but pretesting gets us closer!

We’re not saying that a survey needs to be perfect to launch; in fact, there is no such thing as a perfect survey tool, and we don’t want “perfect to get in the way of good” in survey research either. But skipping any effort to test your survey is a quick way to ensure that it contains a typo or two, fails to include an important response option, features incorrect or insufficient instructions for completion, or worse, includes something that might frustrate or confuse respondents, causing them to skip items or even quit the survey partway through due to survey fatigue.

To learn more about pretesting and other strategies for testing your surveys, check out Chapter 7 of our text, Designing Quality Survey Questions.