Kim here, writing again with my first-rate collaborator, Sheila B Robinson.
In this post we share two examples of poorly worded survey questions that we have used in our conference presentations, talks, and workshops on survey design. Participants frequently ask us “what are some GOOD survey questions?” So we were especially delighted to find that two organizations responsible for poorly crafted questions updated and improved them. (We can’t help but wonder if someone from one of our workshops clued them in!)
We’ve said it before, and we’ll say it again: words matter! You may read these questions and say, “but Sheila and Kim, I KNEW what they meant by the question, so it’s not a big deal, right?” Wrong! It IS a big deal.
Every time our respondents have to work a little harder than they should to answer a question, it adds to the overall cognitive load – the mental effort they need to exert to complete the survey. This may tire them out more quickly, resulting in fatigue, non-response, or satisficing.
Using complex and difficult to comprehend questions can lead to respondents providing nonsubstantive or “satisficing” answers, responses that satisfy the requirements of the question but do not help us understand the phenomenon of interest. Satisficing is when respondents give us minimally acceptable answers, rather than optimal ones that more accurately depict their thoughts or behaviors.
Robinson & Leonard, 2018, p. 64
Question makeover #1
There are two significant problems with this question. Let’s start with the question stem. Here, our first instinct is to answer in our heads “Well I am NOW!” We know they meant “before this moment, were you aware…” but unfortunately, the question wasn’t composed that way.
The second problem lies in the response options. They clearly wanted a 5-point scale (the notions that every survey question should feature a 5-point scale, or that five is necessarily the best number of scale points, are pieces of bad survey advice we encounter quite frequently). But does awareness really come in degrees? Can you be a little aware or very aware of something? And what is the difference between “a little aware” and “somewhat aware,” or between “very aware” and “extremely aware?”
We were so pleased to see this revision of that same question.
Here, the question stem is much clearer, asking whether respondents were “previously aware…” The construct “awareness” is now presented as dichotomous – something that requires a simple YES/NO or similar binary response. Another advantage of refining the question in this way is that they are easily able to measure four items in one brief, easy-to-respond-to matrix.
Question makeover #2
In this example, they tried to force all items onto the same scale, and as a result, the way these questions are phrased doesn’t quite make sense. “Were you able to…” implies an answer of yes or no, not a rating of excellent, good, etc. In addition (and you probably caught this too!), this question is double-barreled. It’s asking two questions at once – whether the appointment process was quick AND easy. It’s entirely possible the process was easy to navigate, but not quick.
Again, we were so pleased to see an updated version of this question.
In this version they split up the two constructs – ability to schedule an appointment and ease of scheduling an appointment. Presenting them as phrases leads more naturally to a rating response, though they could also have used a complete sentence for the stem: “Please rate your ability to schedule an appointment.”
What can we learn from these makeover examples? Let’s return to where we started: Words matter. The way in which we ask questions matters – to respondents, and to us as survey researchers. Let’s ask clear, precise questions that align with our purpose for using a survey and what we really need to measure. And let’s take inspiration from these examples that it is worthwhile to improve survey questions when they aren’t working well.