Kim here, writing with the wonderful Sheila Robinson of the Evaluspheric Perceptions blog.
1. You can’t fly without a pilot!
One of my [Sheila] graduate school advisors gave me simple advice on one of my first surveys, and it remains some of the best coaching I’ve ever received. Of each of my questions she would ask, “Now, what do you REALLY want to know here?” and would challenge me with, “So, what if someone answers ‘yes’ to this? So what? Does that really tell you anything?” From this I learned the importance of piloting surveys (and other protocols, for that matter) with at least one person before sending them out to all respondents.
Likewise, I [Kim] often give the advice (and try to remember it myself) to think carefully about what you are trying to learn AND DO with the resulting information when composing a question. It’s sort of like reverse engineering questions. If I want to learn and report on how satisfied students are with a particular aspect of a program, then I’m going to mirror that language in the question and response options. Seems simple, right? But it’s easy to write questions that aren’t this direct without realizing it if you haven’t thought it through. See the difference in the (perhaps over-simplified) questions below? The key is in being very clear about what it is that you want to learn.
How sufficient was the instruction you received in X program?
How satisfied are you with the instruction you received in X program?
- Very satisfied
- Very dissatisfied
And because when we write survey questions we easily become “too close” to the material, using others as a sounding board is vital. No matter how well I think I’ve crafted a survey, I will always want others’ feedback, and ideally that includes feedback from members of whatever group the survey is intended for!
2. Ask a silly question, get a silly answer!
Selecting the right type of question to get the kind of answers you’re looking for is a related challenge. Sometimes the simplest-sounding questions are the most difficult for respondents to answer. I [Sheila] mostly evaluate professional development. Open-ended questions such as, “What did you learn?” and “How will you measure the impact of your learning?” sound terrific, but you might be surprised at how difficult they are for some people to answer, and how challenging they are to analyze. Often, people aren’t able to clearly articulate their learning, or how they will measure its impact, in a neatly packaged paragraph. This is the case whether we ask the question immediately after the learning opportunity, or after participants have been given time to process and apply the learning in their own contexts.
Ask people only questions they are likely to know the answers to, ask about things relevant to them, and be clear in what you’re asking. – Earl Babbie, Survey Research Methods
Sometimes it’s best to be more direct. Think carefully about the construct you are trying to measure and break it down into more easily observable and reportable indicators, in order to get the richness of data you’re after. (For a fabulous resource on evaluation terminology, check out Kylie Hutchinson’s free Evaluation Glossary for mobile phones and tablets.) In other words (returning to tip #1 here), the evaluator needs to think, “What do I REALLY need to know from a potential respondent?” So, “What have you learned?” might become several questions, such as “How have you used [example of the content] in your practice?” “How has your thinking changed with regard to [example of the content]?” and “What changes have you observed in your [students, patients, clients…]?” Ask these questions either after you have determined that people have used the content or that their thinking has changed, or include a response option for those who have not used the content or whose thinking has not changed.
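For readers who build their surveys programmatically, the branching just described can be sketched in a few lines. This is a toy illustration only; the `followups` function and the fallback question for non-users are our own hypothetical inventions, not part of any survey tool:

```python
# Toy sketch of the branching described above: ask the richer follow-up
# questions only of respondents who report having used the content, and
# give everyone else an honest way to answer instead of a forced one.

def followups(used_content):
    """Return the follow-up questions appropriate to a respondent's path."""
    if used_content:
        return [
            "How have you used [example of the content] in your practice?",
            "How has your thinking changed with regard to [example of the content]?",
            "What changes have you observed in your [students, patients, clients...]?",
        ]
    # Hypothetical fallback item for respondents who haven't used the content.
    return ["What, if anything, has kept you from using [example of the content]?"]

for q in followups(used_content=True):
    print(q)
```

The point of the structure, not the code, is what matters: no respondent is funneled into a question that presumes an experience they haven’t had.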
3. Everyone should have a chance to play!
Ensure an appropriate response option for everyone! Survey researchers often concern themselves with including appropriate response options for demographics (race/ethnicity, gender identification, etc.), but in our experience, the other questions often sorely need the same attention. We’ve both been given too many surveys (especially online) without sufficient or appropriate response options for a required question, leaving us with two choices, neither of which is appealing: 1) forgo completing the survey and deny the organization or individual our input, or 2) answer in a way that does not honestly reflect our opinion, attribute, status, experience, etc.
Additional rules of thumb for response options:
- Avoid too many. Use just the number you need and no more; there’s no need to turn what might as well be a 5-point scale into a 7- or 9-point scale if you’re likely to end up collapsing the responses anyway. Use enough options to give folks a chance to express nuanced experiences, but no more than necessary (we find that 5 points are usually enough).
- If you’re trying to get folks to rate something as ultimately ‘passing muster’ or not, you might consider an even number of response options: ideally two above the ‘muster’ point (positive options) and two below (negative options). Something like: excellent, meets standard, developing but below standard, and poor.
- In contrast, if what you’re after is a rating of satisfaction or the like, then a 5-point Likert or Likert-type scale may be your best bet: Strongly agree, Agree, Neutral, Disagree, Strongly disagree. Disclaimer: we’re not necessarily advocating for the 5-point scale over a 4-point (or any other number, for that matter). We’re well aware of the never-ending debates and are of the opinion that neither is necessarily superior to the other; each has its place and use in evaluation. (For a great and humorous take on this topic, check out Patricia Rogers’ and Jane Davidson’s post Boxers or briefs? Why having a favorite response scale makes no sense.)
- Aim for visually appealing lists. Don’t make it difficult on the eye by creating matrices that are too big, or by running response options across the page rather than down.
- If you’re using scales throughout a survey (especially the same scale), always run them in the same direction (positive to negative, or vice versa). Seems like a no-brainer, right? But it’s worth attending to because it’s easy to forget.
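To make the “you’ll collapse it anyway” point above concrete, here is a minimal Python sketch. The recoding scheme (folding the points on either side of the midpoint into their neighbors) is one common choice we picked for illustration, not a prescribed mapping:

```python
from collections import Counter

# Hypothetical recoding: fold a 7-point agreement scale (1-7) into the
# 5 categories that often end up in the final report anyway. Points 2-3
# collapse into "Disagree" and 5-6 into "Agree"; the endpoints and the
# midpoint keep their own labels.
COLLAPSE_7_TO_5 = {
    1: "Strongly disagree",
    2: "Disagree",
    3: "Disagree",
    4: "Neutral",
    5: "Agree",
    6: "Agree",
    7: "Strongly agree",
}

def collapse(responses):
    """Recode a list of 1-7 ratings into the 5 collapsed labels."""
    return [COLLAPSE_7_TO_5[r] for r in responses]

# Tally a handful of example ratings after collapsing.
print(Counter(collapse([7, 6, 6, 4, 2, 3, 1])))
```

If the analysis was always going to look like this, a 5-point scale would have captured the same information with less burden on respondents.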
4. Avoid these pitfalls!
- Leading questions — is it extremely obvious how you want respondents to answer? If so, social desirability bias can come into play. Have you given respondents a ‘lead’ in your framing of the question? It might look like this: “Are you happy with the instructor?” as opposed to “How satisfied or dissatisfied are you with the instructor?”
- Double-barreled questions — these cram too much into a single item, and often indicate either that two questions are needed where one was written, or that you’re not yet clear about exactly what you’re trying to learn. Example: “How timely and helpful was the feedback?” Do you want to know whether feedback was received quickly? Whether it was helpful? Or both (separately)?
- Over-use of open-ended questions — if what you need to ask isn’t easily condensed into a simple set of response options, maybe it shouldn’t be part of a survey. You are likely better off gathering these types of responses through interviews, focus groups, or other qualitative methods. One or two open-ended questions won’t foul things up, but you may not capture information that’s as complete or nuanced as you would if you were speaking with someone over the phone or in person.
(image credit: wallyg via Flickr)
5. Not for your eyes only!
Keep the survey respondent in mind as you write questions, and consult Universal Design principles and tips that are, thankfully, widely available now. The American Evaluation Association held a webinar with Jennifer Sulewski back in 2010 that covered Universal Design tips for evaluators. The handout (available from the American Evaluation Association’s public eLibrary) includes great nuggets of wisdom, many of which boil down to making sure things are as simple as possible and are written with the survey respondent in mind, such as, “Make sure surveys are easy to understand and responses are intuitive, even if people can’t or won’t read the instructions closely.”
What are YOUR favorite survey construction recommendations? Please add them in the comments!
(NOTE: We relied on our go-to texts on survey design and management for this post, as with others: Babbie’s Survey Research Methods, Dillman’s Internet, Mail, and Mixed-Mode Surveys, and Fink’s How to Conduct Surveys: A Step-by-Step Guide 3rd Ed. See our last post for more on these and other survey design resources).