
Balancing tensions in Survey Design: Check all that apply vs Forced Choice

Kim here, writing again with co-author and co-conspirator, Sheila B Robinson. Lately, we’ve been thinking about the many balancing tensions we often face in designing quality survey questions – whether we’re developing our own surveys or helping others design and refine theirs. 

What do we mean by balancing tensions?

On the one hand, we like it when surveys are easy to design and include questions about everything we’d like to know. On the other hand, we want to respect our respondents’ time and effort, and we want our surveys to feel engaging and interesting to them, not tiring and burdensome. That’s one of the most common tensions we deal with. 

“Check all that apply” vs “Forced choice” survey questions

Another tension – one that can seem particularly finicky – is deciding when to use check all that apply (aka CATA) questions, where respondents read a list of options and select only the items that pertain to them, and when to use forced-choice questions, which require respondents to choose an answer (e.g., a yes or no) for each option in the list.

CATA questions: Pros and Cons

CATA questions come with benefits:

  • They are relatively easy to compose – just make sure the response options are mutually exclusive and the list is as exhaustive as needed to capture respondents’ experiences.

  • They tend not to require a great deal of cognitive effort from respondents, as long as the list of response options isn’t too long.

But, CATA questions can also be problematic. 

  • Respondents may move too quickly through the list of response options without carefully considering whether each one applies to them, resulting in possible underreporting.

  • We may not necessarily know why a respondent does not check a box. A number of authors have posited that “the difficulty lies in interpreting a non-checked box, since four explanations are possible:

    • the response option does not apply and therefore is a ‘no’

    • the respondent might have missed that entry in the list

    • the respondent was not sure, or

    • the respondent did not want to answer the question”

(Callegaro, Murakami, Tepman, & Henderson, 2015). 

Check all that apply questions can also be complex to deal with in analysis. We know we’re not the only ones to have set up what we thought was a great survey question only to find that the resulting data was messier than anticipated, requiring more time and manipulation than we planned (and budgeted) for. In the spreadsheets most online platforms produce for analysis, all of the checked responses to a CATA question from each respondent typically end up inside one cell (e.g., “Email, Phone, Text” crammed into a single column).

While not tremendously difficult with a short list like this, it can get tricky when lists of response options are longer.
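
To make the cleanup concrete, here is a minimal pandas sketch of turning that single CATA column into one indicator column per option. The column name, the options, and the comma-plus-space separator are assumptions for illustration only – your platform’s export will look different.

```python
import pandas as pd

# Hypothetical export: each respondent's CATA selections arrive as one
# comma-separated cell. Column name and options are made up for illustration.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "contact_methods": ["Email, Phone", "Email", "Phone, Text, Mail"],
})

# Split the single cell into one 0/1 indicator column per option.
indicators = responses["contact_methods"].str.get_dummies(sep=", ")

# Recombine with respondent IDs and count how often each option was checked.
tidy = pd.concat([responses[["respondent_id"]], indicators], axis=1)
print(tidy)
print(indicators.sum().sort_values(ascending=False))
```

Note that even after this reshaping, a 0 only tells you the box wasn’t checked – it still can’t distinguish a “no” from a missed item, which is the interpretation problem described above.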

Forced Choice questions: Pros and Cons

Forced choice questions come with benefits:

  • Using a forced choice question instead of a CATA question may result in increased accuracy (Ericson & Nelson, 2007; Lau & Kennedy, 2019). Why? The thinking is that respondents will need to slow down in order to consider their response to each item as they read it, as opposed to scanning a list quickly. 
  • Unlike CATA questions, forced choice questions are usually much easier to deal with in analysis. You may have more individual items to work with, but at least with most online survey tools the organization of responses will generally be simpler – e.g., a yes/no response recorded for each item (see the sketch below).
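
For contrast with the CATA sketch above, here is what the same question might look like when exported in a forced-choice format, with one yes/no column per item. The column names and the “Yes”/“No” coding are again assumptions for illustration; platforms vary.

```python
import pandas as pd

# Hypothetical export of a forced-choice version of the same question:
# one yes/no column per option. Column names are made up for illustration.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "contact_email": ["Yes", "Yes", "No"],
    "contact_phone": ["Yes", "No", "Yes"],
    "contact_text":  ["No", "No", "Yes"],
})

# Each item is already its own column, so a summary is a one-liner:
# count the "Yes" responses per item. Explicit "No" answers mean an
# unchecked option is no longer ambiguous.
print(responses.drop(columns="respondent_id").eq("Yes").sum())
```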

But, forced choice questions can also be problematic. 

  • The trade-off with forced choice is that slowing respondents down and asking many smaller questions requires them to exert a little more cognitive effort. Asking more of respondents can lead us to worry about the potential for survey fatigue and nonresponse. That said, some authors have found that forced choice is not likely to lead to item nonresponse (Smyth, Dillman, Christian, & Stern, 2006).
  • Forced choice questions can potentially take a little longer to complete, whether because it takes more time to process each response option or more time to click on a response for each one (or possibly both).

Which is the right choice? It depends. 

Once again, we advise that you put yourself in the shoes of your desired respondents, to the best of your ability. Consider your survey purpose and your specific information needs, and balance the desire for accuracy with the desire to ease the burden on respondents. Perhaps you opt to use a CATA format and take the brunt of the difficulty in the analysis phase. Or perhaps the forced choice format will produce more accurate (and thus more useful) data, and you can cut other survey items to reduce completion time.

What would we do? 

For the most part, we lean toward forced choice as our first choice when it comes to designing quality survey questions.

Our book, Designing Quality Survey Questions, is full of more detailed advice on developing survey questions that will take you through our purposeful survey design process. Or, contact us to help with your survey, or ask about a survey design workshop for your organization!

Check out my other articles on survey question design.

REFERENCES:

Callegaro, M., Murakami, M. H., Tepman, Z., & Henderson, V. (2015). Yes–no answers versus check-all in self-administered modes: A systematic review and analyses. International Journal of Market Research, 57(2). Available: https://journals.sagepub.com/doi/10.2501/IJMR-2015-014a

Ericson, L., & Nelson, C. D. (2007). A comparison of forced-choice and mark-all-that-apply formats for gathering information on health insurance in the 2006 American Community Survey Content Test. Available: https://nces.ed.gov/FCSM/pdf/2007FCSM_Ericson-VI-A.pdf

Lau, A., & Kennedy, C. (2019, May 9). When online survey respondents only ‘select some that apply’: Forced-choice questions yield more accurate data than select-all-that-apply lists. Pew Research Center. Available: https://www.pewresearch.org/methods/2019/05/09/when-online-survey-respondents-only-select-some-that-apply/

Smyth, J. D., Dillman, D. A., Christian, L. M., & Stern, M. J. (2006). Comparing check-all and forced-choice question formats in web surveys. Public Opinion Quarterly, 70(1), 66–77. Available: https://academic.oup.com/poq/article/70/1/66/1891521