Impact of Question Design on Research


Three additional case studies in which poor question design skewed survey data are presented below.

Case Study 1

Question Wording

  • The choice of words used in a research question is important in communicating its intent and meaning, so that all respondents understand the question in the same way. Even small differences in wording can significantly affect the answers respondents provide.
  • A case study of a wording difference that had a substantial impact on responses comes from a Pew Research Center survey conducted in January 2003.
  • Respondents were asked if they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule”.

Results and Impact

  • In response, 68% stated that they favored taking military action, while 25% opposed it.
  • However, when the question was reworded to ask whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties,” it produced dramatically different responses. Only 43% stated that they favored taking military action, while 48% opposed it.
  • The introduction of the phrase "even if it meant that U.S. forces might suffer thousands of casualties" provided more details and changed the question context, leading to a reduction in the percentage of respondents who favored taking military action.

Case Study 2

Acquiescence Bias

  • The “agree-disagree” format is one of the most popular formats for survey questions. Research has revealed that less informed and less educated respondents tend to agree with statements more often than those who are better informed and better educated. This tendency is known as “acquiescence bias”. A better way of presenting such questions is to offer alternative statements for respondents to choose from (the forced-choice format).
  • A case study comes from a Pew Research Center survey conducted in 1999. Respondents were presented with the statement, "The best way to ensure peace is through military strength" and asked to either agree or disagree.

Results and Impact

  • In response to the statement, 55% of the respondents agreed while 42% disagreed.
  • When the forced-choice format was used, two statements were presented to the respondents and they were asked to choose one: "The best way to ensure peace is through military strength." OR "Diplomacy is the best way to ensure peace."
  • The response was significantly different as only 33% chose the military option while 55% chose the diplomacy option.
  • The introduction of the diplomacy option forced respondents to choose between two positions rather than simply agreeing or disagreeing with a single statement. This changed the context of the question and resulted in different responses.

Case Study 3

Response Option Order and Question Order

  • Three sociology researchers conducted a study to evaluate the "impact of response option order and question order on the distribution of responses to the self-rated health (SRH) question and the relationship between SRH and other health-related measures".
  • They designed an online survey and manipulated the order of the response choices, from “excellent” to “poor” versus from “poor” to “excellent”. They also manipulated the placement of the SRH question, presenting it either before or after the domain-specific health measures.

Results and Impact

  • They found that mean SRH was higher, and the proportion of respondents reporting “fair” or “poor” health lower, when the response choices were ordered from “excellent” to “poor”.
  • Mean SRH was 3.39 when the response choices were ordered from “excellent” to “poor”, compared with 3.30 when they were ordered from “poor” to “excellent”.
  • Similarly, the proportion reporting “fair” or “poor” health was 0.16 (16%) with the “excellent”-to-“poor” ordering, compared with 0.18 (18%) with the “poor”-to-“excellent” ordering.
  • They also found that reported SRH was worse when the SRH question was presented after the domain-specific health measures than when it was presented before them.
  • Mean SRH was 3.43 when the response choices were ordered from “excellent” to “poor” and SRH was presented before the health items, compared with 3.30 when the choices were ordered from “poor” to “excellent” and SRH was presented after the health items.
  • Similarly, the proportion reporting “fair” or “poor” health was 0.14 (14%) with the “excellent”-to-“poor” ordering and SRH presented before the health items, compared with 0.19 (19%) with the “poor”-to-“excellent” ordering and SRH presented after the health items.
  • The researchers concluded by recommending that SRH be presented before domain-specific health measures to increase comparability across surveys, since different surveys include different domain-specific health measures.
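The mean-SRH and proportion figures above follow from numerically coding the response scale. A minimal sketch of that calculation, assuming the conventional coding of 1 = “poor” through 5 = “excellent” (the study’s exact coding and data are not given here, so the sample below is purely illustrative):

```python
# Hypothetical illustration: computing mean SRH and the proportion of
# "fair"/"poor" responses under an assumed 1-5 coding (1 = poor ... 5 = excellent).
SCALE = {"poor": 1, "fair": 2, "good": 3, "very good": 4, "excellent": 5}

def srh_summary(responses):
    """Return (mean SRH, proportion rating their health 'fair' or 'poor')."""
    codes = [SCALE[r] for r in responses]
    mean_srh = sum(codes) / len(codes)
    prop_fair_poor = sum(c <= SCALE["fair"] for c in codes) / len(codes)
    return mean_srh, prop_fair_poor

# Toy sample of responses (not the study's data).
sample = ["excellent", "very good", "good", "good", "fair", "poor", "very good", "good"]
mean_srh, prop_fair_poor = srh_summary(sample)
# mean_srh -> 3.125, prop_fair_poor -> 0.25
```

On this coding a higher mean indicates better reported health, which is why the 3.39 versus 3.30 contrast above corresponds to better reported health under the “excellent”-first ordering.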

