
Part 01 of Four

DIGITAL QUESTIONNAIRE DESIGN - Psychometric and Traditional

Best practices for designing online questionnaires require attention to format, medium, layout, and cadence. Our findings indicate that while the cadence of questions and the layout of the questionnaire are similar for traditional and psychometric testing, best practices for format and medium differ. Specifically, the most widely used and best format for psychometric questionnaires is multi-item psychometric scales, while best practices for cadence include grouping questions by topic, using images when appropriate, being specific, avoiding jargon, avoiding leading questions, and employing positive wording. Mobile technology is becoming the preferred medium for many psychometric questionnaires, and questionnaire layouts should be kept short. Below you will find an overview of this topic, as well as specific findings.

OVERVIEW OF TESTING

When recruiting employees, training them, or finding the best role for them (including promotion opportunities), companies employ a variety of methods. While the process starts with an application submission and usually includes vetting and human interviews, companies are increasingly using tests and questionnaires to evaluate employees.

Traditional tests and questionnaires are used to get a sense of an employee's abilities in comparison to other employees or applicants. These tests are normative, and the outcomes depend on the whole cohort of those tested; i.e., a great score for an individual in one group of testers may become a low score when compared to another group of testers.

Psychometric testing is part of Quantitative Psychology and objectively measures "skills and knowledge, abilities, attitudes, personality traits, and educational achievement." Essentially, psychometric tests do not pit people against each other but instead measure individuals in their own right.

Therefore, the cadence of questions and best practices for layouts are similar for traditional and psychometric testing, although the best practices for format and medium differ.

FORMAT

The most widely used and best format for psychometric questionnaires is multi-item psychometric scales. While a rating scale measures a participant's response to a single question, a psychometric scale applies a single rating scale, tied to one focal variable, across multiple questions or items. This format often asks testers to rate responses on a scale of, for example, 1-5, with each numeral corresponding to a degree of feeling about the answer.
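To make the mechanics concrete, below is a minimal sketch of how a multi-item scale might be scored. The item names, the 1-5 range, and the reverse-scored item are our own illustrative assumptions rather than details from any specific instrument.

```python
# Minimal sketch of scoring a multi-item psychometric scale.
# Item names, the 1-5 Likert range, and the reverse-scored item
# are illustrative assumptions, not taken from a real instrument.

LIKERT_MIN, LIKERT_MAX = 1, 5

# One focal variable (hypothetically, "conscientiousness") measured
# over several items; one item is worded in reverse.
items = {
    "q1_plans_ahead": {"response": 4, "reverse": False},
    "q2_meets_deadlines": {"response": 5, "reverse": False},
    "q3_leaves_tasks_unfinished": {"response": 2, "reverse": True},
}

def score_item(response: int, reverse: bool) -> int:
    """Reverse-score an item if needed (on a 1-5 scale, 2 becomes 4)."""
    return (LIKERT_MIN + LIKERT_MAX - response) if reverse else response

scores = [score_item(i["response"], i["reverse"]) for i in items.values()]
scale_score = sum(scores) / len(scores)  # mean across items is the scale score
print(f"Scale score: {scale_score:.2f} on a {LIKERT_MIN}-{LIKERT_MAX} scale")
```

The key design point is that no single question carries the measurement; the focal variable emerges from the average across items.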

MEDIUM

There have been huge developments in the best practices for designing and administering psychometric tests over the past 30 years, largely due to technological advances and changes in society. Pen and paper was the favored method before computers gained ground, but now mobile technology is becoming the preferred medium for many psychometric questionnaires.

While mobile may not always be the medium used, it has become best practice to design questionnaires mobile-first, as this guarantees the greatest consistency across mediums. According to PwC, millennials will make up 50% of the workforce by 2020, and for them mobile is the preferred device, so companies trying to cater to millennials should take that into consideration.

- Mobile-first design is the best practice, as it can be easily adapted to computer or converted to pen and paper as needed.

- Using mobile for testing is quicker, candidate-centric, and more inclusive (some candidates may not have access to a computer but almost everyone has a smartphone now).

- Mobile lends itself to gamification, a popular way to administer psychometric questionnaires.

Whether mobile or computer based, gamification is a popular method and among the best practices used by many companies. Among the benefits of gamification: it lowers test anxiety, reduces the effects of a lack of self-awareness, and minimizes the likelihood that candidates will respond based on what they believe the employer wants them to say or do.

- CodeFight pits applicants against Company Bots to test them on coding challenges and evaluate their skill level.

- The global marketing firm Ketchum's app LaunchPad gamifies the application process to measure written, digital, creative, and communications skills.

- In the US, Deloitte created a 20-minute customized “game” that puts potential employees into real work situations. Deloitte explains, “It's highly competitive, and we feel this is not innovating for the sake of innovating, but we do feel that it's reflective of our environment. Having a boring, lengthy process that has no real connectivity to what they'll be doing at the firm is not ideal. We absolutely wanted to address that.”

LAYOUT

Layout is how the tester interacts with the test, so it is important to make sure the layout is conducive to the tester and encourages them to complete the entire test. As such, questionnaires should be kept short and simple, with additional detail used only to clarify complicated questions or concepts. Using images can help boost completion rates; Traitify, one company that uses images in its testing, claims a 98% completion rate.

CADENCE

Cadence, the manner in which the questions are presented to the tester, is another important element to consider when designing psychometric tests. Among the best practices for structuring the cadence of a psychometric questionnaire:

- Questions should be grouped by topic and format
- Use images where possible and appropriate
- Use specificity and avoid ambiguity
- Do not use jargon or specialist terminology unless you need to
- Avoid bias and leading questions
- Use positive wording as negatively worded questions can be confusing

EXAMPLES

Many companies (and almost all the top companies) use psychometric testing, and many companies also use the same test providers. The most popular assessment companies include SHL, Kenexa, Saville, Talent Q, Cut-e, and Cubiks. Many of these tests can be found online with links to examples of their tests. For example, JobTestPrep offers access to many of the psychometric tests used by top companies for a fee.

Among the companies for which you can locate sample tests are:

Deloitte
Accenture
Barclays
J P Morgan
Microsoft
PwC
Ernst & Young
Morgan Stanley
Bank of America (Merrill Lynch)
Citi
KPMG

CONCLUSION

In summary, while the cadence of questions and layout of the questionnaire are similar for both traditional and psychometric testing, best practices for format and medium differ.
Part 02 of Four

DESIGNING FOR TRUST - Questionnaires

Best practices toward increasing trustworthiness in online questionnaires include designing a quality survey with a strong introduction and exemplary questions that avoid any inclination toward bias, employing proper audience-targeting methodologies and safety practices, and performing a test-run on the survey prior to launch. Employing these strategies and methodologies will ensure your questionnaire is viewed as trustworthy by potential respondents, and will improve response rates dramatically.

Additionally, a higher number of respondents will likely take your survey via their mobile phones than via their computers or other devices, so specifically designing your questionnaire to meet best practices for mobile-optimized surveys will significantly decrease survey abandon rates. This is especially important given that questionnaire dropout rates are nearly twice as high for mobile users as for web-based survey takers.

METHODOLOGY & FINDINGS

To best answer your question, we started by researching as many best practices for online surveys and questionnaires as we could find. These included items from Pew Research Center, Survey Monkey, Qualtrics, Research Now, Harvard University, Oxford University, SmartSurvey, Fluid Surveys, Survey Anyplace, and a textbook. Additionally, we found information on best practices for ensuring online survey privacy (1, 2), how even the smallest amount of bias can lead to major issues, and information about how many people (and which types of people) tend to lie or provide inaccurate information on surveys because of a lack of trust. We focused the findings across all these articles into the seven best practices for designing trustworthy questionnaires found in this response. Additionally, as requested, we have included research on the differences in completion rates for mobile and web-based surveys/questionnaires.
It is important to note that, although you stated we should give the most consideration to information related to long questionnaires, the research we found did not differentiate between short and long surveys, so we made the assumption that the best practices applied to both. Interestingly, most sources noted the importance of keeping online surveys (and especially those built to be answered via mobile devices) short (or at least shorter than 30+ questions). In fact, Harvard notes that "respondents are less likely to answer a long questionnaire than a short one, and often pay less attention to questionnaires which seem long, monotonous, or boring". SmartSurvey states that the majority of online survey participants "are willing to spend on average up to ten minutes completing a survey". Considering these findings, you might want to opt for multiple, shorter surveys that build on previous short surveys to get the best collection of data you can.
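As a rough illustration of that ten-minute ceiling, the back-of-envelope calculation below estimates how many questions fit within the budget. The per-question timings are our own assumptions, not figures from the sources cited.

```python
# Back-of-envelope question budget for a ten-minute online survey.
# The per-question timings below are illustrative assumptions.
budget_seconds = 10 * 60          # SmartSurvey's ten-minute willingness ceiling
closed_question_seconds = 20      # assumed time per closed-ended question
open_question_seconds = 60        # assumed time per open-ended question
open_questions = 2                # "no more than a few" open-ended items

remaining = budget_seconds - open_questions * open_question_seconds
max_closed = remaining // closed_question_seconds
print(f"Roughly {max_closed} closed-ended questions fit alongside "
      f"{open_questions} open-ended ones in ten minutes")
```

Under these assumptions, roughly two dozen closed-ended questions exhaust the budget, which is consistent with the sources' advice to stay under 30 questions.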
The Independent found that two-thirds of individuals have given (or are inclined to give) incorrect information on online forms; this is largely attributed to a lack of trust in the companies asking for the information. These people worry about handing over their data because they suspect their information might be passed on to other companies they're not informed about, or that they might receive unsolicited contact from other businesses who have bought lists on which they're now included. Their research finds that around "67 percent of people think companies should be more open about what they're up to, and 94 percent believe that they should be told more about how their data is going to be used". Those most likely to lie or provide false information include 18-to-24-year-olds and people over the age of 65. This lack of trust is problematic for businesses across every industry, and especially for survey companies wishing to obtain valid data for their research.
Now that we understand the landscape with which we’re working, let’s look at methods and strategies surveyors can employ to ensure that their potential respondents have the greatest amount of trust possible – leading to better and more accurate survey results as a whole.

BEST PRACTICES FOR DESIGNING TRUSTWORTHY ONLINE QUESTIONNAIRES / SURVEYS

Best practices include: designing a quality survey, writing a strong introduction, writing exemplary questions, avoiding bias, employing the best methodologies toward reaching the intended audience, employing safety practices, and performing a trial run prior to launching the survey, each of which is explained thoroughly below. Although each of these best practices is essential to creating the most trustworthy survey, employing them in combination will ensure your questionnaire is the most trusted it can possibly be.
Please note that, due to the tremendous amount of specific information found for Best Practice #3, it has been presented in a bullet-point format rather than in paragraph form. Each item will be linked to its source so you can easily read everything we found that’s relevant, if you’re so inclined.

BEST PRACTICE #1: DESIGNING A QUALITY SURVEY

FAO reports that "questionnaire design is more of an art than a science," with following the recommended best practices being the optimal way to create as solid a survey as possible. Alternatively, Pew Research counters that "substantial research over the past thirty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire".
Multiple sources note the importance of using the appropriate types of questions to elicit the information you are looking for, and we’ll get more into this in Best Practice #3. FAO also reports that questionnaire designers need to ensure the questions are fully understandable by the target audience, or else a number of those individuals will be more likely to refuse to answer or to lie in order “to conceal their attitudes”. The best questionnaires are organized with wording encouraging respondents “to provide accurate, unbiased, and complete information”.
Research Now notes that the optimal length of a survey is 10 – 15 minutes, with dropout rates increasing as respondents experience survey-fatigue. Additionally, they note that twice as many survey respondents drop out when they’re on mobile devices (rather than computers), which we will discuss in more depth later in this response.
Smart Survey discusses the importance of spending quality effort on the best presentation design for your questionnaire. They recommend applying a similar layout, fonts, colors, and design elements as your website (including your logo), offering greater cohesiveness across the avenues of contact with potential respondents and thereby increasing their belief that the survey has been disseminated by the company those elements represent. They state that "an online survey that looks good and is presented well will have a higher response rate than one that looks like it has been thrown together quickly". A higher response rate signals a higher level of trust from respondents, which is what we're looking for here.

BEST PRACTICE #2: WRITING A STRONG INTRODUCTION

Fluid Surveys explains the importance of creating a solid introduction for your survey, and how that can lead to greater levels of trust in respondents. They acknowledge that this piece is considered by many professional survey developers as "the most important part of developing a survey" because this is the point where "the majority of potential respondents will decide whether or not to drop out of the questionnaire". An exemplary and professional introduction is the hook that keeps respondents going and completing the questionnaire.
They also report that the tone of the survey will either make respondents uncomfortable and suspicious, or happy to be taking part in the endeavor. They recommend analyzing the target audience to predict any concerns potential respondents might have in taking the survey, and addressing those within the introduction. According to Fluid Surveys, an excellent introductory page includes a thank you statement, to make potential respondents feel welcome and encourage their participation; a clear, thorough description of the study, to demonstrate transparency in goals, purpose, and how the data will be used, which will "build respondent trust and encourage honest, truthful survey answers"; the time expectations of the questionnaire, as respondents are more likely to complete a survey when they know how much time they will spend doing so; and a statement of confidentiality and anonymity (if the latter is within the scope of the research), which will further prove that the surveyors are to be trusted with their information.
Lastly, they recommend that if your questionnaire requires the use of external information, links, or documents, that you provide information toward this – and the link – within the body of the introduction (rather than cutting/pasting large chunks into the survey itself). Employing each of these suggestions within your questionnaire introduction will add to the element of transparency, and increase potential respondents’ level of trust in the overall process. Please note that this article is from 2013 (more-dated than the standard Wonder scope), but provides information that is very relevant to this request and still appropriate for the current market.
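As a concrete illustration of assembling those introduction elements, here is a minimal sketch; the wording, function name, and structure are our own illustrative assumptions, not a template from Fluid Surveys.

```python
# Minimal sketch assembling the recommended introduction elements.
# All wording below is illustrative, not drawn from the source.

def build_introduction(study_description: str, minutes: int,
                       anonymous: bool, doc_link: str | None = None) -> str:
    parts = [
        "Thank you for taking part in this survey.",             # welcome
        study_description,                                       # transparency
        f"It should take about {minutes} minutes to complete.",  # time expectation
        "Your responses are confidential"
        + (" and anonymous." if anonymous else "."),             # trust statement
    ]
    if doc_link:  # link to external material rather than pasting it in
        parts.append(f"Background material, if needed: {doc_link}")
    return "\n\n".join(parts)

print(build_introduction(
    "We are studying how customers use our mobile app; results guide "
    "product decisions and will be published in aggregate only.",
    minutes=8,
    anonymous=True,
))
```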

BEST PRACTICE #3: WRITING EXEMPLARY QUESTIONS

Best practices to employ in writing exemplary questions include the following:

- Develop questions through a collaborative and iterative process that includes detailed reviews and pre-testing of questions.
- Determine whether you will use open-ended questions, closed-ended questions, or a combination of the two. You may want to conduct pre-tests with open-ended questions to establish baseline answer choices for closed-ended questions in later versions of the questionnaire. Limit open-ended questions on the final version of the survey to no more than a few.
- The order in which questions are presented is pivotal to the success of the survey and to reducing possible contrast effects between questions; this matters most if a survey will be used repeatedly to track data over time. Pew Research notes that a survey should be organized the way a conversation flows, in a logical order that keeps the interest of respondents and increases the likelihood they will continue to answer, and answer honestly. Place the most important questions near the earlier part of the survey (think "front-middle"). Ensure that the questions are presented with some variety, or respondents will get bored and lose interest. "When you're smart about survey format and question flow, you avoid sacrificers (people who don't think carefully about their answer choices, rush through your survey, or misrepresent themselves)".
- Question wording is of primary importance in ensuring the survey is interpreted by respondents in the same (or similar) ways, collects the intended information, and is more likely to be free from bias. Asking direct questions is also important, so surveyors should strive for clear and precise language that makes questions easy for respondents to answer. Avoid ambiguous or vague wording or phrasing whenever possible. Overly long questions may mean that respondents miss important details or skip over reading to finish faster.
- It is important to ensure that every question is necessary, with no redundancies. Start by outlining which data you wish to collect, then create your questions from those points. It is also important to ask questions one at a time, avoiding double-barreled or compound questions.
- Use response scales when possible, but avoid agree/disagree statements, as "some people are biased toward agreeing with statements, and this can result in invalid and unreliable data". Avoid matrices or grids where you can, as these are not mobile-friendly; use separate questions instead. Limit response scales to a maximum of five points to eliminate horizontal scrolling, limit answer choices to between one and eight to avoid vertical scrolling, and limit any unavoidable grid to a maximum of four rows, three columns, and a few words of text (see the sketch after this list).
- Leave sensitive questions, or those collecting personal information, until the end of the survey. "After answering other questions, participants will be open to sharing more sensitive or personal information, like age or education level."
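To make the scale and grid limits above easy to apply, here is a minimal sketch of a check a survey builder might run against them; the field names and question structure are our own illustrative assumptions.

```python
# Minimal sketch validating a question against the mobile-friendly
# limits cited above: 5-point scales, at most 8 answer choices,
# grids no larger than 4 rows x 3 columns. Field names are assumptions.

def check_mobile_friendly(question: dict) -> list[str]:
    problems = []
    if question.get("scale_points", 0) > 5:
        problems.append("scale exceeds 5 points (forces horizontal scrolling)")
    if len(question.get("choices", [])) > 8:
        problems.append("more than 8 answer choices (forces vertical scrolling)")
    grid = question.get("grid")
    if grid and (grid["rows"] > 4 or grid["cols"] > 3):
        problems.append("grid larger than 4 rows x 3 columns")
    return problems

question = {
    "text": "How satisfied are you with our service?",
    "scale_points": 7,  # too many points for a mobile screen
    "choices": ["1", "2", "3", "4", "5", "6", "7"],
}
for issue in check_mobile_friendly(question):
    print("warning:", issue)
```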
Harvard notes that “the ideal question accomplishes three goals: It measures the underlying concept it is intended to tap; It doesn’t measure other concepts; [and] It means the same thing to all respondents”. In their Questionnaire Tip Sheet, they offer extensive and detailed information about the best ways to use each type of question, how to best write survey questions, and other suggestions on creating the most solid question-set for your survey to ensure it meets the highest standards of trustworthiness in potential respondents.
Additionally, for an even more in-depth breakdown of survey question types, check out this item from Survey Anyplace.

BEST PRACTICE #4: AVOIDING BIAS

Survey Monkey discusses how even the smallest hint of bias in a questionnaire can cause big problems for surveyors, because bias within a question will influence survey-takers' responses to that question. Bias causes the biggest hit to surveyors' credibility, their perceived level of professionalism, and respondents' overall impressions of the company (which become much more negative), directly impacting the level of trust potential respondents might have for that company.
They offer three suggestions for ensuring your questionnaire is free of bias. The first is to remove extraneous or unnecessary information and to keep all questions simple and direct. The second is to present a balanced view within each question, which signals to respondents that whatever they say, they are not being judged. The third is to produce a balanced survey overall, ensuring that nothing in the introduction, questions, or follow-up to the questionnaire gives either a positive or a negative impression. If you must have some positively- or negatively-phrased items, make sure you offer both, and that these questions are otherwise free of other types of bias. Along these lines, Harvard notes that avoiding questions that include "leading, emotional, or evocative language" will also help keep your questionnaire free from bias. Avoiding bias is a sure way to evoke trust in your potential respondents, just as displaying bias is the surest way to prove your untrustworthiness.
Additionally, Qualtrics discusses the importance of avoiding descriptive words or phrases and scrutinizing any adjectives or adverbs found in your questions (it's best to leave these out altogether). They also note that ensuring any question scales "cover the whole range of possible reactions to the question", and that these scales have a valid, definitive midpoint, is vital to avoiding bias and creating trust.

BEST PRACTICE #5: EMPLOYING THE BEST METHODS IN REACHING TARGET AUDIENCE

SurveyMonkey notes the importance of identifying your target population and basing your questionnaire's "language, examples, definitions (and more)" on that "population's knowledge and needs". Qualtrics adds that using language and terminology at the average respondent's level of understanding is important, though questions should not be oversimplified so much that they can be misinterpreted.
Survey Monkey also states that in order to best reach your target audience, you should choose the mode of delivery that best suits those individuals. Research Now notes that a report from eMarketer identifies 11.7% of US internet users as accessing the internet only via a mobile phone. Millennials access the internet via mobile phones more often than any other generation. Difficult-to-reach (via mobile) segments include "Hispanics, African Americans, and lower income households". For other specific data about which segments will likely answer your questionnaire via their mobile devices, see this article from Research Now.
Additionally, a separate article from Research Now notes that your survey should include responsive design so that the survey questions, font, and images are auto-sized by the screens on which they are being viewed. This optimization of the participant’s experience eliminates the hassle of scrolling, zooming, or resizing for navigation, and alerts them to the fact that the company has considered this need (or at least doesn’t irritate them at the survey company for not addressing it). They also note the importance of avoiding flash-based animations, rich media, and audio/video streaming in online questionnaires.
Using the appropriate language, terminology, and examples via the device respondents most often use (and will likely use to complete your survey) will make them feel like the questionnaire is more closely tailored to them personally, leading to increased trust in the surveyors.

BEST PRACTICE #6: EMPLOYING SAFETY PRACTICES

Survey Expression describes their top four best practices for ensuring survey privacy, which can be applied to a questionnaire in ways that show respondents every precaution was taken to maintain their privacy, thereby increasing their level of trust in surveyors. They encourage surveyors to collect results anonymously, if that is possible within the scope of the research. To do this, ensure IP-address collectors and email-address collectors are turned off in the system hosting the questionnaire. Express to respondents that surveys are anonymous in multiple places, including the introduction, the instructions, and again at the end as a reminder. They also discuss the importance of linking to your privacy policy disclosure notice at the start of your survey.
Additionally, Survey Expression tells surveyors to state who is conducting the survey, both within the survey itself and within any results published once the questionnaire analysis has been conducted. If you can (or if your questionnaire's design allows), provide a link at the end of the questionnaire where respondents can view the final results and analysis (in X amount of days). Then, be sure to publish your findings there, which will vastly increase respondents' trust levels!
Industry leader Survey Monkey adds three best practices. The first is informing respondents how the responses and data analysis will be handled, which includes detailing for them (typically within the introduction and again at the end) what personal information the questionnaire will collect, how surveyors plan to utilize the data once it's collected, how respondents can access their own responses later if they wish, and how they can contact you (the surveyor) if they have questions or concerns. The second is limiting the amount of personal information the questionnaire collects to only the base pieces of information required to achieve the survey's purposes. The last recommendation is to use SSL encryption to ensure all data transfers are secure and safe. Smart Survey adds that including statements about the safety of your online questionnaire "can increase the response rate by inspiring trust".
Ensuring the safety and privacy of all data collected, collecting only essential personal data, and being transparent about the survey company collecting the data are prime ways to make your respondents feel like they can trust you and your questionnaire.
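As a concrete illustration of collecting results anonymously, the sketch below strips identifying fields before a response is stored. The field names, and the choice of which fields count as identifying, are our own assumptions.

```python
# Minimal sketch of anonymizing a survey response before storage,
# in the spirit of the practices above. Which fields count as
# identifying is an illustrative assumption.

IDENTIFYING_FIELDS = {"ip_address", "email", "name", "user_agent"}

def anonymize(response: dict) -> dict:
    """Drop identifying fields so stored results cannot be traced back."""
    return {k: v for k, v in response.items() if k not in IDENTIFYING_FIELDS}

raw = {
    "ip_address": "203.0.113.7",
    "email": "respondent@example.com",
    "q1": 4,
    "q2": "Strongly agree",
}
print(anonymize(raw))  # {'q1': 4, 'q2': 'Strongly agree'}
```

Note that stripping fields at storage time complements, rather than replaces, turning off IP and email collection in the survey platform itself.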

BEST PRACTICE #7: PERFORMING A TRIAL RUN PRIOR TO LAUNCH

Pew Research states that conducting pilot tests or running focus groups on the earliest versions of the questionnaire is helpful in better understanding how potential respondents think about issues. They recommend conducting pilot tests for each new version of the questionnaire, and focus groups on end versions. Conversely, Qualtrics notes that you should “ask at least five people to test your survey to help you catch and correct problems before you distribute,” although other sources indicate that providing your survey to a small sampling of your target audience – for testing purposes only – is a better idea.
FAO notes that trial runs allow surveyors to determine “whether the questions as they are worded will achieve the desired results, whether the questions have been placed in the best order, whether the questions are understood by all classes of respondent, whether additional or specifying questions are needed or whether some questions should be eliminated, [and] whether the instructions … are adequate”. Each of these is important in preparing – and presenting – the best survey possible, leading toward increased trust in respondents (of both the final version of the survey, and of the results after they are presented).

FINAL TIP

One final tip regarding the presentation of your findings comes from industry expert Survey Monkey. Presenting data analysis reports that are accurate, well-informed, and free of generalizations and data misrepresentations is key to gaining the trust and respect of a more global audience, not just those to whom the survey was given.

COMPLETION RATES: MOBILE VS WEB-BASED SURVEYS/QUESTIONNAIRES

Due to the extensive nature of this request and to keep it more within scope, this section will be truncated and only include essential findings. If you’d like more details about this segment of your query, please let us know and we’ll be happy to extend the research further.
Research from Apptentive notes that there should no longer be any debate over whether to use a web-based or mobile-based survey, since data shows that "all online surveys are now [considered to be] mobile surveys". Other than a spike in 2011, web-only surveys have been in decline; mobile surveys, alternatively, have experienced "an upward trajectory closely correlated with the rise of smartphones". Their research further shows that surveyors can "expect approximately 23% of … survey responses to come from mobile devices," with this percentage continuing to rise since it was reported in late 2015. However, as stated previously in this response, Research Now indicates that dropout rates for mobile survey-takers are twice as high as for those using their computers.
If you'd like more information on what mobile-unfriendly surveys look like, as well as what mobile-friendly and mobile-optimized surveys look like, this item from Apptentive should pique your interest. Their research shows that "smartphone users are about half as likely to abandon a mobile-optimized survey (11% abandonment rate) than they are a mobile-unfriendly survey (21% abandonment rate)".
For a research study devoted entirely to this aspect of your question, check out this whitepaper from Oxford Academic’s Public Opinion Quarterly. You’re sure to find a wealth of information to further examine which one of these survey-types would produce the highest amount of trust in respondents and would be less likely to be abandoned.

SUMMARY

Best practices to follow to ensure your survey is seen as trustworthy by potential respondents include using a high-quality design, ensuring research-based questioning strategies are followed, removing all instances of bias, appropriately targeting your audience, and doing trial runs on your surveys prior to launching them. Mobile survey-takers are twice as likely to abandon surveys partway through completion, so ensuring that your online questionnaire follows best practices in mobile-optimization is key to reducing respondent abandonment.

Part 03 of Four

PSYCHOMETRIC EXAMPLES - Questionnaires

The Myers-Briggs Type Indicator, Jung Personality Test, DISC, Sixteen Personality Factor Questionnaire, Verbal Reasoning Assessment, Situational Judgment Test, Logical Reasoning Assessment, and Numerical Reasoning Assessment are the eight best-practice examples of psychometric questionnaires online.

METHODOLOGY
In creating the list, I found the five most commonly used psychometric tests online. I then evaluated other lists to add tests that industry experts reported as both reliable and accurate. I have included a spreadsheet containing the list, descriptions of each test, and direct links to the tests online.

To wrap up, the top eight psychometric tests online are the Myers-Briggs Type Indicator, Jung Personality Test, DISC, Sixteen Personality Factor Questionnaire, Verbal Reasoning Assessment, Situational Judgment Test, Logical Reasoning Assessment, and Numerical Reasoning Assessment.
Part 04 of Four

RESPONSE ACCURACY - Questionnaires

No known correlation between survey drop-off rates and participant honesty was found in peer-reviewed or market research. A correlation between participant drop-off rates and survey accuracy was found to be relative to the non-response bias for that survey. In the studies presented below, survey accuracy and participant honesty were found to be two different metrics. No correlative studies were found concerning the propensity of a participant to achieve any particular end in a survey and their answers. Evidence was found that survey participants are likely to "game" survey qualifications to gain an incentive offered for taking a particular survey.
INTRODUCTION
Most of the research around survey administration and taking lies in the verbal and mailer survey types. There has not been a large amount of research specifically on the accuracy of answers in online surveys. The main reason for this is that online surveys are more or less anonymous and harder to examine: with an in-person survey one would have to produce a fake ID, and in a mailer survey one would have to pose as another person and open their mail, but in an online survey all one has to do is type in a fake name. There are no repercussions for lying in that case. Threats to response accuracy appear to lie within the incentive system offered to respondents (usually in the qualification portion) rather than in the survey itself.

In 2016, a research group based at Utah State University found that Drop-Off/Pick-Up surveys elicited the best responses because the act of showing up on someone's doorstep was more likely to instill urgency without overly pressuring them. In these cases, incentives also worked well.
SURVEY ACCURACY AND HONESTY
Survey accuracy depends greatly on how the population is selected. For many internet surveys, obtaining a probability sample is hard to do unless the survey is administered with pre-screening to ensure the sample is not homogeneous in one way or another. Research showed that non-probability sample survey measurements were less accurate than probability telephone surveys. This was determined by administering benchmark questions against which survey results could be compared. The major difference between a phone survey and an internet survey is that the phone surveyor has the power to randomize their own population.
Accuracy
The above research shows that phone surveys of known, randomized populations elicit the most accurate answers; accurate in the sense that results vary less from the benchmark questions.
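To illustrate what accuracy against benchmark questions means in practice, here is a minimal sketch; the benchmark values and survey estimates are invented for illustration.

```python
# Minimal sketch of benchmarking survey accuracy: compare each
# survey's estimates against known benchmark values and report the
# mean absolute error. All numbers below are invented.

benchmarks = {"smokes_daily": 0.14, "has_drivers_license": 0.85}

surveys = {
    "probability_phone": {"smokes_daily": 0.15, "has_drivers_license": 0.83},
    "nonprobability_online": {"smokes_daily": 0.10, "has_drivers_license": 0.91},
}

for name, estimates in surveys.items():
    errors = [abs(estimates[q] - truth) for q, truth in benchmarks.items()]
    mae = sum(errors) / len(errors)
    print(f"{name}: mean absolute error vs. benchmarks = {mae:.3f}")
```

A survey whose estimates sit closer to the benchmarks (a lower mean absolute error) is the more accurate one in the sense used above.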
In a paper published in the Journal of Public Health, it is noted that online survey takers tend to be young, highly educated females. Online surveys often have low response rates relative to the population sizes they are exposed to, in comparison with telephone surveys. In the research reviewed thus far, the number one way to gain more respondents and a more probabilistic pool has been to give incentives to survey takers.
Honesty
The same Journal of Public Health paper notes that, in telephone surveys, incomplete responses tend to be correlated with a "lack of perceived anonymity" and "concerns with social desirability," whereas online responses appear to somewhat resolve these specific concerns.
Studies that repeatedly administered surveys to the same participants online found that participants almost always answered the questions the same way from survey to survey. From this, the researchers inferred that online surveys tend to elicit honest answers regardless of what the incentives are. The caveat is qualification into the sample population, reviewed below.

Accuracy and Honesty
Taking these pieces of research together, one can conclude that accuracy and honesty are two totally different things when assessing survey responses. Online surveys with incentives are likely to draw a wider range of participants than ones without incentives. These particular surveys may elicit more honest answers because there is less influence from a survey administrator in the equation, yet their answers will appear less accurate when compared to benchmarks. Researchers concluded this is the result of social pressure applied by the involvement of a survey administrator. Read on for more detail.
Drop-Off Rates Accuracy and Honesty
Peer-reviewed research does not show a correlation between drop-off rates in survey taking and survey answer truthfulness. The reason is that as people drop out of a survey, the sample size for each question decreases. The initial sample size of the survey determines whether these questions can be used at all in an analysis. Since the sample size changes as people drop out, calculating sample accuracy against benchmark questions becomes less straightforward. The wider the difference in sample sizes, the larger the response error for that question.
No empirical data were readily found concerning survey respondent drop-off rates and honesty. The calculation for non-response bias as it contributes to survey accuracy is as follows:
For person $i$, consider a survey variable $Y_i$ with true value $T_i$. The joint effect of nonresponse and measurement error on the respondent mean is

$$\mathrm{Bias}(\bar{y}_r) = \frac{\sigma_{pT}}{\bar{p}} + \frac{1}{N}\sum_{i=1}^{N}\frac{p_i\,\varepsilon_i}{\bar{p}},$$

where a simple additive error model pertains, $\varepsilon_i = Y_i - T_i$; here $p_i$ is person $i$'s response propensity, $\bar{p}$ is the mean propensity, and $\sigma_{pT}$ is the covariance of the true values and the response propensities.
To briefly explain: the measurement error $\varepsilon_i = Y_i - T_i$ is zero when a participant's reported answers match the true values, and the covariance term $\sigma_{pT}$ is zero when the propensity to respond is unrelated to the true values. Also, the smaller the mean propensity to answer, $\bar{p}$, gets, the larger the bias related to unanswered questions.
The formula thus captures both non-response bias and measurement accuracy; as mentioned above, there is no non-response bias if the responding sample stays the same throughout.
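For concreteness, the sketch below evaluates the bias formula on a small invented example; the propensities, true values, and reported values are our own illustrative numbers.

```python
# Minimal sketch computing the nonresponse-plus-measurement-error
# bias from the formula above. All numbers are invented.

T = [10.0, 12.0, 8.0, 15.0]   # true values T_i
Y = [10.5, 12.0, 7.0, 15.0]   # reported values Y_i (some measurement error)
p = [0.9, 0.6, 0.8, 0.3]      # response propensities p_i

N = len(T)
p_bar = sum(p) / N
T_bar = sum(T) / N

# sigma_pT: covariance of true values and response propensities
sigma_pT = sum((p[i] - p_bar) * (T[i] - T_bar) for i in range(N)) / N

eps = [Y[i] - T[i] for i in range(N)]  # measurement errors eps_i = Y_i - T_i

nonresponse_term = sigma_pT / p_bar
measurement_term = sum(p[i] * eps[i] for i in range(N)) / (N * p_bar)

print(f"nonresponse term:  {nonresponse_term:.4f}")
print(f"measurement term:  {measurement_term:.4f}")
print(f"total bias of respondent mean: {nonresponse_term + measurement_term:.4f}")
```

When every propensity is equal, the covariance term drops to zero and only measurement error contributes, matching the explanation above.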

The correlation of survey drop-off rates and survey accuracy is relative to the non-response bias and has nothing to do with the honesty of a participant’s answers as demonstrated above.
From all of this, we gather that online survey responses are likely to be more honest due to the nature of administration. As far as the existing research goes, there is no information comparing honesty and survey drop-off rates. The only information readily available is that drop-off rates negatively affect survey accuracy in terms of non-response bias, as in the calculation shown above.
Psychometric versus Traditional Questioning in Survey Drop-Off Rates and Honesty
Survey drop-off rates are highly correlated with survey length across many studies, both psychometric and "traditional" studies. In addition, drop-off rates were found to be affected by incentives, perceived relevance, and understandability of the material. There was no indication found that participants drop out as questions become too "personal". One piece to consider here is the relation of this factor to the form of survey administration, as mentioned above.

Also note that initial survey participation is widely affected by the drive to take the survey, owing to incentivization and subject matter. There are no direct studies readily available on how survey questions personally touch participants and what their reactions are as far as dropping out or overall honesty.
GAMING SURVEYS FOR OUTCOMES
No peer-reviewed research results were available concerning survey participants gaming online surveys to achieve an end result. In the professional online survey world, surveys are administered by survey companies to populations they have acquired in one way or another. For example, there are websites dedicated to informing survey takers on how to qualify for incentivized online surveys when they technically do not.

Oftentimes, to take an online survey for an incentive, a participant must qualify. The reason for qualification is to keep the survey population as close as possible to the population type needed for that survey. If a participant fits the population requirements, they are granted access to the survey and the incentive for that survey. No studies were found concerning honesty in survey population qualification.
Again, from research presented above, it was found that online survey takers tend to be more honest due to the way the survey has been administered.
CONCLUSION
In conclusion, there is no known correlation between survey participant honesty and drop-off rates. There is, however, a correlation between drop-off rates and survey accuracy: the accuracy of a survey is compromised as the sample sizes change from question to question. There are no readily available studies on the differences in question type ("traditional" versus psychometric) and how that affects honesty or drop-off rates. Studies show that participants in online surveys, once selected as part of the incentivized population, are more honest than not in their answers. Studies also show that digitally administered surveys are more likely to elicit honest answers than survey types where an administrator is present.
