When you’ve finally finished writing your questionnaire, after you’ve written, rewritten, and edited it ten times and the client has rewritten and edited it another ten times, you’re ready to test it in the field with a small percentage of your entire sample. No matter how careful and attentive you’ve been, some little error of logic, spelling, or bias has snuck in without you noticing. To help you along, here are nine pesky problems to solve before you launch to the entire sample.
- Unanswered questions: Look for questions that no one has answered, or that only specific segments of people have answered. A major problem that can go unnoticed until it's too late is skip patterns that aren’t quite right. Make sure that men and women, brand-loyal people and brand switchers, and high-volume and low-volume users have all been routed to, and answered, the questions your skip logic controls.
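One way to spot broken skip patterns is to check, for each segment, whether anyone in that segment answered a routed question at all. Here is a minimal illustrative sketch in Python; the data, field names (`gender`, `q5_brand_rating`), and structure are hypothetical, not from any particular survey platform:

```python
# Hypothetical pilot data: one dict per respondent.
# A value of None means the respondent never saw or skipped the question.
pilot = [
    {"gender": "male", "q5_brand_rating": 4},
    {"gender": "male", "q5_brand_rating": None},
    {"gender": "female", "q5_brand_rating": None},
    {"gender": "female", "q5_brand_rating": None},
]

def segments_missing_answers(rows, segment_field, question_field):
    """Return segments in which no one answered the question --
    a hint that the skip logic may be routing them past it."""
    seen, answered = set(), set()
    for row in rows:
        seen.add(row[segment_field])
        if row[question_field] is not None:
            answered.add(row[segment_field])
    return sorted(seen - answered)

print(segments_missing_answers(pilot, "gender", "q5_brand_rating"))
# ['female'] -- no female respondent ever answered q5: check the routing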
- Unselected answers: Sometimes, it makes perfect sense that no one has selected one of the answer options. In such a case, reconsider why that answer is even there. It’s taking up time and space that might be put to better use with an answer option that people would be more likely to select. At the same time, consider whether the answer should be revised because it doesn’t match the question, it’s socially desirable, it reflects a cognitive bias, or it doesn’t mean what you thought it meant.
- Over-selected answers: Researchers who genuinely care about data quality will regularly include data quality questions such as red herrings. Check whether these answer options are over-selected, as this could mean either that participants aren’t reading carefully or that the product or brand you made up actually does exist. Fix it now so that the question remains usable once the questionnaire is launched to everyone. In the same vein, if some answer options are being selected much more than others in certain questions, check that the answers are being presented in a randomized fashion – if they ought to be at all.
- Straightlining: Ah, the dreaded data quality problem! In many cases, the appearance of straightlining is a clear indication of a poorly designed question. As much as it hurts to incorporate answer options that say negative things about a product or service, it is absolutely essential in a top-quality questionnaire. Make sure at least a third of the answer options in your grid questions are phrased in a negative way, just as you’d make sure that at least a third are phrased in a positive way.
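If you want a quick first pass at spotting straightliners in pilot data, the simplest check is whether a respondent's grid ratings ever vary. This is an illustrative sketch only; the respondent IDs and ratings are made up, and zero variance is a flag for review, not proof of careless answering:

```python
# Illustrative grid responses: one list of ratings per respondent.
grid_answers = {
    "r001": [5, 5, 5, 5, 5],
    "r002": [4, 2, 5, 3, 4],
    "r003": [1, 1, 1, 1, 1],
}

def straightliners(responses):
    """Flag respondents whose grid ratings never vary across items."""
    return sorted(rid for rid, ratings in responses.items()
                  if len(set(ratings)) == 1)

print(straightliners(grid_answers))  # ['r001', 'r003']
```

If a third of your answer options are negatively phrased, an attentive, satisfied respondent should *not* produce a flat line, which makes this flag more meaningful.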
- Extreme scale answers: Have you got a case where all the answers to a grid-style question sit on one extreme of the scale? Reconsider what needs to be revised so that you can create more variation among the answers. You might need to edit the question, the answers, or even the scale labels themselves.
- Non-substantive answers: For the sanity of questionnaire participants, it’s essential to include non-substantive answers like ‘don’t know,’ ‘none of the above,’ and ‘not applicable.’ However, if you see that these answers are being heavily used within a pilot test, it’s a sure sign that the questions are unclear or that relevant answer options are missing. Fix the answer options before you lose a huge chunk of your valuable sample size to often useless responses.
- Sensitive questions: If you happen to have any questions that are obviously socially desirable or sensitive (e.g., illegal or unethical behaviours, sexual preferences or tendencies), check whether the answer options are appropriate. If certain answers seem over- or underused, consider whether you can revise either the question or the answers to be more respectful and understanding of the difficult nature of the topic. Also consider whether you ought to revise the question, or the introduction to it, to better convey to participants why their answers are important.
- Timed pages: If you have data for how long people spend on each page, pay close attention to it. Pages that are completed either extremely quickly or extremely slowly could be indicators of a problem. Quick completions could indicate that people didn’t read the page, perhaps because you made it too wordy. Slow completions could mean that the question was poorly worded, confusing, or difficult to understand.
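A simple way to surface both speeders and strugglers is to compare each page time against the median time for that page. This is a rough illustrative sketch; the times and the 3x thresholds are arbitrary starting points, not an established standard:

```python
import statistics

# Hypothetical completion times (seconds) for one page of the pilot.
page_seconds = [22, 25, 19, 24, 21, 3, 118, 23, 20, 26]

def timing_flags(times, fast=3.0, slow=3.0):
    """Flag times under 1/fast of the median (possible non-readers)
    or over slow times the median (possible confusion)."""
    median = statistics.median(times)
    return sorted(t for t in times
                  if t < median / fast or t > median * slow)

print(timing_flags(page_seconds))  # [3, 118]
```

The median is used rather than the mean because one very slow completion would otherwise drag the average up and hide the speeders.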
- Read the open ends: Often ignored, open ends are the path to your respondent’s heart. Most people will probably respond with ‘nope,’ ‘nothing,’ or ‘thanks,’ but a few people will share golden nuggets of insight. Use their valuable feedback to fix unclear questions, add missing answer options, improve disrespectful wording, or correct the typo that six professional researchers missed! While you’re at it, check how long the answers to each open end are. If it seems that people want to provide much longer answers, make sure they have the room to do so.
If you need help reviewing your questionnaire, please get in touch with us! We love hunting out those hidden problems. Good luck!
You might like to read these:
- How to ask gender, age, employment, and income questions on self-completion surveys
- Top 5 tips for writing a questionnaire people will want to answer
- What is a GenPop market research study?
Canadian Viewpoint is an MRIA Gold Seal field and data collection company that specializes in English and French Canada. Our offline and online services include sample, programming, hosting, mall intercepts, pre-recruits, central location recruitment, mystery shopping, site interviews, IHUTs, sensory testing, discussion boards, CATI, facial coding, and other innovative technologies. Learn more about our services on our website.