Poorly designed survey instruments yield unreliable data. There, I said it, despite all trends to the contrary: DIY platforms are springing up that allow any Tom, Dick, or Harry to throw a survey into the ether and collect data to inform their business decisions. What a thrill it is to collect this data for pennies per respondent. However, unless your questionnaire is well designed (which I’ll explain in a moment), the data you collect could be next to useless. Or worse, it could be just plain wrong. We all follow our own nature, and for many of us, our occupation reflects what’s in our nature to do:
- For a marketer, their job is to educate and inform the customer or prospect of their company’s value proposition. Hence, when commissioning or conducting research, it’s in their nature to successfully represent the value proposition of the product or service they’re marketing, regardless of the goal of the research.
- For product designers, it’s in their nature to create based on the input that informs their inner muse. So when commissioning or conducting research, it’s in their nature to collect data that supports their own muse, regardless of the goal of the research.
- For product managers, it’s in their nature to shepherd their product to market, managing costs and processes to get them to market as efficiently as possible. So when commissioning or conducting research, it’s in their nature to minimize impediments to their process, regardless of the goal of the research.
Market research, by comparison, is guided by the scientific method. It’s in a researcher’s nature to ask questions in a detail-oriented, scientific fashion. As we learned in middle school science class, the scientific method organizes scientific curiosity into experiments designed to reject a null hypothesis. In so doing, the researcher follows a methodology that ensures the experiment is repeatable with the same subjects and reproducible with a new set of subjects.
- Repeatable: If the same subject is asked the same question 6, 24, or 48 hours later, the answer will be the same.
- Reproducible: If the same survey instrument is administered to a different population, drawn with the same sample parameters, it yields the same proportion of responses.
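As a rough sketch of what these two properties mean in practice, here is how you might quantify them for a single question. The wave data, respondent IDs, and answer options below are illustrative assumptions, not drawn from any particular study:

```python
# Hypothetical sketch: quantifying repeatability (test-retest agreement)
# and reproducibility (matching answer proportions across samples).
from collections import Counter

def repeatability(wave1, wave2):
    """Share of respondents giving the same answer when re-asked later.
    wave1 and wave2 map respondent ID -> answer."""
    common = wave1.keys() & wave2.keys()
    if not common:
        return 0.0
    same = sum(1 for rid in common if wave1[rid] == wave2[rid])
    return same / len(common)

def answer_proportions(answers):
    """Proportion of each answer option in a sample."""
    counts = Counter(answers)
    total = sum(counts.values())
    return {option: n / total for option, n in counts.items()}

# The same respondents asked the same question 48 hours apart:
wave1 = {"r1": "Yes", "r2": "No", "r3": "Yes", "r4": "Yes"}
wave2 = {"r1": "Yes", "r2": "No", "r3": "No",  "r4": "Yes"}
print(repeatability(wave1, wave2))  # 0.75: 3 of 4 answered consistently

# A fresh sample drawn with the same screening criteria:
sample_a = ["Yes", "Yes", "No", "Yes"]
sample_b = ["Yes", "No", "Yes", "Yes"]
print(answer_proportions(sample_a) == answer_proportions(sample_b))  # True
```

Perfect agreement is rare in real data, so in practice you'd compare these figures against a tolerance you set in advance rather than demand exact matches.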
Thus, it’s in the researcher’s nature to ask questions and record answers using a methodology that ensures valid data: data that doesn’t merely preach the marketer’s value proposition, data that represents what the market thinks of a product’s design whether or not it fits the product designer’s musings, and data that may disrupt the product manager’s processes. Valid data is representative of a given market or audience; it’s unbiased and objective; and it’s repeatable and reproducible. To ensure these qualities, researchers put great emphasis on questionnaire design. Why? We have a saying: junk in, junk out. Without a quality design that follows best practices, we can’t ensure the quality of the data on the back end of the study. Here are 5 best practices we follow when designing questionnaires:
1. Don’t confuse your respondents. This seems like a no-brainer, but you’d be surprised at how many non-researchers do it effortlessly. For instance, an easy way to confuse respondents is to force them to pick a single response when more than 1 response describes them or their experience. This triggers cognitive dissonance, a term coined by Leon Festinger in the 1950s in the field of social psychology: the mental stress and discomfort a person experiences when holding two or more contradictory beliefs, ideas, or values at the same time. In survey science, the result of cognitive dissonance is usually one of two things: 1) respondents get frustrated and quit the survey, lowering your response rate and risking unmeasured bias in your results, or worse, 2) they get frustrated, angry, and populate your survey with bogus answers. Therefore, great care is required to create response lists that are mutually exclusive and that describe the experiences of roughly 80% of your respondents. The other 20% is typically reserved for an “Other, specify” write-in response.
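One way to check the 80/20 rule after a pilot wave is to look at how many respondents fell into the write-in option. The pilot data and the 20% threshold below are a hypothetical sketch of that check, not output from any real survey platform:

```python
# Hypothetical sketch: if more than ~20% of pilot respondents chose
# "Other, specify", the closed response list probably doesn't cover
# the population well and should be revised.
def other_share(responses, other_label="Other, specify"):
    """Fraction of respondents who chose the write-in option."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if r == other_label) / len(responses)

pilot = ["Email", "Phone", "Email", "Other, specify", "Chat",
         "Other, specify", "Email", "Phone", "Other, specify", "Chat"]
share = other_share(pilot)
print(f"{share:.0%} chose Other")  # 30% chose Other
if share > 0.20:
    # Promote the most common write-ins to closed options and re-pilot.
    print("Revise the response list before fielding the full survey.")
```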
2. Know what you’re measuring. As with muddled response lists, knowing what you’re measuring also entails avoiding double-barreled questions. When a question incorporates 2 or more phenomena to be measured (“How satisfied are you with our price and our customer service?”), which one does the response represent? A good rule of thumb is 1:1, that is, 1 question, 1 metric.
3. Ground behavioral questions in a distinct span of time. Prior to the emergence of ‘Big Data’, which measures behaviors within a given sphere (credit card transactions, phone calls, interactions with healthcare professionals, etc.), measuring most of our behaviors required asking questions in a survey. The pitfall here is our notoriously faulty memories. Numerous fields have weighed in on the personalization of memory: as soon as we see or do something, that action gets interpreted by our brains, and it’s this interpretation that makes up our memory, not the action itself. The effect grows more pronounced the further an action recedes in time. Hence, when asking about a behavior, it helps to ground the question in a timeframe that’s as immediate as possible, while balancing the probability that respondents have performed the behavior often enough in that window to yield useful data. For small behaviors, a day, a few days, or a week may be a suitable amount of time. For bigger behaviors, 1, 3, or 6 months may be more appropriate. Avoid asking about “average” behaviors like you’d avoid the Zombie Apocalypse.
4. Ask questions that your respondents can answer. By that I mean: if respondents have indicated they’ve never used a product, don’t follow up with a question about their satisfaction with said product. Most, if not all, Internet survey platforms come with filtering (skip logic) capabilities. Filter out respondents who shouldn’t be asked a question given their previous responses. You’ll minimize frustration and maximize the validity of the data you collect.
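The routing logic behind that kind of filter is simple. Here’s a minimal sketch; the question IDs and answer wording are made up for illustration and aren’t tied to any particular survey platform:

```python
# Hypothetical sketch of skip logic: only ask the satisfaction question
# of respondents who said they've used the product.
def next_question(answers):
    """Route a respondent based on their earlier answers.
    answers maps question ID -> recorded response."""
    if answers.get("used_product") == "No":
        # Never-users can't rate satisfaction; skip to a follow-up
        # they CAN answer.
        return "q_why_not"
    return "q_satisfaction"

print(next_question({"used_product": "Yes"}))  # q_satisfaction
print(next_question({"used_product": "No"}))   # q_why_not
```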
5. Seek opportunities NOT to bias your respondents. Biases, both measured and unmeasured, can be the bane of your survey data. One source of bias that’s easily accounted for and rectified is the way you phrase your questions. Rating questions, for instance, are easily asked in a biased way. As a rule of thumb, always mention both ends of the scale when phrasing the question (“How satisfied or dissatisfied are you…?”) so that, even unconsciously, you permit the respondent to consider both sides. By mentioning only 1 side, it’s almost as if you control their eyes: they immediately seek the side of the scale you mentioned and select their preferred answer.

Just as each occupation follows from each person’s nature, it’s also part of our shared DNA that we respond positively to content that resonates with us. That is, we seek to understand the world in our own image or experience. When presented with a question, we seek to find our own answer in it. It’s how we have survived for millennia: by finding a common language with which to create community. We learned early on that there’s power in numbers. These best practices will help you collect the repeatable and reproducible numbers you need to make the decisions you have to make. Or just hire us to do it for you.