Quality data is your No. 1 priority when leading a research survey. After all, if you can’t rely on the information you’re collecting, there’s no sense in conducting the research. You need to be sure of your data, so you can make informed decisions based on the insights.
But not everyone has your best interests in mind. In fact, there are respondents who can ruin your survey results (knowingly or not) with a few clicks. Your primary protection against these culprits is understanding who they are and how they can impact your results.
Armed with that knowledge, you can put preventive measures in place to mitigate risk and ensure data quality.
The Uninformed
These respondents may be well-meaning people, but they lack the expertise you’re seeking from your survey results. If those with less knowledge tend to answer your survey, the data doesn’t represent the larger population you need to reach. The data, and any insights you derive from it, will be skewed.
For example, you want to reach IT security managers, but you’re getting responses from everyone who works in IT.
The first line of defense against the Uninformed is a well-defined respondent profile. It should be narrow enough to target the specific segment you want to study but broad enough to yield a sufficient sample size. Finding that balance is integral to survey success. In this instance, you want to be clear you’re looking for IT security managers, not IT staff in general.
The second safeguard is a strong screener section. Screener questions act as gatekeepers: placed at the beginning of the survey, they filter out respondents you’re not interested in studying and let only qualified respondents answer the remaining questions. You’d want to avoid asking, “Do you work in IT?” Instead, try “Are you responsible for IT security at your organization?”
Three tips for ironclad screener sections:
- Ask about behaviors, not just demographics and firmographics.
- Gauge knowledge of products and brands.
- Plant red herrings, like a product that doesn’t exist, and disqualify those who claim to use it.
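For illustration, here’s a minimal sketch of how those three checks might be applied programmatically, assuming responses arrive as simple dictionaries. The question keys, answer values, and the planted product name are hypothetical placeholders, not features of any particular survey platform.

```python
# A minimal sketch of a screener check that applies the three tips above.
# Question keys, answer values, and the planted product name are hypothetical.

RED_HERRINGS = {"AcmeShield Quantum"}  # fictitious product planted in the tool list

def passes_screener(response: dict) -> bool:
    """Return True only if the respondent qualifies for the full survey."""
    # Behavior, not just demographics: responsibility for IT security,
    # rather than merely working in IT.
    if response.get("responsible_for_it_security") != "yes":
        return False
    # Product and brand knowledge: must report using at least one security tool.
    tools = set(response.get("security_tools_used", []))
    if not tools:
        return False
    # Red herring: disqualify anyone claiming to use a product that doesn't exist.
    if tools & RED_HERRINGS:
        return False
    return True
```

A respondent who selects the planted product is screened out before they ever reach the questions you actually care about.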
The Inattentive
These people may not be paying attention because they’ve been overcome by survey fatigue or are otherwise distracted. They may end up abandoning the survey altogether, wasting time and money.
For example, you ask purchasing managers to rate the importance of 20 key purchasing criteria. Overwhelmed, the respondent straight-lines their responses (selecting the same rating for every item) just to advance to the next question, or abandons the survey completely at that point.
How do you manage the Inattentive? Don’t give them opportunities to lose interest. Vary the type of questions strategically, alternating between multiple choice, rating scales, and short open-ended questions. However, avoid questions that appear complicated, such as long matrix questions, or anything that requires significant scrolling on a mobile device.
Three tips to keep respondents engaged without overloading them:
- Keep surveys short, focused, and relevant.
- Avoid jargon, and stick with clear, simple language.
- Use logic to skip sections that do not pertain to a specific respondent.
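The last tip, skip logic, can be thought of as a simple rule that maps a respondent’s screener answers to the sections they should see. A rough sketch, with hypothetical section names and question keys:

```python
# A minimal sketch of skip logic. Section names and question keys are hypothetical.

def sections_for(screener: dict) -> list[str]:
    """Return only the survey sections relevant to this respondent."""
    sections = ["screener"]
    if screener.get("manages_cloud_workloads") == "yes":
        sections.append("cloud_security")
    if screener.get("manages_endpoints") == "yes":
        sections.append("endpoint_security")
    sections.append("budget_and_vendors")  # shown to every qualified respondent
    return sections
```

Most survey platforms handle this with built-in branching or display logic rather than code, but the principle is the same: never show respondents a section that doesn’t apply to them.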
The Frauds
These respondents may be the most difficult to spot because they’re deliberately misrepresenting themselves to complete your survey, whether for a financial incentive or another self-serving motivation.
The bottom line is that they don’t represent your target audience, and you don’t want their responses making it into your dataset.
Say you’re running a survey of Fortune 500 marketing leaders and offering a $100 gift card to those who complete it, but an independent consultant pretends to work for a large company to collect the reward. In this scenario, you might ask specific questions only a genuine insider could answer consistently, such as “What was your company’s marketing budget last year?” or “Which marketing automation platforms does your company use?”
Many of the same tactics used to catch the Uninformed will help identify Frauds: screeners, red herrings, and cross-referencing responses for consistency.
Three telltale signs the data from a respondent shouldn’t make the cut:
- Straight-lining or patterned responses
- Finishing the survey too quickly
- Agreeing with all statements
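All three signs can be checked automatically once fieldwork is done. Here’s a minimal sketch using pandas, assuming one row per respondent, a 1-to-5 rating battery, and a recorded completion time; the column names and the two-minute threshold are assumptions, not industry rules.

```python
# A minimal sketch of post-fieldwork quality checks.
# Column names and the minimum-duration threshold are hypothetical.
import pandas as pd

RATING_COLS = [f"criterion_{i}" for i in range(1, 21)]  # 1-5 rating scale items
MIN_SECONDS = 120                                       # assumed plausible minimum time

def flag_suspect_respondents(df: pd.DataFrame) -> pd.DataFrame:
    """Return one boolean flag per telltale sign, plus an overall 'suspect' flag."""
    ratings = df[RATING_COLS]
    flags = pd.DataFrame(index=df.index)
    # Straight-lining: the same rating chosen for every item in the battery.
    flags["straight_line"] = ratings.nunique(axis=1) == 1
    # Speeding: finished the survey implausibly fast.
    flags["speeder"] = df["duration_seconds"] < MIN_SECONDS
    # Acquiescence: agrees (4 or 5) with every statement, even reverse-coded ones.
    flags["agrees_with_everything"] = (ratings >= 4).all(axis=1)
    flags["suspect"] = flags.any(axis=1)
    return flags
```

Treat flagged rows as candidates for review rather than automatic removals; a tired but honest respondent can trip the same checks.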
The Bots
Bots are an increasingly serious data quality threat because they can generate believable responses and deceive researchers. They may be programmed to exploit financial incentives offered to human participants or to influence research outcomes.
Say you’re offering a gift card or panel reward points for anyone who completes your survey. A bot farm uses a script to complete the survey multiple times, accumulating rewards yet not providing you with any information of value.
Four ways to defend against bots:
- Implement time-based response limits.
- Employ automated QC flags to identify bot-like behavior, like irregular response patterns.
- Use IP address tracking.
- Add honeypot (or hidden) questions, like white text on a white background, that humans can’t see and only bots will answer.
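These defenses can be combined in an automated pass over completed responses. In the sketch below, the field names, the minimum duration, and the per-IP cap are all assumptions; the honeypot itself would be built into the survey, for example as a question hidden with CSS, and the check simply looks for any answer to it.

```python
# A minimal sketch of automated bot checks over completed responses.
# Field names, thresholds, and the per-IP cap are hypothetical.
from collections import Counter

MIN_SECONDS = 90   # assumed floor: faster than a careful human could finish
MAX_PER_IP = 3     # assumed cap on completes from a single IP address

def bot_flags(responses: list[dict]) -> dict[str, list[str]]:
    """Map respondent IDs to the bot-like behaviors they triggered."""
    ip_counts = Counter(r["ip_address"] for r in responses)
    flagged: dict[str, list[str]] = {}
    for r in responses:
        reasons = []
        # Time-based limit: completed below the minimum plausible duration.
        if r["duration_seconds"] < MIN_SECONDS:
            reasons.append("too fast")
        # Honeypot: a hidden question humans never see; any answer implies a script.
        if r.get("honeypot_answer"):
            reasons.append("honeypot answered")
        # IP tracking: many completes from one address suggests a bot farm.
        if ip_counts[r["ip_address"]] > MAX_PER_IP:
            reasons.append("shared IP over limit")
        if reasons:
            flagged[r["respondent_id"]] = reasons
    return flagged
```

None of these checks is conclusive on its own (shared office networks reuse IP addresses, for instance), so it helps to combine several signals before discarding a response.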
How to Protect Yourself
Your most valuable defense against these data quality threats is proactive awareness and preparation. While survey programming tools and technology offer helpful safeguards, they’re only as effective as the person who implements them.
By understanding the four types of respondents that threaten your data quality, you can build protections into how your survey is designed, in both its wording and its programming, and know which types of responses to watch out for.