01/29/2026

Two Data Quality Challenges in Survey Research (And How to Fix Them)

Data quality issues cost researchers time, money, and confidence in their findings. No matter what type of research you’re conducting, understanding the root causes of poor quality is the first step toward prevention.

Most issues can be traced back to these two factors:

  • Unqualified or inattentive respondents are taking the survey

  • Qualified respondents are providing incomplete, inconsistent, or inaccurate data

Addressing both issues requires more than a single safeguard. It requires thoughtful panel sourcing, intentional survey design, and ongoing quality checks all working together.

Below, we’ll look at the two most common threats to data quality, and discuss practical ways to minimize or mitigate their impact.

Challenge 1: Unqualified Respondents Are Getting Into Your Survey

The first way to ensure accurate, useful results is to filter out as many poor-quality respondents as you can. These techniques can help you achieve that.

Panel Strategy and Respondent Sourcing

A strong panel provider plays a critical role in reaching target respondents and maintaining high survey data quality. Many teams choose to work with more than one panel provider.

Using multiple panels can broaden your reach, speed up fielding, and manage costs. It also helps you spot inconsistencies in the results across providers, which may indicate issues with sourcing respondents.
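One simple way to spot cross-provider inconsistencies is to compare answer distributions by panel. The sketch below is purely illustrative (the data shape, function names, and the 20% deviation cutoff are assumptions, not anything prescribed in this article): it flags any provider whose share of an answer drifts far from the cross-provider average.

```python
from collections import Counter, defaultdict

def answer_shares(responses):
    """Per-provider share of each answer choice.
    responses: list of (provider, answer) tuples."""
    counts = defaultdict(Counter)
    for provider, answer in responses:
        counts[provider][answer] += 1
    return {p: {a: n / sum(c.values()) for a, n in c.items()}
            for p, c in counts.items()}

def flag_outlier_providers(shares, threshold=0.20):
    """Flag providers whose share of any answer deviates from the
    cross-provider average by more than `threshold` (arbitrary cutoff)."""
    answers = {a for s in shares.values() for a in s}
    flagged = set()
    for a in answers:
        avg = sum(s.get(a, 0.0) for s in shares.values()) / len(shares)
        for p, s in shares.items():
            if abs(s.get(a, 0.0) - avg) > threshold:
                flagged.add(p)
    return flagged
```

A flagged provider isn't proof of bad sourcing; it's a prompt to look closer at how that panel recruits respondents.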

Survey Screener Best Practices

One of the most effective ways to improve your data quality from the outset is to present your audience with a strong screener that filters out respondents who do not have the knowledge, experience, authority, or characteristics to provide meaningful information.

Not only do strong screener questions help ensure you collect accurate and useful data, but they also save you all the tedious and time-consuming work of manual data cleaning later in the process.

For more information on writing an ironclad screener, see Want Better Survey Data Quality? Start With a Stronger Screener.

Identifying Low-Quality Survey Responses

Despite your best efforts, some poor-quality respondents might still make it into your survey. But you can identify and remove them by putting strict quality control procedures in place.

The optimal approach to quality checks combines behavioral signals, tech-enabled indicators, and survey-specific contextual flags. Measure behaviors like typing cadence, scroll velocity, and engagement patterns to weed out problematic respondents. Apply automated filters, such as flagging duplicate IP addresses (especially known bot IPs) and location or time-zone mismatches.
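A few of these automated checks can be sketched in code. This is a minimal, hypothetical example (the record fields, flag names, and the 120-second speeding threshold are all illustrative assumptions): it flags duplicate IPs, geography mismatches, and suspiciously fast completions.

```python
from collections import Counter

def quality_flags(respondents, min_seconds=120):
    """Return {respondent_id: [flags]} for common automated checks:
    duplicate IPs, country mismatches, and speeding.
    `min_seconds` is an arbitrary example threshold."""
    ip_counts = Counter(r["ip"] for r in respondents)
    flags = {}
    for r in respondents:
        issues = []
        if ip_counts[r["ip"]] > 1:          # same IP appears more than once
            issues.append("duplicate_ip")
        if r["ip_country"] != r["claimed_country"]:  # geo mismatch
            issues.append("geo_mismatch")
        if r["duration_seconds"] < min_seconds:      # completed too fast
            issues.append("speeder")
        if issues:
            flags[r["id"]] = issues
    return flags
```

In practice these flags are usually reviewed rather than used to auto-delete responses, since any single signal can have an innocent explanation (shared office IPs, travel, and so on).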

Challenge 2: Good Respondents Are Giving You Bad Data

You might think that survey-takers who match your target respondent profile and pass the screener will automatically provide high-quality data. But this isn’t always the case.

Good respondents may have the best intentions, but there are obstacles that can get in the way. Here’s how you can overcome them.

Poorly Written Survey Questions

If questions aren’t clear and concise, respondents may provide the wrong answer and skew your data. This is why it’s important to avoid industry jargon and complex wording that can be easily misinterpreted.

Some things to avoid:

  • Words that mean different things to different people, such as “occasionally” or “often.” Instead, be specific about frequency, such as “once a month” or “once a week.”

  • Double-barreled questions, such as one asking about both pricing and customer service in a single question. Each question should only address one issue.

  • Leading questions that may influence people to answer a certain way. Remember, you’re conducting the survey to get honesty, so this approach defeats the purpose. Instead of “How much are you enjoying the new exciting chat feature on our website?” try “How do you feel about the chat feature on our website?”

Incomplete Answer Options

Without the correct option available to them, respondents may select another option to advance the survey. This is why “other,” “not sure,” “not applicable” and similar options are essential, especially for certain types of surveys, like due diligence.

Each question should have options that are mutually exclusive (no overlapping choices) and collectively exhaustive (covering every possible answer).

Survey Fatigue

Long or repetitive surveys will lose your respondents’ attention. When survey-takers become bored and disengaged, they may click through quickly or pick the easiest answer just to reach the end. This is true even for respondents who are reliable and well-intentioned.

Keeping the survey concise and focused is a good way to ensure your respondent sticks around for its entirety. Only ask what you need to know from them and eliminate nice-to-have questions that increase the time needed to complete the survey.

Start with the most important questions so you capture essential data first, and use a variety of questions to ensure the respondent stays alert and engaged throughout. A progress bar that lets them know where they are in the survey is a nice addition.

Faulty Logic

Even the most willing participant can get frustrated if the survey is confusing or illogical. If respondents are asked questions not relevant to their experience, they tend to disengage. This is especially true if they’ve answered a previous question in a specific way and then the next question disregards their answer. Make sure your pathing makes sense, and you’re only asking relevant questions.

Recall Bias

Even if respondents are qualified, they may not accurately recall everything you’re asking. Be mindful of this when you’re asking them to remember specific events or purchases over long timeframes. You also must be reasonable when asking them to quantify frequency or spending, especially when it’s something they may not have historically calculated.

Unfortunately, this is one of the harder survey challenges to overcome because it’s rooted in memory, not motivation. You can design around it by shortening the recall period or anchoring memories with time cues (like “the last time” or “since the start of”). Instead of asking respondents to recall, consider using recognition, such as asking them to select the brands they’ve purchased from a list.

The Bottom Line

Strong survey data depends on both who participates and how they respond.
Screening out unqualified respondents helps protect the integrity of your data, but it’s only half the equation. Clear questions, sound survey logic, and respondent-friendly design also play a critical role.

Data quality shouldn’t be something you evaluate only after the fact. When it’s built into the survey from the start and maintained throughout fielding, you can have greater confidence not just in the results, but also in the decisions you make with them.