A survey is a powerful tool. It can capture everything you need to know to make confident, well-informed decisions. But a survey’s power depends on how well it’s designed.
Without a clear research objective, well-defined audience, unbiased questions, and consistent scales, your results may fall short. Here are the dos and don’ts of designing a survey.
Start with a strong research objective that ties directly to the question you’re trying to answer. Ask yourself: is each question in the survey aligned with your research goals?
If you go off course, survey respondents may drop out because the questions feel irrelevant to them. And it’s not worth collecting data you’re never going to use just for the sake of it.
Pro tip! When ordering your survey, put the most important questions at the beginning, and save the nice-to-have ones for later.
Begin by identifying your target market, then narrow this pool to those who fit your specific research criteria. Your research objective will guide how you define that audience.
Aim to write concise and clear questions that don’t leave room for interpretation.
Pro tip! Avoid double-barreled questions that ask for feedback on two or more issues, like, “How confident are you in the company’s sales and marketing strategies?” When you include two topics in one question, it’s hard to answer accurately. For clarity, separate them into two questions.
When you use consistent scales throughout your survey, you reduce participants’ mental effort and minimize respondent error.
Inconsistent scales may not only distort results but also confuse participants, leading to frustration. Use a predefined numerical scale that is the same for each question.
For example, for a due diligence survey, you may ask:
On a scale of 1 to 5 (where 1 is very low, 2 is low, 3 is moderate, 4 is high and 5 is very high), how effective is the leadership team at executing strategy?
When you ask about the effectiveness of another department, you’ll want to use the same numerical range and predefined options:
On a scale of 1 to 5 (where 1 is very low, 2 is low, 3 is moderate, 4 is high and 5 is very high), how effective is the human resources team at communicating changes in the workplace?
This helps simplify your analysis and allows you to make direct comparisons across multiple responses.
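To make the payoff concrete, here is a minimal sketch (with hypothetical data and question labels) of why consistent scales simplify analysis: when every question uses the same 1-to-5 range, mean scores are directly comparable across questions.

```python
from statistics import mean

# Hypothetical responses: each key is a survey question,
# each list holds respondents' ratings on the same 1-5 scale.
responses = {
    "leadership_executes_strategy": [4, 5, 3, 4, 4],
    "hr_communicates_changes":      [2, 3, 3, 2, 4],
}

# Because both questions share one scale, these averages
# can be compared side by side with no rescaling.
for question, ratings in responses.items():
    print(f"{question}: mean = {mean(ratings):.2f}")
```

If the two questions had used different ranges (say, 1-5 and 1-10), you would first have to normalize them before any comparison was meaningful.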
Pro tip! Where appropriate, offer “other” and “not applicable” answer options. This ensures your answer options are inclusive and exhaustive, which improves data quality. When a respondent doesn’t know how to answer a question, they may abandon the survey or select an option at random to move to the next question, which increases fielding time or leads to skewed data.
You’ll have a hypothesis (and maybe even a hope) for what your data will show you, but you still want to conduct the survey in a neutral manner.
Avoid leading questions, like, “How satisfied are you with the company performance incentive system?” Instead, ask respondents to rate the effectiveness of the incentive system on a defined scale.
Be sure you’re not using words like “always” or “never” within the questions, like, “Do you always trust the financial reports?” You’re looking for nuance in the answers, but you won’t be able to capture those details if you’re using absolute language.
Some biases are hard to catch. You can ask a colleague—or even AI—to look for any presumptive questions you may have missed.
Pro tip! Keep an eye on the responses during your soft launch. If the results are out of line with your expectations, wording may be to blame.
Be mindful of cost, time, and human capital resources during the design phase of the survey. Determine if you’ll need to offer any incentives for completing the survey, in addition to technology and analysis costs. This may mean right-sizing your survey and adjusting the scope depending on your budget.
You’ll also want to be clear about the timeframe of the survey: when it will launch, how long it will stay in the field, and when you need the results.
You’ll also want to be explicit about the roles people will play in the survey. Make sure they understand their responsibilities within the project and when they’ll need to step in.
A soft launch typically consists of 5% to 10% of respondents and is how you can figure out if you’ve got biased wording, poor quality controls, or faulty logic. In this phase, you may even realize that your audience needs to be further defined or widened.
At every stage of the survey, you should build in quality checks to flag suspicious responses. These can come from people who don’t have the qualifications to provide meaningful answers, people who aren’t paying attention to the questions, and people—or bots—who are simply interested in getting the reward at the end of your survey.
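Two of the checks described above can be automated. The sketch below (field names and thresholds are illustrative assumptions, not a specific tool's API) flags respondents who “straight-line” by giving the identical rating to every question, and “speeders” who finish implausibly fast:

```python
MIN_SECONDS = 60  # assumed minimum plausible completion time

def flag_suspicious(response):
    """Return a list of quality flags for one survey response."""
    flags = []
    ratings = response["ratings"]
    # Straight-lining: every answer is the same value.
    if len(ratings) > 1 and len(set(ratings)) == 1:
        flags.append("straight-lining")
    # Speeding: finished faster than a plausible reading pace.
    if response["seconds_to_complete"] < MIN_SECONDS:
        flags.append("speeder")
    return flags

# Example: a respondent who answered 3 to everything in 25 seconds.
print(flag_suspicious({"ratings": [3, 3, 3, 3, 3],
                       "seconds_to_complete": 25}))
# → ['straight-lining', 'speeder']
```

In practice you would review flagged responses rather than discard them automatically, since a uniform set of ratings can occasionally be honest.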
Quality checks and soft launches are essential to ensure your survey will be successful before going live. After all, you don’t want to waste resources collecting data that ends up being unreliable.
Once you’re ready to start writing your survey questions, download our easy-to-use guide for practical tips and examples.