When constructing any kind of survey, it is important to keep a few tenets of human behavior in mind. People may have an emotional investment in your success, but that sentiment only goes so far: if you make the survey too cumbersome or difficult, the desire to help will fade quickly. Similarly, asking too much of a person, such as requiring follow-up questions or repeated participation, may create greater resistance. Certain word choices and emotionally charged phrasing can also skew the results of your survey and should be avoided. If your target audience is captive, like a group of employees at your company, you may have somewhat greater latitude, but even then a poorly constructed or cumbersome survey will result in low participation. In part one of this series we looked at the basics of the survey construction process; in this part we'll take a closer look at some of the psychological factors that can help you create better questions.
While you are drafting your questions
Providing a limited number of answer choices for a question risks survey abandonment and unreliable conclusions. For instance, if you ask “What is your favorite fruit?” as a closed-ended question and provide only “Apple, Orange, Banana” as possible answers, you are going to get either no answer or an unreliable answer from all the strawberry and kiwi lovers in your audience. If you’re basing your next flavor of widget on this customer survey, you may be disappointed in its success.
When you are using rating scales be sure to keep them consistent in structure between questions. If 1 is the low score for questions 5 and 6, it should also be the low score for questions 12 and 17 – once you’ve trained your audience how to answer, don’t accidentally trick them into giving you an invalid response.
As you refine and edit
Look at your questions for wording that may trigger psychological resistance in your audience. You want to allow people to give both negative and positive feedback, but you do not want to lead their answers to either a negative or positive place as it could alienate survey takers who feel differently. If you signal the answers you are hoping for, you’re likely to make the response data less valid.
When you beta test your questions
Even if your beta testers don't ask questions about the survey questions, be sure to evaluate their answers. If these answers were typical of what you would receive from the full set of survey responses, would you have achieved your goals? If not, consider refining your vocabulary or further focusing each question to better ask for the kind of information you need.
We’ve taken you through the thought process of developing survey questions; now we’ll show some examples of how to write and refine those questions for the best results.
First round: What do you think of Company A’s customer service?
Second round (open-ended): When you think of Company A’s customer service, what three words come to mind?
Second round (closed-ended): Please rank Company A’s overall customer service on a five point scale, where 1 is the lowest score and 5 is the highest score.
For the following attributes of customer service departments, please rank Company A’s customer service on a five point scale, where 1 is the lowest score and 5 is the highest score:
Ability to answer questions
First round: What does Company A do best?
Second round: Is there a special attribute, service area or other differentiating characteristic for which Company A is known?
First round: What is Company A’s greatest weakness?
Second round: Is there an area in which Company A should look to improve?
Keep your own psychology in mind as you analyze the information you gather. Negative feedback can be difficult to hear, but it is often necessary in order to plan properly how to move forward. Similarly, too much positive feedback can stifle your opportunity to think creatively about how to make use of the information. It is difficult to solve problems if you don’t know what they are, and you probably wouldn’t need to issue a survey if you weren’t at least partially looking to improve in some way.
Analyzing qualitative information from open-ended questions can be much more challenging than aggregating percentages from your closed-ended questions. Be sure to look for patterns in the responses and consider creating your own ranking scale based on the trends you identify. For example, if you’ve asked for three words that describe Company A’s customer service, you can create a word cloud of the responses, but you can also assign each response a positive or negative value and create a percentage-positive vs. percentage-negative graph, making overall perception easier to visualize.
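As a minimal sketch of that positive/negative tally, here is one way it might look in Python. The example responses and the sentiment labels are invented for illustration; a real analysis would need a lexicon that covers the vocabulary actually appearing in your survey data.

```python
from collections import Counter

# Hypothetical answers to "What three words come to mind?" (invented data)
responses = [
    ["friendly", "slow", "helpful"],
    ["rude", "slow", "knowledgeable"],
    ["helpful", "fast", "friendly"],
]

# Illustrative sentiment labels assigned by the analyst; unlisted words
# are simply skipped rather than guessed at.
sentiment = {
    "friendly": "positive", "helpful": "positive", "fast": "positive",
    "knowledgeable": "positive", "slow": "negative", "rude": "negative",
}

# Flatten the word triples, map each known word to its sentiment, and count
words = [w for trio in responses for w in trio]
counts = Counter(sentiment[w] for w in words if w in sentiment)
total = sum(counts.values())

pct_positive = 100 * counts["positive"] / total
pct_negative = 100 * counts["negative"] / total
print(f"{pct_positive:.0f}% positive / {pct_negative:.0f}% negative")
```

The resulting two percentages are exactly what you would feed into a simple bar or pie chart to visualize overall perception.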
Good luck with your survey creation. Let us know if you’d like some help.