Do you ever “give away” answers in your surveys? I’m talking about subtle (and not-so-subtle) signals that can lead to bias. Here are a few errors to avoid:
Several weeks ago I refinanced my house using an online lender. All ended well, but there were a few glitches along the way – a key email with documents attached was apparently lost and I had to prompt the company to follow up with the underwriter.
The day after closing I received the following survey invitation from the mortgage processor:
Subject: I so appreciate YOU! Please help if you can I am so close to being # 1 in the company having “GREATs”…
Thank you so much for being an amazing customer to work with. I greatly appreciate all your help to get your loan taken care of. I hope that you feel I have given you “GREAT” customer service. My managers would love to hear what you think about my performance as your processor. If you do not mind, please take 1 minute to fill out the 1 question survey to help me out. We are always looking for “GREATs.”
Apparently customer-service ratings at that company are used in compensating or rewarding mortgage processors. That’s fine. But it raises a question: Why would the company – which cares enough about satisfaction to tie it to rewards – let the person being evaluated pander for good ratings in the survey invitation?
You may have seen a more subtle form of this:
Thanks for coming to the SuperDuper Missions Conference. Weren’t the speakers and worship music great? Plus, over 300 people responded to the challenge to give or go. Hopefully you were as blessed as I was.
Say, I would love to get your feedback to help us make future conferences even better! Here’s a link to the survey…
It can be hard to contain enthusiasm when asking for post-event feedback – especially if you sent out several enthusiastic pre-event emails. But if you want honest input, commit to avoiding remarks that suggest how the event should be evaluated (or how you would evaluate the event).
It Must Be Important Because They’re Asking About It
Most people have encountered surveys with leading questions, designed to confirm and publicize high levels of support for a position on an issue. Like this:
Are you in favor of programs that offer microloans to lift women in developing countries out of the cycle of poverty with dignity through sustainable small businesses, with local peer-accountability networks to ensure loan repayment?
Even if you have read articles about recent studies suggesting that the link between microfinance and poverty reduction is tenuous or non-existent, you might be hard-pressed to answer “no” to the question as worded.
But there are other, more subtle ways that organizations can “suggest” certain responses. Telling people in the survey invitation that the survey is about microloans can encourage people to overstate their interest in that topic (as well as leading to response bias in which interested people are more likely to respond at all). Better to say that the survey is about strategies for poverty reduction or (broader still) addressing key areas of human need in the developing world.
This lets you gauge interest in your issue by mixing it in with several related issues, like this:
From the following list, please select up to three programs that you have been involved in, or would consider becoming involved in:
__ Well-digging programs to help provide a consistent healthy water supply
__ Community health education programs to teach villagers basic hygiene
__ Microloan programs to help women grow sustainable small businesses
__ Literacy programs to help kids and adults gain life and career skills
__ Legal advocacy and awareness to stem human trafficking
__ Theological education programs to equip first-generation church leaders
__ Sponsorship programs to sustain the education and nurture of at-risk kids
The rest of the survey can be about microloans. But before tipping your hand, you learn about interest in that issue relative to other issues, and even how interest in the issues correlates. Plus, you can use survey logic to skip uninterested respondents past follow-up questions that don’t apply to them.
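The skip logic described above can be sketched in a few lines. This is a hypothetical illustration, not the code of any particular survey platform; the option names and the function are invented for the example:

```python
# Hypothetical sketch of survey skip logic: only respondents who selected
# (or would consider) the microloan program see the follow-up battery.

INTEREST_OPTIONS = {
    "well_digging",
    "community_health",
    "microloans",
    "literacy",
    "anti_trafficking",
    "theological_education",
    "child_sponsorship",
}

def show_microloan_followups(selected: set) -> bool:
    """Route a respondent to the microloan follow-up questions only if
    they expressed interest; everyone else skips that section."""
    if not selected.issubset(INTEREST_OPTIONS):
        raise ValueError("unknown option selected")
    return "microloans" in selected

print(show_microloan_followups({"literacy", "microloans"}))  # True
print(show_microloan_followups({"well_digging"}))            # False
```

Most survey tools express the same rule through a point-and-click branching interface rather than code, but the underlying logic is identical.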
You can go even further to mask your interest in the survey issue, even while asking several questions specific to that issue. Before starting the battery of questions about microloans, include a statement like this:
“Next, one of the above topic areas will be selected for a series of follow-up questions.”
The statement is truthful and adheres to research ethics — it does not say that the topic will be randomly selected. But it leaves open the possibility that those who sponsored the survey may be interested in several types of programs, not just microloans, encouraging greater honesty in responses.
Unnecessary Survey Branding
However, these approaches still won’t work if the survey invitation is sent from someone at “Microcredit Charitable Enterprises” and the survey is emblazoned with the charity’s logo. There are many good reasons to brand a survey to your constituents, starting with an improved response rate. But sometimes, branding can be counterproductive.
If objective input is key, consider using an outside research provider to avoid tipping your hand. Research ethics require disclosing who is collecting the data, and an outside firm can truthfully identify itself without revealing which organization sponsored the study.
Allowing Everything to Be “Extremely Important”
Another way that researchers can “give away” answers is by letting people rate the importance of various items independently. Take this question, for instance:
In selecting a child-sponsorship program, how important to you are the following items? Please answer on a scale of 1 to 5, where 1 is “Not at All Important” and 5 is “Extremely Important”:
1 2 3 4 5 Sponsor’s ability to write to and visit the child
1 2 3 4 5 Receiving regular updates from the child
1 2 3 4 5 On-site monitoring of the child’s care/progress
1 2 3 4 5 Written policies regarding how children are selected
1 2 3 4 5 Annual reporting of how your money was used
All of those are important! The question practically begs respondents to give each item a 5. Will that information help the agency? Maybe for external communication, but not in deciding which areas to promote or strengthen.
Instead, consider one of these alternatives:

In selecting a child-sponsorship program, how would you prioritize the following items? Distribute a total of 100 points across the five items.

Or: Please rank the following five elements of a child-sponsorship program in order of relative importance, from 1 (most important) to 5 (least important). You may use each number only once.
In most cases, relative-value questions will produce much more useful data.
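Both formats force trade-offs, and both are easy to validate mechanically. The checks below are a minimal sketch, assuming simple dictionaries of responses; the item names and function names are invented for the example:

```python
# Hypothetical validation for the two relative-value formats:
# a constant-sum (100-point) allocation and a forced ranking.

def valid_allocation(points: dict, total: int = 100) -> bool:
    """Constant-sum check: non-negative points that sum to the total."""
    return all(p >= 0 for p in points.values()) and sum(points.values()) == total

def valid_ranking(ranks: dict) -> bool:
    """Forced-ranking check: each rank 1..n is used exactly once."""
    return sorted(ranks.values()) == list(range(1, len(ranks) + 1))

items = ["visits", "updates", "monitoring", "policies", "reporting"]
print(valid_allocation({"visits": 40, "updates": 25, "monitoring": 20,
                        "policies": 5, "reporting": 10}))  # True
print(valid_ranking(dict(zip(items, [3, 1, 2, 5, 4]))))    # True
```

Because respondents cannot give every item top marks, the results separate "important" from "most important" in a way that independent 1-to-5 ratings rarely do.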
Are there other ways that you have seen surveys “give away” answers to respondents? Or avoid doing so? Let us know about your experiences and ideas.