Simple Survey Idea #2: Send a Reminder

I talk with lots of people who design and field their own web surveys.  It amazes me how many have never considered sending a reminder to those they have invited — even when the invitees are people they know well.

People are often very willing to help, but they are busy and working through lots of messages, and survey invitations are easy to set aside until later.  One reminder is often helpful.  I almost always send at least one reminder out to survey invitees.  In some cases, I will send out a second reminder.  In rare cases, a third.

Why send a reminder at all?  Perhaps it goes without saying, but more data usually equals better-quality information.  Better statistical accuracy is part of that: most people understand that a sample of 300 yields a tighter margin of error than a sample of 100.

But in most cases, response bias will be a bigger threat to the quality of your data than statistical error from sample size.  Consider your sample of 300 responses.  Did you generate those from 400 invitations (a 75% response rate) or 4,000 invitations (a 7.5% response rate)?  The former would give you much greater confidence that those you heard from accurately reflect the larger group that you invited.
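On the statistical-error side, the back-of-the-envelope math is simple.  Here is a minimal sketch using the standard margin-of-error formula for a proportion (it assumes a simple random sample; the function name is my own):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 300 gives a tighter margin than a sample of 100
print(round(margin_of_error(100) * 100, 1))  # 9.8 percentage points
print(round(margin_of_error(300) * 100, 1))  # 5.7 percentage points
```

Notice, though, that the formula says nothing about response bias: it assumes the people who answered look like the people who didn't, which is exactly the assumption a low response rate undermines.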

What is a “good” response rate?  It can vary widely depending on your relationship to the people invited (as well as how interesting and long the survey is, but that’s a topic for another post).  Domestic staff/employee surveys often generate a response of 85 percent or more.  However, for internationally distributed missionary staff, a response of 60 percent is healthy.  For audiences with an established interest in your work (event attenders, network members), a 35-percent response is decent.  For other audiences, expect something lower.  One online survey supplier’s analysis of nearly 200 surveys indicated a median response rate of 26 percent.

So, do reminders substantially increase response to surveys?  Absolutely.  Online survey provider Vovici blogs, “Following up survey invitations with reminders is the most dramatic way to improve your response rate.”  They show results from one survey where the response rate rose from 14 percent to 23, 28 and 33 percent after subsequent reminders.

My experience has been similar.  I find that survey invitations and reminders have something like a “half-life” effect.  If your initial invitation generates X responses, you can expect a first reminder to produce an additional .50X responses, a second reminder .25X responses, and so on.
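To make that rule of thumb concrete (this is my back-of-the-envelope pattern, not a published formula), the projection can be sketched like this:

```python
def expected_responses(initial, reminders):
    """Rule-of-thumb projection: each reminder wave yields half of the previous wave."""
    total = initial
    wave = initial
    for _ in range(reminders):
        wave *= 0.5
        total += wave
    return total

# If the initial invitation brings in 100 responses:
print(expected_responses(100, 1))  # 150.0 after one reminder
print(expected_responses(100, 2))  # 175.0 after two
```

In other words, a single reminder buys you roughly half again as many responses, while each additional reminder yields diminishing returns.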

I disagree with survey provider Zoomerang’s suggestion of sending a series of three reminders — especially if the audience is people you know — but I do agree with their statement, “Think of your first email reminder as a favor, not an annoyance.”  I recommend sending at least one reminder for virtually any survey, with a second reminder only if you feel that your response rate is troublesome and you need that extra .25X of input.

At least Zoomerang provides a sample reminder template you can use.  I agree that you should keep reminders short — shorter than the original invitation.  With any invitation or reminder, you will do well to keep the survey link “above the fold” (to use a phrase from old-time print journalism), meaning that it should be visible to readers without their having to scroll down through your message.

I also find it very helpful to use list managers when sending survey reminders.  Most online providers offer an option to send only to those members of your invitation list who haven’t responded.  Not only does this keep you from annoying those who already responded, but you can word the reminder much more directly (and personally, with customized name fields).  So, instead of saying:

“Dear friend — If you haven’t already responded to our survey, please do so today.”

You can say:

“Dear Zach — I notice that you haven’t responded to our survey yet.  No problem, I’m sure you’re busy.  But it would be great to get your input today.  Here’s the link.”

Take care in using the above approach — if you have promised anonymity (not just confidentiality), as in an employee survey, opt for the generic reminder.

When to send a reminder?  If your schedule is not pressing, send a reminder out 5-10 days after the previous contact.  I recommend varying the time of day and week in order to connect with different kinds of people.  If I sent the initial invitation on a Monday morning, I might send the reminder the following Wednesday afternoon.

Simple Survey Idea #1: Keep survey language simple

I am working on a web survey for a group of people in India.  Smart folks, many of them technology savvy.  And they speak English — but often not as their first language.

Some surveys should be translated or fielded in multiple languages.  For many surveys, though, English is sufficient.  But what kind of English?

My default mode is to use more complicated English than is needed.  The more I work with multilingual people around the world, the more I realize the value of keeping language simple, especially with surveys and interview questions.

The good news is that there are tools out there that can help.  Here is a site that lets you paste in text and compare it to one of many collections of simple English words.  It shows which words are not considered simple.

http://www.online-utility.org/english/simple_basic_helper.jsp 

With many international audiences it is a good idea to test your language before sending out a survey.

I put the above portion of this post into the site and found out that the following words were not included in a somewhat large collection of 15,000 simple words: web, savvy, and multilingual.  With smaller collections, many more words miss the cut, including realize, compare, and survey.
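You can run a rough version of the same check yourself.  Here is a minimal sketch — the tiny "simple words" set below is a stand-in for illustration only; the real site compares against curated lists like Basic English:

```python
def hard_words(text, simple_words):
    """Return the words in `text` that are missing from a set of simple words."""
    words = {w.strip('.,;:!?"()').lower() for w in text.split()}
    return sorted(w for w in words if w and w not in simple_words)

# A toy "simple words" set, for illustration only
simple = {"i", "am", "working", "on", "a", "for", "group", "of", "people", "in"}
print(hard_words("I am working on a web survey for a group of people in India.", simple))
# ['india', 'survey', 'web']
```

Whatever word list you use, the point is the same: flag the vocabulary your audience may stumble over before the survey goes out.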

Try it out — you have nothing to lose but complexity.

It’s Winter: Time to Make Snowballs!

A lot of mission researchers are interested in studying people who aren’t easy to get to.  They may be unknown in number, difficult to access, suspicious of outsiders, etc.

This makes random sampling virtually impossible.  Unfortunately, a random sample is an assumption or requirement of many statistical tests.

So, if you’re doing research with underground believers or people exploited in human trafficking, you can’t just go to SSI and rent a sample of 1500 people to call or email.

When you need a sample from a hard-to-reach population, make a snowball!

Snowball sampling (a more memorable name for the formal term, respondent-driven sampling) is a means of building a reasonably large sample through referrals.  You find some people who meet your criteria and who trust you enough to answer your questions, then ask them whether there are other people like them that they could introduce you to.

In each interview, you ask for referrals – and pretty soon the snowball effect kicks in and you have a large sample.

For years this approach was avoided by “serious” researchers because, well, the sample it produces just isn’t random.  Your friends are probably more like you than the average person, so talking to you and your friends isn’t a great way to get a handle on your community.

But, as with six degrees of separation, the further you go from your original “seeds,” the broader the perspective.  And in recent years, formulas have been developed that virtually remove the bias inherent in snowball samples – opening up this method to “respectable” researchers.

How to do it?  Some researchers simply throw out the first two or three generations of data, then keep everything else, relying on three degrees of separation.  Not a bad rule of thumb.
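The discard-early-generations rule is easy to apply if you simply record, for each respondent, who referred them.  A minimal sketch of the bookkeeping (the data layout and names are my own, not from any particular package):

```python
def referral_generation(respondent, referred_by):
    """Count how many referral steps separate a respondent from a seed.
    `referred_by` maps each respondent to their referrer (seeds map to None)."""
    generation = 0
    current = respondent
    while referred_by[current] is not None:
        current = referred_by[current]
        generation += 1
    return generation

def keep_later_generations(respondents, referred_by, min_generation=3):
    """Keep only respondents at least `min_generation` referral steps from the seeds."""
    return [r for r in respondents
            if referral_generation(r, referred_by) >= min_generation]

# Toy referral chain: A (seed) -> B -> C -> D
referred_by = {"A": None, "B": "A", "C": "B", "D": "C"}
print(keep_later_generations(["A", "B", "C", "D"], referred_by))  # ['D']
```

With real data you would keep every branch of the snowball, not a single chain, but the filtering logic is the same.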

For more serious researchers, there is free software available to help you weight the data and prevent you from having to discard the input of the nice people who got your snowball started.  Douglas Heckathorn is a Cornell professor who developed the algorithm (while doing research among drug users to help combat the spread of HIV) and helped bring snowball sampling back from the hinterlands of researcher scorn.  You can read more about his method here and download the software here.

Suddenly, you need not settle for a handful of isolated snowflakes, nor for a skewed snowdrift of opinion (via an unscientific poll of your social media friends).  Instead, you can craft a chain of referrals into a statistically representative snowman.

Meanwhile, if the sample you need is one of North American field missionaries or North Americans seriously considering long-term cross-cultural service, you should consider renting one of GMI’s mission research panels.  Email us for details.

Analyzing open-ended questions: A bright future for word clouds

Commercial survey research firms usually charge clients significantly extra to include “open-ended” questions in a survey.  They tend to be messy and time-consuming.  Traditionally, analysts would read through a selection of responses to create categories of frequent or typical responses, then read back through all of the responses to categorize them.

For publishing in a peer-reviewed journal, multiple people would categorize responses independently, then work together to create a synthesized coding scheme to limit bias.

Most qualitative text-analysis software still requires you to manually “code” responses.

Despite all that work, open-ended questions are still important in exploratory and qualitative research – and frequently satisfying for survey respondents looking for an opportunity to say what is on their mind, unhindered by structured response categories.

But the tag-cloud age has been a blessing to those without the time and money to do full, traditional analysis of text responses.  Graphics with words sized by frequency of use enable analysts to quickly get a sense of the nature of open-ended responses.
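At its core, a word cloud is just a frequency count with common “stop words” removed.  The counting step can be sketched like this (the stop-word set here is a tiny stand-in, and the sample responses are invented for illustration):

```python
from collections import Counter

STOP_WORDS = {"the", "a", "and", "of", "to", "in", "is", "for"}  # tiny stand-in list

def cloud_weights(responses, top_n=100):
    """Count non-stop-words across open-ended responses; the counts drive font size."""
    counts = Counter()
    for response in responses:
        for word in response.lower().split():
            word = word.strip('.,;:!?"()')
            if word and word not in STOP_WORDS:
                counts[word] += 1
    return counts.most_common(top_n)

responses = ["Prayer and training", "More training for new staff", "Training materials"]
print(cloud_weights(responses, top_n=3))  # [('training', 3), ...]
```

The cloud tool then sizes each word in proportion to its count — everything else is typography.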

New editions of survey software – even budget packages like Survey Monkey – include cloud-creating tools to help users understand open-ended responses at a glance, without all the coding work.

Even those doing traditional coding enjoy working with clouds, which help analysts to quickly create an initial set of codes.

If your survey package doesn’t have cloud-generating capacity, no problem.  Wordle is a free site that lets you create art-like word clouds.  The clouds in the previous post were created using Wordle.  It’s a terrific, easy-to-use site that lets you paste in your text – our data came straight from a spreadsheet – and generate a word cloud with one click.  It automatically removes common words and allows you to choose the cloud shape, color scheme, font, and orientation of the words.  We chose to illustrate the top 100 terms for each question.  Wordle lets you save and use your clouds however you want to.

I really like the tool’s artistic quality.  Wordle clouds almost beg to be shown to others, and those who see them get motivated, too.  My daughter, upon first seeing Wordle, immediately had a vision for making a sign to promote a bake sale.  A few descriptive terms later, she had created a beautiful graphic to draw people’s attention.

This is where research moves from information to influence.  Imagine asking your constituents about their needs – or your organization’s impact – then printing a graphic of their responses to hang in your office as a reminder and motivator to staff.  Unlike a research report, which may or may not get read before being filed away (or worse!), word cloud art can keep research right in front of your team.  The graphic format makes the information more memorable as well.

Researchers, meanwhile, can compare and contrast different audience segments, as I did in the word cloud below.

What applications can you think of for word clouds?