Listening well…and why it matters

 

Does your mission organization listen well?

How would you know?

One of the more famous mission research studies since the turn of the millennium was the ReMAP II study of missionary retention, done by the Mission Commission of the World Evangelical Alliance.

Fieldwork, conducted in 2002-03, involved 600 agencies across 22 countries, representing some 40,000 missionaries.

GMI associates played a prominent role in the research and analysis, as well as in the creation of the book that reported the results, Worth Keeping.  The first half of the book is available free from WEA Resources.

It is an important book and well worth having on your shelf if you are involved in recruiting, assessing, training or leading field missionaries.  The book provides a helpful formula for calculating retention rate that every agency should apply.  Beyond that, its insights include:

    • Some agencies retain missionaries much better than do others.  The average (mean) tenure of those serving in high-retention agencies was 17 years—compared to 7 years in low-retention agencies (p. 3).  That is especially important for certain ministries, for the time between the seventh and 17th year is, according to Patrick Johnstone, “The period most likely to prove fruitful in cross-cultural church-planting ministry” (Future of the Global Church, p. 227).
    • Large agencies offer a decided advantage in retention over smaller agencies (pp. 39-41).
    • Setting a high bar in missionary selection correlates strongly with retention—the more criteria an agency considers in selection (character references, physical health, local-church ministry experience, affirmation of a doctrinal statement), the more likely it is to have strong retention (pp. 69-71).
    • The greater the percentage of an agency’s budget spent on member care—and especially preventative member care—the more likely it is to have strong retention.  In newer sending countries (Majority World), high-retention agencies spend twice as much as low-retention agencies (as a percentage of budget) and twice as much on preventative care (pp. 182-183).

All of these findings are meaningful and credible.  They come from the portions of the survey questionnaire that ask agency administrators to report on facts: What is your agency’s size?  Its retention rate?  The average tenure of departed field staff?  What criteria does it consider?  How much does it spend on member care?  These are facts that would be reported similarly, regardless of who completed the survey on behalf of the agency.

However, a large chunk of the survey instructed agency administrators as follows:

“Please evaluate your mission agency’s practices in the following areas (as evidenced by time, effort and effectiveness).”  Items were rated on a six-point scale ranging from “Not well done” to “Very well done” (p. 413).

Among the 49 items in this section:

  • Missionaries are assigned roles according to their gifting and experience.
  • Good on-field supervision is provided (quantity and quality).
  • Missionaries are generally not overloaded in the amount of work they do.
  • Effective pastoral care exists at a field level (preventative and in crises).
  • Missionaries are included in major decisions related to the field.

During the analysis phase, Jim Van Meter, who led the U.S. analysis, noticed that several items in this section did not significantly correlate with retention rates—and some significant correlations were counter-intuitive.  He asked GMI for a second opinion about why.

Our response: The problem isn’t the questions.  It’s the person answering them!

Administrators can reliably answer factual questions about their agency’s practices, but they cannot reliably answer evaluative questions related to their support of field staff.  The field staff has to answer those questions!

That’s why we launched the Engage Survey in 2006—so that field missionaries could give their input on issues like these.  It is also why we sought a grant to again promote Engage—with a substantial discount to agencies—in 2014-2015.

Consider the last item in that list above: Missionaries are included in major decisions related to the field.  In ReMAP II, agency administrators, both Western and Non-Western, identified this as an area of strength for their agencies.  Further, the item was not linked to retention.

But when we surveyed 1,700-plus fieldworkers, a completely different picture emerged.  “My organization involves employees in decisions that affect them” was one of the 10 lowest-rated items (out of 68).  When combined with related items like “My organization’s management explains the reasons behind major decisions” and “My organization acts on the suggestions of employees,” the factor we titled “Involvement in Decisions” was the lowest rated of 11 themes (principal component factors) in the survey.

 

What is more, the factor was significantly correlated with agency retention.

When we did follow-up depth interviews with current and former missionaries, inclusion in decision-making was one of five encouraging themes related to continuing service.  Exclusion from decision-making was one of six discouraging themes.

In short, everything we hear from field staff says, “This issue is important, and most missions have significant room for improvement.”

So, back to the original questions:

  • Does your mission organization listen well?
  • How would you know?

One clue is your agency’s annual retention rate for long-term cross-cultural workers.  If it is 97 percent or above, you probably listen well relative to other agencies.  If it is below 94 percent, you very likely have room for improvement.

To know for sure, I would strongly recommend surveying your field staff.  Use a survey that assures anonymity for respondents, ideally administered through a third party.  Even better would be to do it collaboratively with other agencies, so you could learn how well you are doing compared to like-minded organizations with globally distributed staff.  And if you can find an experienced researcher to walk you through the results and help you turn them into an action plan, so much the better.

That’s Engage.  Pricing is reasonable (less than $1,000 for many agencies) and is graded by the number of missionaries on staff.  Those signing up by November 30 save 25 percent on registration (via a $125 check from GMI, courtesy of a foundation grant) and 20 percent off the per-missionary graded rate.  By the way, none of the registration fees comes to GMI—our involvement is funded fully through the grant.

Then count the hours it would take you to do this on your own, without comparative benchmarks, a professional-grade survey instrument or follow-up consultation.

Pardon the shameless plug, but Engage is one of the best deals I know of in mission research.  Everyone wins:  Leadership teams get to celebrate successes and identify priorities.  Boards receive meaningful measures and see how leaders are taking initiative.  Field staff gets a chance to be heard and offer ideas.

 

The pitfalls of self-assessment

 

This week, the eminently useful Brigada Today newsletter—in addition to drawing attention to GMI’s Agency Web Review—posted an item from someone looking for a self-assessment survey for local church missions programs, recalling that ACMC used to offer one.

 

Responses were subsequently posted by Marti Wade (who does great work with Missions Catalyst), noting that the tool is still available via Pioneers, which assumed ACMC’s assets when that organization folded; and by David M of Propempo International, which also offers a version of the tool.  A snapshot of an excerpt from the ACMC/Pioneers version appears above.

Seeing the ACMC survey brought back a memory from a 2003 project that GMI did for ACMC.  We surveyed 189 ACMC member churches to understand the status of church mission programs as well as their needs and goals.  The survey included each of the twelve questions from the self-assessment grid.

Subsequently, we did statistical modeling to determine whether, and to what degree, various church missions program elements were associated with growth in missions sending and with missions budget as a proportion of overall church budget.

Unfortunately, most of the correlations were not statistically significant, and those that were significant were negatively correlated—meaning churches that rated their mission program highly (placing greater priority on the various dimensions) tended to demonstrate less growth in sending or lower relative financial commitment.

How could this be?

Turns out that this is a fairly common outcome of self-assessment exercises.  In short, teams with excellent performance also tend to have high standards—and their vision for growth frequently leads them to be more self-critical than lower-performing teams, which often have lower standards.

So, am I discouraging local churches from using the Mission Assessment Tool?  Not at all.  I encourage churches to download it and use it as the basis for discussion—it can be a great discussion starter for vision building, clarifying core values and identifying priorities for development.  Given the dynamics described above, you may find that some team members differ on where the program stands—or where the opportunities for growth lie.

But when program evaluation is the goal, it helps to have outside eyes providing the perspective.  Those well equipped to offer feedback on a church’s mission program are:

1. Those served by or in partnership with the mission team, such as missionaries who may have other supporting churches (their responses must be kept anonymous), and/or

2. Outside consultants who work with many church mission programs and have a valid basis of comparison.

Meanwhile, at the 30,000-foot level, researchers, missiologists and consultants are eager to discover the key differences between high-performing church mission teams and others.  The statistical modeling sought to answer the question: Which program elements are the most common outflows (or drivers) of increased financial/sending commitment?  Better mission education?  Better worker training?  Greater emphasis on strategy?  More local mission involvement?  This is where self-assessment bias—seen across a sample of 189 churches—becomes a problem.

One helpful approach is to focus on relative data.  Were we to re-examine the analysis today, I would be inclined to transform the raw data into relative performance rankings (each church’s perception of its relative strengths and weaknesses).  This compensates for differing standards of excellence by looking at each church’s priorities.
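To make the idea concrete, here is a rough sketch in Python (pandas, with made-up churches, program elements and ratings—not the 2003 data) of the kind of rank transform I have in mind:

```python
# A minimal sketch (made-up churches, columns and ratings) of converting each
# church's raw 1-6 self-ratings into within-church ranks, so that churches with
# different standards of excellence are compared on their priorities instead.
import pandas as pd

ratings = pd.DataFrame(
    {
        "mission_education": [6, 4, 5],
        "worker_training":   [6, 5, 3],
        "strategy_emphasis": [5, 3, 4],
        "local_involvement": [6, 2, 5],
    },
    index=["Church A", "Church B", "Church C"],
)

# Rank each element within its own church (1 = that church's strongest area).
# Ties share the average rank, so a church that rates everything a 6 signals
# "no clear priority" rather than inflating every element.
relative = ratings.rank(axis=1, ascending=False)
print(relative)
```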

Self-evaluation bias can also be reduced by developing assessment tools with response scales/categories that are so observably objective that they cannot easily be fudged.  The ACMC tool uses descriptions for each commitment level that are intended to be objectively observable—but in some cases they are subject to interpretation, or to cases where a higher-level condition may be met while a lower-level condition is unfulfilled.  In the 2003 study we gave specific instructions to respondents that they should proceed through each scale a step at a time, stopping at the lowest unmet condition.  However, such an instruction may not have been enough to overcome the need of some respondents to affirm their church’s mission program with high marks.

This issue also points to the importance of testing assessment tools for validity among a pilot sample of respondents—with results compared to an objective measure of excellence.

Take care with self-assessment.  After all, scripture itself warns us in Jeremiah 17:9 that “the heart is more deceitful than all else and is desperately sick; who can understand it?”

 

Simple Survey Idea 4: Don’t give the answers away

Do you ever “give away” answers in your surveys?  I’m talking about subtle (and not-so-subtle) signals that can lead to bias.  Here are a few errors to avoid:

Pandering

Several weeks ago I refinanced my house using an online lender.  All ended well, but there were a few glitches along the way – a key email with documents attached was apparently lost and I had to prompt the company to follow up with the underwriter.

The day after closing I received the following survey invitation from the mortgage processor:

Subject: I so appreciate YOU! Please help if you can I am so close to being # 1 in the company having “GREATs”…

Thank you so much for being an amazing customer to work with. I greatly appreciate all your help to get your loan taken care of. I hope that you feel I have given you “GREAT” customer service. My managers would love to hear what you think about my performance as your processor. If you do not mind, please take 1 minute to fill out the 1 question survey to help me out. We are always looking for “GREATs.”

Apparently customer-service ratings at that company are used in compensating or rewarding mortgage officers.  That’s fine.  But the question it raises is: Why would the company – which cares enough about satisfaction to tie it to rewards – let the person being evaluated pander for good ratings in the survey invitation?

You may have seen a more subtle form of this:

Thanks for coming to the SuperDuper Missions Conference.  Weren’t the speakers and worship music great?  Plus, over 300 people responded to the challenge to give or go.  Hopefully you were as blessed as I was.

Say, I would love to get your feedback to help us make future conferences even better!  Here’s a link to the survey…

It can be hard to contain enthusiasm when asking for post-event feedback – especially if you sent out several enthusiastic pre-event emails.  But if you want honest input, commit to avoiding remarks that suggest how the event should be evaluated (or how you would evaluate the event).

It Must Be Important Because They’re Asking About It

Most people have encountered surveys with leading questions, designed to confirm and publicize high levels of support for a position on an issue.  Like this:

Are you in favor of programs that offer microloans to lift women in developing countries out of the cycle of poverty with dignity through sustainable small businesses, with local peer-accountability networks to ensure loan repayment?

Even if you have read articles about recent studies suggesting that the link between microfinance and poverty reduction is tenuous or non-existent, you might be hard-pressed to answer “no” to the question as worded.

But there are other, more subtle ways that organizations can “suggest” certain responses.  Telling people in the survey invitation that the survey is about microloans can encourage people to overstate their interest in that topic (as well as leading to response bias in which interested people are more likely to respond at all).  Better to say that the survey is about strategies for poverty reduction or (broader still) addressing key areas of human need in the developing world.

This lets you gauge interest in your issue by mixing it in with several related issues, like this:

From the following list, please select up to three programs that you have been involved in, or would consider becoming involved in:

__ Well-digging programs to help provide a consistent healthy water supply

__ Community health education programs to teach villagers basic hygiene

__ Microloan programs to help women grow sustainable small businesses

__ Literacy programs to help kids and adults gain life and career skills

__ Legal advocacy and awareness to stem human trafficking

__ Theological education programs to equip first-generation church leaders

__ Sponsorship programs to sustain the education and nurture of at-risk kids

The rest of the survey can be about microloans.  But before tipping your hand, you learn about interest in that issue relative to other issues — and even the correlation of interest among issues.  Plus, you can use survey logic to excuse non-interested people from follow-up questions that don’t apply to them.
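For illustration, here is a rough sketch in Python (the item labels and follow-up questions are invented) of the branching idea behind that survey logic:

```python
# Sketch of simple skip logic: only respondents who selected the microloan
# option see the microloan follow-up battery.  Item labels and questions here
# are invented for illustration.
selected = {"Microloan programs", "Literacy programs"}  # one respondent's picks

MICROLOAN_FOLLOW_UPS = [
    "How familiar are you with how microloan programs are administered?",
    "Have you ever contributed to a microloan program?",
]

if "Microloan programs" in selected:
    for question in MICROLOAN_FOLLOW_UPS:
        print(question)   # show the follow-up battery
else:
    print("(Respondent routed past the microloan questions.)")
```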

You can go even further to mask your interest in the survey issue, even while asking several questions specific to that issue.  Before starting the battery of questions about microloans, include a statement like this:

“Next, one of the above topic areas will be selected for a series of follow-up questions.”

The statement is truthful and adheres to research ethics — it does not say that the topic will be randomly selected. But it leaves open the possibility that those who sponsored the survey may be interested in several types of programs, not just microloans, encouraging greater honesty in responses.

Unnecessary Survey Branding

However, these approaches still won’t work if the survey invitation is sent from someone at “Microcredit Charitable Enterprises” and the survey is emblazoned with the charity’s logo.  There are many good reasons to brand a survey to your constituents, starting with an improved response rate.  But sometimes, branding can be counterproductive.

If objective input is key, consider using an outside research provider in order to avoid tipping your hand, especially since research ethics require that respondents be told who is collecting the data.

Allowing Everything to Be “Extremely Important”

Another way that researchers can “give away” answers is by letting people rate the importance of various items independently.  Take this question, for instance:

In selecting a child-sponsorship program, how important to you are the following items?  Please answer on a scale of 1 to 5, where 1 is “Not at All Important” and 5 is “Extremely Important”:

1    2    3    4    5   Sponsor’s ability to write to and visit the child

1    2    3    4    5   Receiving regular updates from the child

1    2    3    4    5   On-site monitoring of the child’s care/progress

1    2    3    4    5   Written policies regarding how children are selected

1    2    3    4    5   Annual reporting of how your money was used

All of those are important!  The question practically begs respondents to give each item a 5.  Will that information help the agency?  Maybe for external communication, but not in deciding which areas to promote or strengthen.

Instead, consider this alternative:

In selecting a child-sponsorship program, how would you prioritize the following items?  Distribute a total of 100 points across the five items.

Or

Please order the following five elements of a child-sponsorship program according to their relative importance, from 1 “most important” to 5 “least important.”  You can use each number only once.

In most cases, relative-value questions will produce much more useful data.
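To see why, here is a rough sketch in Python (pandas, with invented point allocations) of how constant-sum responses roll up into a priority ordering:

```python
# Sketch of summarizing a 100-point constant-sum question.  Each respondent's
# allocation already encodes trade-offs, so a simple mean per item yields a
# priority ordering rather than a wall of "extremely important" ratings.
import pandas as pd

allocations = pd.DataFrame(
    [
        {"write_visit": 40, "updates": 30, "monitoring": 15, "policies": 5,  "reporting": 10},
        {"write_visit": 20, "updates": 25, "monitoring": 30, "policies": 10, "reporting": 15},
        {"write_visit": 35, "updates": 20, "monitoring": 20, "policies": 10, "reporting": 15},
    ]
)

priorities = allocations.mean().sort_values(ascending=False)
print(priorities)  # items ordered by average points received
```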

Are there other ways that you have seen surveys “give away” answers to respondents?   Or avoid doing so?  Let us know about your experiences and ideas.

Simple Survey Idea #2: Send a Reminder

I talk with lots of people who design and field their own web surveys.  It amazes me how many have never considered sending a reminder to those they have invited — even when they know the invitees well.

People are often very willing to help, but they are busy and working through lots of messages, and survey invitations are easy to set aside until later.  One reminder is often helpful.  I almost always send at least one reminder out to survey invitees.  In some cases, I will send out a second reminder.  In rare cases, a third.

Why send a reminder at all?  Perhaps it goes without saying, but more data usually equals better-quality information.  Better statistical accuracy is part of that: most people understand that a sample of 300 yields a tighter margin of error than a sample of 100.

But in most cases, response bias will be a bigger threat to the quality of your data than statistical error from sample size.  Consider your sample of 300 responses.  Did you generate those from 400 invitations (a 75% response rate) or 4,000 invitations (a 7.5% response rate)?  The former would give you much greater confidence that those you heard from accurately reflect the larger group that you invited.
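For the numerically inclined, here is a quick back-of-the-envelope sketch in Python of both quantities: the worst-case 95 percent margin of error at each sample size, and the response rate implied by each invitation count.

```python
# Back-of-the-envelope sketch: 95% margin of error for a proportion (worst
# case p = 0.5) at two sample sizes, plus the response rate implied by the
# two invitation counts mentioned above.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 300):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")   # roughly 9.8% vs 5.7%

for invited in (400, 4000):
    print(f"{invited} invited, 300 responses: {300 / invited:.1%} response rate")  # 75% vs 7.5%
```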

What is a “good” response rate?  It can vary widely depending on your relationship to the people invited (as well as how interesting and long the survey is, but that’s a topic for another post).  Domestic staff/employee surveys often generate a response of 85 percent or more.  However, for internationally distributed missionary staff, a response of 60 percent is healthy.  For audiences with an established interest in your work (event attenders, network members), a 35-percent response is decent.  For other audiences, expect something lower.  One online survey supplier’s analysis of nearly 200 surveys indicated a median response rate of 26 percent.

So, do reminders substantially increase response to surveys?  Absolutely.  Online survey provider Vovici blogs, “Following up survey invitations with reminders is the most dramatic way to improve your response rate.”  They show results from one survey where the response rate rose from 14 percent to 23, 28 and 33 percent after subsequent reminders.

My experience has been similar.  I find that survey invitations and reminders have something like a “half-life” effect.  If your initial invitation generates X responses, you can expect a first reminder to produce an additional .50X responses, a second reminder .25X responses, and so on.
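Here is that rule of thumb as a tiny Python sketch (the starting count is made up): each reminder adds about half of the previous wave, so the total approaches, but never quite reaches, twice the initial response.

```python
# Sketch of the "half-life" rule of thumb: if the initial invitation produces
# X responses, each reminder adds roughly half of the previous wave.
initial_responses = 200  # X, a hypothetical first-wave count

total = float(initial_responses)
wave = float(initial_responses)
for reminder in range(1, 4):
    wave *= 0.5
    total += wave
    print(f"After reminder {reminder}: about {total:.0f} responses")
# Prints ~300, ~350, ~375 -- approaching the 2X ceiling of 400.
```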

I disagree with survey provider Zoomerang’s suggestion of sending a series of three reminders — especially if the audience is people you know — but I do agree with their statement, “Think of your first email reminder as a favor, not an annoyance.”  I recommend sending at least one reminder for virtually any survey, with a second reminder only if you feel that your response rate is troublesome and you need that extra .25X of input.

At least Zoomerang provides a sample reminder template you can use.  I agree that you should keep reminders short — shorter than the original invitation.  With any invitation or reminder, you will do well to keep the survey link “above the fold” (to use a phrase from old-time print journalism), meaning that it should be visible to readers without their having to scroll down through your message.

I also find it very helpful to use list managers when sending survey reminders.  Most online providers offer an option to send only to those members of your invitation list who haven’t responded.  Not only does this keep from annoying those who already did respond, but you can word the reminder much more directly (and personally, with customized name fields).  So, instead of saying:

“Dear friend — If you haven’t already responded to our survey, please do so today.”

You can say:

“Dear Zach — I notice that you haven’t responded to our survey yet.  No problem, I’m sure you’re busy.  But it would be great to get your input today.  Here’s the link.”

Take care in using the above approach — if you have promised anonymity (not just confidentiality), as in an employee survey, opt for the generic reminder.
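For anyone managing lists by hand, here is a rough Python sketch (names and addresses are invented) of what that “send only to non-respondents” option is doing behind the scenes:

```python
# Sketch of reminding only non-respondents: subtract those who have already
# responded from the invitation list, then personalize the reminder.
# Names and addresses are invented.
invited = {
    "zach@example.org": "Zach",
    "maria@example.org": "Maria",
    "li@example.org": "Li",
}
responded = {"maria@example.org"}

for email, name in invited.items():
    if email in responded:
        continue  # don't nag people who already completed the survey
    reminder = (
        f"Dear {name}, I notice that you haven't responded to our survey yet. "
        "No problem, I'm sure you're busy. But it would be great to get your "
        "input today. Here's the link."
    )
    print(email, "->", reminder)
```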

When to send a reminder?  If your schedule is not pressing, send a reminder out 5-10 days after the previous contact.  I recommend varying the time of day and week in order to connect with different kinds of people.  If I sent the initial invitation on a Monday morning, I might send the reminder the following Wednesday afternoon.

 

It’s Winter: Time to Make Snowballs!

A lot of mission researchers are interested in studying people who aren’t easy to get to.  They may be unknown in number, difficult to access, suspicious of outsiders, etc.

This makes random sampling virtually impossible.  Unfortunately, a random sample is an assumption or requirement of many statistical tests.

So, if you’re doing research with underground believers or people exploited in human trafficking, you can’t just go to SSI and rent a sample of 1500 people to call or email.

When you need a sample from a hard-to-reach population, make a snowball!

Snowball sampling, a more memorable name for what researchers formally call respondent-driven sampling, is a means of building a reasonably large sample through referrals.  You find some people who meet your criteria and who trust you enough to answer your questions, then ask them if there are other people like them that they could introduce you to.

In each interview, you ask for referrals – and pretty soon the snowball effect kicks in and you have a large sample.

For years this approach was avoided by “serious” researchers because, well, the sample it produces just isn’t random.  Your friends are probably more like you than the average person, so talking to you and your friends isn’t a great way to get a handle on your community.

But, as with six degrees of separation, the further you go from your original “seeds,” the broader the perspective.  And in recent years, formulas have been developed that virtually remove the bias inherent in snowball samples – opening up this method to “respectable” researchers.

How to do it?  Some researchers simply throw out the first two or three generations of data, then keep everything else, relying on three degrees of separation.  Not a bad rule of thumb.
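Here is that rule of thumb as a rough Python sketch (the referral chain is invented): tag each interview with its referral wave, then keep only those at least three steps removed from the seeds.

```python
# Sketch of dropping the first generations of a snowball sample: compute each
# interview's "wave" (seeds = wave 0) by walking its referral chain, then keep
# only interviews at least three referrals removed from the original seeds.
interviews = [
    {"id": "A", "referred_by": None},   # seed
    {"id": "B", "referred_by": "A"},    # wave 1
    {"id": "C", "referred_by": "B"},    # wave 2
    {"id": "D", "referred_by": "C"},    # wave 3
    {"id": "E", "referred_by": "D"},    # wave 4
]

by_id = {row["id"]: row for row in interviews}

def wave(row):
    depth = 0
    while row["referred_by"] is not None:
        row = by_id[row["referred_by"]]
        depth += 1
    return depth

kept = [row["id"] for row in interviews if wave(row) >= 3]
print(kept)  # ['D', 'E']: the interviews farthest from the seeds
```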

For more serious researchers, there is free software available to help you weight the data and prevent you from having to discard the input of the nice people who got your snowball started.  Douglas Heckathorn is a Cornell professor who developed the algorithm (while doing research among drug users to help combat the spread of HIV) and helped bring snowball sampling back from the hinterlands of researcher scorn.  You can read more about his method here and download the software here.

Suddenly, you need not settle for a handful of isolated snowflakes, nor for a skewed snowdrift of opinion (via an unscientific poll of your social media friends).  Instead, you can craft those referrals into a statistically representative snowman.

Meanwhile, if the sample you need is one of North American field missionaries or North Americans seriously considering long-term cross-cultural service, you should consider renting one of GMI’s mission research panels.  Email us for details.