Listening well…and why it matters


Does your mission organization listen well?

How would you know?

One of the more famous mission research studies since the turn of the millennium was the ReMAP II study of missionary retention, done by the Mission Commission of the World Evangelical Alliance.

Fieldwork, conducted in 2002-03, involved 600 agencies across 22 countries, representing some 40,000 missionaries.

GMI associates played a prominent role in the research and analysis, as well as in the creation of the book that reported the results, Worth Keeping.  The first half of the book is available free from WEA Resources.

It is an important book and well worth having on your shelf if you are involved in recruiting, assessing, training or leading field missionaries.  The book provides a helpful formula for calculating retention rate that every agency should apply.  Beyond that, its insights include:

    • Some agencies retain missionaries much better than do others.  The average (mean) tenure of those serving in high-retention agencies was 17 years—compared to 7 years in low-retention agencies (p. 3).  That is especially important for certain ministries, for the time between the seventh and 17th year is, according to Patrick Johnstone, “The period most likely to prove fruitful in cross-cultural church-planting ministry” (Future of the Global Church, p. 227).
    • Large agencies offer a decided advantage in retention over smaller agencies (pp. 39-41).
    • Setting a high bar in missionary selection correlates strongly with retention—the more criteria an agency considers in selection (character references, physical health, local-church ministry experience, affirmation of a doctrinal statement), the more likely it is to have strong retention (pp. 69-71).
    • The greater the percentage of an agency’s budget spent on member care—and especially preventative member care—the more likely it is to have strong retention.  In newer sending countries (Majority World), high-retention agencies spend twice as much as low-retention agencies (as a percentage of budget) and twice as much on preventative care (pp. 182-183).

All of these findings are meaningful and credible.  They come from the portions of the survey questionnaire that ask agency administrators to report on facts: What is your agency’s size?  Its retention rate?  The average tenure of departed field staff?  What criteria does it consider?  How much does it spend on member care?  These are facts that would be reported similarly, regardless of who completed the survey on behalf of the agency.

However, a large chunk of the survey instructed agency administrators as follows:

“Please evaluate your mission agency’s practices in the following areas (as evidenced by time, effort and effectiveness).”  Items were rated on a six-point scale ranging from “Not well done” to “Very well done” (p. 413).

Among the 49 items in this section:

  • Missionaries are assigned roles according to their gifting and experience.
  • Good on-field supervision is provided (quantity and quality).
  • Missionaries are generally not overloaded in the amount of work they do.
  • Effective pastoral care exists at a field level (preventative and in crises).
  • Missionaries are included in major decisions related to the field.

During the analysis phase, Jim Van Meter, who led the U.S. analysis, noticed that several items in this section did not significantly correlate with retention rates—and some significant correlations were counter-intuitive.  He asked GMI for a second opinion about why.

Our response: The problem isn’t the questions.  It’s the person answering them!

Administrators can reliably answer factual questions about their agency’s practices, but they cannot reliably answer evaluative questions related to their support of field staff.  The field staff has to answer those questions!

That’s why we launched the Engage Survey in 2006—so that field missionaries could give their input on issues like these.  It is also why we sought a grant to again promote Engage—with a substantial discount to agencies—in 2014-2015.

Consider the last item in that list above: Missionaries are included in major decisions related to the field.  In ReMAP II, agency administrators, both Western and Non-Western, indicated this as an area of strength for agencies.  Further, the item was not linked to retention.

But when we surveyed 1,700-plus fieldworkers, a completely different picture emerged.  “My organization involves employees in decisions that affect them” was one of the 10 lowest-rated items (out of 68).  When combined with related items like “My organization’s management explains the reasons behind major decisions” and “My organization acts on the suggestions of employees,” the factor we titled “Involvement in Decisions” was the lowest rated of 11 themes (principal component factors) in the survey.


What is more, the factor was significantly correlated with agency retention.

When we did follow-up depth interviews with current and former missionaries, inclusion in decision-making was one of five encouraging themes related to continuing service.  Exclusion from decision-making was one of six discouraging themes.

In short, everything we hear from field staff says, “This issue is important, and most missions have significant room for improvement.”

So, back to the original questions:

  • Does your mission organization listen well?
  • How would you know?

One clue is your agency’s annual retention rate for long-term cross-cultural workers.  If it is 97 percent or above, you probably listen well relative to other agencies.  If it is below 94 percent, you very likely have room for improvement.
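The arithmetic behind those thresholds is worth seeing, because a few percentage points of annual retention compound sharply over a career.  Here is a minimal sketch—the function and the staffing numbers are illustrative, not drawn from ReMAP II:

```python
def annual_retention_rate(staff_at_start, departures_during_year):
    """Share of long-term staff retained over one year."""
    return 1 - departures_during_year / staff_at_start

# e.g., an agency of 200 workers that loses 6 in a year retains 97%.
rate = annual_retention_rate(200, 6)

# A three-point gap compounds dramatically across a decade of service.
decade_high = 0.97 ** 10   # share still serving after 10 years at 97%
decade_low = 0.94 ** 10    # share still serving after 10 years at 94%
print(f"{rate:.0%}, {decade_high:.0%} vs {decade_low:.0%}")
```

At 97 percent, roughly three-quarters of a cohort is still on the field after ten years; at 94 percent, barely half.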

To be sure, I would strongly recommend surveying your field staff.  Use a survey that assures anonymity for respondents, ideally administered through a third party.  Even better would be to do it collaboratively with other agencies, so you could learn how well you are doing compared to like-minded organizations with globally distributed staff.  And if you could find an experienced researcher to walk you through the results and help you turn them into an action plan, so much the better.

That’s Engage.  Pricing is reasonable (less than $1,000 for many agencies) and is graded by the number of missionaries on staff.  Those signing up by November 30 save 25 percent on registration (via a $125 check from GMI, courtesy of a foundation grant) and 20 percent off the per-missionary graded rate.  By the way, none of the registration fees comes to GMI—our involvement is funded fully through the grant.

Count the hours that it would take you to do this on your own, without comparative benchmarks or a professional-grade survey instrument and follow-up consultation.

Pardon the shameless plug, but Engage is one of the best deals I know of in mission research.  Everyone wins:  Leadership teams get to celebrate successes and identify priorities.  Boards receive meaningful measures and see how leaders are taking initiative.  Field staff gets a chance to be heard and offer ideas.


What future missionaries are reading

A few months ago we did a survey with prospective missionaries and asked about what they are reading.  Along with a lot of David Platt and John Piper, we noted the following titles (write-in responses from a total of about 160 respondents):

A Gleam of Light  by Ila Marie Davis

Thriving in Cross Cultural Ministry by Carissa Alma

Why Jesus Crossed the Road by Bruce Main

Cross-Cultural Connections by Duane Elmer

Daws: A Man Who Trusted God by Betty Skinner

Do What Jesus Did by Robby Dawkins

Dreams and Visions by Tom Doyle

Engaging Islam by Georges Houssney

Following Jesus Through the Eye of the Needle by Kent Annan

Go and Do by Don Everts

Hudson Taylor’s Spiritual Secret by Dr. & Mrs. Howard Taylor

Kingdom Matrix by Jeff Christopherson

Kisses from Katie by Katie Davis

Many Colors by Soong-Chan Rah

Real Life by James Choung

Students of the Word by John Stott

The Mark of a Christian by Francis Schaeffer

The New Friars by Scott Bessenecker

To Repair the World by Paul Farmer

We’d like to respectfully add a title to the list.  Crossing Cultures with Ruth by James Nelson is the first book GMI has produced specifically for those considering cross-cultural service.  While accessing research is one of the “Fruitful Practices” for effective mission, not every new missionary has a bent for data and reports.  So we have woven lessons from a decade of research into a Bible study.  The result offers memory hooks that connect current research with the timeless biblical narrative of Ruth, a cross-cultural servant.

Recipe for constituent pie

It’s Thanksgiving!  Time to think about pie—or at least about ways to slice your constituents into meaningful chunks.

In this post I’m going wonky, with some of the detailed ingredients and cooking directions for segmentation.

But first, a bit about the danger of segmenting audiences, which involves grouping and dividing people.  A warning:

“Dividing” and the Body of Christ don’t go together particularly well.
Our default posture should be one of unity.

In the church (and in church planting), the “homogeneous unit principle” acknowledges that growth tends to happen more quickly among groups whose members share a common language, ethnicity and socio-economic class.

This principle has been used wisely to develop indigenous churches that show the incarnational nature of the Good News in understandable forms—relevant and transformative.  The principle has also been used unwisely to segregate and isolate people within the Body of Christ on the basis of race, class, age, etc., preventing the Church from experiencing unity in the midst of God-ordained diversity.

The challenge in applying the concept wisely is well described in the very first Lausanne Occasional Paper from 1978.

In local congregations, intentionally segmenting people can be fraught with difficulty.  Christianity Today’s Andy Crouch addressed this issue eloquently in his 1999 essay For People Like Me from re:generation quarterly, a magazine whose demise I still mourn a decade after it folded.  I encourage you to click through and read the whole piece, but here’s the punchline:

For surely one of the scandalous things about the gospel—indicated by Jesus’ own practices of welcoming sinners and eating with them, calling tax collectors along with fishermen to be his disciples, and praying for the forgiveness of his executioners—is that it does not fit the marketer’s (or the Pharisee’s) formula “for people like me.” It is in fact for people not like me—unless they are “a wretch like me,” and wretchedness was never the basis of a successful marketing campaign. Christianity is not a product that can be added seamlessly into the lives of consumers like one more lifestyle-enhancing appliance. It is instead a call to a completely different way of viewing the world, one in which the one who looks least like me is at a minimum my “neighbor” (Luke 10:29-37) and could well be Jesus himself (Matt. 25).

So, before undertaking the task of segmenting an audience, be sure to check your conscience.  Segmentation can acknowledge God-given variation in giftedness and experience.  That is the beauty of the Myers-Briggs® types—there are no better or worse personalities; each has natural strengths as well as potential blind spots.  Segmentation can also validate multiple approaches to a task, blunting the arguments of those who are fixed on their method as the “one best way.”

With that addressed, on to wonkiness.

When communicating with large numbers of people (donors, staff, readers/listeners, etc.) segmentation is a strategy that reflects a middle ground between uniformity (one size fits all) and customization (every one unique).  Mass communication is often ineffective; individual customization is usually inefficient.  In segmentation, an approach is developed for each segment, but within a segment everyone is treated similarly.

Criteria for developing segments include:

  1. Meaningful subgroups really exist—there is a valid basis for segmenting an audience.
  2. The subgroups are identifiable—there is a reasonable way to segment an audience.
  3. The subgroups are actionable—there is a practical use for segmenting an audience.

The Missio Nexus CEO survey that we recently helped with was commissioned, in part, because Missio Nexus frequently heard CEOs asking how other CEOs were dealing with various challenges.  The idea was to document and share experiences among the CEO community.  The question CEOs were asking presupposes likeness among the peer group.

We thought it could be helpful to see if meaningful subgroups existed which would help focus the question or expand on it.  Profiling CEO segments could help CEOs better understand themselves and their peers.  Their question could become, “How are other CEOs like me dealing with this issue?” or “Why are CEOs dealing with this issue in different ways?”

Here are the general steps in segmentation, using some of our recent projects as examples.

Step 1: Select a basis on which to explore/generate segments.

You can segment audiences in many ways—some of the most common approaches are based on the needs, values, aspirations or behaviors of the audience.  Consider behaviors.  If people behave according to certain patterns, those patterns can dictate the communication channels used to reach them.  Child sponsorship agencies, for example, use several behavior-based methods to sign up new sponsors: church partnerships, online ads, concert sponsorship, direct mail.  These are real, meaningful, actionable segments.

In some of the recent segmentation work that we’ve done, the basis for segmentation has been as follows:

  • Church Planters: Behaviors (frequency of “fruitful practice” activities)
  • Mission Agency Website Visitors: Needs (information sought)
  • Mission Internship Prospects: Motivations (for considering a 1-to-3 year term of service)

In the Missio Nexus CEO survey, one objective was to look at recent progress and current or near-future challenges.  So, we developed segments based on relative priorities for organizational, staff and personal-effectiveness (combined).  We didn’t worry about why CEOs prioritized one area over another.  The shared need to address certain areas was enough.

Our hope is that the priority-based segments will prove actionable for Missio Nexus as an association that provides regular programming for executives, such as C-Suite Webinars.  Priority segments give them a guide for planning relevant content that offers something for each type of CEO.  At an event, the CEO audience might not be large enough to justify separate tracks—but breakout sessions could be scheduled so that each group is likely to find something of interest.

Step 2: Select a method for creating segments.

If one quantitative measure is extremely important, such as expected lifetime donor value or likelihood of serving with your agency, segments can be driven by their impact on that variable.  This situation calls for decision-tree analysis.  Many statistical packages include such a method—CHAID and CART are traditional examples. The analyst feeds in a number of predictor variables—often combining different variable types—along with the known outcome of the key variable from a sample.  The software will identify a sequence of if-then steps involving the variables that best divide people into groups on the basis of the key measure.

This allows donor or staff prospects to be quickly qualified; responses can vary accordingly.  Those with a lower likelihood of giving or joining should not be ignored, but follow-up communication might be done a bit more frugally or infrequently.
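Whatever software derives it, the resulting tree boils down to nested if-then rules that can be applied instantly to a new prospect.  Here is a toy sketch—every variable and threshold is invented for illustration, not taken from an actual CHAID/CART model:

```python
def likely_donor_segment(prior_gifts, church_referral, age):
    """Toy if-then tree of the kind CHAID/CART would derive from data.
    Every split below is invented for illustration."""
    if prior_gifts >= 2:
        return "high-likelihood"
    if church_referral:
        return "medium-likelihood" if age >= 30 else "low-likelihood"
    return "low-likelihood"

# New prospects can be qualified instantly, and follow-up budgeted accordingly.
print(likely_donor_segment(prior_gifts=3, church_referral=False, age=25))  # → high-likelihood
```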

Often segmentation will be driven not by a single measure but by several measures with a common theme.  In our Agency Web Review we used a set of 16 types of information that visitors to mission agency websites might be interested in.  In that case, cluster analysis can be a great way to identify segments.  K-means clustering is a well-regarded tool in which the analyst specifies the number of segments (clusters) in advance.  Most analysts run and compare several variations using different numbers of clusters, selecting the one that seems most practical or intuitive.
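K-means itself is simple enough to sketch in a few lines of NumPy.  This is the textbook Lloyd’s algorithm with made-up rating data, not the software or data from any GMI study:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal Lloyd's-algorithm k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each respondent to the nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned respondents.
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Two obvious "interest profiles": low-low vs. high-high on two rating items.
X = np.array([[1.0, 1.2], [0.9, 1.1], [5.0, 5.2], [5.1, 4.9]])
centroids, labels = kmeans(X, k=2)
print(labels)  # the two low raters share one cluster; the two high raters the other
```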

Step 3: Create and label the segments

In our mission agency website visitor study, we chose five statistically valid segments (via cluster analysis) that also made intuitive sense to us.  To name the segments, we examined each segment’s pattern of interests.  Among the influential variables were interest in short-term opportunities and interest in long-term opportunities.

One segment demonstrated relatively low interest in both kinds of opportunities.  That seemed strange—all of those responding had been screened for interest in serving cross-culturally.  We dug deeper, examining the group by its demographics.  It turned out that many people in the segment were underclass collegians (seniors and new grads were more likely to be in other segments).  Aha!  Now it made more sense.  Their low interest in service opportunities reflected the fact that they were years away from applying for career service.  They valued learning about agencies generally and exploring mission-oriented resources (perhaps for use in coursework or for their campus fellowship).  This group also included a fair number of non-students whose primary role was mobilizing others to go.  Therefore, we named the segment Scouts.  They were scouting out info for another time—or for other people.  We named the other segments through a similar process.

Step 4: Describe the segments in detail

Saving segment membership to your data set opens up a world of descriptive possibilities by cross-tabbing segment with other variables.  In our church planter segmentation, women were especially likely to be in one segment—even though none of the input variables were gender related.  The church planters were working among resistant peoples, often in cultures where women are closely protected and limited in their social mobility.  It came as no surprise, then, that women made up a large portion of the segment that emphasized prayer and judicious (not bold) sharing.  For many, that was the type of ministry that was available without severely violating cultural norms.

One way to see the relationships among segments is through segment maps.  These can be created quickly through a bit of reverse engineering.  (Warning: statistical terms coming—in case of dizziness, skip down two paragraphs.)  We use the variables from the cluster analysis as predictor variables in a discriminant-function analysis.  Cluster membership is the variable to be predicted.  We save the function coefficients as variables, and then we use the first two sets of coefficients as X-Y coordinates in a scatterplot.  When we color code by segment, results look something like this (taken from our Mission Internship Study):

Each point represents a respondent.  The segments naturally group together, and the X and Y dimensions distinguish the segments from one another.  These dimensions, which reflect weighted combinations of the input variables, should be labeled to show how the segments relate to one another.

In the example above, three groups emerged with differing motivations for considering a cross-cultural internship of one to three years.  The map showed that the groups can be considered on dimensions related to the purpose of the internship (My Fit vs. Their Blessing) and their level of commitment to long-term mission (Committed vs. Exploring):

  • The Where segment is mostly committed to long-term service and wants to test its fit in a particular setting or with a particular agency.
  • The Whether segment is uncertain about long-term service and wants to test whether its members should continue serving after the internship concludes.
  • The Whatever segment isn’t concerned about long-term direction.  Its members simply want to meet people’s needs through the internship without weighing their future career path.
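The map-building procedure described above can be sketched with scikit-learn: fit a discriminant analysis on the clustering variables with segment membership as the target, then use the two discriminant functions as X-Y plot coordinates.  The data below is a synthetic stand-in for survey ratings, not the Mission Internship sample:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
# Synthetic stand-in for survey ratings: three segments, four rating items.
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(30, 4))
               for m in ([1, 1, 4, 4], [4, 4, 1, 1], [4, 1, 4, 1])])
segment = np.repeat([0, 1, 2], 30)

# Cluster membership is the variable to be predicted; with three segments,
# two discriminant functions emerge—our X-Y coordinates for the map.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, segment)
coords = lda.transform(X)   # one (x, y) point per respondent
print(coords.shape)
```

Color-coding the `coords` scatterplot by `segment` reproduces the kind of map described above.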

Step 5: Develop a scoring model for classifying others

It isn’t easy to get everyone to take a survey, so constituents who don’t respond—and those who arrive in the future—cannot be classified.  This limits the value of the segmentation.

The answer is to create a scoring model—either using non-survey data or developing a mini-survey that makes it easy to collect information, such as through a registration form.  Here is where we get to the quizzes that let people discover their “personality.”

With decision-tree segments, we simply use quiz questions based on the logic of the tree.  For cluster analysis-based segments, we use discriminant analysis.  The setup uses the same variables as in the mapping step above, but this time we use a stepwise procedure (to limit the number of variables) and select the option for “Fisher coordinates.”  This yields one equation for each segment.  When someone takes the quiz, we cross-multiply their responses with the Fisher coordinates and then compare the totals: the largest value is the “predicted” segment—which is shown to the quiz taker and/or added to the constituent database.
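The scoring arithmetic at quiz time is just a linear score per segment, highest total wins.  The coefficients below are invented for illustration; in practice they come out of the discriminant analysis:

```python
import numpy as np

# One row per segment: [intercept, weight_q1, weight_q2, weight_q3].
# These numbers are made up; real ones would be the Fisher/classification
# function coefficients produced by the discriminant analysis.
coefficients = np.array([
    [-3.0, 1.2, 0.4, 0.1],   # "Where"
    [-2.0, 0.3, 1.1, 0.2],   # "Whether"
    [-1.0, 0.2, 0.3, 0.9],   # "Whatever"
])
segments = ["Where", "Whether", "Whatever"]

def classify(responses):
    """Cross-multiply quiz responses with each segment's coefficients;
    the largest total marks the predicted segment."""
    x = np.concatenate(([1.0], responses))   # prepend 1 for the intercept
    scores = coefficients @ x
    return segments[int(scores.argmax())]

print(classify([5, 1, 1]))   # → Where
```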

Quiz results usually include a thorough description of the predicted segment (and sometimes other segments as well).  Discussion questions can be added to help people think about how to maximize the strengths of their personality and to minimize or overcome the weaknesses.

This step is important because marketing research ethics statements usually indicate that participation in a survey should not influence the way the respondent is treated (compared to non-respondents).  Therefore, making an effort to classify non-respondents ensures ethical compliance.

Step 6: Develop and carry out a strategy for each segment

With segment membership assigned to constituents, it is time to put the segments into practice.  Should we emphasize some segments over others?  Should we communicate differently to each segment?  Should we develop offerings based on the needs of certain segments?  Should we organize staff responsibilities by segment?

The applications for using segmentation are many and far reaching.  Segmentation is usually strategic rather than tactical.  Since it involves high-level thinking, the segmentation process should have involvement and buy-in of senior leadership from its early stages.  In commercial research, I have seen segmentation projects aborted or shelved more frequently than any other kind of research.  It should not be undertaken lightly.

That’s the recipe for segmentation.  Wonky, yes—but underlying the fun “What kind of ____ are you?” quizzes is real science.  If you are thinking about segmentation and would like some help in your analysis kitchen, feel free to email us or give us a call.  We’re glad to join in the process of delivering information that supports Spirit-led decisions.


What type are you: Outfitter? Orality Overcomer? Obi-Wan Kenobi?


Do you like personality tests?  Some people repeatedly retake the Myers-Briggs Type Indicator® assessment to see if they have changed their personality.

My kids, meanwhile, love online personality quizzes like “Which Star Wars Character Are You?”

Recently they found this “infographic” which combined the two concepts.  Here’s an excerpt:

My kids, checking up on their parents’ MBTI® types, dissolved in hysterics to learn that they were the product of a union between C-3PO’s personality and Yoda’s.

Such quizzes not only make for entertainment but also for interesting insights and discussions—with application for global mission.  I can envision a church-planting team having an extended discussion on whether they have the right mix of Star Wars/MBTI personalities to overcome the strongholds of evil in their quadrant—and using the results to inform recruitment of new team members.

However, another type of segmentation might prove more relevant—such as a quiz that lets you know your church-planting personality (more on that later).

With good data and the right analyst, your ministry can develop segments (donors, workers, prospects, churches, peoples) based on specific, relevant information that is most meaningful for your ministry.  Further, you can create classifying tools (quizzes) that your people can take to better understand themselves—or their ministry environment—informing Spirit-led decision making.

Most people are familiar with simple segmentation approaches that rely on one measure (such as birth year) that does a reasonably good job of dividing a large group into meaningful subgroups (such as Gen Xers and Millennials) that reflect a set of shared traits.

The MBTI rubric uses four dimensions of personality, each with two poles.  Tests determine on which side of each spectrum a person falls.  Voila!  Sixteen possible personality combinations emerge.

Mission researchers like Patrick Johnstone and Todd Johnson have popularized geo-cultural “affinity blocs”—segments that reflect collections of people groups on the basis of shared social/religious/geographic/cultural traits.  It is much easier to remember and depict 15 affinity blocs than 12,000 people groups.

Recently, GMI has done value-based or activity-based segmentation analysis on several survey projects.  One is the subject of GMI’s featured missiographic for early November—giving an overview of five personalities of those investigating mission agency websites, based on their information needs.

One of those segments is Faith Matchers—those for whom theological alignment is of primary importance.  When Faith Matchers visit an agency website, they look first to see whether an agency’s beliefs align with theirs before considering strategy, location or service opportunities.

Last week we learned that one agency web designer had read the detailed website visitor profiles and related communication ideas in GMI’s Agency Web Review report.  In response, the designer made a small adjustment to ensure that Faith Matchers could find the agency’s statement of faith with a single click from the home page—an easy change based on segmentation analysis.

Some of our other recent segmentation work included identifying:

  • Three mission agency CEO personalities (Outfitters, Entrepreneurs and Mobilizers) based on organizational-, staff- and personal-effectiveness priorities, as described in the Missio Nexus 2013 CEO Survey Report based on the responses of more than 150 agency leaders.
  • Three motivation-based segments (Where, Whether and Whatever) for those considering six-to-24-month mission internships, drawn from a quick survey of GMI’s panel of future missionaries.  One group is committed to long-term service and discerning where or with what agency it should serve.  One segment is discerning whether it is called/cut out for long-term mission service.  The largest segment is eager to serve now, with little or no thought given to post-internship service (whatever).  Following is a scatterplot of the 205 respondents.


  • Four Church Planter personalities (Word-based Advocates, Orality Overcomers, Trade-Language Strategists and Judicious Intercessors) based on how often they engaged in “fruitful practice” activities, from a survey of nearly 400 church planters working among resistant peoples.

For that last one, we developed a 10-question quiz that church-planting workers can take to discover the strengths and potential growth areas of their church-planting personality.  Sound interesting?  Write us for details on how to get a copy of the Church Planting Personality Profiler—it’s available to member agencies of a particular network.

In a follow-up post, we’ll discuss analysis approaches for creating segments and how scatterplots and the classification quizzes are developed.


The pitfalls of self-assessment


This week, the eminently useful Brigada Today newsletter—in addition to drawing attention to GMI’s Agency Web Review—posted an item from someone looking for a self-assessment survey for local church missions programs, recalling that ACMC used to offer one.


Responses were subsequently posted by Marti Wade (who does great work with Missions Catalyst), noting that the tool is still available via Pioneers, which assumed ACMC’s assets when that organization folded; and by David M of Propempo International, which also offers a version of the tool.  A snapshot of an excerpt from the ACMC/Pioneers version appears above.

Seeing the ACMC survey brought back a memory from a 2003 project that GMI did for ACMC.  We surveyed 189 ACMC member churches to understand the status of church mission programs as well as their needs and goals.  The survey included each of the twelve questions from the self-assessment grid.

Subsequently, we did statistical modeling to determine if/which/to what degree various church missions program elements were associated with growth in missions sending and with missions budget as a proportion of overall church budget.

Unfortunately, most of the correlations were not statistically significant, and those that were significant were negatively correlated—meaning churches that rated their mission program highly (placing greater priority on the various dimensions) tended to demonstrate less growth in sending or lower relative financial commitment.

How could this be?

It turns out that this is a fairly common outcome of self-assessment exercises.  In short, teams with excellent performance also tend to have high standards—and their vision for growth frequently leads them to be more self-critical than lower-performing teams, which often have lower standards.
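The mechanism is easy to reproduce with toy numbers: if the teams with the best actual results also hold themselves to the highest standards, self-ratings can run opposite to performance.  All figures below are invented purely to illustrate the effect:

```python
# Hypothetical teams: actual sending growth vs. self-rated program quality.
# The high performers hold higher standards, so they rate themselves lower.
growth      = [12, 10, 8, 5, 3, 1]              # % growth in sending
self_rating = [3.0, 3.4, 3.6, 4.2, 4.5, 4.8]    # 1-5 self-assessment

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"r = {pearson_r(growth, self_rating):.2f}")  # strongly negative
```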

So, am I discouraging local churches from using the Mission Assessment Tool?  Not at all.  I encourage churches to download it and use it as the basis for discussion—it can be a great discussion starter for vision building, clarifying core values and identifying priorities for development.  For the reason described above, you may find that some team members differ on where the program stands—or where the opportunities are for growth.

But when program evaluation is the goal, it helps to have outside eyes providing the perspective.  Those well equipped to offer feedback on a church’s mission program are:

1. Those served by or in partnership with the mission team, such as missionaries who may have other supporting churches (these respondents must be assured anonymity), and/or

2. Outside consultants who work with many church mission programs and have a valid basis of comparison.

Meanwhile, at the 30,000-foot level, researchers, missiologists and consultants are eager to discover the key differences between high-performing church mission teams and others.  The statistical modeling sought to answer the question: What program elements are the most common outflows (or drivers) of increased financial/sending commitment: Better mission education?  Better worker training?  Greater emphasis on strategy?  More local mission involvement?  This is where self-assessment bias—seen across a sample of 189 churches—becomes a problem.

One helpful approach is to focus on relative data.  Were we to re-examine the analysis today, I would be inclined to transform the raw data into relative performance rankings (each church’s perception of its relative strengths and weaknesses).  This compensates for differing standards of excellence by looking at each church’s priorities.
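Converting raw ratings to within-church rankings is a simple transformation.  The sketch below (not the original analysis) ranks each church’s item ratings internally, with ties sharing the average of the ranks they span:

```python
def to_relative_ranks(ratings):
    """Rank a church's item ratings within that church: 1 = its weakest area.
    Tied items receive the average of the ranks they span."""
    order = sorted(range(len(ratings)), key=lambda i: ratings[i])
    ranks = [0.0] * len(ratings)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied ratings.
        while j + 1 < len(order) and ratings[order[j + 1]] == ratings[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

# Two churches with different standards but the same internal profile
# end up with identical rank vectors.
print(to_relative_ranks([5, 3, 4]))   # → [3.0, 1.0, 2.0]
print(to_relative_ranks([3, 1, 2]))   # → [3.0, 1.0, 2.0]
```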

Self-evaluation bias can also be reduced by developing assessment tools with response scales/categories that are so observably objective that they cannot easily be fudged.  The ACMC tool uses descriptions for each commitment level that are intended to be objectively observable—but in some cases they are subject to interpretation, or to cases where a higher-level condition may be met while a lower-level condition is unfulfilled.  In the 2003 study we gave specific instructions to respondents that they should proceed through each scale a step at a time, stopping at the lowest unmet condition.  However, such an instruction may not have been enough to overcome the need of some respondents to affirm their church’s mission program with high marks.

This issue also points to the importance of testing assessment tools for validity among a pilot sample of respondents—with results compared to an objective measure of excellence.

Take care with self-assessment.  After all, scripture itself warns us in Jeremiah 17:9 that “the heart is more deceitful than all else and is desperately sick; who can understand it?”