Category Archives: Research Blog

Listening well…and why it matters

 

Does your mission organization listen well?

How would you know?

One of the more famous mission research studies since the turn of the millennium was the ReMAP II study of missionary retention, done by the Mission Commission of the World Evangelical Alliance.

Fieldwork, conducted in 2002-03, involved 600 agencies across 22 countries, representing some 40,000 missionaries.

GMI associates played a prominent role in the research and analysis, as well as in the creation of the book that reported the results, Worth Keeping.  The first half of the book is available free from WEA Resources.

It is an important book and well worth having on your shelf if you are involved in recruiting, assessing, training or leading field missionaries.  The book provides a helpful formula for calculating retention rate that every agency should apply (a rough sketch of one such calculation appears after the list below).  Beyond that, its insights include:

    • Some agencies retain missionaries much better than do others.  The average (mean) tenure of those serving in high-retention agencies was 17 years—compared to 7 years in low-retention agencies (p. 3).  That is especially important for certain ministries, for the time between the seventh and 17th year is, according to Patrick Johnstone, “The period most likely to prove fruitful in cross-cultural church-planting ministry” (Future of the Global Church, p. 227).
    • Large agencies offer a decided advantage in retention over smaller agencies (pp. 39-41).
    • Setting a high bar in missionary selection correlates strongly with retention—the more criteria an agency considers in selection (character references, physical health, local-church ministry experience, affirmation of a doctrinal statement), the more likely it is to have strong retention (pp. 69-71).
    • The greater the percentage of an agency’s budget spent on member care—and especially preventative member care—the more likely it is to have strong retention.  In newer sending countries (Majority World), high-retention agencies spend twice as much as low-retention agencies (as a percentage of budget) and twice as much on preventative care (pp. 182-183).
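
The book’s formula deserves a look in full; as a taste, here is a minimal sketch of the basic idea (the share of field staff serving at the start of a year who are still serving at its end), using hypothetical figures rather than the book’s exact procedure.

    # Rough sketch of an annual retention calculation; the book's exact
    # formula differs in detail, and these figures are hypothetical.
    def annual_retention_rate(staff_at_start, leavers_during_year):
        """Share of staff serving at the start of a year who remain at its end."""
        return (staff_at_start - leavers_during_year) / staff_at_start

    rate = annual_retention_rate(staff_at_start=200, leavers_during_year=8)
    print(f"Annual retention: {rate:.1%}")  # Annual retention: 96.0%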

All of these findings are meaningful and credible.  They come from the portions of the survey questionnaire that ask agency administrators to report on facts: What is your agency’s size?  Its retention rate?  The average tenure of departed field staff?  What criteria does it consider?  How much does it spend on member care?  These are facts that would be reported similarly, regardless of who completed the survey on behalf of the agency.

However, a large chunk of the survey instructed agency administrators as follows:

“Please evaluate your mission agency’s practices in the following areas (as evidenced by time, effort and effectiveness).”  Items were listed on a six-point scale ranging from “Not well done” to “Very well done” (p. 413).

Among the 49 items in this section:

  • Missionaries are assigned roles according to their gifting and experience.
  • Good on-field supervision is provided (quantity and quality).
  • Missionaries are generally not overloaded in the amount of work they do.
  • Effective pastoral care exists at a field level (preventative and in crises).
  • Missionaries are included in major decisions related to the field.

During the analysis phase, Jim Van Meter, who led the U.S. analysis, noticed that several items in this section did not significantly correlate with retention rates—and some significant correlations were counter-intuitive.  He asked GMI for a second opinion about why.

Our response: The problem isn’t the questions.  It’s the person answering them!

Administrators can reliably answer factual questions about their agency’s practices, but they cannot reliably answer evaluative questions related to their support of field staff.  The field staff has to answer those questions!

That’s why we launched the Engage Survey in 2006—so that field missionaries could give their input on issues like these.  It is also why we sought a grant to again promote Engage—with a substantial discount to agencies—in 2014-2015.

Consider the last item in that list above: Missionaries are included in major decisions related to the field.  In ReMAP II, agency administrators, both Western and Non-Western, indicated this as an area of strength for agencies.  Further, the item was not linked to retention.

But when we surveyed 1,700-plus fieldworkers, a completely different picture emerged.  “My organization involves employees in decisions that affect them” was one of the 10 lowest-rated items (out of 68).  When combined with related items like “My organization’s management explains the reasons behind major decisions” and “My organization acts on the suggestions of employees,” the factor we titled “Involvement in Decisions” was the lowest rated of 11 themes (principal component factors) in the survey.

 

What is more, the factor was significantly correlated with agency retention.
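
For readers curious about the mechanics, here is a minimal sketch of how rated survey items can be boiled down to underlying themes via principal component analysis.  The item count, ratings and library choice are illustrative assumptions, not Engage’s actual data or pipeline.

    # Minimal sketch: reducing rated survey items to principal component
    # factors.  All ratings here are hypothetical.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 6, size=(100, 6)).astype(float)  # 100 respondents x 6 items

    pca = PCA(n_components=3).fit(ratings)
    # Items loading heavily on the same component form a theme,
    # such as "Involvement in Decisions."
    print(np.round(pca.components_, 2))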

When we did follow-up in-depth interviews with current and former missionaries, inclusion in decision-making was one of five encouraging themes related to continuing service.  Exclusion from decision-making was one of six discouraging themes.

In short, everything we hear from field staff says, “This issue is important, and most missions have significant room for improvement.”

So, back to the original questions:

  • Does your mission organization listen well?
  • How would you know?

One clue is your agency’s annual retention rate for long-term cross-cultural workers.  If it is 97 percent or above, you probably listen well relative to other agencies.  If it is below 94 percent, you very likely have room for improvement.

To know for sure, I would strongly recommend surveying your field staff.  Use a survey that assures anonymity for respondents, ideally administered through a third party.  Even better would be to do it collaboratively with other agencies, so you could learn how well you are doing compared to like-minded organizations with globally distributed staff.  And if you can find an experienced researcher to walk you through the results and help you turn them into an action plan, so much the better.

That’s Engage.  Pricing is reasonable (less than $1,000 for many agencies) and is graded by the number of missionaries on staff.  Those signing up by November 30 save 25 percent on registration (via a $125 check from GMI, courtesy of a foundation grant) and 20 percent off the per-missionary graded rate.  By the way, none of the registration fees comes to GMI—our involvement is funded fully through the grant.

Count the hours it would take to do this on your own, and you still wouldn’t have comparative benchmarks, a professional-grade survey instrument or follow-up consultation.

Pardon the shameless plug, but Engage is one of the best deals I know of in mission research.  Everyone wins:  Leadership teams get to celebrate successes and identify priorities.  Boards receive meaningful measures and see how leaders are taking initiative.  Field staff gets a chance to be heard and offer ideas.

 

What future missionaries are reading

A few months ago we surveyed prospective missionaries about what they are reading.  Along with a lot of David Platt and John Piper, we noted the following titles (write-in responses from a total of about 160 respondents):

A Gleam of Light  by Ila Marie Davis

Thriving in Cross Cultural Ministry by Carissa Alma

Why Jesus Crossed the Road by Bruce Main

Cross-Cultural Connections by Duane Elmer

Daws: A Man Who Trusted God by Betty Skinner

Do What Jesus Did by Robby Dawkins

Dreams and Visions by Tom Doyle

Engaging Islam by Georges Houssney

Following Jesus Through the Eye of the Needle by Kent Annan

Go and Do by Don Everts

Hudson Taylor’s Spiritual Secret by Dr. & Mrs. Howard Taylor

Kingdom Matrix by Jeff Christopherson

Kisses from Katie by Katie Davis

Many Colors by Soong-Chan Rah

Real Life by James Choung

Students of the Word by John Stott

The Mark of a Christian by Francis Schaeffer

The New Friars by Scott Bessenecker

To Repair the World by Paul Farmer

We’d like to respectfully add a title to the list.  Crossing Cultures with Ruth by James Nelson is the first book GMI has produced specifically for those considering cross-cultural service.  While accessing research is one of the “Fruitful Practices” for effective mission, not every new missionary has a bent for data and reports.  So we have woven lessons from a decade of research into a Bible study.  The result offers memory hooks that connect current research with the timeless biblical narrative of Ruth, a cross-cultural servant.


Recipe for constituent pie


It’s Thanksgiving!  Time to think about pie—or at least about ways to slice your constituents into meaningful chunks.

This post goes wonky, with some of the detailed ingredients and cooking directions for segmentation.

But first, a bit about the danger of segmenting audiences, which involves grouping and dividing people.  A warning:

“Dividing” and the Body of Christ don’t go together particularly well.
Our default posture should be one of unity.

 In the church (and in church planting), the “homogeneous unit principle” acknowledges that growth tends to happen more quickly among groups whose members share a common language, ethnicity and socio-economic class.

This principle has been used wisely to develop indigenous churches that show the incarnational nature of the Good News in understandable forms—relevant and transformative.  The principle has also been used unwisely to segregate and isolate people within the Body of Christ on the basis of race, class, age, etc., preventing the Church from experiencing unity in the midst of God-ordained diversity.

The challenge in applying the concept wisely is well described in the very first Lausanne Occasional Paper from 1978.

In local congregations, intentionally segmenting people can be fraught with difficulty.  Christianity Today’s Andy Crouch addressed this issue eloquently in his 1999 essay For People Like Me from re:generation quarterly, a magazine whose demise I still mourn a decade after it folded.  I encourage you to click through and read the whole piece, but here’s the punchline:

For surely one of the scandalous things about the gospel—indicated by Jesus’ own practices of welcoming sinners and eating with them, calling tax collectors along with fishermen to be his disciples, and praying for the forgiveness of his executioners—is that it does not fit the marketer’s (or the Pharisee’s) formula “for people like me.” It is in fact for people not like me—unless they are “a wretch like me,” and wretchedness was never the basis of a successful marketing campaign. Christianity is not a product that can be added seamlessly into the lives of consumers like one more lifestyle-enhancing appliance. It is instead a call to a completely different way of viewing the world, one in which the one who looks least like me is at a minimum my “neighbor” (Luke 10:29-37) and could well be Jesus himself (Matt. 25).

So, before undertaking the task of segmenting an audience, be sure to check your conscience.  Segmentation can acknowledge God-given variation in giftedness and experience.  That is the beauty of the Myers-Briggs® types—there are no better or worse personalities; each has natural strengths as well as potential blind spots.  It can also validate multiple approaches for doing a task, blunting the arguments of those who are fixed on their method as the “one best way.”

With that addressed, on to wonkiness.

When communicating with large numbers of people (donors, staff, readers/listeners, etc.) segmentation is a strategy that reflects a middle ground between uniformity (one size fits all) and customization (every one unique).  Mass communication is often ineffective; individual customization is usually inefficient.  In segmentation, an approach is developed for each segment, but within a segment everyone is treated similarly.

Criteria for developing segments include:

  1. Meaningful subgroups really exist—there is a valid basis for segmenting an audience.
  2. The subgroups are identifiable—there is a reasonable way to segment an audience.
  3. The subgroups are actionable—there is a practical use for segmenting an audience.

The Missio Nexus CEO survey that we recently helped with was commissioned, in part, because Missio Nexus frequently heard CEOs asking how other CEOs were dealing with various challenges.  The idea was to document and share experiences among the CEO community.  The question CEOs were asking presupposes likeness among the peer group.

We thought it could be helpful to see if meaningful subgroups existed which would help focus the question or expand on it.  Profiling CEO segments could help CEOs better understand themselves and their peers.  Their question could become, “How are other CEOs like me dealing with this issue?” or “Why are CEOs dealing with this issue in different ways?”

Here are the general steps in segmentation, using some of our recent projects as examples.

Step 1: Select a basis on which to explore/generate segments.

You can segment audiences in many ways—some of the most common consider the needs, values, aspirations or behaviors of the audience.  Consider behaviors.  If people behave according to certain patterns, those patterns can dictate the communication channels used to reach them.  Child sponsorship agencies, for example, use several behavior-based methods to sign up new sponsors: church partnerships, online ads, concert sponsorship, direct mail.  These are real, meaningful, actionable segments.

In some of the recent segmentation work that we’ve done, the basis for segmentation has been as follows:

  • Church Planters: Behaviors (frequency of “fruitful practice” activities)
  • Mission Agency Website Visitors: Needs (information sought)
  • Mission Internship Prospects: Motivations (for considering a 1-to-3 year term of service)

In the Missio Nexus CEO survey, one objective was to look at recent progress and current or near-future challenges.  So, we developed segments based on relative priorities across organizational, staff and personal effectiveness (combined).  We didn’t worry about why CEOs prioritized one area over another.  The shared need to address certain areas was enough.

Our hope is that priority-based segments will be actionable for Missio Nexus as an association that provides regular programming for executives, such as C-Suite Webinars.  Priority segments offer a guide for planning relevant content balanced across each type of CEO.  At an event, the CEO audience might not be large enough to justify separate tracks—but breakout sessions could be scheduled so that each group is likely to find something of interest.

Step 2: Select a method for creating segments.

If one quantitative measure is extremely important, such as expected lifetime donor value or likelihood of serving with your agency, segments can be driven by their impact on that variable.  This situation calls for decision-tree analysis.  Many statistical packages include such a method—CHAID and CART are traditional examples. The analyst feeds in a number of predictor variables—often combining different variable types—along with the known outcome of the key variable from a sample.  The software will identify a sequence of if-then steps involving the variables that best divide people into groups on the basis of the key measure.
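
As a concrete illustration, here is a minimal decision-tree sketch using scikit-learn’s CART-style classifier.  The predictors, outcome and data are hypothetical stand-ins, not figures from any of our studies.

    # Minimal CART-style decision-tree sketch; all data are hypothetical.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(1)
    # 200 hypothetical prospects: age bracket, ministry experience (years),
    # and count of mobilization events attended
    X = rng.integers(0, 10, size=(200, 3)).astype(float)
    y = rng.integers(0, 2, size=200)  # known outcome: joined (1) or not (0)

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    # The printed rules are the if-then steps that divide prospects
    # into groups on the basis of the key measure.
    print(export_text(tree, feature_names=["age", "experience", "events"]))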

This allows donor or staff prospects to be quickly qualified; responses can vary accordingly.  Those with a lower likelihood of giving or joining should not be ignored, but follow-up communication might be done a bit more frugally or infrequently.

Often segmentation will be driven not by a single measure but by several measures with a common theme.  In our Agency Web Review we used a set of 16 types of information that visitors to mission agency websites might be interested in.  In that case, cluster analysis can be a great way to identify segments.  K-means clustering is a well-regarded tool in which the analyst specifies the number of segments (clusters) to generate.  Most analysts run and compare several variations using different numbers of clusters, selecting the one that seems most practical or intuitive.
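
Here is a minimal k-means sketch along those lines, with random stand-in data for the 16 information-interest ratings.  A real analysis would profile each competing solution, not just its fit statistic.

    # Minimal k-means sketch; the visitor ratings are hypothetical.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    # 300 hypothetical visitors x 16 information-interest ratings (1-5)
    interests = rng.integers(1, 6, size=(300, 16)).astype(float)

    # Run several cluster counts; keep the most practical, intuitive solution.
    for k in (3, 4, 5, 6):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(interests)
        print(f"{k} clusters -> within-cluster sum of squares {km.inertia_:.0f}")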

Step 3: Create and label the segments

In our mission agency website visitor study, we chose five statistically valid segments (via cluster analysis) that also made intuitive sense to us.  To name the segments, we examined each segment’s pattern of interests.  Among the influential variables were short-term opportunities and long-term opportunities.

One segment demonstrated relatively low interest in both kinds of opportunities.  That seemed strange—all of those responding had been screened for interest in serving cross-culturally.  We dug deeper, examining the group by its demographics.  It turned out that many people in the segment were underclass collegians (seniors and new grads were more likely to be in other segments).  Aha!  Now it made more sense.  Their low interest in service opportunities reflected the fact that they were years away from applying for career service.  They valued learning about agencies generally and exploring mission-oriented resources (perhaps for use in coursework or for their campus fellowship).  This group also included a fair number of non-students whose primary role was mobilizing others to go.  Therefore, we named the segment Scouts.  They were scouting out info for another time—or for other people.  We named the other segments through a similar process.

Step 4: Describe the segments in detail

Saving segment membership to your data set opens up a world of descriptive possibilities by cross-tabbing segment with other variables.  In our church planter segmentation, women were especially likely to be in one segment—even though none of the input variables were gender related.  The church planters were working among resistant peoples, often in cultures where women are closely protected and limited in their social mobility.  It came as no surprise, then, that women made up a large portion of the segment that emphasized prayer and judicious (not bold) sharing.  For many, that was the type of ministry that was available without severely violating cultural norms.
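
Once segment membership is saved, a cross-tab like the gender example above takes a single line.  Here is a minimal pandas sketch with a handful of hypothetical records.

    # Minimal cross-tab sketch; these records are hypothetical.
    import pandas as pd

    df = pd.DataFrame({
        "segment": ["Judicious Intercessors", "Word-based Advocates",
                    "Judicious Intercessors", "Orality Overcomers"],
        "gender": ["F", "M", "F", "M"],
    })
    # Row proportions reveal which segments skew male or female.
    print(pd.crosstab(df["segment"], df["gender"], normalize="index"))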

One way to see the relationships among segments is through segment maps.  These can be created quickly through a bit of reverse engineering.  (Warning: statistical terms coming—in case of dizziness, skip down two paragraphs.)  We use the variables from the cluster analysis as predictor variables in a discriminant-function analysis.  Cluster membership is the variable to be predicted.  We save the function coefficients as variables, and then we use the first two sets of coefficients as X-Y coordinates in a scatterplot.  When we color code by segment, results look something like this (taken from our Mission Internship Study):

Each point represents a respondent.  The segments naturally group together, and the X and Y dimensions distinguish the segments from one another.  These dimensions, which reflect weighted combinations of the input variables, should be labeled to show how the segments relate.

In the example above, three groups emerged with differing motivations for considering a cross-cultural internship of one to three years.  The map showed that the groups can be considered on dimensions related to the purpose of the internship (My Fit vs. Their Blessing) and their level of commitment to long-term mission (Committed vs. Exploring):

  • Those in the Where segment are mostly committed to long-term service and want to test their fit in a particular setting or with a particular agency.
  • Those in the Whether segment are uncertain about long-term service and want to test whether they should continue serving after the internship concludes.
  • Those in the Whatever segment aren’t concerned about their long-term direction.  They simply want to meet people’s needs through the internship without considering their future career path.
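
For those who braved the statistical paragraphs above, here is a minimal sketch of that reverse-engineering step, using scikit-learn’s linear discriminant analysis in place of the SPSS-style routine we used.  All data are randomly generated stand-ins.

    # Minimal segment-map sketch: project respondents onto the first two
    # discriminant functions, then color-code by cluster.  Data are hypothetical.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(3)
    X = rng.normal(size=(205, 8))  # 205 respondents x 8 motivation items
    segment = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    lda = LinearDiscriminantAnalysis(n_components=2)
    coords = lda.fit_transform(X, segment)  # X-Y coordinates for the map

    plt.scatter(coords[:, 0], coords[:, 1], c=segment)
    plt.xlabel("Function 1 (e.g., My Fit vs. Their Blessing)")
    plt.ylabel("Function 2 (e.g., Committed vs. Exploring)")
    plt.show()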

Step 5: Develop a scoring model for classifying others

It isn’t easy to get everyone to take a survey, so constituents who don’t respond—and those who join in the future—cannot be classified into segments.  This limits the value of the segmentation.

The answer is to create a scoring model—either using non-survey data or developing a mini-survey that makes it easy to collect information, such as through a registration form.  Here is where we get to the quizzes that let people discover their “personality.”

With decision-tree segments, we simply use quiz questions based on the logic of the tree.  For cluster analysis-based segments, we use discriminant analysis.  The setup uses the same variables as in the mapping step above, but this time we use a stepwise procedure (to limit the number of variables) and select the option for “Fisher coordinates.”  This yields one equation for each segment.  When someone takes the quiz, we cross-multiply their responses with the Fisher coordinates and then compare the totals: the largest value is the “predicted” segment—which is shown to the quiz taker and/or added to the constituent database.
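
Here is a minimal sketch of the scoring arithmetic itself.  The coefficients stand in for the Fisher coordinates a real discriminant analysis would produce, and the quiz items are hypothetical.

    # Minimal scoring sketch: one linear equation per segment; the largest
    # total is the predicted segment.  Coefficients are hypothetical.
    import numpy as np

    segments = ["Where", "Whether", "Whatever"]
    # Rows: segments; columns: three quiz items plus a constant term
    fisher = np.array([
        [0.8, 1.2, 0.3, -4.1],
        [0.5, 0.4, 1.1, -2.9],
        [1.0, 0.2, 0.6, -3.3],
    ])
    responses = np.array([3, 5, 2, 1])  # quiz answers; final 1 multiplies the constant

    scores = fisher @ responses  # cross-multiply and total per segment
    print("Predicted segment:", segments[int(np.argmax(scores))])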

Quiz results usually include a thorough description of the predicted segment (and sometimes other segments as well).  Discussion questions can be added to help people think about how to maximize the strengths of their personality and to minimize or overcome the weaknesses.

This step is important because marketing research ethics statements usually indicate that participation in a survey should not influence the way a respondent is treated (compared to non-respondents).  Making the effort to classify non-respondents helps keep segmentation consistent with that standard.

Step 6: Develop and carry out a strategy for each segment

With segment membership assigned to constituents, it is time to put the segments into practice.  Should we emphasize some segments over others?  Should we communicate differently to each segment?  Should we develop offerings based on the needs of certain segments?  Should we organize staff responsibilities by segment?

The applications for using segmentation are many and far reaching.  Segmentation is usually strategic rather than tactical.  Since it involves high-level thinking, the segmentation process should have involvement and buy-in of senior leadership from its early stages.  In commercial research, I have seen segmentation projects aborted or shelved more frequently than any other kind of research.  It should not be undertaken lightly.

That’s the recipe for segmentation.  Wonky, yes—but underlying the fun “What kind of ____ are you?” quizzes is real science.  If you are thinking about segmentation and would like some help in your analysis kitchen, feel free to email us at jim@gmi.org or give us a call.  We’re glad to join in the process of delivering information that supports Spirit-led decisions.

 

What type are you: Outfitter? Orality Overcomer? Obi-Wan Kenobi?

 

Do you like personality tests?  Some people repeatedly retake the Myers-Briggs Type Indicator® assessment to see if they have changed their personality.

My kids, meanwhile, love online personality quizzes like “Which Star Wars Character Are You?”

Recently they found an “infographic” that combined the two concepts, matching Star Wars characters to MBTI® types.


My kids, checking up on their parents’ MBTI® types, dissolved in hysterics to learn that they were the product of a union between C-3PO’s personality and Yoda’s.

Such quizzes not only make for entertainment but also for interesting insights and discussions—with application for global mission.  I can envision a church-planting team having an extended discussion on whether they have the right mix of Star Wars/MBTI personalities to overcome the strongholds of evil in their quadrant—and using the results to inform recruitment of new team members.

However, another type of segmentation might prove more relevant—such as a quiz that lets you know your church-planting personality (more on that later).

With good data and the right analyst, your ministry can develop segments (donors, workers, prospects, churches, peoples) based on specific, relevant information that is most meaningful for your ministry.  Further, you can create classifying tools (quizzes) that your people can take to better understand themselves—or their ministry environment—informing Spirit-led decision making.

Most people are familiar with simple segmentation approaches that rely on one measure (such as birth year) that does a reasonably good job of dividing a large group into meaningful subgroups (such as Gen Xers and Millennials) that reflect a set of shared traits.

The MBTI rubric uses four dimensions of personality, each with two poles.  Tests determine on which side of each spectrum a person falls.  Voila!  Sixteen possible personality combinations emerge.

Mission researchers like Patrick Johnstone and Todd Johnson have popularized geo-cultural “affinity blocs”—segments that reflect collections of people groups on the basis of shared social/religious/geographic/cultural traits.  It is much easier to remember and depict 15 affinity blocs than 12,000 people groups.

Recently, GMI has done value-based or activity-based segmentation analysis on several survey projects.  One is the subject of GMI’s featured missiographic for early November—giving an overview of five personalities of those investigating mission agency websites, based on their information needs.

One of those segments is Faith Matchers—those for whom theological alignment is of primary importance.  When Faith Matchers visit an agency website, they look first to see if the agency’s beliefs align with theirs before considering strategy, location or service opportunities.

Last week we learned that one agency web designer had read the detailed website visitor profiles and related communication ideas in GMI’s Agency Web Review report, and made a small adjustment so that Faith Matchers could find the agency’s statement of faith with a single click from the home page—an easy change based on segmentation analysis.

Some of our other recent segmentation work included identifying:

  • Three mission agency CEO personalities (Outfitters, Entrepreneurs and Mobilizers) based on organizational-, staff- and personal-effectiveness priorities, as described in the Missio Nexus 2013 CEO Survey Report based on the responses of more than 150 agency leaders.
  • Three motivation-based segments (Where, Whether and Whatever) for those considering six-to-24-month mission internships, drawn from a quick survey of GMI’s panel of future missionaries.  One group is committed to long-term service and discerning where or with what agency to serve.  One is discerning whether it is called (and cut out) for long-term mission service.  The largest segment is eager to serve now, with little or no thought given to post-internship service (whatever).  Following is a scatterplot of the 205 respondents.

 

  • Four Church Planter personalities (Word-based Advocates, Orality Overcomers, Trade-Language Strategists and Judicious Intercessors) based on how often they engaged in “fruitful practice” activities, from a survey of nearly 400 church planters working among resistant peoples.

For that last one, we developed a 10-question quiz that church-planting workers can take to discover the strengths and potential growth areas of their church-planting personality.  Sound interesting?  Write us for details on how to get a copy of the Church Planting Personality Profiler—it’s available to member agencies of a particular network.

In a follow-up post, we’ll discuss analysis approaches for creating segments and how scatterplots and the classification quizzes are developed.

 

The pitfalls of self-assessment

 

This week, the eminently useful Brigada Today newsletter—in addition to drawing attention to GMI’s Agency Web Review—also posted an item from someone looking for a self-assessment survey for local church missions programs, recalling that ACMC used to offer one.

 

Responses were subsequently posted by Marti Wade (who does great work with Missions Catalyst), noting that the tool is still available via Pioneers, which assumed ACMC’s assets when it folded; and by David M of Propempo International, which also offers a version of the tool.  A snapshot of an excerpt from the ACMC/Pioneers version appears above.

Seeing the ACMC survey brought back a memory from a 2003 project that GMI did for ACMC.  We surveyed 189 ACMC member churches to understand the status of church mission programs as well as their needs and goals.  The survey included each of the twelve questions from the self-assessment grid.

Subsequently, we did statistical modeling to determine if/which/to what degree various church missions program elements were associated with growth in missions sending and with missions budget as a proportion of overall church budget.

Unfortunately, most of the correlations were not statistically significant, and those that were significant were negatively correlated—meaning churches that rated their mission program highly (placing greater priority on the various dimensions) tended to demonstrate less growth in sending or lower relative financial commitment.

How could this be?

Turns out that this is a fairly common outcome of self-assessment exercises.  In short, teams with excellent performance also tend to have high standards—and their vision for growth frequently leads them to be more self-critical than lower-performing teams, which often have lower standards.

So, am I discouraging local churches from using the Mission Assessment Tool?  Not at all.  I encourage churches to download it and use it as the basis for discussion—it can be a great discussion starter for vision building, clarifying core values and identifying priorities for development.  For the reason described above, you may find that team members differ on where the program stands—or where the opportunities are for growth.

But when program evaluation is the goal, it helps to have outside eyes providing the perspective.  Those well equipped to offer feedback on a church’s mission program are:

1. Those served by or in partnership with the mission team, such as missionaries who may have other supporting churches (their responses must be kept anonymous), and/or

2. Outside consultants who work with many church mission programs and have a valid basis of comparison.

Meanwhile, at the 30,000-foot level, researchers, missiologists and consultants are eager to discover the key differences between high-performing church mission teams and others.  The statistical modeling sought to answer the question: What program elements are the most common outflows (or drivers) of increased financial/sending commitment: Better mission education?  Better worker training?  Greater emphasis on strategy?  More local mission involvement?  This is where self-assessment bias—seen across a sample of 189 churches—becomes a problem.

One helpful approach is to focus on relative data.  Were we to re-examine the analysis today, I would be inclined to transform the raw data into relative performance rankings (each church’s perception of its relative strengths and weaknesses).  This compensates for differing standards of excellence by looking at each church’s priorities.
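
That transformation is straightforward.  Here is a minimal pandas sketch with hypothetical self-ratings for two churches.

    # Minimal sketch: turn raw self-ratings into within-church rankings
    # of relative strength.  Ratings are hypothetical.
    import pandas as pd

    raw = pd.DataFrame(
        {"education": [6, 3], "training": [5, 4], "strategy": [4, 5]},
        index=["Church A", "Church B"],
    )
    # Rank items within each row: 1 = that church's strongest area.
    # High-standards churches are no longer penalized for modest raw scores.
    print(raw.rank(axis=1, ascending=False))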

Self-evaluation bias can also be reduced by developing assessment tools with response scales/categories that are so observably objective that they cannot easily be fudged.  The ACMC tool uses descriptions for each commitment level that are intended to be objectively observable—but in some cases they are subject to interpretation, or to cases where a higher-level condition may be met while a lower-level condition is unfulfilled.  In the 2003 study we gave specific instructions to respondents that they should proceed through each scale a step at a time, stopping at the lowest unmet condition.  However, such an instruction may not have been enough to overcome the need of some respondents to affirm their church’s mission program with high marks.

This issue also points to the importance of testing assessment tools for validity among a pilot sample of respondents—with results compared to an objective measure of excellence.

Take care with self-assessment.  After all, scripture itself warns us in Jeremiah 17:9 that “the heart is more deceitful than all else and is desperately sick; who can understand it?”

 

To retain missionaries, help them keep their CHIN UP

Retention of field staff is a key effectiveness issue for mission sending entities.  In The Future of the Global Church (p. 227), Patrick Johnstone notes that “the period most likely to prove fruitful in cross-cultural church-planting ministry” is between the eighth and 17th years of field service.

Retention is also a key stewardship issue, as the costs of recruiting, qualifying, training, funding and sending a cross-cultural worker are vastly front loaded (incurred before sending and in the early stages of field ministry).  This is true regardless of the worker’s country of origin.  From a financial perspective, sending costs are “amortized” over a worker’s tenure on the field—the longer the tenure, the more cost-effective the sending process.

While it is true that when missionaries need to leave the field, allowing them to stay on can be damaging (personally, organizationally and ministerially), the general principle remains: encouraging and equipping workers toward longer tenures is a worthy goal.

A few weeks ago I spoke to prospective cross-cultural workers on “How to Become an Ex-Missionary…Or Not.” The research supporting the talk came from the qualitative module of the Engage study (fielded 2006 and 2007), which GMI did in partnership with Best Christian Workplaces Institute and Rob Hay, now principal of Redcliffe College.

A bit of backstory before getting to five key retention factors for North American cross-cultural workers.

The Engage research was initiated by the WEA Mission Commission as a North American follow up to the global ReMAP and ReMAP II studies on attrition and retention, which led to the publication of the useful books Too Valuable to Lose and Worth Keeping.

The ReMAP studies did a great job of outlining best practices in missionary retention on a global scale, as well as in drawing attention to the issue.  For example, the 60-plus agencies participating in the U.S. portion of the studies could be divided into virtually equal groups based on retention.  High-retention agencies averaged 97.4 percent retention annually, while the low-retention group averaged 90.4 percent.  When compounded over a decade, the high-retention rate projects to 77 percent of non-retired workers remaining on the field, while the low-retention rate projects to only 37 percent remaining.  You can read the source report here.
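
The compounding is simple arithmetic, as this quick check shows (the small differences from the report’s rounded 77 and 37 percent reflect rounding in the published annual rates).

    # Quick check: compounding the published annual retention rates over a decade.
    high, low = 0.974, 0.904
    print(f"High-retention group: {high ** 10:.1%} projected to remain")  # ~76.8%
    print(f"Low-retention group:  {low ** 10:.1%} projected to remain")   # ~36.4%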

Still, there was one significant problem with the ReMAP studies: no current field workers were interviewed, only agency administrators.  We first got involved when the ReMAP II U.S. study coordinator asked GMI to review the U.S. data.  Certain variables appeared to yield counter-intuitive results—meaning agencies that rated themselves as high performers in certain dimensions actually had lower retention than those that rated themselves as lower performers.  How could this be?  Easy—self-evaluation often produces results such as these due to differences in standards.  Highly effective organizations typically have very high standards.  Therefore, they see more room for improvement than do their peer organizations.

The solution: don’t self-evaluate.  Allow others—in this case, current and former field missionaries—to rate how well a sending organization equips and supports them.

That’s what we did in the Engage study.  The quantitative module surveyed more than 1,700 current field staff from 17 organizations.  Results verified that an organization’s retention rate correlates positively with the attitudes of current field staff.  That finding refutes the hypothesis that ex-missionaries are merely can’t-hack-it, sour-grapes misfits who needed to be weeded out.  Rather, an agency’s missionaries reside along a likelihood-to-stay continuum.  The better an agency’s vision, leadership, training, policies and support, the less likely workers are to fall off of the attrition cliff.

The qualitative module compared the experience of more than 40 current field missionaries and more than 40 ex-missionaries who had left before retiring or completing a fixed assignment.  Questions asked of both groups included open-ended inquiries about factors that encouraged or discouraged continuing service.

Group comparisons of coded responses yielded five key encouragement factors (as well as six key discouragement factors).

I remember those five encouragement factors through the acronym CHIN UP:

CH      A strong sense of personal CALLING and HOPE from God

I          A feeling of INCLUSION in team/agency decision making

N        A perception of great spiritual NEED among the people being served

U        A sense of personal USEFULNESS, regardless of visible ministry progress

P        A strong sense of God’s PROVISION via the prayer and generosity of others

Agencies and their member care departments will do well to regularly check the pulse of their field staff in those key areas.

Want to know more about applying insights from Engage?  Contact us at GMI about speaking to or consulting with agency leadership and/or member-care staff on retention issues.  We also love to speak with future missionaries about how they can prepare to avoid becoming ex-missionaries.

Also, your agency can sign up to do the Engage survey with its current field staff.  Best Christian Workplaces offers the survey at a very reasonable cost, using a sliding scale based on the number of field workers invited to take part.  It is a great investment that also adds to an ongoing database of learning about retention.  Please let them know that you heard about Engage from GMI.

 

 

Analysis links website “personality” to follow-up actions

 
Third in a series of three posts looking forward to the 2013 Agency Web Review by reviewing highlights from a prior edition of the study.

As in 2004, the upcoming edition of the Agency Web Review will not only consider the rational elements of a missions website (clarity, functionality, information) but also the emotional side—its “personality”—and how that aligns with visitors’ responses and potentially influences follow-up activity.

From a list of 40 or so descriptive terms, respondents select the ones they feel best describe each website.  Once data collection is complete, GMI analysts use factor analysis to boil those characteristics down to a smaller set of themes, or personality factors.  Then they use regression analysis to measure how those themes correlate with the follow-through outcomes that agencies are looking for: revisiting the site, recommending it to peers and pursuing service opportunities.
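
As a rough illustration of that two-step pipeline, here is a minimal sketch with randomly generated stand-in data.  GMI’s actual variable sets and procedures differ in detail.

    # Minimal sketch: reduce binary descriptor picks to factors, then regress
    # a follow-through outcome on the factor scores.  Data are hypothetical.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    picks = rng.integers(0, 2, size=(250, 40)).astype(float)  # 40 descriptors
    revisit_intent = rng.uniform(1, 7, size=250)               # outcome rating

    factors = FactorAnalysis(n_components=10).fit_transform(picks)
    model = LinearRegression().fit(factors, revisit_intent)
    # Large coefficients flag the personality factors most tied to follow-through.
    print(np.round(model.coef_, 2))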

In 2004 four out of the 10 personality factors correlated strongly with desired actions.  Web designers for the participating agencies learned about these factors and were able to draw on them in adapting their sites to encourage future missionaries to take the next step.  Those four factors are shown in this chart:

The degree of correlation varied depending on the follow-through action being studied.  For Intent to Recommend, the strongest link was to the “Creative” aspect of a site (a factor comprising “Creative,” “Fresh,” “Visual” and “Non-Traditional”).  For overall website appeal, the same four factors emerged, but with “Energetic” (an absence of characteristics such as “Calm,” “Simple” and “Casual”) at the top of the list.

How did designers use this information? Here are two ways: 

  1. Striking the right balance between identifying with who your audience is and who it aspires to be.  The presence of Energetic and Creative suggests that an agency should demonstrate that it understands the next generation of workers and is relevant to them.  This is the classic affinity principle—demonstrating that an agency is “for people like me.”  Prospects are drawn to organizations that have and welcome high-energy people with a creative spirit.

    At the same time, the presence of Wise (“Humble,” “Experienced”) and Capable (including “Secure” and “Aware”) suggests traits that future missionaries do not yet have—especially in terms of cross-cultural effectiveness.  Prospects desire to develop these qualities and hope that an agency will be able to draw them out.  Prospects look for signals that an agency knows what it is doing and can help new people get where they need to be.

    It is easy to miss the mark a bit one way or the other.  Too much creativity can be misinterpreted as an emphasis on style over substance.  Too much emphasis on experience and resources can be misread as close-mindedness or lack of need.  In short, prospects don’t want agencies that aspire to be “like them”—rather, they do want to be understood while being given something to aspire to become.

  2. Designers also need to understand what is not helpful.  In this case, six other website personality factors were not significantly linked to any of the key follow-through behaviors: intention to revisit, refer or respond to opportunities.  Websites that were viewed as “Courageous,” “Concerned,” “Sharp” (incorporating “Confident” and “Smart”), “Thoughtful” or “Fun” likely did not generate the best possible response from visitors.

    (Note that positioning does not refer to the inauthentic donning of characteristics that are not part of an agency’s identity, but rather to expressing aspects of one’s true identity that are likely to resonate with candidates.)

Web designers usually know how to help make a website more usable or functional.  Some also have intuitive skills that enable agencies to strike the right notes in messaging and imagery. But for those who don’t—and even for those who do—message modeling can provide a well-defined target to shoot for and criteria to assess whether the messages are hitting their mark.  Such models are a standard feature of the Agency Web Review report.

Eight years after the first review, who knows how the personality characteristics may have changed?  New factors are likely to emerge in association with key outcomes.  To find out, be sure to register for the 2013 Agency Web Review.

 

From their keyboard to your web designer

(Second in a series of three posts)

As we introduce the 2013 Agency Web Review, we are reviewing highlights from the 2004 edition.

It is great for an agency to get numeric ratings on various aspects of its website—especially when comparable ratings for a group of other agency websites are available to show relative strengths and weaknesses.

But the ratings come to life when designers and mobilizers learn the specific reasons beneath the great ratings—or when they get suggestions to improve elements of their site.

Back in 2004, when we first gathered opinions of dozens of agency sites, social media was still in its infancy and dialogue with prospects was much less prevalent—so getting actionable feedback from the target audience was more difficult.

Today future missionaries have more channels to offer ideas—the challenge is getting them to take time to look closely and consider how their input can help.

Next month, the Agency Web Review will provide incentives for those considering cross-cultural service to spend several minutes observing agency websites and then offer their opinions about them.  In addition to numeric ratings, respondents will provide open-ended feedback on their initial impressions, the website’s strengths, and elements that can be improved.

Here are some of the verbatim quotes from future missionaries.  Would any of these apply to your site?

It was attractive, but it wasn’t obvious what they did.

 

It was a little cluttered with things and I really didn’t know what I should click on. But it was VERY informative.

 

Really liked the home page and the different pictures that come up when you point at the options to enter.

 

Opportunities— list was hugely long—I had no idea which ones to select.  Perhaps narrow the categories a bit for a first selection.

 

They recommended books which would be helpful for someone looking for more resources and help in a particular area.

 

I liked the opportunity boxes that popped up over the world and then I could click on them.

 

Include more stories about past missionaries’ experiences.

 

I love the emphasis on prayer. I think that it probably points to a good orientation.

 

Things weren’t hidden in fine print from what I could tell in my brief study. Everything seemed very up-front.

 

Give more detail about service opportunities without requiring a person to receive mailings.

 

I could not find a belief statement.

 

It is impressive they have the site in a different language.

 

Increase the size of the font, I can barely read it.

 

Visually a bit sparse, but seemed spiritually grounded.

 

There is a lot going on, visually creative. I wanted to learn more.

 

Seemed to be more focused on their organization than on the people they were serving, kind of a turn off.

 

Mission and values were clearly stated and inspiring.

 

The global map links on the home page didn’t all work.  I couldn’t find a webmaster link to report it.

 

How do I actually work for (agency)!? I have no idea how to apply or what to do if I want to work with them!

 

Present more ‘in your face’ opportunities to serve or donate.

 

I really liked that one of the first things I saw invited me to pray with the group.  It gave me a way to get involved right now.

At the agency level, this kind of feedback is especially valuable for designers.  Collectively, comments about dozens of sites can be coded and analyzed for helpful trends.  Comments in the 2004 study most frequently related to the following:

  • Ease of Navigation
  • Quantity of Information
  • Organization of Site
  • Design / Layout / Color
  • Graphics / Photos

The most-frequently-mentioned opportunities for improvement fell into these categories:

  • Information about Opportunities
  • Visual Style / Layout
  • Information About the Agency
  • Text / Font
  • Verbal Style

What kinds of ideas has your agency implemented on its website based on the suggestions of visitors?  Add a comment and let us know!

 

Getting to know you: Future missionaries surf for agencies

 

This is the first in a three-post series.

Every day, people considering long-term cross-cultural service visit sending agency websites and social media pages.  What they experience in a few quick clicks can inspire them to bookmark a site, tweet about it to friends or complete an inquiry form.  Or it can lead them to a quick exit.

With mobilization events like Urbana and MissionsFest taking place in the next few weeks, agencies should be ready to put their best foot forward in assisting field-bound people to discover the next step in their journey.

Next month, GMI will field the 2013 Agency Web Review, in which hundreds of people considering long-term cross-cultural service will explore dozens of sending agency web sites, evaluating various website elements and providing helpful open-ended comments. Participating agencies will receive agency-specific reports with detailed feedback on their site, plus comparative data showing how the site’s ratings compare to a group of several dozen other agencies.

The study draws on GMI’s opt-in panel of more than 3,000 people who have confirmed that they are considering a career in cross-cultural mission. 

Eight years ago GMI fielded the first edition of the Agency Web Review, which provided actionable results for agency web designers and mobilization staff.  Kristi Crisp of World Gospel Mission had this to say about the study:

The Agency Web Review results helped us to set a better direction and convinced us of the need for changes.  …The World Gospel Mission website is a completely different site now.  We changed our focus to getting people actually going…whether with our organization or with somebody else.

In anticipation of the 2013 study, we are taking time to review a few of the highlights of the 2004 study.  The electronic communications landscape has changed dramatically since then, but many of the findings from 2004 continue to be useful. 

We asked which of 11 key activities people considering a missions career had done.  Visiting agency websites ranked fourth on the list (after attending conferences, reading mission books/newsletters and talking with missionaries).  Six out of 10 prospects had already visited the website of a sending agency.

The following chart reveals stated priorities for missionary prospects when visiting an agency site:

 

To us, the results suggest that the primary questions of website visitors relate to identity: Who are you?  What do you do?  Where do you do it?  What do you stand for?

Once those questions are answered, prospects feel free to consider, “OK, how would I fit in?” “What would it take for me to be a part?”

Stated priorities don’t reveal the degree to which elements of a website are linked to key response actions (more info in a few days on that), but they do express visitor expectations.  Therefore, we recommend that web designers make sure expectations are easily met without a lot of searching.  That means a well-placed “About Us” heading, opportunities that are dated and kept updated, and some explanation about what people can expect to happen after the inquiry form is submitted.  (That last item was the lowest-rated of 21 site elements tested across all organizations.)

In addition, we suggest providing some unexpected elements of “delight.”  A few of the unexpected pleasures encountered by site visitors include:

  • an opportunity to be prayed for by agency staff
  • engaging videos from field staff that give people a taste of daily life on the field
  • links to helpful resources for people considering service—even from other agencies

We noticed that the highest-rated agency websites tended not to minimize their service requirements, but they worked hard not to represent those requirements as barriers.  Their positioning was something like this:

“Becoming a missionary takes real commitment, knowledge and skills.  It’s not easy, but it is do-able, and we will walk alongside you to help you develop into an effective cross-cultural servant who enables others to realize all that God is calling them to.”

Do you know of an agency that incorporated user feedback into its website makeover? We’d love to hear about examples or standout experiences you’ve encountered.

Learn details about how to take part in the 2013 Agency Web Review here.

 

Applied research helps donors, implementers to be better partners

Research provides a needed listening function for the mission community.  Listening well results in better understanding, and better understanding usually leads to better ministry.

A great example of the way research increases understanding and leads to practical action in ministry is the Lausanne Standards project, which fosters dialogue and collaboration among ministry implementers and funders about the giving and receiving of money in mission.

Check out this entertaining whiteboard video that illustrates (literally) how the Lausanne Standards were developed and the role that research played.

GMI is honored to have conducted the first round of research (mentioned in the presentation) that supported the development of the Lausanne Standards.  Rob Martin, Lausanne Senior Associate for Global Philanthropy, whose voice (and likeness) features prominently in the video, graciously gave us permission to discuss some of that research here on this blog.

A survey of 147 mission leaders – divided roughly 55/45 between ministry implementers and ministry donors – revealed that both groups agreed that positive funding partnerships are almost always an important issue.  However, the leaders were divided on whether partnerships were problematic, and what the nature of the problems (if any) and solutions were.

Cluster analysis led to the identification and description of four “attitude segments” among ministry donors and implementers.  This enabled the research sponsors to understand the likely objections to developing a set of guidelines for philanthropic partnerships.

 

Each of these groups believes that funders and implementers want to partner well with one another.  However, each could pose a significant objection to the process of developing standards for effective funding partnerships.  Proceeding clockwise around the grid, from top left:

  1. Standards aren’t enough to fix the problems of dependence and power in philanthropy!  We need to overhaul the system and create new structures for working together.
  2. There isn’t a problem to address – the perceived conflicts in philanthropic partnerships are exaggerated.  Just because the work is hard doesn’t mean the system is broken.
  3. You can’t engineer a policy-based solution to a spiritual problem.  Partnership issues will dissolve when people focus more on the Lord and recognize their common dependence on God.
  4. Codes and policies are no substitute for deeper relationships with one another.  Making a greater effort to understand our neighbor will lead to more effective partnership, with or without a set of standards.

The Lausanne working group’s responses to these objections are:

  1. Yes, we can benefit from creating new forms.  Finding points of affirmation is a perfect starting point.
  2. Yes, the work is challenging, and good communication will help us to address challenges more effectively.
  3. Yes, human-centered solutions are insufficient.  Agreements must be developed and implemented in reliance on the Spirit.
  4. Yes, we must grow in understanding – and agreed-upon standards reflect an increasing level of understanding.

Watch the video again to see how some of these messages are communicated clearly and effectively.  That’s research in action!  Here, segmentation is not a tool to create or emphasize division but a means of addressing concerns to develop consensus and discover unity among varied perspectives.

Research for the sake of knowledge puffs up, but research for the sake of love builds up (a variation on 1 Corinthians 8:1).  How are you seeing research applied in your area of ministry?