Tag Archives: analysis

What type are you: Outfitter? Orality Overcomer? Obi-Wan Kenobi?

 

Do you like personality tests?  Some people repeatedly retake the Myers-Briggs Type Indicator® assessment to see if they have changed their personality.

My kids, meanwhile, love online personality quizzes like “Which Star Wars Character Are You?”

Recently they found this “infographic” which combined the two concepts.  Here’s an excerpt:
[Infographic excerpt not shown]

My kids, checking up on their parents’ MBTI® types, dissolved in hysterics to learn that they were the product of a union between C-3PO’s personality and Yoda’s.

Such quizzes not only make for entertainment but also for interesting insights and discussions—with application for global mission.  I can envision a church-planting team having an extended discussion on whether they have the right mix of Star Wars/MBTI personalities to overcome the strongholds of evil in their quadrant—and using the results to inform recruitment of new team members.

However, another type of segmentation might prove more relevant—such as a quiz that lets you know your church-planting personality (more on that later).

With good data and the right analyst, your ministry can develop segments (donors, workers, prospects, churches, peoples) based on specific, relevant information that is most meaningful for your ministry.  Further, you can create classifying tools (quizzes) that your people can take to better understand themselves—or their ministry environment—informing Spirit-led decision making.

Most people are familiar with simple segmentation approaches that rely on one measure (such as birth year) that does a reasonably good job of dividing a large group into meaningful subgroups (such as Gen Xers and Millennials) that reflect a set of shared traits.
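A one-measure segmentation like this is easy to express in code. Here is a minimal sketch in Python; the generation labels are from the text, but the cutoff years are illustrative assumptions, not authoritative definitions:

```python
# Single-measure segmentation: map a birth year to a generation label.
# Cutoff years are illustrative only.
def generation(birth_year):
    if birth_year <= 1964:
        return "Boomer or earlier"
    elif birth_year <= 1980:
        return "Gen X"
    elif birth_year <= 1996:
        return "Millennial"
    else:
        return "Gen Z"

print(generation(1975))  # Gen X
print(generation(1990))  # Millennial
```

The same pattern applies to any single dividing measure: pick the variable, pick the cutoffs, and every record falls into exactly one segment.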

The MBTI rubric uses four dimensions of personality, each with two poles.  Tests determine on which side of each spectrum a person falls.  Voila!  Sixteen possible personality combinations emerge.
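The arithmetic behind the sixteen types is simply 2 × 2 × 2 × 2 = 16, which a quick enumeration confirms:

```python
# Four dimensions, two poles each: 2**4 = 16 possible combinations.
from itertools import product

dimensions = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]
types = ["".join(combo) for combo in product(*dimensions)]

print(len(types))  # 16
print(types[:3])   # ['ESTJ', 'ESTP', 'ESFJ']
```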

Mission researchers like Patrick Johnstone and Todd Johnson have popularized geo-cultural “affinity blocs”—segments that reflect collections of people groups on the basis of shared social/religious/geographic/cultural traits.  It is much easier to remember and depict 15 affinity blocs than 12,000 people groups.

Recently, GMI has done value-based or activity-based segmentation analysis on several survey projects.  One is the subject of GMI’s featured missiographic for early November—giving an overview of five personalities of those investigating mission agency websites, based on their information needs.

One of those segments is Faith Matchers—those for whom theological alignment is of primary importance.  When Faith Matchers visit an agency website, they look first to see whether the agency’s beliefs align with their own before considering strategy, location or service opportunities.

Last week we learned that one agency web designer had read the detailed website visitor profiles and related communications ideas in GMI’s Agency Web Review report and made a small adjustment to the website to make sure that Faith Matchers would be able to find the agency’s statement of faith with a single click from the home page—an easy change based on segmentation analysis.

Some of our other recent segmentation work included identifying:

  • Three mission agency CEO personalities (Outfitters, Entrepreneurs and Mobilizers) based on organizational-, staff- and personal-effectiveness priorities, as described in the Missio Nexus 2013 CEO Survey Report based on the responses of more than 150 agency leaders.
  • Three motivation-based segments (Where, Whether and Whatever) for those considering six-to-24-month mission internships, drawn from a quick survey of GMI’s panel of future missionaries.  One group is committed to long-term service and is discerning where or with what agency it should serve.  One segment is discerning whether it is called to, and cut out for, long-term mission service.  The largest segment is eager to serve now, with little or no thought given to post-internship service (whatever).  Following is a scatterplot of the 205 respondents.

[Scatterplot not shown]

  • Four Church Planter personalities (Word-based Advocates, Orality Overcomers, Trade-Language Strategists and Judicious Intercessors) based on how often they engaged in “fruitful practice” activities, from a survey of nearly 400 church planters working among resistant peoples.

For that last one, we developed a 10-question quiz that church-planting workers can take to discover the strengths and potential growth areas of their church-planting personality.  Sound interesting?  Write us for details on how to get a copy of the Church Planting Personality Profiler—it’s available to member agencies of a particular network.

In a follow-up post, we’ll discuss analysis approaches for creating segments and how scatterplots and the classification quizzes are developed.

 

The pitfalls of self-assessment

 

This week, the eminently useful Brigada Today newsletter—in addition to drawing attention to GMI’s Agency Web Review—also posted an item from someone looking for a self-assessment survey for local church missions programs, recalling that ACMC used to offer one.

[Screenshot of the ACMC/Pioneers self-assessment tool not shown]

Responses were subsequently posted by Marti Wade (who does great work with Missions Catalyst) noting that the tool is still available via Pioneers, which assumed ACMC assets upon its folding; and by David M of Propempo International, which also offers a version of the tool.  A snapshot of an excerpt from the ACMC/Pioneers version appears above.

Seeing the ACMC survey brought back a memory from a 2003 project that GMI did for ACMC.  We surveyed 189 ACMC member churches to understand the status of church mission programs as well as their needs and goals.  The survey included each of the twelve questions from the self-assessment grid.

Subsequently, we did statistical modeling to determine if/which/to what degree various church missions program elements were associated with growth in missions sending and with missions budget as a proportion of overall church budget.

Unfortunately, most of the correlations were not statistically significant, and those that were significant were negatively correlated—meaning churches that rated their mission program highly (placing greater priority on the various dimensions) tended to demonstrate less growth in sending or lower relative financial commitment.
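To make the finding concrete, here is a hedged sketch of the kind of check described above: a Pearson correlation coefficient computed by hand on made-up numbers. A negative r means that higher self-ratings went with lower growth, which is the pattern the study found:

```python
# Pearson correlation computed from scratch; data are hypothetical,
# invented only to illustrate a negative correlation.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

self_rating = [9, 8, 7, 5, 4]   # hypothetical program self-ratings
growth      = [1, 2, 3, 4, 6]   # hypothetical growth in sending
print(pearson_r(self_rating, growth))  # a value near -1
```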

How could this be?

Turns out that this is a fairly common outcome of self-assessment exercises.  In short, teams with excellent performance also tend to have high standards—and their vision for growth frequently leads them to be more self-critical than lower-performing teams, which often have lower standards.

So, am I discouraging local churches from using the Mission Assessment Tool?  Not at all.  I encourage churches to download it and use it as the basis for discussion—it can be a great discussion starter for vision building, clarifying core values and identifying priorities for development.  For the reason described above, you may find that some team members differ on where the program stands—or where the opportunities are for growth.

But when program evaluation is the goal, it helps to have outside eyes providing the perspective.  Those well equipped to offer feedback on a church’s mission program are:

1. Those served by or in partnership with the mission team, such as missionaries who may have other supporting churches (their responses must be kept anonymous) and/or

2. Outside consultants who work with many church mission programs and have a valid basis of comparison.

Meanwhile, at the 30,000-foot level, researchers, missiologists and consultants are eager to discover the key differences between high-performing church mission teams and others.  The statistical modeling sought to answer the question: What program elements are the most common outflows (or drivers) of increased financial/sending commitment: Better mission education?  Better worker training?  Greater emphasis on strategy?  More local mission involvement?  This is where self-assessment bias—seen across a sample of 189 churches—becomes a problem.

One helpful approach is to focus on relative data.  Were we to re-examine the analysis today, I would be inclined to transform the raw data into relative performance rankings (each church’s perception of its relative strengths and weaknesses).  This compensates for differing standards of excellence by looking at each church’s priorities.
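A minimal sketch of that relative-ranking transform in Python: convert each church's raw scores into within-church ranks, so that differing standards of excellence wash out. The scores below are hypothetical:

```python
# Convert a church's raw self-ratings into relative ranks
# (rank 1 = that church's strongest program element).
# Ties are broken by position, a deliberate simplification.
def to_ranks(scores):
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

church_a = [9, 9, 8, 9]   # rates everything highly
church_b = [5, 5, 4, 5]   # rates everything modestly
print(to_ranks(church_a))  # [1, 2, 4, 3]
print(to_ranks(church_b))  # [1, 2, 4, 3]
```

Note that the two churches produce identical rankings despite very different raw levels: the transform captures each church's perceived priorities rather than its absolute generosity with high marks.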

Self-evaluation bias can also be reduced by developing assessment tools with response scales/categories that are so observably objective that they cannot easily be fudged.  The ACMC tool uses descriptions for each commitment level that are intended to be objectively observable—but in some cases they are subject to interpretation, or to cases where a higher-level condition may be met while a lower-level condition is unfulfilled.  In the 2003 study we gave specific instructions to respondents that they should proceed through each scale a step at a time, stopping at the lowest unmet condition.  However, such an instruction may not have been enough to overcome the need of some respondents to affirm their church’s mission program with high marks.

This issue also points to the importance of testing assessment tools for validity among a pilot sample of respondents—with results compared to an objective measure of excellence.

Take care with self-assessment.  After all, scripture itself warns us in Jeremiah 17:9 that “the heart is more deceitful than all else and is desperately sick; who can understand it?”

 

Getting to know you: Future missionaries surf for agencies

 

This is the first in a three-post series.

Every day, people considering long-term cross-cultural service visit sending agency websites and social media pages.  What they experience in a few quick clicks can inspire them to bookmark a site, tweet about it to friends or complete an inquiry form.  Or it can lead them to a quick exit.

With mobilization events like Urbana and MissionsFest taking place in the next few weeks, agencies should be ready to put their best foot forward in assisting field-bound people to discover the next step in their journey.

Next month, GMI will field the 2013 Agency Web Review, in which hundreds of people considering long-term cross-cultural service will explore dozens of sending agency websites, evaluating various website elements and providing helpful open-ended comments. Participating agencies will receive agency-specific reports with detailed feedback on their site, plus comparative data showing how the site’s ratings compare to a group of several dozen other agencies.

The study draws on GMI’s opt-in panel of more than 3,000 people who have confirmed that they are considering a career in cross-cultural mission. 

Eight years ago GMI fielded the first edition of the Agency Web Review, which provided actionable results for agency web designers and mobilization staff.  Kristi Crisp of World Gospel Mission had this to say about the study:

The Agency Web Review results helped us to set a better direction and convinced us of the need for changes.  …The World Gospel Mission website is a completely different site now.  We changed our focus to getting people actually going…whether with our organization or with somebody else.

In anticipation of the 2013 study, we are taking time to review a few of the highlights of the 2004 study.  The electronic communications landscape has changed dramatically since then, but many of the findings from 2004 continue to be useful. 

We asked which of 11 key activities people considering a missions career had done.  Visiting agency websites ranked fourth on the list (after attending conferences, reading mission books/newsletters and talking with missionaries).  Six out of 10 prospects had already visited the website of a sending agency.

The following chart reveals stated priorities for missionary prospects when visiting an agency site:

[Chart not shown]

To us, the results suggest that the primary questions of website visitors relate to identity: Who are you?  What do you do?  Where do you do it?  What do you stand for?

Once those questions are answered, prospects feel free to consider, “OK, how would I fit in?” “What would it take for me to be a part?”

Stated priorities don’t reveal the degree to which elements of a website are linked to key response actions (more info in a few days on that), but they do express visitor expectations.  Therefore, we recommend that web designers make sure expectations are easily met without a lot of searching.  That means a well-placed “About Us” heading, opportunities that are dated and kept updated, and some explanation about what people can expect to happen after the inquiry form is submitted.  (That last item was the lowest-rated of 21 site elements tested across all organizations.)

In addition, we suggest providing some unexpected elements of “delight.”  A few of the unexpected pleasures encountered by site visitors include:

  • an opportunity to be prayed for by agency staff
  • engaging videos from field staff that give people a taste of daily life on the field
  • links to helpful resources for people considering service—even from other agencies

We noticed that the highest-rated agency websites tended not to minimize their service requirements, but they worked hard not to represent those requirements as barriers.  Their positioning was something like this:

“Becoming a missionary takes real commitment, knowledge and skills.  It’s not easy, but it is do-able, and we will walk alongside you to help you develop into an effective cross-cultural servant who enables others to realize all that God is calling them to.”

Do you know of an agency that incorporated user feedback into its website makeover? We’d love to hear about examples or standout experiences you’ve encountered.

Learn details about how to take part in the 2013 Agency Web Review here.

 

Analyzing open-ended questions: A bright future for word clouds

Commercial survey research firms usually charge clients significantly extra to include “open-ended” questions in a survey.  They tend to be messy and time-consuming.  Traditionally, analysts would read through a selection of responses to create categories of frequent or typical responses, then read back through all of the responses to categorize them.

For publishing in a peer-reviewed journal, multiple people would categorize responses independently, then work together to create a synthesized coding scheme to limit bias.

Most qualitative text-analysis software still requires you to manually “code” responses.

With all that work, open-ended questions are still important in exploratory and qualitative research – and frequently satisfying for survey respondents looking for an opportunity to say what is on their mind, unhindered by structured response categories.

But the tag-cloud age has been a blessing to those without the time and money to do full, traditional analysis of text responses.  Graphics with words sized by frequency of use enable analysts to quickly get a sense of the nature of open-ended responses.
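Under the hood, a cloud generator is little more than a frequency count. Here is a minimal sketch in Python; the responses and the stopword list are made up for illustration:

```python
# Word-frequency counting, the core of any word-cloud tool:
# tokenize responses, drop common stopwords, keep the top terms.
from collections import Counter
import re

STOPWORDS = {"the", "a", "to", "and", "of", "we", "our"}

responses = [
    "We need more prayer and more training",
    "Training for the team and prayer support",
    "Prayer, prayer, prayer",
]

words = []
for text in responses:
    words += [w for w in re.findall(r"[a-z]+", text.lower())
              if w not in STOPWORDS]

top_terms = Counter(words).most_common(3)
print(top_terms)  # [('prayer', 5), ('more', 2), ('training', 2)]
```

A cloud tool then simply sizes each word in proportion to its count and lays the words out attractively.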

New editions of survey software – even budget packages like Survey Monkey – include cloud-creating tools to help users understand open-ended responses at a glance, without all the coding work.

Even those doing traditional coding enjoy working with clouds, which help analysts to quickly create an initial set of codes.

If your survey package doesn’t have cloud-generating capacity, no problem.  Wordle is a free site that lets you create art-like word clouds.  The clouds in the previous post were created using Wordle.  It’s a terrific, easy-to-use site that lets you paste in your text – our data came straight from a spreadsheet – and generate a word cloud with one click.  It automatically removes common words and allows you to choose the cloud’s shape, color scheme, font and word orientation.  We chose to illustrate the top 100 terms for each question.  Wordle lets you save and use your clouds however you want.

I really like the tool’s artistic quality.  Wordle clouds almost beg to be shown to others, and those who see them get motivated, too.  My daughter, upon first seeing Wordle, immediately had a vision for making a sign to promote a bake sale.  A few descriptive terms later, she had created a beautiful graphic to draw people’s attention.

This is where research moves from information to influence.  Imagine asking your constituents about their needs – or your organization’s impact – then printing a graphic of their responses to hang in your office as a reminder and motivator to staff.  Unlike a research report, which may or may not get read before being filed away (or worse!), word cloud art can keep research right in front of your team.  The graphic format makes the information more memorable as well.

Researchers, meanwhile, can compare and contrast different audience segments, as I did in the word cloud below.

What applications can you think of for word clouds?