What type are you: Outfitter? Orality Overcomer? Obi-Wan Kenobi?

 

Do you like personality tests?  Some people repeatedly retake the Myers-Briggs Type Indicator® assessment to see if they have changed their personality.

My kids, meanwhile, love online personality quizzes like “Which Star Wars Character Are You?”

Recently they found this “infographic” which combined the two concepts.  Here’s an excerpt:

My kids, checking up on their parents’ MBTI® types, dissolved in hysterics to learn that they were the product of a union between C-3PO’s personality and Yoda’s.

Such quizzes make not only for entertainment but also for interesting insights and discussions—with application for global mission.  I can envision a church-planting team having an extended discussion on whether they have the right mix of Star Wars/MBTI personalities to overcome the strongholds of evil in their quadrant—and using the results to inform recruitment of new team members.

However, another type of segmentation might prove more relevant—such as a quiz that lets you know your church-planting personality (more on that later).

With good data and the right analyst, your ministry can develop segments (donors, workers, prospects, churches, peoples) based on specific, relevant information that is most meaningful for your ministry.  Further, you can create classifying tools (quizzes) that your people can take to better understand themselves—or their ministry environment—informing Spirit-led decision making.

Most people are familiar with simple segmentation approaches that rely on one measure (such as birth year) that does a reasonably good job of dividing a large group into meaningful subgroups (such as Gen Xers and Millennials) that reflect a set of shared traits.

The MBTI rubric uses four dimensions of personality, each with two poles.  Tests determine on which side of each spectrum a person falls.  Voila!  Sixteen possible personality combinations emerge.
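For the mathematically inclined, here is a tiny sketch of how four two-pole dimensions combine into sixteen type codes.  The scores and the 0.5 cutoff are made-up simplifications for illustration, not actual MBTI scoring.

```python
from itertools import product

# The four dimensions, each with two poles.
DIMENSIONS = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

# 2 x 2 x 2 x 2 = 16 possible type codes.
all_types = ["".join(combo) for combo in product(*DIMENSIONS)]
print(len(all_types))  # 16

def type_code(scores):
    """Turn four 0-1 scores (one per dimension) into a type code.
    Below 0.5 picks the first pole; 0.5 and above picks the second."""
    return "".join(second if score >= 0.5 else first
                   for (first, second), score in zip(DIMENSIONS, scores))

print(type_code([0.2, 0.8, 0.3, 0.9]))  # "ENTP"
```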

Mission researchers like Patrick Johnstone and Todd Johnson have popularized geo-cultural “affinity blocs”—segments that reflect collections of people groups on the basis of shared social/religious/geographic/cultural traits.  It is much easier to remember and depict 15 affinity blocs than 12,000 people groups.

Recently, GMI has done value-based or activity-based segmentation analysis on several survey projects.  One is the subject of GMI’s featured missiographic for early November—giving an overview of five personalities of those investigating mission agency websites, based on their information needs.

One of those segments is Faith Matchers—those for whom theological alignment is of primary importance.  When Faith Matchers visit an agency website, they look first to see whether the agency’s beliefs align with theirs before considering strategy, location or service opportunities.

Last week we learned that one agency web designer had read the detailed website visitor profiles and related communications ideas in GMI’s Agency Web Review report and made a small adjustment to the website to make sure that Faith Matchers would be able to find the agency’s statement of faith with a single click from the home page—an easy change based on segmentation analysis.

Some of our other recent segmentation work included identifying:

  • Three mission agency CEO personalities (Outfitters, Entrepreneurs and Mobilizers) based on organizational-, staff- and personal-effectiveness priorities, as described in the Missio Nexus 2013 CEO Survey Report based on the responses of more than 150 agency leaders.
  • Three motivation-based segments (Where, Whether and Whatever) for those considering six-to-24-month mission internships, drawn from a quick survey of GMI’s panel of future missionaries.  One group is committed to long-term service and discerning where or with what agency it should serve.  One segment is discerning whether it is called/cut out for long-term mission service.  The largest segment is eager to serve now, with little or no thought given to post-internship service (whatever).  Following is a scatterplot of the 205 respondents.

 

  • Four Church Planter personalities (Word-based Advocates, Orality Overcomers, Trade-Language Strategists and Judicious Intercessors) based on how often they engaged in “fruitful practice” activities, from a survey of nearly 400 church planters working among resistant peoples.

For that last one, we developed a 10-question quiz that church-planting workers can take to discover the strengths and potential growth areas of their church-planting personality.  Sound interesting?  Write us for details on how to get a copy of the Church Planting Personality Profiler—it’s available to member agencies of a particular network.

In a follow-up post, we’ll discuss analysis approaches for creating segments and how scatterplots and the classification quizzes are developed.
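Until then, for the statistically curious: the reports above don’t spell out the machinery, but one common way to build activity-based segments is to cluster respondents on their activity-frequency answers.  Here is a minimal, hypothetical sketch using k-means from scikit-learn.  The data file, column names and choice of four clusters are illustrative assumptions, not GMI’s actual procedure.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one respondent; each "practice_" column is a how-often rating (e.g., 1-5).
responses = pd.read_csv("church_planter_survey.csv")  # hypothetical file
activity_cols = [c for c in responses.columns if c.startswith("practice_")]

# Standardize so no single item dominates the distance calculation.
X = StandardScaler().fit_transform(responses[activity_cols])

# Ask for four clusters, matching the four church-planter personalities above.
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
responses["segment"] = model.labels_

# Profile each segment by its average activity frequencies.
print(responses.groupby("segment")[activity_cols].mean().round(2))
```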

 

The pitfalls of self-assessment

 

This week, the eminently useful Brigada Today newsletter—in addition to drawing attention to GMI’s Agency Web Review—also posted an item from someone looking for a self-assessment survey for local church missions programs, recalling that ACMC used to offer one.

 

Responses were subsequently posted by Marti Wade (who does great work with Missions Catalyst), noting that the tool is still available via Pioneers, which assumed ACMC’s assets when that organization folded, and by David M of Propempo International, which also offers a version of the tool.  A snapshot of an excerpt from the ACMC/Pioneers version appears above.

Seeing the ACMC survey brought back a memory from a 2003 project that GMI did for ACMC.  We surveyed 189 ACMC member churches to understand the status of church mission programs as well as their needs and goals.  The survey included each of the twelve questions from the self-assessment grid.

Subsequently, we did statistical modeling to determine which church missions program elements were associated, and to what degree, with growth in missions sending and with missions budget as a proportion of overall church budget.
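For the curious, the simplest version of that kind of screen (correlating each self-rated program element with a growth measure) might look like the sketch below.  The data file and column names are hypothetical, and the actual modeling was more involved than one-variable correlations.

```python
import pandas as pd
from scipy.stats import pearsonr

churches = pd.read_csv("acmc_survey_2003.csv")  # hypothetical file
element_cols = [c for c in churches.columns if c.startswith("element_")]

# Correlate each self-rated program element with growth in missions sending.
for col in element_cols:
    r, p = pearsonr(churches[col], churches["sending_growth"])
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{col}: r = {r:+.2f}, p = {p:.3f} ({flag})")
```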

Unfortunately, most of the correlations were not statistically significant, and those that were significant were negatively correlated—meaning churches that rated their mission program highly (placing greater priority on the various dimensions) tended to demonstrate less growth in sending or lower relative financial commitment.

How could this be?

Turns out that this is a fairly common outcome of self-assessment exercises.  In short, teams with excellent performance also tend to have high standards—and their vision for growth frequently leads them to be more self-critical than lower-performing teams, which often have lower standards.

So, am I discouraging local churches from using the Mission Assessment Tool?  Not at all.  I encourage churches to download it and use it as the basis for discussion—it can be a great discussion starter for vision building, clarifying core values and identifying priorities for development.  For the reason described above, you may find that team members differ on where the program stands—or where the opportunities are for growth.

But when program evaluation is the goal, it helps to have outside eyes providing the perspective.  Those well equipped to offer feedback on a church’s mission program are:

1. Those served by or in partnership with the mission team, such as missionaries who may have other supporting churches (their responses must be kept anonymous) and/or

2. Outside consultants who work with many church mission programs and have a valid basis of comparison.

Meanwhile, at the 30,000-foot level, researchers, missiologists and consultants are eager to discover the key differences between high-performing church mission teams and others.  The statistical modeling sought to answer the question: What program elements are the most common outflows (or drivers) of increased financial/sending commitment: Better mission education?  Better worker training?  Greater emphasis on strategy?  More local mission involvement?  This is where self-assessment bias—seen across a sample of 189 churches—becomes a problem.

One helpful approach is to focus on relative data.  Were we to re-examine the analysis today, I would be inclined to transform the raw data into relative performance rankings (each church’s perception of its relative strengths and weaknesses).  This compensates for differing standards of excellence by looking at each church’s priorities.
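A rough sketch of that transformation, with hypothetical file and column names: rank each church’s element ratings within that church, so the analysis compares each congregation’s own strengths and weaknesses rather than its absolute scores.

```python
import pandas as pd

churches = pd.read_csv("acmc_survey_2003.csv")  # hypothetical file
element_cols = [c for c in churches.columns if c.startswith("element_")]

# rank(axis=1) ranks across each row: 1 = that church's lowest-rated element,
# 12 = its highest.  Ties share an average rank, preserving each church's ordering.
relative = churches[element_cols].rank(axis=1, method="average").add_suffix("_rank")

# These within-church rankings can replace the raw ratings in the modeling step.
churches = churches.join(relative)
```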

Self-evaluation bias can also be reduced by developing assessment tools with response scales/categories that are so observably objective that they cannot easily be fudged.  The ACMC tool uses descriptions for each commitment level that are intended to be objectively observable—but in some cases they are subject to interpretation, or to cases where a higher-level condition may be met while a lower-level condition is unfulfilled.  In the 2003 study we gave specific instructions to respondents that they should proceed through each scale a step at a time, stopping at the lowest unmet condition.  However, such an instruction may not have been enough to overcome the need of some respondents to affirm their church’s mission program with high marks.

This issue also points to the importance of testing assessment tools for validity among a pilot sample of respondents—with results compared to an objective measure of excellence.

Take care with self-assessment.  After all, scripture itself warns us in Jeremiah 17:9 that “the heart is more deceitful than all else and is desperately sick; who can understand it?”

 

Visual projects need visual research. Case Study: GMI logo.

Are you using stories visually?  If so – or if not – check out the Visual Story Network to discover the power of visual stories.

Which brings us to visual research.  When the output is visual, it helps if the input is, too.  Rather than using words to tell a designer what her work should look like, visual research shows the concepts and elements that can be easily adapted into visual communication.

There is a lot of psychological theory underlying representative aspects of visual design.  While it helps to know why something works, sometimes it is sufficient just to produce something that works.

Some forms of visual research are highly sophisticated; others are accessible and usable for almost anyone.  For an example of the latter, check out Visual Explorer.

Some simple, informal visual research – nearly a decade old – turned out to be influential in the design of the current GMI logo.  As you read the story, think about ways that you could apply visual research.

For many years, GMI used this logo:

To some, it said, “We aspire to be the IBM of the mission world.”  To me, it said, “The world as obscured by a Venetian blind.”

We sometimes paired it with the tagline “Helping the Church See.”  I guess the logo could represent our helping the Church to see God’s world by opening the Venetian blinds of ignorance.  But no one wants to be told he or she is ignorant.  And GMI’s technical skills go well beyond adjusting window treatments.  Why not take the blinds down completely, open the window and climb through?

In 2010, after at least a decade of talking about it, GMI finally took the initiative to rebrand.

There’s a tangential story, which I’ll mostly skip, about the debate over a potential name change to something other than Global Mapping International.  In the end, GMI opted to emphasize its well-known acronym, paired with the tagline: Strategic mission research and mapping.

We had done some simple research on the GMI brand back in 2003.  The first step was interviewing staff to get their input on the personality characteristics of GMI.  In words.  Some wanted to talk about what GMI did, but we fought to stay focused on who GMI was in terms of personality and values.  It would have been good to include some of our resource users and other stakeholders as well, but we were less interested at that point in external image and more interested in internal identity.

Eventually, eleven themes emerged, each mentioned frequently through the use of related words:

  • trustworthy
  • informed
  • supportive
  • innovative
  • stimulating
  • adaptable
  • engaged
  • pragmatic
  • accessible
  • courageous
  • compassionate

We revisited this list during the design process, using a simple online survey of the board and staff to prioritize the characteristics; the list above appears in that priority order.

But the input was still purely verbal.  The designer took the old logo and the personality elements, then went to work on a new concept, seeking to retain a connection to elements of the old logo.  Here was the first draft I saw:

I liked the way that a data/technology element was incorporated, but I felt that the image offered little warmth and that the font was bulky.  It communicated “trustworthy” perhaps, but definitely not “accessible,” “adaptable,” or “compassionate.”

I was one of many who were given an opportunity to offer feedback on the design.  In addition to the comment above, I mentioned a piece of visual research that we had done in early 2003 to follow up on the personality characteristics that we had identified.

Our visual exercise involved a few hundred logos, some from ministries and some from commercial organizations, printed and strewn across a conference room table.  Staff members were asked to select logos that captured each of the personality characteristics identified in the interviews.  I analyzed the selected logos both by characteristic and as a group, looking for patterns or trends that might help to capture the full set of personality traits.

Visual elements that popped up frequently included silhouettes, question marks and a particular combination of colors.

I wrote to the design team:

When we did our visual tests way back when, the colors most often paired with blue were gold and black.  Would like to see a treatment that incorporates those – perhaps using them in the map border and in a scattering of the data ovals.

Unfortunately, I was living overseas and did not have a copy of the research “report” with the visual input to show the designer what I was talking about.  The document was only three or four pages — a few paragraphs of analysis interspersed with selected images taped onto the paper.  It existed only in hard copy; I never got around to scanning it.

With input from many people, the designer had to choose which ideas to incorporate.  The second attempt seemed to be a step back:

One issue was the skewed rendering of the continents relative to the oval representing the globe.

In my response, besides noting that issue, I asked the design team to locate the 2003 visual research report and try at least one design using the blue, black and gold combination.  I also wrote:

I would also like to see us at least consider rendering the GMI in lowercase in the main logo.

This was also supported by the visual research.  I mentioned three reasons for trying lowercase:

  1. To help communicate Adaptable and Accessible – hopefully not at the expense of Trustworthy and Supportive.  After all, if Intel and AT&T can use lowercase text in their logos…;
  2. To reflect a sense of servanthood and credit-sharing that has been a hallmark of GMI’s work in partnership with others; and
  3. To create room to give a nod to Innovation and Stimulation by stylizing or coloring the dot in the “i.”

The next version revealed that the designer was listening:

Now I thought we were getting somewhere.  In his email delivering the artwork, the designer mentioned reviewing the visual research (and admitted that he chose to substitute gray for black — which makes sense for high-tech applications).

This concept received positive feedback from almost everyone, so the process moved to refinement.  Several options and variations were considered, along with various color combinations.  As it turned out, the winning logo was indeed a combination of blue, black and gold.

Why those colors?  The standard color psychology interpretation holds that blue provides trust and dependability, black reflects strength and authority (plus clarity on a white background), and gold implies wisdom, along with generosity of time and spirit.  Deep blue and gold are also complementary colors on the color wheel.

I also felt that the logo represents a simple story about what GMI does and why: GMI brings forth insights from information to support others in bringing the light of the Gospel to a dark world.

To go further, GMI produces and is engaged with data, organizing and interpreting it to draw out points of insight.  The data is drawn from and often describes the world.  The insight helps advance the work of revealing God’s light to the world.  Our work is supportive, so lowercase “gmi” is at the bottom.

I didn’t expect to get a logo that told a story, but I think the power of telling that story will be meaningful in communicating GMI’s work.

The logo may seem a bit busy, but then, so are we – so even its weakness fits.

Sometimes research projects have little impact on decision making for various reasons – changing conditions, political or financial considerations, a leader’s preference, or something else.  But in this instance, a simple visual research project spoke into the design process in meaningful ways.  (It was much later when I noticed that the continents in the world map appear in silhouette, another theme from the research.)

Think about ways that you may be able to gather image-based information for your visual projects.

Simple Survey Idea 4: Don’t give the answers away

Do you ever “give away” answers in your surveys?  I’m talking about subtle (and not-so-subtle) signals that can lead to bias.  Here are a few errors to avoid:

Pandering

Several weeks ago I refinanced my house using an online lender.  All ended well, but there were a few glitches along the way – a key email with documents attached was apparently lost and I had to prompt the company to follow up with the underwriter.

The day after closing I received the following survey invitation from the mortgage processor:

Subject: I so appreciate YOU! Please help if you can I am so close to being # 1 in the company having “GREATs”…

Thank you so much for being an amazing customer to work with. I greatly appreciate all your help to get your loan taken care of. I hope that you feel I have given you “GREAT” customer service. My managers would love to hear what you think about my performance as your processor. If you do not mind, please take 1 minute to fill out the 1 question survey to help me out. We are always looking for “GREATs.”

Apparently customer-service ratings at that company are used in compensating or rewarding mortgage officers.  That’s fine.  But the question it raises is: Why would the company – which cares enough about satisfaction to tie it to rewards – let the person being evaluated pander for good ratings in the survey invitation?

You may have seen a more subtle form of this:

Thanks for coming to the SuperDuper Missions Conference.  Weren’t the speakers and worship music great?  Plus, over 300 people responded to the challenge to give or go.  Hopefully you were as blessed as I was.

Say, I would love to get your feedback to help us make future conferences even better!  Here’s a link to the survey…

It can be hard to contain enthusiasm when asking for post-event feedback – especially if you sent out several enthusiastic pre-event emails.  But if you want honest input, commit to avoiding remarks that suggest how the event should be evaluated (or how you would evaluate the event).

It Must Be Important Because They’re Asking About It

Most people have encountered surveys with leading questions, designed to confirm and publicize high levels of support for a position on an issue.  Like this:

Are you in favor of programs that offer microloans to lift women in developing countries out of the cycle of poverty with dignity through sustainable small businesses, with local peer-accountability networks to ensure loan repayment?

Even if you have read articles about recent studies suggesting that the link between microfinance and poverty reduction is tenuous or non-existent, you might be hard-pressed to answer “no” to the question as worded.

But there are other, more subtle ways that organizations can “suggest” certain responses.  Telling people in the survey invitation that the survey is about microloans can encourage people to overstate their interest in that topic (as well as leading to response bias in which interested people are more likely to respond at all).  Better to say that the survey is about strategies for poverty reduction or (broader still) addressing key areas of human need in the developing world.

This lets you gauge interest in your issue by mixing it in with several related issues, like this:

From the following list, please select up to three programs that you have been involved in, or would consider becoming involved in:

__ Well-digging programs to help provide a consistent healthy water supply

__ Community health education programs to teach villagers basic hygiene

__ Microloan programs to help women grow sustainable small businesses

__ Literacy programs to help kids and adults gain life and career skills

__ Legal advocacy and awareness to stem human trafficking

__ Theological education programs to equip first-generation church leaders

__ Sponsorship programs to sustain the education and nurture of at-risk kids

The rest of the survey can be about microloans.  But before tipping your hand, you learn about interest in that issue relative to other issues — and even the correlation of interest among issues.  Plus, you can use survey logic to excuse non-interested people from follow-up questions that don’t apply to them.

You can go even further to mask your interest in the survey issue, even while asking several questions specific to that issue.  Before starting the battery of questions about microloans, include a statement like this:

“Next, one of the above topic areas will be selected for a series of follow-up questions.”

The statement is truthful and adheres to research ethics — it does not say that the topic will be randomly selected. But it leaves open the possibility that those who sponsored the survey may be interested in several types of programs, not just microloans, encouraging greater honesty in responses.

Unnecessary Survey Branding

However, these approaches still won’t work if the survey invitation is sent from someone at “Microcredit Charitable Enterprises” and the survey is emblazoned with the charity’s logo.  There are many good reasons to brand a survey to your constituents, starting with an improved response rate.  But sometimes, branding can be counterproductive.

If objective input is key, consider using an outside research provider in order to avoid tipping your hand, especially since research ethics require being transparent about who is collecting the data.

Allowing Everything to Be “Extremely Important”

Another way that researchers can “give away” answers is by letting people rate the importance of various items independently.  Take this question, for instance:

In selecting a child-sponsorship program, how important to you are the following items?  Please answer on a scale of 1 to 5, where 1 is “Not at All Important” and 5 is “Extremely Important”:

1    2    3    4    5   Sponsor’s ability to write to and visit the child

1    2    3    4    5   Receiving regular updates from the child

1    2    3    4    5   On-site monitoring of the child’s care/progress

1    2    3    4    5   Written policies regarding how children are selected

1    2    3    4    5   Annual reporting of how your money was used

All of those are important!  The question practically begs respondents to give each item a 5.  Will that information help the agency?  Maybe for external communication, but not in deciding which areas to promote or strengthen.

Instead, consider this alternative:

In selecting a child-sponsorship program, how would you prioritize the following items?  Distribute a total of 100 points across the five items.

Or

Please order the following five elements of a child-sponsorship program according to their relative importance, from 1 “most important” to 5 “least important.”  You can use each number only once.

In most cases, relative-value questions will produce much more useful data.

Are there other ways that you have seen surveys “give away” answers to respondents?   Or avoid doing so?  Let us know about your experiences and ideas.

Simple Survey Idea #3: Give Something Away

When you do a survey, you are asking people for their time and their opinions.  People are increasingly aware of the value of both.

With that in mind, it is a good practice – even among those who already know and trust you – to give something away in appreciation for their input.  Doing so will bless people and build goodwill.  It will also improve your response rate (and therefore the quality of your data).  And it will make them more inclined to participate in future surveys.

“But we don’t have the budget to give anything away,” I sometimes hear people say.  I say, “If you can’t find something to give away, you’re not trying very hard.”  You don’t need budget – there are lots of ways to give survey responders something for free.

The first thing you should give them is a short survey.  That may be a topic for another Simple Survey Ideas post, but it’s so important that it always warrants mentioning.

You can give people access to the survey results, a good idea if your responders are peers/stakeholders and you know they will be interested in what you are learning.  Depending on your survey software, it may cost you some time to format and email results out to those who responded.  But if you use an online package like Survey Monkey, you can set up options to automatically show the survey results to date upon completion of the survey.  That “Instant Results” feature is even available on the free Survey Monkey package.

Of course, the first few people who respond won’t get a very complete picture, so you might also want to send people a link to the full set of responses once you complete and close the survey.  This option is available in all of Survey Monkey’s paid subscription plans.

Quick aside: the advantages that you get with the online services’ paid plans (unlimited responses, survey logic, ability to download data, HTTPS security) make them well worth the cost (vs. the free plan) for almost any survey.  Even if you are just doing a one-off survey, you should still sign up for a month and then cancel the subscription when you’re done.  Your survey is worth the $24 investment.

Another useful offering is free-information-of-interest-to-respondents.  I use this with virtually every survey I do.  You can almost always find an article or ebook or presentation or video related to your survey topic or to a common interest of the survey audience.  Even if you don’t produce content, you can always find something free on the Internet to direct people to.

In this way, you can say in your survey invitation, “Everyone who completes the survey will receive a free ‘Top 10′ list of resources about _____.”  It doesn’t matter that the list is out there on the Internet for anyone to find – linking people to it is delivering value.  With Survey Monkey, the option to redirect survey finishers to the website of your choice only comes with annual plans.  So, you may need the workaround of embedding your own link on the last page of the survey, so responders can get to your resource.  At the risk of going beyond “simple,” try something like this:

Thanks for completing our survey.  Before clicking “Done,” click <a href="http://www.yoursite.org/" target="_blank">this link</a> to open a new window with the free resource we promised.

Be careful that the resource will be of interest to nearly everyone that you invite.  Giveaways that appeal only to a certain segment of your audience will lead to response bias.

Should you ask permission of the content provider in advance?  It’s a good idea but not required – groups that offer free content on the web typically want people to find that content.  You benefit them by linking to their site.  Groups that provide many free mission-related resources include the World Evangelical Alliance and the U.S. Center for World Mission.

A quick-response incentive promises resources to the first X number of responders.  This can be a good idea if you have a limited number of tangible resources to give away – and especially if you need responses quickly.

A related incentive is the sweepstakes prize offer, where respondents are randomly selected to receive a prize – usually something with significant value.  Many researchers use a combination of a free something-for-everyone resource with a high-value sweepstakes prize for a few randomly selected winners.

I like sweepstakes offers – they are fun and they work to generate response.  But you have to be responsible with them – some laws apply (see a quick overview here and know that this post does not constitute legal advice).  If you go this route, make sure that everyone who responds has an equal chance to win (even those who don’t meet the criteria for responding to your survey – nothing ruins a good survey like people lying to qualify for a prize).  Clearly communicate what and how many prizes you are giving away, who is eligible, how and how often people can enter, when the drawing will take place, how winners will be notified, the approximate likelihood of winning, and any geographic or residency limitations.

That sounds like a lot, but consider that the following covers all of that without sounding too much like the legal disclaimer lingo in a car dealer’s radio ad:

“You and up to 400 others who complete the survey by March 31 will qualify for a random prize drawing for one of 10 copies of the Operation World DVD.  One entry allowed per survey link.  In April GMI will inform the 10 winners by email – they will need a valid U.S. mailing address to receive their DVD.  Not valid where prohibited by law.”

How to manage a random drawing without hundreds of slips of paper and a huge hat?  Discover the RAND function in Excel.  Very handy – be sure to sort, save and print results for your records.
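If you would rather skip the spreadsheet, the same drawing takes only a few lines of Python.  The file name and prize counts below are placeholders; the logic mirrors the RAND-sort-and-save approach.

```python
import pandas as pd

respondents = pd.read_csv("survey_respondents.csv")  # hypothetical file

# sample(frac=1) shuffles the whole list without replacement; a fixed
# random_state keeps the drawing reproducible if it is ever questioned.
drawing = respondents.sample(frac=1, random_state=2013).reset_index(drop=True)

winners = drawing.head(10)
alternates = drawing.iloc[10:20]  # next in line if a prize goes unclaimed
winners.to_csv("prize_drawing_results.csv", index=False)  # save for your records
```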

Also make sure to give away everything you promise.  If some people don’t claim their prize by a given date, move on to the next people on your randomized list.

Prize giveaways are appealing to most, but it is not unusual for those in ministry circles to steer clear of them because of their similarity to games of chance and gambling.  Before launching a contest, be sure your organization’s leadership knows about it.  If you run into concerns, one alternative is to allow or encourage winners to donate their prize to charity.

Some survey sponsors use a charitable donation as the incentive itself, which carries real appeal for respondents.  One commercial firm I worked with leads off its surveys with a question like this:

In appreciation for your opinion, our firm will be distributing charitable donations totaling $1000.  From the following list, please select the charitable organization that you would like your portion of the donation to go toward:

__ Organization A

__ Organization B

etc.

If your group is a charitable organization, you can use a list of projects instead.  This works well if you can (truthfully) mention that an individual donor has put up the gift money to be distributed in this manner.

A final tip that applies to any gift or incentive that you offer: don’t position it as the primary reason to respond – especially in the subject line of an invitation email.  Not only do words like “prize” and “win” tend to trigger spam filters, but leading with the gift offer sends a message to invitees that you view the exercise as a transaction (or worse, that you think they are primarily motivated by greed).

Instead, keep the focus on the importance of the survey topic and the value of the person’s opinion – then mention the gift or prize.  As a survey sponsor, your identity should be that of a listener asking people for the favor of their input and offering them the opportunity for involvement – plus a gift as a token of your appreciation – rather than as a purchaser of people’s opinions.