I’ve heard a few times now – from some within (and some flirting on the edges of) our industry – that market researchers need to stop focusing on research methodology so much.

There are two main ‘reasons’ put forward here;

  1. First, that methodological considerations are so elementary that we don’t need to talk about them any more [they’re not, and we do]
  2. And second, that it’s simply passé to focus on methodology; if we do, the market research industry will be left behind [it’s not, and if that’s the reason the market research industry is going to be left behind – if, indeed, it is actually going to be left behind – I’ll eat my stripy hat].

The upshot of these discussions is the irritating and ill-defined – but always emphatic – argument that we, as an industry, need to innovate.

But sound methodologies and innovation are not mutually exclusive concepts.

Notwithstanding that research methodology, itself, can be innovative, we need not, should not, and must not sidestep careful methodological considerations for each and every market research project we undertake.

Careful methodological considerations – built into the research design, and used to frame the analysis – are a fundamental bedrock for useful innovation.


‘One big focus group’

I’ve heard this term thrown around a bit (Google it) to describe ‘naturally occurring’ conversations on the internet as a rich source of customer insight. It annoys me.

Listening to/gathering online content is absolutely nothing like a focus group.

Beyond the obvious (ie there’s not a great deal of focus in terms of sample, or being able to probe specific topics), and maybe surprisingly, it lacks some key aspects of spontaneity that can be generated in a focus group. Moreover, unlike a focus group, it doesn’t readily provide a good foundation for analysis.

Highly edited

In many cases that we, as market researchers, will be interested in, the content that ends up in the netnographer’s dragnet – eg a blog post, a comment on a community thread, a tweet – is the product of a process involving considerable cognitive effort on the part of the creator: word crafting a thought or response, choosing an image, shooting and editing a video, and so on.

And various factors will shape that effort: the intended or perceived audience, the perceived importance of that audience, the background and motivation for generating and posting the content in the first place, and so on.

Clearly, this is a highly controlled, highly edited process – a fact that seems at odds with the perception of unsolicited online content as somehow more authentic than the content a researcher can get via other (more direct) methods of inquiry.

But the key issue here is that there are very few clues to help the researcher understand what motivated any particular individual to broadcast to the online world.

The mother of issues

As a qualitative researcher, I’m deeply interested in understanding motivation. It’s the very cornerstone of my analysis; the context. But I can’t readily get that online. Without directly asking, in a timely and appropriate fashion (a whole other blog post), I’m not privy to the backroom.

Of course, not understanding the motivations/context behind the content is fine if you’re simply gathering and presenting content. But it’s the mother of issues if you want to provide your client with any substance.

We’re getting good at capturing data and making it look pretty. But is our industry paying enough attention to its analysis?


How exciting – The Age of Conversation 3 will be out in April!

In the meantime, a list of the authors;

Adam Joseph Priyanka Sachar Mark Earls
Cory Coley-Christakos Stefan Erschwendner Paul Hebert
Jeff De Cagna Thomas Clifford Phil Gerbyshak
Jon Burg Toby Bloomberg Shambhu Neil Vineberg
Joseph Jaffe Uwe Hook Steve Roesler
Michael E. Rubin anibal casso Steve Woodruff
Steve Sponder Becky Carroll Tim Tyler
Chris Wilson Beth Harte Tinu Abayomi-Paul
Dan Schawbel Carol Bodensteiner Trey Pennington
David Weinfeld Dan Sitter Vanessa DiMauro
Ed Brenegar David Zinger Brett T. T. Macfarlane
Efrain Mendicuti Deb Brown Brian Reich
Gaurav Mishra Dennis Deery C.B. Whittemore
Gordon Whitehead Heather Rast Cam Beck
Hajj E. Flemings Joan Endicott Cathryn Hrudicka
Jeroen Verkroost Karen D. Swim Christopher Morris
Joe Pulizzi Leah Otto Corentin Monot
Karalee Evans Leigh Durst David Berkowitz
Kevin Jessop Lesley Lambert Duane Brown
Peter Korchnak Mark Price Dustin Jacobsen
Piet Wulleman Mike Maddaloni Ernie Mosteller
Scott Townsend Nick Burcher Frank Stiefler
Steve Olenski Rich Nadworny John Rosen
Tim Jackson Suzanne Hull Len Kendall
Amber Naslund Wayne Buckhanan Mark McGuinness
Caroline Melberg Andy Drish Oleksandr Skorokhod
Claire Grinton Angela Maiers Paul Williams
Gary Cohen Armando Alves Sam Ismail
Gautam Ramdurai B.J. Smith Tamera Kremer
Eaon Pritchard Brendan Tripp Adelino de Almeida
Jacob Morgan Casey Hibbard Andy Hunter
Julian Cole Debra Helwig Anjali Ramachandran
Jye Smith Drew McLellan Craig Wilson
Karin Hermans Emily Reed David Petherick
Katie Harris Gavin Heaton Dennis Price
Mark Levy George Jenkins Doug Mitchell
Mark W. Schaefer Helge Tenno Douglas Hanna
Marshall Sponder James Stevens Ian Lurie
Ryan Hanser Jenny Meade Jeff Larche
Sacha Tueni and Katherine Maher David Svet Jessica Hagy
Simon Payn Joanne Austin-Olsen Mark Avnet
Stanley Johnson Marilyn Pratt Mark Hancock
Steve Kellogg Michelle Beckham-Corbin Michelle Chmielewski
Amy Mengel Veronique Rabuteau Peter Komendowski
Andrea Vascellari Timothy L Johnson Phil Osborne
Beth Wampler Amy Jussel Rick Liebling
Eric Brody Arun Rajagopal Dr Letitia Wright
Hugh de Winton David Koopmans Aki Spicer
Jeff Wallace Don Frederiksen Charles Sipe
Katie McIntyre James G Lindberg & Sandra Renshaw David Reich
Lynae Johnson Jasmin Tragas Deborah Chaddock Brown
Mike O’Toole Jeanne Dininni Iqbal Mohammed
Morriss M. Partee Katie Chatfield Jeff Cutler
Pete Jones Riku Vassinen Jeff Garrison
Kevin Dugan Tiphereth Gloria Mike Sansone
Lori Magno Valerie Simon Nettie Hartsock
Mark Goren Peter Salvitti

Fluffy techniques

Useful qualitative research output has two defining features; depth and clarity. And perhaps surprisingly, this is where what might be perceived to be ‘fluffy’ research techniques come into their own. They’re an efficient way to get depth and clarity.

An example, asking the same question in two different ways, will help to explain what I mean.

Q: Does Brand X have an image problem?

There are 3 possible answers to this question; ‘Yes’, ‘No’ or ‘I don’t know’. Some clarity, yes, but not much depth there. None of these answers is very helpful in terms of understanding anything about the possible image problem.

Let’s ask the same question, but in another way:

Q: If Brand X came to life as a person, what kind of person would they be? What kind of music would they listen to? How would they take their coffee? etc.

I can understand that to someone who hasn’t had much experience with qualitative research, the above line of questioning may seem somewhat fluffy.

Allow me to de-fluff it.

The output

As noted, the second question is just another version of the first. But the answers will be quite different.

Asking the question using this ‘fluffy’ approach delivers answers in 3D. And then following up, down and sideways on the answers with ‘Why? Why? Why?’ (not literally, in essence) helps to build a rich, relatively holistic picture of Brand X’s image. Critically, it also provides a context for interpretation.

The above technique is called personification. It’s just one example of a fluffy technique: there are many more in the qualitative toolbox. When used appropriately, fluffy techniques really deliver the goods. Research participants love them: they’re fun and engaging and something a bit different. Researchers love them because they enable deeper understanding of attitudes and perceptions, and thus greater insight.

Effectual fluff. Neat huh?

(This was a post I originally wrote for Marketing Mag)



Bang bang

19Jan10

Do you agree or disagree with the following statement…?

Data quality is critically compromised when double-barreled questions/statements are used in market research surveys and researchers who write them into their surveys should be shot.

;P


What has Botticelli’s The Birth of Venus got to do with market research? Well, think of a cropped detail. As nice as it is, it’s only the bigger picture that tells us the actual story.

This is at the heart of the next issue for discussion; the ‘new’ (?!) practice of listening online.

Of course ‘listening’ itself, as a method of research inquiry, is pretty obvious and hardly new. What is new* for market research, however, is;

  • the online location per se,
  • some of the online listening technologies, and
  • in some cases, the actual content generated online.

But new or not, you still need to know exactly who it is that you’re listening to. And you also need to think about the context.

For example…

If you’re undertaking a market research listening exercise for a client, let’s assume, quite reasonably, that your focus will be on listening to their customers or potential customers.

Is the Internet a good place to listen?

Well, of course it is! But there are some very important questions to ask before you begin;

Of all your client’s customers/potential customers, how many have access to the Internet?

And of all your client’s customers/potential customers who have access to the Internet, how many are confident enough in a) their opinions and b) their writing (or video editing) skills, to express those opinions publicly online?

And of all your client’s customers/potential customers who have access to the Internet, and who are confident enough in a) their opinions and b) their writing (or video editing) skills to express their opinions publicly online, how many bother?

And of all your client’s customers/potential customers who have access to the Internet, who are confident enough about a) their opinions and b) their writing (or video editing) skills to express their opinions publicly online, and who bother, how many express those opinions in an articulate way (ie in a way that a marketer or market researcher might find of use)?
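If you want to feel how those filters compound, here’s a back-of-envelope sketch in Python. Every rate below is invented purely for illustration – substitute whatever you believe about your client’s market:

    # Hypothetical funnel: what share of a client's customers could you
    # plausibly 'listen' to online? Every rate here is made up.
    rates = [
        ("have internet access", 0.80),
        ("are confident enough to post opinions publicly", 0.30),
        ("actually bother to post", 0.20),
        ("post in a usably articulate way", 0.50),
    ]

    remaining = 1.0
    for stage, rate in rates:
        remaining *= rate
        print(f"...who {stage}: {remaining:.1%} of all customers remain")

With those (made-up) numbers, you’re down to 2.4% of the customer base before you’ve read a single post.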

The skews are breaking my brain.

Defining your sample in terms of exactly who it is that you want to get feedback from is absolutely key in terms of determining where and how you should collect your data.

Notably, if your client’s target market comprises a wider group of people than the customers or potential customers who fit the very narrow profile described above, then – and critically – collecting useful data to generate useful output will mean going well beyond the insights you glean from your ‘new’ online listening endeavours.

 

*relatively speaking and/or as the hype would suggest


 

 

There’s been a lot of talk about engaging research participants in this ‘new’ research paradigm.

I’m focusing on qualitative market research here because firstly, that’s my thing…

: )

…and secondly, because I hear that ‘engagement’, within the context of market research online communities, is community-nirvana. The ‘best’ communities are engaged communities.

This strikes me as, paradoxically, both obvious and alarming.

There’s a very fine line between engaging research participants enough to… well… participate in our market research, and over-engaging them.

Without due diligence, research effects (pick one of many) are likely to confound the research output in unintended, unexpected and underestimated ways.

I don’t, for one second, suggest that other research methodologies are free of research effects – they aren’t! – but surely this doesn’t automatically generate a license for ‘new’ market research to ignore them.

And while I don’t think you can necessarily control for engagement, some questions;

  • To what degree should you try to ‘create’ it?
  • How much is too much? When does it start to mess with what you’re looking for from your research?
  • How do you disentangle it from your analysis?

New world

I’ve just come back from a fantastic trip to Chicago where I attended the ESOMAR Online Research 2009 conference. You can find Jeffrey Henning’s brilliant recap here (he pretty much live-blogged it – very impressive!).

Anyway, not surprisingly, there was lots of talk of ‘new’.

And it would have been easy to come away with the message that the market research industry really needs to get with the ‘new’ programme or, quite simply, it will wither away and die.

Because the new world of research is here! New methodologies. New technologies. New ways of engaging with respondents (Gasp! Did I say that? I meant ‘participants’, ‘co-creators’ or ‘collaborators’).

All well and good, but what does this actually mean?

New kinds of output?

Let’s all take a moment to think. What, as market researchers, is our purpose?

Market research is about understanding the market. At a very basic level, the end goal is to deliver information that will help our clients make relatively informed/better decisions about how to sell their products or services.

And here’s my point; the output (ie what our clients are paying for) is only ever going to be as ‘new’ as the questions they/we ask*.

Over the next few blog posts I’m going to take a look at some of the elements of ‘new’ I outlined above. Examine them closely. Explore what they mean for researchers at a practical level. Separate, if you like, the hype and theory from the actual task of delivering useful output.

Should be interesting…

: P

*If you’re in the ‘listening’ camp, ie “Oh no, no! We don’t ask questions, we just listen to the conversation!”, I’ll argue that you’re still (implicitly) asking questions when you choose to/not to include any particular content in your analysis.


Traditional research

The term ‘traditional research’ is often used to describe offline methodologies; mostly, I’ve observed, in the context of selling what I assume must be ‘non-traditional’ (??) online methodologies.

In this context, the word ‘traditional’ conjures imagery of antiquated, moth-eaten and fusty research practices.

And clearly, many offline methodologies are anything but. Not to mention that many research solutions require a hybrid of both online and offline approaches.

But if we have to throw labels around, it’d probably be more accurate to substitute the word ‘traditional’ with ‘established’ (as Paul Vittles from TNS usefully suggested during question time at an AMSRS breakfast seminar I attended last week).

So does this make ‘non-traditional’ research ‘unestablished’ research?

: P

Wow. Doesn’t that put a whole new spin on it!?


#stuffiwonder

Great post over at Ray Poynter’s (always) inspiring blog.

The way Ray described a short-term community, I think, delineates a clever, and potentially efficient, research methodology.

But I’m yet to be convinced of the long-term research community concept (although I’ve no doubt some readers are probably pulling their hair out over my apparent inability/reluctance to see the light/‘get it’).

: P

I’m assuming that in a ‘community for market research’ (vs a ‘community for customer relations/brand building’) context, a client will actually have some questions they want to ask and/or issues they’d like to explore.

If they don’t – if it’s a case of just wanting to pick up on issues entirely generated by the research community itself – then I’m guessing that they’d probably get better value by undertaking some basic social media monitoring … (god forbid).

Anyway, assuming we have a community that’s being used for market research, and there are some questions to throw into that research (in whatever shape or form), here are my questions;

1) At what point in time, along the short to long-term spectrum, does the nature of the output you get from a research community change? And probably more importantly, in what way does it change?

(Or is it different from day one because of how the participants are briefed about the purpose of the community and their role in it?)

2) Would one analyse the data coming out of long-term vs short-term research communities differently? This, I suppose, brings us to the issue of the research community objectives. From the sales pitch, I sometimes get the impression that the sole objective of the exercise is to get the research community members to bond; never mind the insight, they’re bonding!

: P

Kidding.

But really, what are the issues, the benefits and drawbacks, of community members ‘developing a sense of community’ (and by that, I assume we mean ‘belonging’), particularly over a longer time frame?

Does it make participants more honest, or more willing to share? Maybe. But (it could easily be argued), maybe not. And what impact does, for example, group/clique think etc have on the output? How would you identify/measure the impact in such a wide (uncontrolled? quasi-controlled?) landscape?

(And to take it to the extreme, if one’s aiming for ‘uncontrolled’, then back to the point above; isn’t the ‘community’ just a very limited method for undertaking social media monitoring?)

3) If developing a sense of community/belonging is one of the key operating principles for long term research communities, then how do new recruits (or exiting members for that matter) impact the existing community dynamic and thus affect the output? How about changes in community moderators/managers?

4) Are there any studies looking at the differences, in terms of valuable/usable output, between short-term and long-term research communities?

Are there any actual, or even theoretical, definitions available of the ROI (tangible or even intangible) that one might expect from a short-term compared to a long-term research community?

(I’ll probably come back to this with more questions when I’ve had time to think more).


Not everything's black & white

Coincidentally, after my last post, this came up in my Google Reader today;

More on the Problems with Opt-in Internet Surveys

Here’s the first article;

Study Finds Trouble for Opt-in Internet Surveys

I had the great privilege of attending an AMSRS Professional Development session earlier this year to hear Jon Krosnick speak. He was brilliant.

And so it’s with great interest that I follow this very timely and fascinating debate (hosted on Gary Langer’s excellent blog).

Make sure you follow the links to get the full story/debate. It’s an important one.

P.S And here’s a link to the study itself.


Zebras eating dinner

So! Continuing with the #stuffiwonder theme…

The telephone vs online survey debate.

The one that goes;

“Really, given that everyone’s moving from landline to mobile/cell these days, telephone survey sample representativeness is seriously compromised.”

More often than not (and, of course, depending on who’s doing the debating), it ends with a nod to online panel surveys. In this context, “…they’re probably just as good as – if not better than – telephone surveys”.

Right?

Well, I don’t know.

Panels are opt in. And yes, the same can (and should) be argued about telephone interviews. You most definitely need research participants to opt in beyond a “Bugger off, I’m eating my dinner” response.

But what differences might we see, in terms of motivation and the research output, between a sample comprising individuals who;

  • Have been approached randomly (and I get that it’s not really random; the population will be limited to those with landlines), vs
  • Have signed up to be part of one (or several) market research panels and/or to get paid for their opinions?
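I can’t answer that from the armchair, but a toy simulation in Python shows the mechanism that worries me. The ‘opinion strength drives panel sign-up’ assumption, and all the numbers, are entirely mine:

    # Toy simulation of self-selection bias (all assumptions invented):
    # suppose people with stronger opinions are more likely to join an
    # opt-in panel, and we want to estimate average opinion strength.
    import random
    from statistics import mean

    random.seed(1)

    # Population: 'opinion strength' scores, roughly 0-10.
    population = [random.gauss(5, 2) for _ in range(100_000)]

    # Probability sample: everyone equally likely to be selected.
    random_sample = random.sample(population, 1_000)

    # Opt-in sample: the chance of joining rises with opinion strength.
    opt_in_sample = [p for p in population if random.random() < p / 20][:1_000]

    print(f"Population mean:    {mean(population):.2f}")     # ~5.0
    print(f"Random sample mean: {mean(random_sample):.2f}")  # ~5.0
    print(f"Opt-in sample mean: {mean(opt_in_sample):.2f}")  # noticeably higher (~5.8)

Neither sample is ‘wrong’, as such; they’re just samples of different people.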

Who *are* these people?!

How do market research online community providers populate their clients’ communities?

(When I say “market research” communities, that’s exactly what I mean; a community used as a market research tool. I’m not talking about online communities that are used in a marketing/customer relations exercise.

I’m not quite sure that the difference is apparent to all, but they’re not the same; not by a long shot.

In one, you’re giving the community members love because you want to make them happy. In the other, the relationship is somewhat more pragmatic; you want to learn from them. Notably, if you’re giving them love to make them happy, you’re not necessarily going to learn much, because they’ll be all nice and lovely back).

Anyway, focusing specifically on market research communities; what checks are in place to ensure that the people who end up in the community represent the people the client actually wants to hear from (ie the population of interest)?

To borrow from the delightful John Lacey, I’m filing this one under #stuffiwonder.


Questioning the questions

Here’s another excerpt from my Marketing Magazine series on Qualitative Research…

Within a qualitative research context, there’s no right way to ask a question per se. There are actually many right ways to ask a question. And there are also many wrong ways to ask a question.

The wrong ways

You may have heard about some of the following heinous qualitative research crimes:

  • Asking leading questions
  • Asking closed-ended questions
  • Asking vague questions

Why are these ‘wrong’?

Because leading questions ‘lead’ people to a particular answer, closed-ended questions can end the discussion prematurely, and vague questions elicit vague answers that have little grounding for interpretation.

Well, theoretically. But all is not what it seems. An experienced moderator might use any of these types of questions purposefully, and with excellent effect:

  • A leading question often works well to test a hypothesis, or as stimulus in itself, to get the conversation going
  • A closed-ended, or vague, question can provide a foundation to open the discussion in interesting and new ways

They’re all part of the qualitative researcher’s toolkit and, used in a timely and purposeful way, can add tremendous depth to the discussion.


Can you quantify it?

Here’s a post I wrote for Marketing Magazine about the ROI on qualitative research;

The value of market research – whether we’re talking about qualitative or quantitative research – is difficult, if not impossible, to quantify.

That’s because market research rarely, if ever, works alone in shaping strategy. It’s just one of many tools in a marketer’s tool bag.

In addition to this, market research is only ever:

  • As good as the research brief and the questions it asks
  • As good as the analysis and the debrief
  • As useful as its end users make it; it’s what they do with the output that can determine success or otherwise

Given the variables listed above (so called because they vary), it’s pretty much impossible to put a figure on its value per se.

Relevance

Let’s look at it in another way.

If a particular product or service or piece of communication is relevant, it’s far more likely to end up in the shopping basket (so to speak). So the absolutely fundamental, most basic question for marketers should be:

“How can we make our products/services/communications more relevant to our customers/potential customers?”

And there are two ways marketers can go about answering this question:

  1. They can ask their customers/potential customers
  2. They can guess

Ask them

If marketers ask their customers/potential customers (and listen to them), they’ll be in an excellent position to create relevant products, services, communications etc.

The value of qualitative research here is obvious; it’s a very good way of asking your customers/potential customers what’s relevant to them – and of listening to the answers.

By being relevant, you’re optimising the chance of collecting the sale. Therein lies the return on your investment.

Guess work

If marketers decide not to ask and, in effect, guess what the market wants, they run the risk of getting it wrong.

Consider the time, resources and money wasted when bad guesswork delivers a dud. Go one step further; cost it out. And add the opportunity cost.
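If it helps to make that concrete, here’s a minimal sketch with invented placeholder figures (plug in your own):

    # Toy costing of a dud guess; every figure below is a made-up placeholder.
    development_cost = 250_000   # building and launching the dud
    marketing_spend  = 400_000   # promoting it
    opportunity_cost = 300_000   # margin forgone on what you didn't do instead

    cost_of_guessing_wrong = development_cost + marketing_spend + opportunity_cost
    research_cost = 40_000       # a decent qualitative programme

    print(f"Cost of the dud: ${cost_of_guessing_wrong:,}")  # $950,000
    print(f"Research cost:   ${research_cost:,}")           # $40,000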

When you have that figure, my work here is done. Because that figure gives you a very good estimate of the ROI for good qualitative research.

Nothing to sneeze at, is it?


Internet Inquiry

Anyone working/playing in, or pondering, the online research space would benefit from reading this excellent book.

To date it is, by far, the most erudite, interesting and useful book about online qualitative research methods I’ve come across.

As the editors note; “It’s not a ‘how-to’ guide. It is, rather, an exploration and explanation of vantage points, a project meant to stimulate thinking”.

Which it really does.

And you can never have too much of that.


The Zebra side of the fence

Another question from the discussion.

Q3: Is consumer attitude shifting in regard to research participation?

A: Chinese whispers from quant-land suggest that long, boring surveys are failing to engage respondents. This isn’t new, but there’s more research on research now, and thus we’re hearing more about it.

I’ve no doubt that poorly designed surveys would spark a shift in terms of how many people are willing to give their time to participate in research, and the price they set for their contribution.

Things are a bit different on my side of the research fence; ie qualitative.

In this context, buzzwords such as ‘collaboration’ and ‘co-creation’ are being thrown around as the new terms of research participant engagement; these concepts underpin the fundamental premise of online market research communities.

But actually, collaboration and co-creation aren’t new concepts in qualitative research. In fact they constitute the very essence of most qualitative research – on or offline.

It’s always been a dialogue between researcher and respondent/participant. We ask questions and they collaborate in answering. Or they ask questions and we listen and ask questions about the questions…etc. We show them stimulus and they co-create a vision.

Beautiful.


Four to start

11Jul09

Myths

Following on from my last post, here’s the second question and answer…

Q2. Are there any lingering misconceptions that marketers attach to market research?

A: Four misconceptions du jour;

  1. That online research is quicker and cheaper. It usually isn’t if it’s done well. The technology doesn’t provide the value. It’s the analysis that provides the value. And that’s what takes thinking and time (note, I can only really speak for qualitative here).
  2. That numbers (quantitative research) are more important than the sentiment around those numbers (qualitative research). They’re both important.
  3. That market research is a cost. Good research is an investment in managing risk.
  4. That Maslow provides a good framework for interpreting research results. It doesn’t.

The other end (of the Zebra)

06Jul09


I recently answered a few questions about market research for Marketing Magazine. I must say, it was nice to be at the other end of the questions!

I think the article only appeared in the print edition (June, 2009), so over the next few days I’m going to post the nutshell version of the questions (4 altogether) and my answers.

Please feel free to add your own thoughts!

Q1: How do marketers get the best value out of market research when their budgets are being diminished?

A: Marketers will get the best value out of market research by making sure they do 3 things;

  1. Ask good questions (ie have clear and well defined objectives)
  2. Ask the right people the good questions (ie employ a useful sample)
  3. Ask the right people the good questions in the right forum (ie choose the optimal methodology)

Sounds simple, but this is where the clever thinking is. If you get these 3 right, you’ll get useful data, efficiently.


A fine weave

10Jun09


I’ve always been a big fan of desk research and in particular, the literature review.

Even in the days when it meant searching the CD-Rom database at the UNSW library (anyone remember Psych-Lit?) to find the ‘literature’.

I’d scribble down the references on a scrap of paper and go, on foot, to hunt for the hard copies. These would invariably be housed up on the fifth floor, down the very end, where the heaters didn’t work in winter. But it didn’t diminish the thrill of the ‘find’.

I danced with (some might say nerd-like) delight when it became possible to access the library catalogue via the internet (albeit in the early days, you could still only get information on the item’s location, not the actual item – but it was still exciting!).

Things have changed considerably since then.

From the way we source information, to the type of information we end up with. All changed.

But importantly, it’s not just the content that’s changed, it’s probably also the quality.

Why? Because some information is (a lot!) easier to get than other information.

I sometimes wonder whether we’re too quick to stop at the easy-to-get-to stuff.

Do we know when and how to dig further for better information?

Are we teaching the next generation of researchers how to do this?

A skill worth honing

Being able to weave a literature review together is a skill worth learning and/or a skill worth honing, particularly for researchers.

Beyond the likelihood of increasing the quality of information one ends up with, the process can be enormously enlightening.

It provides a feel for the breadth, and often surprising depth, of knowledge around any particular subject; inspiring and humbling at the same time.

It provides exposure to confronting yet compelling views, often contrary to one’s own.

It’s a way of learning what wheat looks like vs chaff.

More context. Better information. Better research.


Measurement

Are you measuring what you think you’re measuring?


Black and white

27May09


Busy times at Zebra, so just a snack-sized bite…

Question: What’s the difference between good market research and bad market research?

Answer: Good market research makes you money; bad market research costs you money.

😛


Changing the questions

I think a key part of our role, as professional market researchers, is to advise and steer our clients on, and towards, the research methodologies that will effectively and efficiently answer their research questions.

“Yeah, and…?” I hear you mutter. “You’re stating the bleeding obvious”.

If it is the bleeding obvious, then I’m confused.

Because while I busy myself with answering that brief, the passionate embrace of all things 2.0 (for want of a better descriptor) by some researchers – along with the often alarming and dire warnings for the future of Research 1.0 – suggests that marketers must have suddenly changed their questions.

Have they?

In some cases, yes. The world itself has changed/is changing, and the marketing context is changing too. But it’s not changing entirely, and importantly, it’s not always changing in parallel.

A Research 2.0 solution suggests that marketers’ questions have changed as quickly and as radically as online technologies and forums themselves.

But marketers still want to know how to sell stuff. Does Research 2.0 output help them do that? Does the information gleaned from the ‘new’ listening posts answer their fundamental market research questions?

Or are those at the helm of the Research 2.0 front actually changing the brief itself? Shaping the research questions to fit the new technologies?

Is it research?

Perhaps what we’re defining as Research 2.0 isn’t, in fact, market research at all. Maybe it sits outside the realm of market research; more in the customer relations/customer service domain.

Without doubt, the online environment provides marketers with invaluable feedback – but of a very specific kind. Quite clearly, because of its skews and its tip-of-the-tail-ness, it’s not the kind that’s of much use in making big-marketing-budget decisions.

Maybe, as market researchers, we are in pole position to harness and distill that specific feedback for what it may be worth. We can certainly lend our experience and caution to the analysis.

But are Research 2.0 methodologies really the silver bullet they’re being sold as? Is it a marriage of the right questions, with the right sample, in the right context?

Or is it a shotgun wedding?


Not so fast; andante dolce

28Apr09



My qualitative toolbox has grown with current online technologies. And the possibilities promised by evolving technologies are endless.

It’s intriguing to contemplate how these changes influence not only the way I might do research, but no doubt, the way I think about and frame research issues.

Never a dull moment, that’s for sure.

But exciting as it is, it’s always a good idea to stop and think. And here’s something to think about…

Lunch

You catch up with a friend (in real life) for lunch. You talk about this, that, and the other.

While between catch ups you talk on the phone, email each other often, are Facebooked, and have been members of the same online community for over 5 years, sitting down, face to face, puts a whole different angle on your understanding of their life. It provides a completely different context.

Watching their face light up when you ask after their kids, seeing the micro-frown when you discuss topic X, and observing their extended search through their handbag to avoid discussing topic Y; these are all things you could never pick up online.

The conversation takes paths that your online conversation couldn’t have travelled (and this works the other way too, but stay with me here…).

Take this into a research context, and you realise that while you can get some (often surprisingly) deep and passionate reads on emotion through written words, images etc via online methods, there are times when you’re just going to need more.

Springboards

The nuances of body language provide the researcher with cues and real life stimulus. A pause, a frown, half a giggle; all invaluable springboards to discussion.

And this dynamic is something that, quite patently, offline research can provide over and above an online dialogue.

I stress, this is only important if it’s important; it depends solely on the task at hand.

But for the most part, I like a side of offline context to go with my analysis.

And because of the above, I don’t think qualitative market research methods will change quite as fast as one might be led to desire or believe.

At least not the ones that provide the relatively comprehensive insights I need in order to do my job well.


The dividing line; the equator

No doubt market research is evolving. As it does. And as it should.

And a natural and obvious part of that evolution seems to be the move to online qualitative research methodologies; eg qualitative content analysis, bulletin board focus groups, online communities etc.

But is online research necessarily the best or only way forward? Is traditional (offline) research on its way to the research graveyard?

I don’t think so.

For the record, I like online qualitative research (speaking for the most part, with bulletin board focus groups in mind here).

It’s fun to do (albeit time and labour intensive). Most research respondents who participate seem to enjoy it. And importantly, the output, for the task at hand, has been pointed and relevant.

In some cases an online qualitative approach is the optimal methodology. When you need to reach otherwise impossible-to-reach individuals, generate interaction between typically un-collaborative individuals, etc, it’s worth its weight in gold.

But let’s not put the cart before the horse. Let’s take a reality check.

Online for everyone?

Not all people are like us.

We (you and I and other readers of this blog) are not particularly representative of any given market. We’re a highly skewed group in terms of our attitudes, our communication skills and our love (?!) of things collaborative and co-creative.

Believe me. Not all people are like us.

I’m reminded of this every time I do (offline) group discussions or depth interviews.

I’m reminded of this every time I talk to my relatively less-online-focussed clients or friends.

I’m reminded of this every time I walk down the street and see people talking, thinking or engaging in any one of the many offline activities that make up the bulk of their lives.

Nielsen (I’m guessing inadvertently) helps me to make my point. According to a recent study, Australians* are spending a whopping 16.1 hours a week online. Up from 13.7 hours in 2007.

But one week = 168 hours.

That means they spend 151.9 hours offline. *And that figure was based on a sample of internet users.
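Spelled out, with one extra way of looking at the same numbers:

    hours_per_week = 24 * 7    # 168
    online = 16.1              # Nielsen's figure, cited above
    print(f"Hours offline:            {hours_per_week - online:.1f}")  # 151.9
    print(f"Share of the week online: {online / hours_per_week:.1%}")  # 9.6%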

Which suggests that we’re not quite at the point where online conversations are a part of everyone’s everyday life. Not by a long shot.

And that brings me back to the important issue of sample. Are we willing to accept the (well documented) skews that come part and parcel with online samples? While in specific cases an online skew won’t be an issue, more often than not, it will.

So, as much as we might like to, we simply can’t take all our research online (yet). The vast majority of the people we want to understand just don’t hang out there.

(Part two coming soon).


An uncomfortable ride

I’ve been quiet. I’ve been thinking.

A click around the online market research community (oh the imminent irony!) tells me that, apparently, market research is – or should be – changing. And I mean really changing; as in beyond all recognition.

In a nutshell, and broadly generalising, here’s the gist;

1. Traditional (offline) research is becoming irrelevant

2. We need to find new ways to engage with consumers, respondents, research participants

3. In the age of collaboration and co-creation, market research online communities are the way forward

Hmmm…

I started thinking about this here and here.

Over the next few posts, I’m going to look at points 1-3 in a bit more depth. Stay tuned for the possibly uncomfortable ride…


Red herring

So after a slight diversion, it’s back to finish off my thoughts on advertising concept research.

I’ve talked about the importance of capturing un-considered responses and this is my main issue with researching advertising concepts online. There’s simply too much time for respondents to think.

Between the time you show them the advertising, and the time it takes to read and process the question, they’re going to have time to think. And then they’ll have (unlimited) time to craft their responses. And edit them. And polish them.

In effect, responses are likely to be sanitised beyond usefulness. They won’t give you any sense of whether, and/or how, the advertising is actually working or not.

Tools

There’s a whole raft of tools available for researchers that purport to enable online advertising concept research. These tools allow ‘participants’ to tag or mark up the stimulus, eg proposed copy, website etc.

I can see why this idea has appeal; within the co-creation paradigm, it scores a 9/10, right?

Yes, it does. A wonderful tool to use if you want to involve your geographically-dispersed design team. But you’ll struggle to get anything useful from your non-design-schooled research participants.

Why? Because the focus will be on the stimulus, not the concept.

The ‘design’ tweaks respondents make will be just that; design tweaks. And are they really going to do a better job than your design team?

Without an opportunity to explore respondents’ reasons for their tags and mark ups, red herrings are all you’ll be eating for dinner.


Half a cat

28Mar09


If a company offers products and services that have personal relevance, well, the cat’s half in the bag.

If they can work out how to tell me about those relevant products and services, in a relevant way, at a relevant time, then they’ve got themselves the full cat deal.

But what is relevant?

Hooray for qualitative research!



Maybe I should stick to knitting, but despite my obvious leaning towards qualitative (vs quantitative) research, I feel compelled to write about a quantitative research issue that isn’t quite getting the consideration it should; sample representativeness. This, I might add, seems to be a problem particularly – although by no means exclusively – for quantitative research conducted in the online environment.

Sample representativeness

When we (when I say ‘we’, I don’t mean you or me of course, I mean them, but let’s go with ‘we’) undertake a quantitative market research study, it’s rare that we have either the time, or a budget, that would allow us to talk to each and every customer/potential customer of interest.

Instead, we choose a selection of those customers/potential customers to represent the greater population of interest to us. In research-speak, this selection is called the sample.

In a quantitative research context, the way you choose your sample, and the structure of that sample, is everything. These two factors will pretty much define the extent to which you can extrapolate your research findings to the population of interest. In non-research speak, that means the extent to which you can have any confidence in the research results.
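For a flavour of what ‘confidence’ means here, below is the standard margin-of-error calculation for a proportion from a simple random sample. Note the caveat in the comments: it applies only to a genuine probability sample; it cannot rescue a biased (eg self-selected) sample, however large:

    # 95% margin of error for an observed proportion p at sample size n.
    # Valid only for a simple random sample drawn from the population of
    # interest; sample size alone says nothing about representativeness.
    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        return z * math.sqrt(p * (1 - p) / n)

    for n in (100, 400, 1_000):
        print(f"n={n:5d}: +/- {margin_of_error(0.5, n):.1%}")  # 9.8%, 4.9%, 3.1%

Which is precisely why the next point matters so much.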

Making sure that the sample you want to use to generalise to the greater population of interest is representative of that greater population is;

1. Of vital importance

2. Not always easy

3. Of vital importance

Number 3 isn’t a typo. Issues arising from number 2 often mean that the sample may be seriously compromised. I put number 3 there as a reminder.


Hands up

15Mar09


Raise your hand if you can think of some brands that need to work on customer engagement.

And/or raise your hand if you can think of some brands that should be (more) transparent.

And/or lastly, raise your hand if you can think of some brands who should join the ‘conversation’.

If you’ve got your hand up, maybe you can answer this;

What, exactly, do you mean?

Go on. Define ‘customer engagement’, ‘transparency’ and the ‘conversation’.

No wait. I mean in a useful way; a way that can be operationalised and measured.