So, in the follow-up to the last post, it's time to look at using qual v quant. Much later on, we'll get into how to physically go about each one, how to brief, who to brief, methodology and the like. For now, here are some rough and ready guidelines.
Qualitative pre-testing is good for:
- Taking people through unfinished stimulus
- Being able to match the mood of real TV and press consumption
- You can probe more emotional responses
- You can explore 'reasons' why in more detail
- You can dissect elements better - what's working, what's not...AND MOVE ON TO WHAT WOULD IMPROVE IT THEN AND THERE. Mood, characters, music, setting, copy style etc.
I think it's much better at enabling you to develop work and avoid 'killing' it - purely because the methodology allows an ongoing dialogue. I was testing some work for a bed retailer, which was all about showing empathy. The executions were all about understanding how important it is to sleep well - some about losing your edge, some about how on form you can be. But they bombed. That would have been it, but we asked people why this stuff wasn't relevant. And they told us that the right bed matters for the bits when you're awake. That perfect 'me moment' of reading in bed, a cup of tea, or whatever. And the work that developed from that ended up on TV and delivered a 20% sales uplift straight away.
BUT if your primary objective is to prove that stuff will work, there is safety in the numbers that quant provides...and you can measure impact a bit better too.
When you brief it in to the research agency you should:
- Allow time for recruitment, fieldwork and analysis. I cocked up big time on my first little qual project when I just assumed the recruiters could sort out respondents and a venue in a week. They couldn't, and I made the mistake of settling for an imperfect sample. No surprise that the learnings were totally confusing, and, I hate to say it, the wrong work ran, and stiffed. Lesson learned.
- Choose the right researcher...researching work means you need the type of person who knows how communications WORK. And try to do some groups yourself if you can. When paper-testing an e-commerce site, I made the mistake of using a researcher who, I found out later, had never really done much work in new media before. I got loads of great feedback on branding ideas, but very little on how the site's navigation should work. Thankfully I had James Boardwell to develop the site afterwards, who's forgotten more about web stuff than I'll ever know.
- Ensure the sample matches the audience in the creative brief (see point 1!)
- Bloody brief them properly! Make sure they have a copy of the creative brief, and that they really understand the task, who the audience is and how you think the work is supposed to work. I was away, and the account team asked a researcher to test the viability of some brand identities. They forgot to tell him the identities were supposed to convey intelligent simplicity, so we just got back a list of what people liked and what they didn't. We had to do it all over again when I got back.
- Work hard at the stimulus. Ask the researcher what they think will work best - but be firm about what your own preference is. Make sure all the work is presented at the same level of 'finish'. More worked-up ideas invariably win over rougher scamps, and you don't want to kill some work because it didn't have a shot in when the others did. And make sure the researcher gets the work early enough to prepare. We sometimes do internal groups here - just a few people who are not creatives and don't work on the specific brand. It helps get a feel for the real research. Even then, people kept focusing on some press stuff that had been done on the Mac, as opposed to some line drawings. The bloody, sneaky creative team had 'finished' the work they liked best and not told me until they handed over the boards. It took a lot of probing to find out those executions didn't actually work as well, and naturally I got accused of trying to kill the team's favourite route. Not pretty.
- Ask them to separate creative strategy (is the brief right), the creative idea (is the core concept right) and the execution (is the detail right - casting, mood etc). I remember jumping up and down when a researcher essentially killed a creative route about personalisation. It was about different people liking different stuff, and every execution failed because each one only showed a single example of something you like or you don't. It was too polarising. But he never showed the executions together, or projected how it would feel with lots of different things shown together. So the core idea just died, even though we had tons of evidence that it was right. The execution was wrong, not much else was...
- Make sure anyone affected by the outcome sees at least one group - on video or, even better, in person. It matters. And let the researcher know this as soon as you can. One of the biggest shocks I had this year was going to what I thought was a small-scale presentation, only to find it was to half of the global team. It wasn't fair on me, and it's not fair on anyone.
Quant is a tough one. It isn't possible to make a realistic test of work in a laboratory situation. It just isn't. The Stella Artois 'reassuringly expensive' stuff failed in research...yet it's one of THE most successful campaigns.
However, some think it makes sense to have some sort of appraisal of the work's viability. The demand for pre-testing is huge, like it or not. And there are plenty of agencies just dying to carry it out.
They tend to have seven measures of effectiveness:
- Points communicated
- Impact/stand out
- Brand recall
- Brand imagery
- Emotional involvement
- Preference or attitude shift
- Open ended questions
They all have a habit of skewing things towards structure, logic and rationality. Beware. And it's hard for people to describe in a multiple-choice format how they FEEL, or to project how they would behave.
So when you brief a quant agency:
- Make sure the sample matches the one in your creative brief
- Do what you can to make sure the questions match how you think the work will make people respond. This is bloody hard when most agencies use a standardised procedure. You need to make the researcher your friend and do what you can. At the very least, make sure the differences are acknowledged. I was faced with a researcher who was adamant that quality messages on some supermarket POS were a waste of money. His methodology was all about what people would do that day, instead of long-term behaviour and consideration. I had to get him to admit he hadn't addressed LONG-TERM behaviour anywhere.
- Remember, this form of research is highly defensive: it stops bad work running, but it's very hard to use it to find out what you need to do to improve the work.
So there you go, some hard and fast guidelines on what some call pre-testing, but I like to call development research. The difference is crucial.
Anyone bored yet? Is this useful? Too basic? Do let me know.
I'll tell you one thing, it's great going right back into so called standard practice now and again. It's easy to get into bad habits.
Again - this is a wonderful post NP but I think if you gave some practical examples of things you've done [both good and bad] you may find people 'get it' more easily.
That came out as a criticism and it wasn't meant to be ... sorry.
I also think it would be good if you do a post on WHEN research is needed versus when clients think it is needed and how to demonstrate this.
I say it because we often have clients wanting to research EVERYTHING [especially our US clients] and it's only because of George's talent and trust that work doesn't grind to a complete halt when they want to investigate whether the bloody font size is engaging enough.
[That is not a joke, a client asked us about that once. Or an ex client I should say, ha!]
Posted by: Rob @ Cynic | August 01, 2007 at 06:04 AM
For God's sake, feedback's the only way you learn. I tried to keep these short and to the point, but if they come across as a dull textbook there isn't much point. I'll update them I think, and take the time to make them a bit more human.
And as for the WHEN to do research, that's certainly a post all in itself!
Posted by: NP | August 01, 2007 at 01:20 PM
Oh bollocks, it wasn't meant to offend - it was actually a congratulations on writing great stuff.
That's the problem with blogging, emotion can't come out and shine.
Posted by: Rob @ Cynic | August 01, 2007 at 02:09 PM
Pay attention 007.
I was thanking you for the feedback. If something can be made better, it bloody well should.
I only strop at patently lame taste in music.
Posted by: NP | August 01, 2007 at 02:21 PM
It's late ... I'm tired ... it's bloody hot and I've been ranting at the cocks who think an ad has a deliberate social cause in it - give me a god damn break and accept my compliments.
Now did my crap preso make it through to your 'alternate' address???
Posted by: Rob @ Cynic | August 01, 2007 at 02:50 PM
Fair play.
And it did, thank you.
It's anything but crap.
Posted by: NP | August 01, 2007 at 02:53 PM