EFFECTIVENESS
A VIEW FROM KAT
It's IPA Effectiveness season, which means everyone's mind is on one thing: did it work? And in this new frontier of marketing, AI and modern technology, I firmly believe that the power of human judgement and nuance matters more now than ever. When everyone is optimising what they've always done, efficiency can start to nudge personality out of the way.
The tools available to all of us are raising the baseline of what good looks like, which is genuinely exciting, but as that average climbs, the space between competent and brilliant gets harder to find. And that changes what we need from our effectiveness thinking, because the measure of success matters as much as the work itself.
This month, Rob interrogates some of marketers' most-used frameworks and asks the most important question for growth: how are they not serving you? In a world where growth is becoming more challenging than ever, and the rules are being written in real time, it is vital that we take the time to ask the hard questions and see whether 'what we've always done' is exactly what's keeping us where we are.
Kat xx
Kat Bozicevich
CEO
WHAT GETS SMOOTHED AWAY
There are few things more reassuring than a dashboard that is improving. Costs coming down, conversion moving up, arrows pointing the right way in bright green. It creates the distinct impression that progress is being made, even when the commercial result remains stubbornly indifferent to the whole exercise.
At some point, the question stops being whether the activity is working, and starts being what, exactly, it is working on.
The answer usually sits inside a model. When marketers see a problem, they often reach for the same diagnostic framework: the funnel. The funnel is seductive because it comes pre-loaded with an implied solution. Map your metrics onto awareness, consideration and conversion, and the answer to any problem is right there in the structure: whichever stage is underperforming, that's where the money should go. Conversion down? Push spend to the bottom. It feels like rigorous diagnosis, but it's actually the model doing your thinking for you.
The difficulty is that the funnel was never designed to do that job. Mark Ritson has made this point recently: the funnel describes how demand forms across a market, not how individuals move through a pipeline. As an observational tool it has genuine uses. As a management tool it distorts priorities, a bit like a sat nav that calmly suggests driving down a canal because it looks quicker on the map.
The consequence is predictable: more effort goes into converting people already close to buying, less into reaching those not yet paying attention. KPIs improve, because they are being optimised to improve, but the underlying pool of potential buyers does not expand at the same rate. Growth arrives, but not in proportion to the effort going in.
The same instinct shows up in how data gets handled. Most outputs collapse to a single figure: average recall, expected return, predicted uplift. Clean, comparable, easy to defend. What disappears is the spread underneath, and the spread is where much of the interesting information lives.
Activity that performs consistently tends to win over activity that divides opinion, even when the divisive option is exactly where the growth potential sits. Something that lands very strongly with some groups and not at all with others will usually lose out to something that performs moderately well across the board.
The safer option gets picked, delivers what it promises, and rarely stretches beyond it. And in doing so, what gets filtered out is the possibility of disproportionate impact. When everything is built to find a single answer, anything that behaves differently starts to look like an error rather than a signal.
Which can be infuriating given how much wonderful variation sits underneath all of this.
A meaningful share of the population processes information, allocates attention, and responds to messaging in different ways. That does not cluster neatly around a central point. It spreads, overlaps, contradicts itself, and occasionally ignores the very thing it is meant to respond to.
When decisions anchor on the average, those differences are removed from consideration. The system becomes very good at optimising for the middle and less effective everywhere else. You can see it in performance over time, where each increment works slightly less hard than the last, and more effort is required to maintain the same output.
This is where the big effectiveness opportunity sits.
Growth outside the average tends not to arrive through doing more of the same more efficiently. It tends to show up in media choices that reach people in different contexts, creative work that lands unevenly but more powerfully, partnerships that give a brand access to audiences and behaviours that would not appear in a standard plan.
These things often look less efficient at first glance. They are harder to benchmark, harder to compare, and harder to defend in a system built around clean numbers. They also tend to be where disproportionate effects come from, because they are not optimised to the same centre point as everything else.
But some of the more interesting effectiveness work is starting to move in this direction. Less focus on neat progression through stages, more on how things build unevenly over time. Brand and performance working together, different parts of the market responding at different points, and growth arriving in new ways.
It is harder to present. It is also closer to what tends to happen.
The models still have a role. They help organise thinking and give people a shared language. The question worth asking more often is where they stop being helpful, and what gets smoothed away and lost in the process.
A NOTE FROM THE EDITOR
It’s Effectiveness Month on The Difference.
That gives us a good excuse to focus on the thing that sits underneath most of what we do, but rarely gets the airtime it deserves. And to focus on something more important than just 'what works': deciding what 'working' actually means.
This year feels particularly important. We have just submitted three IPA Effectiveness Award papers, built in partnership with our clients. Anyone in the industry knows that the IPAs are an institution unto themselves. More than just award entries, they are shared attempts to understand how growth really happens, across messy systems, imperfect data, and the occasional overconfident model.
Across this edition, we explore some of the tensions that sit behind that work. How we measure. How we model. And where those models quietly stop helping.
Effectiveness is not really short of answers.
It is more that, every so often, we realise we may have been asking the slightly wrong question all along...
Rob Beevers
Chief Marketing Intelligence Officer