Atomic 212 founding partners Claire Fenner and James Dixon deep dive into the guiding principles and practices of media buying in their new book, with a focus on the importance of deploying a scientific process of hypothesis, test, learn, document, and repeat.
This is an excerpt from the third chapter of ‘How to do Effective Media: Media Planning Buying as a Science’. In an attempt to lift the lid on the “secret sauce” of scientific media buying and give marketers the tools to talk the language of CFOs and CEOs, this chapter explores the systemisation of media for increasing ROI.
- Marketers should apply a scientific process to all marketing.
- This process is as old as science itself: hypothesis, test, measure.
- Macro and micro testing should permeate everything the marketer does.
3.1 Scientific Process
Old world: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”
New world: “That’s pretty wasteful. You should have implemented a scientific measurement process some years ago, my friend.”
With the tools and methodologies available to marketers in the modern marketing environment, the adage about wasted advertising spend shouldn’t hold anymore, if it ever really did.
Marketers criminally underuse testing
Science and experimentation go together like a horse and carriage – ask a kindergarten class what a scientist does, and they will tell you: “experiments!”
Yet, while marketers are coming around to the world of data science, there remains an entrenched reluctance to experiment with what does and doesn’t work in a campaign on a significant scale.
Julian Runge, a behavioural economist and digital marketing researcher who has worked with Facebook’s marketing science research group, believes that this dearth of experimentation is largely institutional, rooted in what he calls “organisational obstacles”. Many of these obstacles are legacies of a time before big data, but they have become entrenched and now stand as roadblocks to informed advancement.
“Marketing mix models are well established and trusted decision-support tools that are seen as less technologically complex and costly than setting up randomised controlled trials (RCTs),” he wrote for Harvard Business Review.
“But in fact, launching RCTs on digital channels doesn’t require unusually complicated technology, can be done at near zero cost, and can actually help optimise existing marketing mix models.”
In the same piece he outlined how simple, cheap and effective RCTs can be. In his work at Facebook, he ran an observational survey of how leading firms used RCTs and found that running 15 experiments annually, as opposed to zero, was associated with around 30% higher ad performance, while firms that had also conducted 15 experiments the year before saw a 45% increase in performance.
Just stop and process that: 45% performance improvement in two years. And all that was required was 30 RCTs, which can be performed “at near zero cost” over 24 months.
Runge and his colleagues at Facebook wrote a paper about their studies, wherein they devised a four-point plan, or what they call a “reinforcement learning frame” to help firms adopt experimentation:
1) Devise an exploration policy
2) Define a reward that is to be maximised through learning
3) Continuously innovate and develop new candidate policies
4) Rigorously reinforce
You have to love it when someone doesn’t just highlight a problem, they bring a solution to the table.
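The four-point frame can be sketched as a simple bandit-style loop. The sketch below is my own illustration, not from Runge's paper: the channel names and conversion rates are invented, and epsilon-greedy is just one possible exploration policy.

```python
import random

# Toy epsilon-greedy loop illustrating the four-point frame.
# Channel names and conversion rates are invented for illustration.
CHANNELS = ["search", "social", "video"]  # 3) candidate policies
TRUE_CONVERSION = {"search": 0.05, "social": 0.03, "video": 0.04}  # unknown in practice

def run_campaign(channel: str) -> int:
    """Simulate one ad impression; 1 = conversion (the reward to maximise)."""
    return 1 if random.random() < TRUE_CONVERSION[channel] else 0

def epsilon_greedy(rounds: int = 10_000, epsilon: float = 0.1) -> dict:
    trials = {c: 0 for c in CHANNELS}
    rewards = {c: 0 for c in CHANNELS}
    for _ in range(rounds):
        # 1) Exploration policy: with probability epsilon, try a random channel.
        if random.random() < epsilon or all(t == 0 for t in trials.values()):
            choice = random.choice(CHANNELS)
        else:
            # 4) Rigorously reinforce: otherwise exploit the best channel observed so far.
            choice = max(CHANNELS, key=lambda c: rewards[c] / trials[c] if trials[c] else 0.0)
        # 2) Reward: observe a conversion (or not) and record it.
        rewards[choice] += run_campaign(choice)
        trials[choice] += 1
    return {c: rewards[c] / trials[c] for c in CHANNELS if trials[c]}

print(epsilon_greedy())
```

Over enough rounds, spend concentrates on the best-performing channel while the epsilon share keeps testing the alternatives, which is the "always-on" experimentation this chapter argues for.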
And while Runge noted that RCTs can be done with minimal investment, he does recommend a portion of the advertising budget be set aside for experimentation, writing that “10% is typical among companies with a successful experimentation program.”
Ultimately, macro and micro testing and the development of scientific processes should permeate everything the marketer does – because it gets results!
So, what is a ‘scientific process’?
Scientific process is the pursuit of knowledge through developing a hypothesis, testing it and measuring the results. We are taught this framework in high school, but it rarely surfaces as a daily practice thereafter.
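The test-and-measure step can be made concrete with a standard two-sample proportion test. The conversion numbers below are invented for illustration; this is a minimal sketch, not a full testing framework.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothesis: the new creative (B) converts better than the control (A).
# Measure: 120/5000 conversions for A vs 165/5000 for B (invented figures).
z = two_proportion_z(conv_a=120, n_a=5000, conv_b=165, n_b=5000)
print(round(z, 2))  # |z| > 1.96 => significant at the 5% level
```

The point is not the statistics but the discipline: state the hypothesis before the test, run it, measure, document the result, and feed it into the next hypothesis.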
Unfortunately, the scientific process does not find its way into many fields of marketing. Marketing and science are too often seen as chalk and cheese. But it doesn’t have to be this way. If the scientific process were diligently and consistently applied within marketing practice, it would undoubtedly fuel business growth and sales.
The ideal approach, in my view, is an always-on attitude to macro and micro testing, something I rarely see in practice. Generally, marketers undertake micro tests in a siloed channel (e.g. an ad copy test in Google AdWords) where the test is not incorporated within a structured macro program. Don’t get me wrong: this type of testing can lead to useful learnings, but its usefulness is limited because the learnings generally stay within one channel and one team.
Imagine a marketing department where the annual growth target is built on a scientific process.
Here’s the conversation:
CFO: What do you think we can hit next year?
CMO: We have been running some models based on the learnings from the past 12 months and the tests we deployed throughout the year. We tested TV: it has a 14% impact on sales and is at 65% of its optimal budget. We tested radio: it has a 12% impact on sales and is at 95% of its optimal budget. We have modelled this out and can drive another 18% in sales if we invest $x, or 25% if we invest $y, noting that the second option would have a 20% ROAS.
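A conversation like that presupposes a model behind the numbers. Here is a toy sketch of how such budget scenarios could be generated, assuming a square-root diminishing-returns response curve; the curve shape and all figures are my illustrative assumptions, not the authors' actual model.

```python
import math

# Illustrative channel parameters (invented): sales impact at the channel's
# optimal budget, and current spend as a share of that optimum.
CHANNELS = {
    "tv":    {"impact_at_optimum": 0.14, "budget_share": 0.65},
    "radio": {"impact_at_optimum": 0.12, "budget_share": 0.95},
}

def sales_impact(channel: dict, budget_share: float) -> float:
    """Modelled sales impact under a square-root diminishing-returns curve."""
    return channel["impact_at_optimum"] * math.sqrt(min(budget_share, 1.0))

def scenario(extra_share: dict) -> float:
    """Total incremental sales impact if each channel's budget share rises."""
    uplift = 0.0
    for name, ch in CHANNELS.items():
        before = sales_impact(ch, ch["budget_share"])
        after = sales_impact(ch, ch["budget_share"] + extra_share.get(name, 0.0))
        uplift += after - before
    return uplift

# Scenario: push both channels up to their optimal budgets.
print(round(scenario({"tv": 0.35, "radio": 0.05}), 3))
```

Note how most of the modelled uplift comes from TV, the channel furthest below its optimum; that is exactly the kind of reasoning the CMO in the dialogue is bringing to the CFO.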
3.2 Measurement with Modelling
How on earth do we not understand marketing effectiveness? We have discovered penicillin, landed men on the moon, and split the atom, but we still have not worked out where the other half of the marketing budget goes.
Media effect is complex, but that shouldn’t stop us from trying, and we can do much better. Humans have proven many times that where they put the focus, they can solve incredibly complex problems.
There are so many variables that impact a business’s growth. There are the primary Ps, then the secondary Ps, then a myriad of external factors such as competitors, economies and consumer sentiment.
The bulk of the understanding is derived from subjective evaluation. It is opinion-based and therefore fundamentally flawed. Not so much a science as a guessing game. This has bred scepticism in the CFO’s office and diminished the CMO’s voice as an authority on growth.
As I explained in the previous chapter, the key to combatting this issue is science. CMOs must develop a scientific process.
Article originally published on Mumbrella.