How do you measure engagement success? The Mandarin's associate publisher Alun Probert says the same stringent gateway reviews should apply to return on investment for digital communication.
It’s curious that in the fast-moving world of communication and advertising with all its talk of big data, programmatic advertising and digital dashboards, one of the most common quotations rolled out at conferences still dates back to an era before the invention of the light bulb.
John Wanamaker (1838-1922) has an interesting CV as a pioneer in high street retail and a banking entrepreneur, impressively also serving as the 35th US Postmaster General, but to marketers everywhere he'll be forever remembered as the man who famously said: "Half the money I spend on advertising is wasted; the trouble is I don't know which half …"
Despite massive advances in technology and data collection, in 2018, Wanamaker’s conundrum remains largely unsolved for communications people everywhere. It’s true that progressive retailers like Amazon and Tesco can pinpoint what promotions are working by demographic and time of day, but these measurement tools work best when tracking communications effectiveness against hard metrics, like sales.
Measuring the effectiveness of government advertising remains a tough nut to crack. In the years since we first heard the phrases "Drink Drive, Bloody Idiot" (1989) and "Slip, Slop, Slap" (1981), data shows that these trailblazing advertising campaigns and the initiatives that followed were clearly successful in reducing the number of drink drivers on our roads and increasing awareness of skin cancers.
Is a ‘like’ worth much?
But being able to specifically pinpoint which component in a behavioural change campaign was the most effective (and which wasn’t) remains a challenge. It’s difficult, for example, to know with any confidence specifically which policy levers or communications activity were most pivotal in inspiring large numbers of Australians to finally quit smoking. Plain packaging? Larger and more confronting pack images? The restrictions on smoking in pubs and restaurants? The price? Peer pressure? Or one of the many award-winning advertising campaigns that represent the best creative work done in the government sector?
While the introduction of peer review and gateway processes is helping some departments to have a clearer view on the likely effectiveness of proposed advertising campaigns, the explosion of new activity in digital media, websites, social media and two-way community engagement programs means that assessing the return on investment, Wanamaker style, on government communications is, if anything, getting harder.
As we get to better understand that it’s practically impossible to correlate “likes” on a Facebook page and “hits” to our websites with actual changes in behaviour, a whole new paradigm of questions arises for communications teams adapting to the digital age.
With hundreds of millions of dollars publicly earmarked for digital transformation in Canberra alone, notwithstanding public money already sunk into building and maintaining hundreds of state government websites and portals, are we in any position to know what percentage of our most recent investments in new channels of communication might have been wasted?
Every new app and social media or website analytics tool built or procured represents an additional cost to the business that needs to be justified by a return on investment. Digital evangelists will argue the benefits of monitoring online chatter or changes in website traffic. Without a clear purpose, though, each of these tools can easily become just another cost with no return.
No need for fuzzy justifications
We can’t turn back time, but there’s no reason why the same stringent gateway reviews used to assess traditional advertising campaigns should not now be extended to evaluate all proposed expenditure on websites or other new digital channels designed in the broadest sense to improve the effectiveness of government communications.
As a lifetime marketer, I learned begrudgingly to side with my colleagues in Finance who wanted to measure the return on any investment in advertising I was recommending. But few similar practices exist for departments planning to invest in digital communications, whether building more websites or building a platform of listening tools. The fact that it's public money being spent only emphasises the importance of knowing the answer to the question: "Why are we doing this?"
Some progressive departments are already reviewing the costs of managing multiple redundant websites and duplicate software platforms, often finding easy opportunities to cut both cost and duplication.
Others are applying traditional governance principles to manage the proliferation of social media channels under the same roof. But whether the expenditure comes from the “advertising” budget or from “IT” or “corporate governance”, it’s surely time that governments everywhere started to look at peer-review assessments before building any more websites.
Managing the advertising peer-review process taught me that in the light of Wanamaker’s conundrum, the only way to viably assess the effectiveness of any advertising activity is to agree on the specifics of how you’ll measure the success (or otherwise) of the activity before you spend the first dollar. Trying to figure that out after the event just doesn’t work.
Looking at the hundreds of websites, apps and conflicting systems in place across the sector, it's easy to surmise that some of the money being spent in the chase of the digital dream might also have been wasted. I'm not sure that anyone knows whether it's 50%.
In the chase of the big digital dream, it's a question that, unlike Wanamaker, we should be able to answer.