Today a colleague sent me a link to a new article on attribution and media measurement with a request to share my thoughts. Written by a statistician, it was the latest in a series of published perspectives on how attribution should be done. When I read it, several things occurred to me (and prompted me to blog about them):
- Are we still at a point where we have to argue against last-click attribution? If so, who is actually arguing for it? And are we already at a point where we can start criticizing those few pioneers who are testing attribution methodologies?
- Would a media planner (usually the person tasked with optimizing campaigns) understand what the author meant in his critique: “the problem with this approach is that it can’t properly handle the complex non-linear interactions of the real world, and therefore will never result in a completely optimal set of recommendations”? It may be a technical audience, but we’re still marketers… right?
- The article discusses “problems” that only a few of the largest, most advanced advertisers have even thought about. When it comes to analytics and media measurement, 95% of advertisers are still in first grade, using CTRs and direct conversions as their primary metrics for online marketing success. They have a lot of ground to cover before they are even at a point where they can make the mistakes the author is pointing out.
In reading the comments below the article, my mind drifted back to business school (or was it my brief stint in management consulting?) and the theoretical discussions that took place among pontificating strategists. And then it hit me… even in one of the most innovative, entrepreneurial and growth-oriented industries, an Ivory Tower mindset somehow still persists in some corners of agencies, corporations, media shops and solution providers. Not afraid to share my views, I responded to the article in what I hope was a polite and direct way of saying “stop theorizing and focus on the real problem.” Here is my post:
“…We all agree that you need a statistically validated attribution model to assign weightings and re-allocate credit to assist impressions and clicks (is anyone taking the other side of this argument?). And we all agree that online is not the only channel that shapes brand preferences and drives intent to purchase.
I sympathize with Mr. X – it’s not easy (or economically feasible) for most advertisers to understand every brand interaction (offline and online) that influences a sale. The more you learn about this problem, the more you realize how hard it is to solve. So I agree with Mr. Y’s comment that we should focus on what we can measure, and use statistical analyses (coupled with common sense) to reach the best conclusions we can. And we need to do it efficiently and cost-effectively.
While we’d all love to have a 99.9% answer to every question re: attribution and causation, there will always be some margin of error and/or room for disagreement. There are many practitioners (solution providers and in-house data science teams) that have studied the problem and developed statistical approaches to attributing credit in a way that is more than sufficient for most marketers. Our problem is not that the perfect solution doesn’t exist. It’s that most marketers are still hesitant to change the way they measure media (even when they know better).
The roadblocks to industry adoption are not the lack of smart solutions or questionable efficacy, but rather the cost and level of effort required to deploy and manage a solution. The challenge is exacerbated by a widespread lack of resources within the organizations that have to implement and manage them: the agencies who are being paid less to do more every year. Until we address these issues and make it easy for agencies and brands to realize meaningful insights, we’ll continue to struggle in our battle against inertia. For more on this, see “Ph.D Targeting & First Grade Metrics…””
I then emailed one of the smartest guys I know (a data scientist for a top ad-tech company) with a link to the article, and I thought his reply was worth sharing:
“I think people are entirely unrealistic, and it seems they say no to progress unless you can offer Nirvana.”
This brings me to the title of this post: it’s hard to solve problems from an Ivory Tower. Note that this is not directed at the author of the article, but rather at a mindset that persists in every industry. My point is that armchair quarterbacks do not solve problems. We need practical solutions that make economic sense. Unless you are blessed with abundant time, energy and resources, you have to strike a balance between “good enough” and the opportunity cost of allocating any more time to the problem. This is not to say shoddy work is acceptable; as stated above, statistical analysis and validation are the best practice we preach and practice. But even a so-called “arbitrary” allocation of credit to the interactions that precede a conversion is better than last-click attribution. It all depends on your budget, your resources and the value of advanced insights. Each marketer needs to determine what is good enough, and how to allocate their resources accordingly.
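To make the point concrete, here is a minimal sketch of the difference between last-click attribution and an even-weight (“linear”) split of credit. The channel names and the even split are illustrative assumptions on a single hypothetical conversion path, not a validated model:

```python
# Minimal sketch: last-click vs. even-weight ("linear") attribution
# on one hypothetical conversion path. Channel names and the even
# split are illustrative assumptions, not a validated model.

def last_click(path):
    """Assign all credit to the final touchpoint before conversion."""
    return {channel: (1.0 if i == len(path) - 1 else 0.0)
            for i, channel in enumerate(path)}

def linear(path):
    """Split credit evenly across every touchpoint in the path."""
    share = 1.0 / len(path)
    return {channel: share for channel in path}

# Hypothetical customer journey: display ad, then social, then search.
path = ["display", "social", "search"]
print(last_click(path))  # {'display': 0.0, 'social': 0.0, 'search': 1.0}
print(linear(path))      # each channel gets one third of the credit
```

Under last-click, display and social get zero credit despite having assisted the sale; even this crude even split tells a more honest story, and a statistically validated model would refine the weights further.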
Most of us learned this tradeoff when studying for finals in college: if you can study 3 hours and make a 90, or invest another 3 hours to make a 97 (recognizing that 100 is impossible), which path would you choose? In my book, an A is an A, and with those 3 additional hours you could have prepared for another test, sold your textbooks or drunk beer with your friends. Either way, you would extract more value from your limited time and energy.
To sum up, we need to steer our energy away from theoretical debates on analytics and media measurement, and toward the issues that actually impede progress. The absence of a perfect solution is not an excuse to do nothing. And more often than not, the perfect solution is not worth the incremental cost and effort.
As always, feel free to comment, tweet, like, post, share, or whatever it is you do in your own social sphere. Thanks for stopping by!