Omni.Digital Attribution Recap

September 17th, 2015 by Steve Latham


Last week I had the pleasure of speaking on Attribution at AdExchanger’s Omni.Digital conference in Chicago. Our panel “The Next Wave of Attribution Vendors” was moderated by AdExchanger’s Lead Research Analyst Joanna O’Connell. As usual, there wasn’t time to fully answer all of the questions, so here is a recap in Q&A format.

What is Attribution and what’s it good for?

While most agree on how to define “multi-touch attribution” (attributing fractional credit to the interactions that result in a conversion), each set of stakeholders often uses it for different reasons. For example:

  • Analysts view it as a means of delivering more accurate reports to the media team.
  • Media buyers often use it to validate performance of their media buys.
  • Advertisers often use it to confirm that their media budgets are being properly invested.

While each use case is useful, it is also limited. Fractional attribution by itself is not an end, but rather a means to learning and optimizing. Through statistically validated insights, brands and agencies glean a much better view into which media partners, strategies, formats and creatives work (and which do not). They can also make more intelligent decisions for cutting waste and re-allocating budgets. If marketers want to get the full value from Attribution, they need to act on the insights. If they don’t, they are leaving money on the table.

What is wrong with Attribution solutions, and where is disruption needed?

This answer has several parts.

First, we can all agree that last-click attribution is a flawed approach. Whether desktop or mobile, last-click rewards the lowest-funnel media and penalizes everything above it.

Second, static multi-touch models (e.g. even weighted, U-shaped, time-decay) are better than last click, but only marginally. These still reward vendors who over-serve likely converters and perpetuate the epidemic of Retargeting Gone Wild.
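To make the comparison concrete, here is a minimal sketch (not from the article) of how these three static models split credit for a single conversion across a path of touchpoints. The touchpoint names and the 40/20/40 U-shape split are illustrative assumptions; real platforms parameterize these differently.

```python
def even_weighted(path):
    # Every touchpoint gets an equal share of the conversion credit.
    share = 1.0 / len(path)
    return {t: share for t in path}

def u_shaped(path, end_weight=0.4):
    # First and last touches each get a large fixed share (40% assumed here);
    # the remainder is split evenly among the middle touches.
    credits = dict.fromkeys(path, 0.0)
    if len(path) == 1:
        credits[path[0]] = 1.0
        return credits
    credits[path[0]] += end_weight
    credits[path[-1]] += end_weight
    middle = path[1:-1]
    if middle:
        for t in middle:
            credits[t] += (1 - 2 * end_weight) / len(middle)
    else:
        # Two-touch path: split the leftover between the two ends.
        credits[path[0]] += (1 - 2 * end_weight) / 2
        credits[path[-1]] += (1 - 2 * end_weight) / 2
    return credits

def time_decay(path_with_days, half_life=7.0):
    # Touches closer to the conversion get exponentially more credit.
    weights = {t: 0.5 ** (days / half_life) for t, days in path_with_days}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}
```

Note that none of these models looks at whether a touchpoint actually influenced the outcome, which is exactly why a vendor who over-serves likely converters collects credit it did not earn.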

Third, the first wave of advanced (algorithmic) attribution solutions weren’t viable for most advertisers: complex and lengthy implementations, continued reliance on services and a high price tag that only the largest advertisers could justify.

The “new wave“ of vendors recognize advertisers need fast, easy, and affordable technology-based solutions that leverage readily-available data. Extensibility and automation obviate the need for complex integrations and labor-intensive analysis while reducing the time lag between implementation, production and insight. Through rapid onboarding, automated processing and timely reporting, the value proposition is fundamentally changing. Validated insights, recommendations and forecasting, delivered quickly, efficiently and affordably… these are the new table stakes in the Attribution space.

How big a problem is mobile in the world of attribution?

When you consider that attribution is based on conversion path modeling, the lack of user-level mobile data makes analysis very challenging. To assemble mobile conversion paths, you either need a cookie-less ad server or a partner, such as a DMP or mobile conversion vendor, to aggregate publisher data for each device. There are workarounds (i.e. manually aggregating data from publishers and DSPs) but it’s a lot of work. See The Dark Side of Mobile for more on this topic.
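A hypothetical sketch of what conversion-path assembly looks like may clarify why the missing mobile key matters: the join below hinges on a shared user-level ID (on desktop, the cookie plays this role), and without one the paths simply cannot be stitched together. All field names here are assumptions for illustration.

```python
from collections import defaultdict

def build_paths(events, conversions):
    # events: list of (user_id, timestamp, touchpoint) from ad server logs
    # conversions: list of (user_id, timestamp)
    by_user = defaultdict(list)
    for user_id, ts, touch in events:
        by_user[user_id].append((ts, touch))
    paths = {}
    for user_id, conv_ts in conversions:
        # Keep only touches that occurred before the conversion, in time order.
        prior = sorted(t for t in by_user[user_id] if t[0] <= conv_ts)
        paths[user_id] = [touch for _, touch in prior]
    return paths
```

On mobile, the `user_id` column is the piece that a cookie-less ad server, DMP or mobile conversion partner has to supply.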

What about Google and Facebook?

Google, Facebook and now Verizon aren’t making life easier for advertisers seeking independent validation and advanced insights. Walled gardens can be scaled with a running start. But as these publishers consolidate the market, their walls are looking more like the Wall of the North that was built to defend us against the Wildlings. Those of us on the side of openness and transparency are hopeful that brands and their agencies will vote with their budgets to reverse the trend towards data protectionism.

What progress are marketers making in adopting attribution?

The adoption curve is steepening but it’s still early. Surprisingly (or not), the majority of advertisers still rely on last-touch (click or impression) to reward conversions. Some of the pioneers are still nursing their wounds (especially those who tried to run before learning to walk) and a majority of the settlers are waiting for assurance that the path is safe before proceeding. It’s taken longer than anticipated, but progress is happening.

Among those that are leveraging attribution, most are still picking low-hanging fruit with a focus on desktop media spend. Very few have figured out mobile and even fewer are connecting the dots to gain a multi-platform view of users. While there are solutions available, few brands or agencies have the resources needed to take advantage of these new opportunities. I expect this will change in 2016.

Are advertisers using attribution outputs to plan media mix?

Savvy agencies and brands are acting on the insights, but too many just use Attribution to validate that their media is working. As most media spend is still done through IOs, media buyers must take action to optimize spend, whether it’s pausing underperforming campaigns, re-allocating budgets to top performers or addressing frequency issues with their vendors.

While all agencies claim to be active in their approach to campaign management, too often they tend to “set it and forget it.” We have some great clients who actively review results and take action to improve performance.  But there is also a subset of agencies who are either too busy or lack the resources and/or commitment to capitalize on the insights.  As Brands become more involved in the process, I expect these agencies to become better stewards of their clients’ budgets.

Are they pushing the output of attribution into media buying systems?

While modeled outputs can be sent to a DSP as inputs for buying orders, the idea of a self-optimizing, closed loop is still a bit futuristic. First, it can only be done for programmatic buying, which is only one of many buying channels on a given media plan (IO-driven media still dominates). Second, it will require close oversight, as there are numerous factors that can produce false positives or negatives on a real-time basis. A few examples that can wreak havoc: accidental removal (or duplication) of a conversion tag, a glitch in how a confirmation page is served, a hiccup in the ad server or a disruption in the delivery of log files. These events happen often, so if you’re going to send buying signals in real-time to your DSP, you’ll need some guard rails.
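A minimal sketch of one such guard rail, under assumed thresholds: before pushing modeled KPIs downstream, sanity-check today’s conversion volume against a trailing baseline. A missing conversion tag shows up as a collapse in volume; a duplicated tag as a spike. The 0.5x/2.0x band and function names are illustrative, not a prescribed implementation.

```python
from statistics import mean

def safe_to_send(daily_conversions, today, low=0.5, high=2.0):
    # daily_conversions: trailing window of daily totals; today: today's total.
    baseline = mean(daily_conversions)
    if baseline == 0:
        return False  # no baseline to compare against; hold the signal
    ratio = today / baseline
    # Outside the band, hold the buying signal and alert a human instead.
    return low <= ratio <= high
```

A real deployment would also want per-placement checks and alerting, but even this crude volume check catches the tag-removal and log-delivery failures described above.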

In a more practical sense, attribution-based insights are used to compare the accuracy and effectiveness of operational (post-click and post-view) KPIs, which are often relied on for daily buying decisions. In some cases we’ve seen these KPIs are sufficient for real-time decisioning. But in many cases they are subject to being gamed by vendors (cookie bombers) who are inadvertently rewarded while quality placements are penalized.

The bottom line is the industry is making progress, but we’re still a long way from Nirvana (self-optimizing systems that use modeled attribution KPIs to guide real-time decisions).

What advice would you give marketers? 

There are two important components to success in measuring and optimizing media: the System and the People.

On the People (behavioral) side of the equation:

  1. Get aligned. Many brands still have silos that make cross-channel initiatives challenging. Internal stakeholders need to agree on the end goal (omni-channel proficiency), which requires integrated planning and measurement.
  2. Delegate, but don’t abdicate. If brands choose to delegate measurement to their agency, they need to be active participants in the process. One way is through monthly meetings to review results (beyond the top line). Review the latest results (vendors, strategies, formats), discuss the lessons learned and define changes to make. Trust but validate, and keep your finger on the pulse of the campaign.
  3. Do something! Don’t let the absence of a perfect solution prevent you from moving forward (remember perfect is the enemy of good). Set expectations that will be easy to meet. Each discovery will surface many new questions, as well as insights.
  4. Rationalize Incentives. Unfortunately, advertiser objectives (maximum efficiency) are not always aligned with those of their Agencies and Publishers (maximum spend). Recognizing there is waste in every campaign, incentivize your agency to identify underperforming spend, re-allocate what they can and use the remainder to test and learn. Provide your agency with incentives to optimize efficiency, even if it means spending less in the aggregate (e.g. give them a bonus for saving you money).

On the System (technology and data) side, consider the following:

  1. Focus on your key needs: determine what objectives you’re seeking to achieve, and the questions you’re trying to answer. Then ask vendors how they can help you achieve your specific goals. Ask “how can you help me _____?” rather than “what do you do?”
  2. Leverage your existing infrastructure: If you have an ad server and/or a DMP, you should be able to receive a unified data set (impressions, clicks, visits and conversions per user) for attribution modeling. Tagging every ad is no longer viable (too much effort, latency and data loss) or necessary. Rather than re-invent the wheel, seek to use data that already exists.
  3. Focus on ROI. A good attribution platform should yield $20+ in savings and $50+ in revenue for every $1 invested. Put in this light, you can’t afford not to invest in insights that can drive dramatic improvement in efficiency.  And while it used to be that only the largest brands could afford algorithmic attribution, it’s much more affordable today with solutions starting in the low 4 figures per month.
  4. Learn to use it. While attribution has become much more intuitive and user-friendly, advertisers need to invest some time upfront to learn the new KPIs, reconcile them against older metrics, and teach the organization how to use them.
  5. Crawl > Walk > Run. Start with desktop and online conversions, then connect offline conversions. Once you’ve picked the low-hanging fruit, add A/B testing to validate causality. Once you’ve mastered desktop, tackle mobile media (by then there should be more options for obtaining conversion path data). Once you figure out Desktop and Mobile, then add device bridging to get true cross-platform, omni-channel insights. Remember you have to walk before you run. If you set reasonable goals and manage expectations, the probability of success will be significantly higher than if you try to do it all at once.
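As a back-of-envelope check on the ROI benchmark in point 3: at the article’s stated $20 in savings and $50 in revenue per $1 invested, even a modest platform fee pays for itself many times over. The $2,000/month fee below is an assumed example within the “low 4 figures” range mentioned above.

```python
fee = 2_000             # assumed monthly platform fee, in dollars
savings = 20 * fee      # recovered media waste at $20 saved per $1 invested
revenue = 50 * fee      # attributable revenue at $50 per $1 invested
# At these multiples, $2,000/month yields $40,000 in savings
# and $100,000 in revenue.
```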

I hope you found this informative and thought-provoking. As always your comments and questions are welcome!

Steve Latham | @stevelatham


