Archive for the ‘Our Favorites!’ category

The Dark Side of Mobile Attribution

August 14th, 2015

Repost of my Data Driven Thinking byline, published by AdExchanger in August 2015.

The good news: Mobile will be the freight train that drives the media industry.

The bad news: The lack of data availability and transparency will cost marketers billions of dollars.

Since the iPhone’s 2007 introduction, the media industry has deemed every year to be “The year of mobile.” It took longer than expected to mature, but desktop’s awkward little brother is about to dwarf big bro and steal his girlfriend along the way. Mobile surpassed desktop in consumption in 2014 and will surpass it in spending in 2016. eMarketer predicts mobile media will reach $65 billion by 2019, or 72% of digital spending.

As we move towards a “mobile-first” world, we need to address a very big problem: We still can’t accurately measure performance. The ability to target customers in new and innovative ways outpaces the ability to measure effectiveness of those tactics.

Mobile’s Measurement Problem

The digital media ecosystem was built on cookies to target, track and measure performance. Cookies are imperfect but good enough to develop accurate insights into customers’ journeys. Using cookie data to assemble and model conversion paths, marketers can use fractional or multi-touch attribution to optimize media campaigns much more effectively than with last-click metrics.
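To make the contrast with last-click concrete, here is a minimal sketch in Python. The conversion path and the 40/20/40 “U-shaped” weighting are purely illustrative assumptions – one simple heuristic among many, not the model any particular vendor uses:

```python
# One hypothetical user's conversion path, ordered from first to last touch.
path = ["display_impression", "display_click", "email_click", "paid_search_click"]

def last_click(path):
    # Last-click: the final touch gets 100% of the conversion credit.
    return [(t, 1.0 if i == len(path) - 1 else 0.0) for i, t in enumerate(path)]

def position_based(path, first=0.40, last=0.40):
    # U-shaped fractional attribution: heavy weight on the first and last
    # touches, with the remainder split evenly across the middle touches.
    n = len(path)
    if n == 1:
        return [(path[0], 1.0)]
    if n == 2:
        return [(path[0], 0.5), (path[1], 0.5)]
    mid = (1.0 - first - last) / (n - 2)
    return [(t, first if i == 0 else (last if i == n - 1 else mid))
            for i, t in enumerate(path)]

for touch, credit in position_based(path):
    print(f"{touch:20s} {credit:.0%}")
# display_impression   40%   <- the assist that last-click values at zero
# display_click        10%
# email_click          10%
# paid_search_click    40%
```

In practice the weights would come from statistical modeling rather than fixed rules, but even this crude reallocation surfaces the display assist that last-click reporting ignores entirely.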

In mobile, third-party cookies are blocked on most devices and privacy regulations limit device tracking. Consequently, traditional ad servers are limited to reporting on last-click conversions where possible.

For brands seeking to drive app installs, mobile attribution companies like Kochava, Tune, Appsflyer and Apsalar can track the click that led to the download in Apple or Google stores. Some are working on post-click and post-view reports, but these will be of limited help to advertisers seeking actionable insights.

The lack of mobile data means advertisers cannot quantify reach and frequency across publishers. They also cannot measure performance across publishers via multi-touch attribution. The cost and complexity of device bridging further obfuscates user-level engagement.

Rays Of Light

Mobile data and measurement challenges won’t be solved overnight, but a convergence of factors points to a less opaque future. Here are my predictions:


1. Ad servers will adapt to device IDs

Conceptually, a device ID is not unlike a cookie ID, privacy issues notwithstanding, but it takes time and money to introduce a cookie-less ID system. Following the lead of Medialets, traditional ad servers will introduce their own anonymous IDs, instead of cookies, that map to probabilistic and deterministic device IDs. Like cookies, these IDs will allow them to log user-level data that can feed fractional attribution models. We’ll probably see some early announcements before the end of the year, with more to come in 2016.
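As a rough illustration of the concept (all identifiers, match types and confidence scores below are invented for the example):

```python
# Hypothetical ID graph: an ad server's anonymous ID mapped to device IDs,
# each with a match type and a confidence score (1.0 = deterministic).
id_graph = {
    "anon-7f3a": [
        {"device_id": "idfa-123", "match": "deterministic", "score": 1.00},
        {"device_id": "gaid-456", "match": "probabilistic", "score": 0.82},
    ],
}

def devices_for(anon_id, min_score=0.75):
    # Resolve an anonymous ID to device IDs, keeping only matches that
    # clear a confidence threshold before logs are stitched into one
    # user-level path for the attribution model.
    return [m["device_id"] for m in id_graph.get(anon_id, [])
            if m["score"] >= min_score]

print(devices_for("anon-7f3a"))  # ['idfa-123', 'gaid-456']
```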

2. Data unification will become readily available

To date, demand-side platforms, data management platforms, tag managers and data connectors have fixated on using data to help advertisers target, retarget, cross-sell and remarket. The same data that is used to drive revenue can also be used to connect user-level data for measurement purposes. Companies such as Liveramp, Signal, Exelate and Mediamath are already unifying data for analysis. More will follow.

3. Device bridging will become ubiquitous

To date, connecting devices across publishers has been a luxury afforded only by the largest advertisers. In time that will change: wireless carriers, and possibly some publishers, will offer device graphs independent of media buys, and standalone vendors, such as Tapad and Crosswise, will reach economies of scale. At the same time, ad servers and data connectors will build or license device graphs and offer bridging as an extension of their service.

As ad delivery, data management and device bridging become more integrated (e.g., the announcement by Tapad and Medialets), costs will come down and advertisers of all sizes will be able to measure engagement across devices.

4. Mobile attribution vendors will be forced to evolve

As ad servers and data connectors incorporate device-level conversions in their data sets, including app installs, mobile attribution companies will have to expand their offerings or risk becoming redundant. Some may stick to their knitting and delve deeper into mobile analytics and data management. Others may pivot towards media and expand into desktop or addressable TV. Others may just be acquired. Regardless, it’s unlikely this category will remain as-is for much longer.

5. Last-touch attribution may finally go away

We’ve been predicting the end of the click as a key performance indicator for years. But inertia, apathy and a continuous stream of shiny objects have allowed last-touch metrics to survive while brands and agencies fought other battles.

Now that we’ve tackled video, programmatic, social, native, viewability, fraud and HTML5, the new focus on insights and big data may finally drive the roaches away. The click will be hard to kill, but as we become smarter about measurement, it will become much less visible.

As the mobile data gaps are filled, the promise of cross-platform, cross-device, cross-channel attribution can become a practical reality for advertisers of all sizes.  From a measurement perspective, our best days are still ahead.  But as mentioned in the headline, getting there is going to be quite costly.

Steve Latham

Shedding Light Beneath the Attribution Canopy

May 22nd, 2015

AdExchanger recently published a timely article, “Breaking through the Attribution Canopy,” on the attribution marketplace (view it on Encore’s Facebook page). Overall they did a good job of highlighting the conflicts of interest that are inherent when your media vendor is also your trusted source of insights.  They also touched on the emergence of new solutions designed to address the needs of the larger market.  Along with other industry executives, I was quoted in the interview.

During the interview, we discussed a lot of issues surrounding media attribution and optimization.  But as with any interview, only a few of my comments were published.  To provide some context and clarify our POV, here are the key takeaways:

  • We are glad to see that Attribution has (finally) reached a tipping point.  Brands, agencies, DSPs and media platforms are scrambling to leverage machine-based insights to optimize media spend.  Continuing to rely on last-touch KPIs is simply a lazy and irresponsible approach to measuring media.
  • We believe measurement, analysis and optimization decisions should be driven by the advertiser, its agency or an independent solution provider, not its media vendor.  Even if the fox is friendly, it shouldn’t be in the hen house.
  • We also believe data should be easily ported, integrated and made available for analysis, regardless of who sells the media or who serves the ads.  Openness, transparency and portability are not only ideological values; they also make business sense.
  • The growing concentration of power of leading media and technology vendors should be on everyone’s radar as a threat to transparency and openness.  If you look at the markets for programmatic display, video advertising, search, social marketing, mobile advertising* and ad serving, the dominant players are making it difficult and expensive to independently analyze their data in the context of other media. The path to marketing and advertising success does not end in a walled garden.
  •  To date, advanced insights (e.g. algorithmic attribution and data-driven optimization tools) have been reserved for the largest advertisers who can afford six-figure price tags.  As the article points out, there is a large unmet need beyond the top 200 advertisers.  To address the needs of the thousands of middle market advertisers, a new model (no pun intended) is needed.  Heavy, expensive and service-intensive solutions cannot scale across the broader market.  The next phase of adoption will be won by light and agile solutions that are affordable and easy to implement.
  • To deliver modeled insights at scale, the solution must be automated, efficient, flexible and customizable for each advertiser.  It should also be affordable.  On this point, we wholeheartedly agree with Forrester’s Tina Moffett: “I think one advantage [attribution start-ups] do have is they were able to see the market needs and where the gaps were … and where existing players were falling short.”

For these reasons, we are very excited about the prospects for innovators who are able to address unmet needs for the large and growing middle market.

*For more on my quote that Google gets half of all mobile ad dollars, please see the eMarketer report published earlier this year.

As always, thanks for reading and feel free to share comments or contact me if you have any questions.

Steve Latham

Observations on the Attribution Market

July 7th, 2014

The market for Attribution companies has definitely heated up with high profile acquisitions by Google and AOL.  I view these transactions as strong proof points that brands and their agencies are starving for advanced data-driven insights to optimize their investments in digital media.  The same thesis that led us to start Encore several years ago still holds true today: traditional metrics are no longer sufficient and advanced insights are needed to truly understand what works, what doesn’t, and how to improve ROI from marketing dollars.

Over the years we’ve analyzed more than 100 brand and agency campaigns of all sizes – from the largest CPG companies in the world, to emerging challengers in Retail, Automotive, Travel and B2B.  Based on these experiences, here are 5 observations that I’ll share today:

  1. We are still early in the adoption curve.   While many brands and agencies have invested in pilots and proofs of concept, enterprise-wide (or agency-wide) adoption of fractional attribution metrics is still relatively low, and the big growth curve is still ahead of us.  About 18 months ago I wrote about 2013 being the year Attribution Crosses the Chasm.  I now see I was a bit early in my prediction – 2014 is clearly the year Attribution grows up.
  2. There is still some confusion about who should “own” cross-channel / full-funnel attribution.  Historically brands have delegated media measurement to their agencies.  We now see brands taking on a more active role in deciding how big data is used to analyze and inform media buys.  And as the silos are falling, the measurement needs of the advertiser often transcend the purview of their media agency.  In my opinion, responsibility for measurement of Paid, Owned and Earned media will increasingly shift from the agencies to the brands they serve.  This is already the case for many CPG companies we serve.  In measuring media for more than a dozen big consumer brands, we’re seeing the in-house teams setting direction and strategy, while agencies play a supporting role in the measurement and optimization process.  We’re happy to work with either side; they just need to decide who owns the responsibility for insights.
  3. Multi-platform measurement is coming, but not as fast as you might think.  We are big believers in the need for device bridging and multi-platform measurement and are working with great companies like Tapad to address the unmet need of unifying data to have a more comprehensive view of customer engagement.  To date we’ve presented Device Bridging POVs to most of our customers.  And while many are interested in this subject, very few will invest this year.  It’s not that the demand isn’t there – it will just take some time to mature.
  4. Marketers need objective and independent insights – now more than ever.  Despite increasing efforts by big media companies to bundle analytics with their media, the days of relying on a media vendor to tell you how well their ads performed are limited.  It’s fine to get their take on how they contributed to your business goals, but agencies and brands need objective 3rd party insights to validate the true impact of each media buy.  And with the growing reliance on exchange-traded media and machine-based decisioning, objective, expert analysis is needed more than ever to de-risk spend and improve ROI.   We’ve found this approach works well – especially in days like these where it’s all about sales.  This leads to my 5th observation…
  5. In the end it’s about Sales.  While digital KPIs are great for measuring online engagement, we’re seeing more and more interest in connecting digital engagement to offline sales.  Again, we’re fortunate to work with great partners like (m)PHASIZE to connect the dots and show the true impact of digital spend on offline sales.  We’re also working on opportunities with LiveRamp and Mastercard to achieve similar goals.  Like device bridging, I see this becoming more of a must-have in 2015, but it’s good to have the conversations today.

There is so much more to discuss and I’m sure our market will continue to iterate and evolve quickly.  But to sum it up, it’s an exciting time to be in the digital media measurement space. Attribution is finally coming of age and it’s going to be a hell of a ride for the next few years.

As always, comments are welcome!

Steve Latham




Inefficiencies of Exchange Traded Media

January 21st, 2014

Encore’s latest POV on Inefficiencies of Exchange Traded Media was published by AdExchanger on January 21, 2014.  You can read the article on AdExchanger or read the POV below.

While exchange-traded media is praised for bringing efficiency to the display ad market, a deeper dive reveals numerous factors that are costing advertisers billions of dollars in wasted spend.  While programmatic buying is relatively efficient compared to other media, on an absolute basis a lot of wasted spend goes unnoticed.

Research shows that perverse incentives, a lack of controls and limited use of advanced analytical tools have made a majority of exchange-traded media worthless.  Even as we advance how we buy and sell media, there is still significant room for improvement in the quality and economic returns of real-time bidding (RTB).


Where Waste Occurs

Optimizing media starts with eliminating wasted spending. In the RTB world, waste can take many forms:

  • Fraud: 1x1 ads sold into exchanges purely to generate revenue, or impressions served to bots and other non-human traffic.
  • Non-viewable ads: These are legitimate ads that are not viewable by the user.
  • Low-quality inventory: Ads served on pages whose primary purpose is to house six, eight or even 10+ ads.
  • Insufficient frequency: Too few ads served per user – one or two – to create desired awareness.
  • Excessive frequency: Too many ads served to individual users – 100, 500 or more RTB impressions over 30 days.
  • Redundant reach: Multiple vendors target the same users. This is often a consequence of vendors using the same retargeting or behavioral tactics to reach the same audiences.


Quantifying The Costs

The percentage of wasted impressions varies by campaign, but it’s usually quite significant. Here are some general ranges of wasted RTB impressions:

  • +/- 20% of exchange-traded inventory is deemed fraudulent, according to the Integral Ad Science Semi-Annual Review 2013.
  • +/- 7% of viewable inventory is served on ad-farm pages (more than six ads)
  • +/- 60% of legitimate inventory is not viewable per the IAB standard
  • 10 to 40% of impressions are served to users with frequency too low to influence their behavior
  • 5 to 30% of impressions are served to users with frequency greater than 100 over the previous 30 days (the more vendors, the higher the waste due to redundant reach and excessive retargeting)

To put this in the context of a typical campaign, assume 100 million RTB impressions are served in a given month.

[Infographic: how waste compounds across 100 million RTB impressions]
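To see how those ranges compound, here is a back-of-the-envelope sketch in Python. The midpoints chosen below are illustrative assumptions drawn from the ranges above, not measured figures:

```python
# Illustrative waterfall for 100 million RTB impressions, using rough
# midpoints of the waste ranges cited above (all figures are assumptions).
impressions = 100_000_000

legitimate = impressions * (1 - 0.20)          # ~20% fraudulent / non-human
viewable   = legitimate  * (1 - 0.60)          # ~60% of legitimate not viewable
quality    = viewable    * (1 - 0.07)          # ~7% of viewable on ad-farm pages
effective  = quality     * (1 - 0.25 - 0.175)  # low freq ~25%, excessive ~17.5%

print(f"Legitimate: {legitimate:,.0f}")
print(f"Viewable:   {viewable:,.0f}")
print(f"Quality:    {quality:,.0f}")
print(f"Effective:  {effective:,.0f}  ({effective / impressions:.0%} of total)")
# -> roughly 17% of impressions end up viewable by humans on legitimate
#    sites with appropriate frequency, consistent with "less than 20%" below.
```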


In most cases, less than 20% of RTB impressions are viewable by humans on legitimate sites with appropriate frequency. In other words, 20% of all impressions drive 99% of the results from programmatic buying.  Because RTB impressions are so inexpensive, it’s still a very cost-effective channel.  That said, there is considerable room for improvement within RTB buying.

Who’s To Blame?

When we present these analyses to clients, the first question often asked is, “Who’s to blame?” Unfortunately, there is no single culprit behind the RTB inventory problem. As mentioned, the problem is due largely to a lack of controls and perverse incentives.

  • Lack of Controls: While a growing number of brands and agencies are incorporating viewability and using algorithmic analytical tools, most are still in the dark ages. Some feel their results are “good enough” and choose not to dig deeper. Others seem not to care. Hopefully this will change.
  • Perverse incentives: We live in a CPM world where everyone in the RTB value chain – save the advertiser – profits from wasted spending. It’s not just the DSPs, exchanges and ad networks that benefit; traditional publishers now extend their inventory through RTB and unknowingly contribute to the problems mentioned above. While steps are being taken to address these issues, we’re not going to see dramatic improvement until the status quo is challenged.


How To Fix The Problem

The good news is that the RTB inventory problems are solvable. Some tactical fixes include:

  • We should invest in viewability, fraud detection and prevention, and algorithmic attribution solutions. While not expensive, they do require a modest investment of time, energy and budget. But when you consider the cost of doing nothing – and wasting 50 to 80% of spending – the business case for investing is very compelling.
  • We need to stop using multiple trading desks and RTB ad networks on a single campaign, or they’ll end up competing against each other for the same impressions. This will reduce the redundant reach and excessive frequency while keeping a lid on CPMs. It will also make it easier to pinpoint problems when they occur.
  • Finally, we need to analyze frequency distribution each month. Average frequency is a bad metric, as it can mask a lot of waste: if 100 users are each served one ad and one user is served 500 ads, the average frequency is about six, yet nearly all of those impressions are wasted – the single exposures are too few to register, and the 500 served to one user are grossly excessive. Look at the distribution of ads by frequency tier to see where waste is occurring, as the sketch after this list illustrates.
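Here is a minimal sketch of that tier analysis; the log and the tier boundaries are invented for illustration:

```python
from collections import Counter

# Hypothetical 30-day ad-serving log: one entry per impression, keyed by user.
log = ["u1"] * 1 + ["u2"] * 2 + ["u3"] * 9 + ["u4"] * 500

per_user = Counter(log)                # impressions served to each user
total = sum(per_user.values())
print(f"average frequency: {total / len(per_user):.0f}")  # 128 -- looks plausible

# The distribution view: what share of impressions each frequency tier consumed.
tiers = {"1-2 (too few)": (1, 2), "3-20 (effective)": (3, 20),
         "21-100 (heavy)": (21, 100), "100+ (waste)": (101, float("inf"))}
for label, (lo, hi) in tiers.items():
    imps = sum(n for n in per_user.values() if lo <= n <= hi)
    print(f"{label:18s} {imps / total:6.1%} of impressions")
# 100+ (waste)  97.7% -- a pattern the average of 128 reveals nothing about
```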

For strategic change to occur, brands and their agencies must lead the way. In this case, “leading” means implementing controls and making their vendors accountable for quality and performance of display media.

  • Brands must demand more accountability from their agencies. They also need to equip them with the tools and solutions to address the underlying problems.
  • Agencies must demand better controls and make-goods from media vendors. Until we have better controls for preventing fraud and improving the quality of reach and frequency, media vendors need to stand behind their product, enforce frequency caps and make internal investments to improve the quality and efficiency of their inventory.
  • All buyers must make their DSPs and exchanges accountable for implementing more comprehensive solutions to address the fraud and frequency problems.


The Opportunity

We can’t expect a utopian world where no ads are wasted, but we can and should make dramatic improvements. By reducing waste, advertisers will see even greater returns from display media. Higher returns translate into larger media budget allocations, and that will benefit us all.

While fixing the problems may dampen near-term RTB growth prospects, it will serve everyone in the long run. Removing waste and improving quality of media will help avoid a bubble while contributing to the sustainable growth of the digital media industry.  Given the growing momentum in the public and private equity markets, I hope we as an industry take action sooner rather than later.

As always, comments are welcome.

Steve Latham



Ad:tech NY Attribution Case Study

January 7th, 2013


In November 2012, between a hurricane and a nor’easter, I presented a case study on Full-Funnel Attribution at one of the premier industry conferences: Ad:tech NYC.

For the presentation I was joined by Brad May of KSL Media, who is not only a client but also an early adopter and supporter of Attribution.  Building on the insights from the Attribution case study presented at Ad:tech San Francisco, I was honored to speak again and present a case study illustrating how advanced analytics and full-funnel, cross-channel Attribution can be utilized to maximize performance and boost Return on Spend.

Among the highlights of the case study, we demonstrated:

  • After modeling the impact of assist impressions and clicks, Display advertising accounted for almost 20% of achieved actions.
  • Mobile ads generated low-cost actions (this year’s theme – mobile, mobile, mobile).
  • Search played largely a supporting role.
  • Frequency is an issue that all advertisers need to keep a close eye on.

For those who didn’t make the show, I’m happy to share the case study in two formats (both are hosted on slideshare):


As always, feel free to comment, tweet, like, post, share, or whatever it is you do in your own social sphere.  Thanks for stopping by!



Crossing the Chasm: an Insider’s Perspective on Media Measurement

September 15th, 2012


Back in 2005 I had a digital marketing agency and we were buying display ads, email sends and paid search for FedEx Kinko’s.  I still remember the anxiety I felt when we presented results from our media buys.  Paid search of course looked great, with a very low Cost Per Action.  Email produced decent results but clearly took a back seat to search.  The media step-child we call Display accounted for a majority of the spend, and appeared to be a complete waste of money with a CPA that was 10x that of search.  Naturally my client asked the question: “why on earth are we spending money on display when search is so much more efficient?”

My answer was that 1) we were buying all the search that was available, and 2) we believed the investment in display was creating awareness that ultimately drove more searches.  We cited the increase in clicks and leads from search that correlated with the growth in display advertising, but lacked hard data to support our thesis.  Fortunately my client agreed, but also challenged me to do a better job of justifying the media plan.  We did what we could at that time and found that 20% of those who converted from paid search had previously clicked on a display ad.  But that was the extent to which we could attribute credit for the awareness created by display ads.

Now, 7 years later, we’ve evolved significantly as an industry. We can now view comprehensive conversion paths and use machine learning and statistical analysis to allocate credit to each impression, click and interaction that influences an online conversion.  We can show the true ROI from online advertising, show the role and ROI of each channel, publisher and placement, calculate optimal frequency and quantify wasted spend efficiently and cost-effectively. Capability-wise, we’ve come a long way since the days of last-click reporting, and since 2010 the industry has been ripe for massive adoption (learn why: Five Forces Driving Attribution).
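One of those capabilities – calculating optimal frequency – reduces to comparing conversion rates across frequency tiers once the paths are assembled. A minimal sketch, with invented numbers:

```python
# Invented tier data: (impressions served, conversions observed) for users
# grouped by how many ads they saw during the campaign.
tiers = {"1-2": (400_000, 120), "3-5": (300_000, 450),
         "6-10": (200_000, 380), "11+": (100_000, 90)}

# Conversions per 1,000 impressions by tier: the peak marks the most
# productive frequency range; impressions beyond it are candidate waste.
for tier, (imps, convs) in tiers.items():
    print(f"{tier:>5}: {1000 * convs / imps:.2f} conversions per 1k impressions")
# 1-2: 0.30   3-5: 1.50   6-10: 1.90   11+: 0.90  -> cap frequency near 10
```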

But despite these circumstances, most advertisers still measure and optimize the same way we did back in 2005.  They realize there’s a better way to do it, but for a variety of reasons (read more: “Ph.D Targeting, 1st Grade Metrics”) they stay rooted in outdated metrics that preclude them from optimizing spend and maximizing ROI.

In Geoffrey Moore’s classic high-tech marketing book “Crossing the Chasm” he defined five groups of adopters: innovators, early adopters, early majority, late majority and laggards.  According to Moore, an industry “crosses the chasm” when the early majority adopts new technology, which sparks rapid growth and value creation in that sector.  Adoption by the early majority constitutes the evolution from “fad” to “norm”.  When it comes to measurement of online advertising, these days are still ahead of us.

While some advertisers (innovators and early adopters) have raised the bar in media measurement and optimization, the early majority is still in the consideration stage.  For years they’ve been hearing about Attribution from innovators, and are now seeing case studies and thought leading research from early adopters.  Resembling the tortoise more than the hare, the early majority are reading POVs, evaluating solutions and building the business case to invest in Attribution and advanced analytics.  For the past 2 years I’ve wondered “will this be the year we cross the chasm?”  Despite high hopes for 2011, the big event has not yet happened.  A handful of brands and agencies dipped their toe in the water, but only a small percentage of advertisers have taken the plunge.

Earlier this year I again wondered if 2012 would be the year we cross the chasm.  Given that the year is 2/3 over, and most new initiatives take place in Q1 or Q2, I have my doubts.  But there is a lot more activity and focus on Attribution than in prior years, helped in part by Big Data fever that has everyone atwitter.  As noted in my prior post, mass adoption will happen when Advertisers start demanding better metrics, deeper insights and demonstrable improvement in ROI from their agencies.  It’s happening today in small numbers, but at an increasing rate.  It “feels” like we are close to the Tipping Point and I am hoping 13 (as in 2013) will be our lucky number.

As always, feel free to comment, tweet, like, post, share, or whatever it is you do in your own social sphere.  Thanks for stopping by!



It’s Hard to Solve Problems from an Ivory Tower

March 2nd, 2012

Today a colleague sent me a link to a new article on Attribution and media measurement with a request to share my thoughts. Written by a statistician, it was the latest in a series of published perspectives on how Attribution should be done. When I read it, several things occurred to me (and prompted me to blog about it):

  1. Are we still at a point where we have to argue against last-click attribution?  If so, who is actually arguing for it?  And are we already at a point where we can start criticizing those few pioneers who are testing attribution methodologies?
  2. Would a media planner (usually the person tasked with optimizing campaigns) understand what the author meant in his critique: “the problem with this approach is that it can’t properly handle the complex non-linear interactions of the real world, and therefore will never result in a completely optimal set of recommendations”?  It may be a technical audience, but we’re still marketers… right?
  3. The article discusses “problems” that only a few of the largest, most advanced advertisers have even thought about.  When it comes to analytics and media measurement, 95% of advertisers are still in first grade, using CTRs and direct-conversions as the primary metric for online marketing success. They have a lot of ground to cover before they are even at a point where they can make the mistakes the author is pointing out.

In reading the comments below the article, my mind drifted back to business school (or was it my brief stint in management consulting?) and the theoretical discussions that took place among pontificating strategists.   And then it hit me… even in one of the most innovative, entrepreneurial and growth-oriented industries, an Ivory Tower mindset somehow still persists in some corners of agencies, corporations, media shops and solution providers.  Not afraid to share my views, I responded to the article in what I hope was a polite and direct way of saying “stop theorizing and focus on the real problem.” Here is my post:

“…We all agree that you need a statistically validated attribution model to assign weightings and re-allocate credit to assist impressions and clicks (is anyone taking the other side of this argument?).  And we all agree that online is not the only channel that shapes brand preferences and drives intent to purchase.

I sympathize with Mr. X – it’s not easy (or economically feasible) for most advertisers to understand every brand interaction (offline and online) that influences a sale. The more you learn about this problem, the more you realize how hard it is to solve.  So I agree with Mr. Y’s comment that we should focus on what we can measure, and use statistical analyses (coupled with common sense) to reach the best conclusions we can. And we need to do it efficiently and cost-effectively.

While we’d all love to have a 99.9% answer to every question re: attribution and causation, there will always be some margin of error and/or room for disagreement. There are many practitioners (solution providers and in-house data science teams) that have studied the problem and developed statistical approaches to attributing credit in a way that is more than sufficient for most marketers.  Our problem is not that the perfect solution doesn’t exist. It’s that most marketers are still hesitant to change the way they measure media (even when they know better).

The roadblocks to industry adoption are not the lack of smart solutions or questionable efficacy, but rather the cost and level of effort required to deploy and manage a solution.  The challenge is exacerbated by a widespread lack of resources within the organizations that have to implement and manage them: the agencies who are being paid less to do more every year.  Until we address these issues and make it easy for agencies and brands to realize meaningful insights, we’ll continue to struggle in our battle against inertia. For more on this, see “Ph.D Targeting & First Grade Metrics…”

I then emailed one of the smartest guys I know (data scientist for a top ad-tech company) with a link to the article and thought his reply was worth sharing:

“I think people are entirely unrealistic, and it seems they say no to progress unless you can offer Nirvana.”

This brings me to the title of this post: it’s hard to solve problems from an Ivory Tower.  Note that this is not directed at the author of the article, but rather at a mindset that persists in every industry.  My point is that armchair quarterbacks do not solve problems. We need practical solutions that make economic sense.  Unless you are blessed with abundant time, energy and resources, you have to strike a balance between “good enough” and the opportunity cost of allocating any more time to the problem.   This is not to say shoddy work is acceptable; as stated above, statistical analysis and validation is the best practice we preach and practice.  But even so-called “arbitrary” allocation of credit to interactions that precede conversions is better than last-click attribution.  It all depends on your budget, resources and the value of advanced insights.  Each marketer needs to determine what is good enough, and how to allocate their resources accordingly.

Most of us learned this tradeoff when studying for finals in college: if you can study 3 hours and make a 90, or invest another 3 hours to make a 97 (recognizing that 100 is impossible), which path would you choose?  In my book, an A is an A, and with those 3 additional hours you could have prepared for another test, sold your text books or drank beer with your friends.  Either way, you would extract more value from your limited time and energy.

To sum up, we need to focus our energies away from theoretical debates on analytics and media measurement, and address the issues that prohibit progress.  The absence of a perfect solution is not an excuse to do nothing. And more often than not, the perfect solution is not worth the incremental cost and effort.

As always, feel free to comment, tweet, like, post, share, or whatever it is you do in your own social sphere.  Thanks for stopping by!





OMMA Metrics Interview: Multichannel Attribution and Insights

August 30th, 2011

Encore founder and CEO Steve Latham was recently interviewed by Erick Mott from Creatorbase at OMMA Metrics 2011.

Key questions answered:
  • What is attribution?
  • What does Encore Media Metrics do?
  • How do “last-click” models compare to attribution analysis?
  • How can media spend be optimized by using attribution analysis?

As always, feel free to comment and share!

The Encore Team



Display Advertising Landscape

June 10th, 2011

In early June I was fortunate to be one of 350 ad tech CEOs who attended LUMA Partners’ Digital Media Summit in NYC, featuring the best and brightest in the industry.  I’ve been to some great networking events before (IAB, 4A’s, etc.) but this was tough to beat.

In addition to meeting some amazing people, one of the highlights was the release of the latest display ad landscape, or “LUMAscape,” aka “the slide,” originally produced by Terence Kawaja in 2010.  For those who are new to display advertising (or have been out of the market for the last 3 years), buying display media is like buying a house: you also need phone service, internet, cable, gas, electricity, dog-walking, etc.  In this case, Media is the house; ancillary services include ad verification, OBA compliance, data/tag management, audience measurement, ad serving, and our favorite: attribution.

The newest version of the slide is getting ever closer to accurately depicting all the segments and sub-segments that comprise the digital advertising landscape.  It also marked the debut of Encore Media Metrics as a recognized leader in the Attribution and Measurement category.

“The Slide” may also be viewed on Slideshare, or you can download the LUMA Display Landscape here.

The industry is extremely fragmented, and is likely to stay that way for a while.  So if you want to play in the display advertising space (as a buyer, seller or manager) you need to understand the difference between a DSP, DMP and SSP without yelling “WTF!”  Yes, it’s easier said than done, but this map should help you get started.

Steve Latham (@stevelatham)





The Five Forces Driving Attribution: Media Measurement Comes of Age

May 27th, 2011

Advertisers are (finally) looking beyond the last click.  Here is an overview of the Five Forces that are driving adoption (also published by MediaPost in May 2011)

It’s been 3 years since measurement buzzwords “attribution” and “engagement mapping” emerged with great anticipation and excitement in online advertising.  The idea of looking across digital channels and beyond the last click to measure media throughout the funnel was thought to be the holy grail in online marketing.  Recognizing that “last-click wins” is insufficient for measuring the brand-building attributes of display media, brands, agencies and media vendors saw Attribution as the next big thing in digital advertising.

Yet as we entered 2011, very few marketers were using Attribution to measure and optimize online media spend.  Despite the universal desire for better measurement, most were still using old metrics (click-through rates, cost per click and direct cost per action) to analyze paid media.  Greg Papaleoni, who develops Analytics and Insights for Yahoo! Advertising Solutions, sums it up well: “While Full Funnel Attribution is the future of the ever-evolving digital media measurement landscape – it should be the present.  Those advertisers who embrace and implement this logic, methodology and technology sooner rather than later will enjoy a massive advantage over their competition.”

While adoption has been slow to date, this is changing quickly due to the convergence of numerous factors.  Borrowing from Michael Porter’s “Five Forces” model for analyzing industries, here is my take on the Five Forces that are driving digital media attribution (author note – I received permission from Professor Porter to adapt his model to this category):

1. The continuing shift of media budgets from traditional to digital

While total U.S. media spend will grow only 3% in 2011, digital spend will grow 14%, surpassing Newspaper as the #2 medium.  Digital, which accounts for almost 30% of daily media consumption, will continue to outpace all other channels for the foreseeable future.

2. The resurgence of display advertising

Per eMarketer, Display media spend will grow 14% in 2011, outpacing 10.5% growth in paid search.  While there are many reasons behind the growth (consumption of social, video and mobile content, better targeting capabilities, real-time bidding, richer formats, etc.) I believe the resurgence of display is driven by two primary factors:

  • The maturing of search: There are only so many searches every day, and most marketers have optimized their paid search efforts.  For the big advertisers, there are no more keywords to buy.  As one search exec recently put it, “paid search inventory is maxed out.”  Incremental dollars will have to go elsewhere.  Display is the obvious choice.
  • The return of branding:  As the economy recovers, marketers are re-investing in their brands.  During lean times, online dollars focused on harvesting existing demand (via search).  But with the improving economy, brand-building is once again a strategic priority.  In the digital realm, display media offers the most efficient, effective and scalable way to create awareness, consideration and preference for brands, products and services.

3. Increasing focus on accountability

While marketing budgets may have loosened, the focus on results has not.  As a result, marketers are keeping a very close eye on ROI from “brand-building” media.  With the ever-increasing need to show ROI, brands now want branding plus performance.  To properly measure brand-building media, we need to measure engagement, not clicks.

4. Evolution of web architecture

Recent forays by IBM and Oracle into the marketing arena signal a new wave in the convergence of IT and Marketing.  As the IT behemoths push technology-based marketing solutions, CIOs are becoming more attentive to the needs of the marketing department.  The deployment of data management and universal tagging platforms enables advanced analytics and media measurement that were off-limits to marketers in the past.  With this roadblock removed, the stage is set for new measurement tools to be deployed across their digital infrastructure.

5. The emergence of better Attribution solutions

While early Attribution solutions were expensive and limited in capabilities (e.g., they couldn’t attribute credit for organic conversions), a new breed of point-solution vendors (including my company, Encore Media Metrics) is now offering more effective, flexible and affordable solutions.  For a very modest investment (as low as 1-2% of media spend), advertisers can now have a much more holistic and accurate view into the performance of each channel, vendor, format, placement and keyword.  These insights are enabling advertisers to optimize media budgets, yielding 20-40% gains in revenue.  The immediate return on investment in Attribution solutions may reach 1-20x (100%-2,000%).
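As a sanity check on that range, a quick back-of-the-envelope calculation. The media budget is hypothetical; the percentages are the ones cited above, with the revenue gain expressed relative to media spend:

```python
# Hypothetical budget; the percentages are the ranges cited above.
media_spend   = 1_000_000            # annual digital media budget (assumed)
solution_cost = media_spend * 0.015  # attribution at ~1.5% of media spend
revenue_gain  = media_spend * 0.30   # ~30% gain, expressed against spend

roi_multiple = (revenue_gain - solution_cost) / solution_cost
print(f"cost ${solution_cost:,.0f} -> gain ${revenue_gain:,.0f} "
      f"-> ROI {roi_multiple:.0f}x")
# cost $15,000 -> gain $300,000 -> ROI 19x, within the 1-20x range cited
```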

The Five Forces Driving Attribution are illustrated below:

[Figure: The Five Forces Driving Attribution]

As our business objectives change, so must the manner in which we measure results.  As dollars continue to flow into digital, brands and their agencies must use more efficient, accurate and effective metrics for measuring media throughout the funnel.  The emergence of more advanced and affordable Attribution solutions, bolstered by growing buy-in from IT departments, is paving the way for Attribution to become a foundational component of the digital marketing ecosystem.

Matt Miller, SVP of Strategy & Analytics at Performics, agrees, stating “Attribution is one of the top priorities for us and our advertisers.  Focus on attribution will only increase as advertisers build and implement strategies to maximize ROI across all digital channels.”

As always, comments are encouraged. And please feel free to share!

Steve Latham (@stevelatham)


