One evergreen topic that continues to prove its importance is ensuring that attention goes to the right things while everything else with a justified business case is automated. We have reached an inflection point where the frontline skills required for a successful company have far surpassed the days of data entry and repeatable tasks. Frankly, these jobs in the digital space are going away, and that continues to push talent toward higher-caliber functions. This is not doom and gloom for those pursuing management or operations positions; in fact, it reinforces that the opportunity lies with them to drive automation and efficiency.
For many companies a significant opportunity is capitalizing on accessible data. Yet even with data “accessible,” the frontline continues to spend more time locating, organizing and pulling data than analyzing it. With data storage becoming ever cheaper, the volume of accessible data is rising, and analyzing more of it adds latency to the production of comprehensive insights for clients and internal stakeholders. This is a weak spot for management and operations teams: data aggregation can be automated, yet the least time is spent where human contribution matters most, providing context and perspective on the results. This behavior is what I refer to as The Minimal Analysis Trap (TMAT), where the effort to assemble data is so fatiguing that the conclusion reached about it is only simple and cursory.
"With emerging data aggregation toolsets in the hands of operations teams it is promising that they will be able to easily roll up what is important and evolve more rapidly as the industry changes or requires flexibility"
There are a variety of ways the frontline can find themselves in TMAT. The first and most common is a lack of management clarity around which metrics are important to the business and how they should be measured. While common sense in nature, this does require leaders to step back, review the lowest-level metrics and determine which of them represent a subset of a Key Performance Indicator (KPI) that warrants monitoring and focus. If you’re a media buyer it might be Return On Ad Spend. If you’re a publisher it might be Revenue per Thousand Impressions. Regardless of which KPI matters to a business, the frontline is often littered with an excessive number of data points to analyze. Without clear direction, the conclusions reached about data can be incomplete, resulting in frontline revision grinds or a general acceptance from leaders of low-quality output. Take a metric like Click Through Rate: there have been countless examples of fraudulent traffic purchased to make this metric look appealing to ad buyers even though the traffic converts poorly. Too often, however, ad buying renewal decisions are made on this metric alone without additional perspective.
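To make the Click Through Rate trap concrete, here is a minimal sketch with entirely hypothetical numbers: two traffic sources with identical spend, where the one with the flashier CTR delivers far worse conversion and Return On Ad Spend. The figures and source names are invented for illustration only.

```python
# Hypothetical illustration: two made-up traffic sources, same ad spend.
# A high CTR alone can mask poor downstream performance.

def ctr(clicks, impressions):
    """Click Through Rate: clicks per impression."""
    return clicks / impressions

def conversion_rate(conversions, clicks):
    """Conversions per click -- the context that CTR alone omits."""
    return conversions / clicks

def roas(revenue, ad_spend):
    """Return On Ad Spend: revenue per dollar spent."""
    return revenue / ad_spend

# Source A: suspiciously high CTR, almost no conversions (e.g. bot traffic).
# Source B: modest CTR, healthy conversions.
source_a = {"impressions": 100_000, "clicks": 5_000, "conversions": 5,
            "revenue": 250, "spend": 1_000}
source_b = {"impressions": 100_000, "clicks": 1_000, "conversions": 50,
            "revenue": 2_500, "spend": 1_000}

for name, s in [("A", source_a), ("B", source_b)]:
    print(name,
          f"CTR={ctr(s['clicks'], s['impressions']):.1%}",
          f"CVR={conversion_rate(s['conversions'], s['clicks']):.1%}",
          f"ROAS={roas(s['revenue'], s['spend']):.2f}")
```

Judged on CTR alone, source A looks five times better; judged on conversion and ROAS, it is the clear loser, which is exactly the incomplete conclusion TMAT produces.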
A second and related way the frontline finds themselves in TMAT is that the greater industry lacks clarity around which metrics are universally relevant and how they should be calculated. In the past decade a prominent example of this challenge was how ad viewability is calculated. If you used the Media Ratings Council’s formula you would get noticeably different results than some large media agencies, often without clear transparency or rationale from them on why their variation was more justifiable. This puts a significant tax on the frontline when manual data aggregation is in play, because the margin for error in keeping formulas and analyses correct grows when working across multiple clients. It becomes even more challenging when reporting to internal stakeholders on how effective the business overall has been at delivering success across those clients.
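A brief sketch of how divergent definitions tax the frontline: the MRC display standard counts an impression as viewable when at least 50% of its pixels are in view for at least one continuous second, while some buyers apply stricter thresholds. The impression records below are hypothetical, but they show how the same data yields different viewability rates depending on whose formula is applied.

```python
# Same impression data, two viewability definitions, two different answers.
# Each record is a hypothetical impression: share of pixels in view and
# continuous seconds in view.

def viewable_rate(impressions, min_pct, min_seconds):
    """Share of impressions meeting a given viewability threshold."""
    viewable = [i for i in impressions
                if i["pct_in_view"] >= min_pct and i["seconds"] >= min_seconds]
    return len(viewable) / len(impressions)

impressions = [
    {"pct_in_view": 0.55, "seconds": 1.2},   # viewable under MRC-style rule
    {"pct_in_view": 0.95, "seconds": 0.8},   # in view, but too briefly
    {"pct_in_view": 1.00, "seconds": 3.0},   # viewable under either rule
    {"pct_in_view": 0.40, "seconds": 5.0},   # too few pixels in view
]

mrc_rate = viewable_rate(impressions, min_pct=0.50, min_seconds=1.0)
strict_rate = viewable_rate(impressions, min_pct=1.00, min_seconds=2.0)
print(f"MRC-style: {mrc_rate:.0%}, stricter buyer rule: {strict_rate:.0%}")
```

When analysts maintain variations of this formula by hand across many clients, keeping each client's thresholds straight is exactly where the margin for error grows.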
Having data readily accessible is still a challenge even in a world with standardized metrics. Historically, data aggregation has been mostly a function of technologists who managed database architecture, table relationships and reporting interfaces. While this remains a technology function at many large corporations, it requires an ongoing relationship between operations and technology to be sustainable, and operations needs to be the driver. Unfortunately, these in-demand technologists understandably tend to seek more cutting-edge innovation work in lieu of developing operational efficiency tech. This gap makes support and subject matter knowledge volatile and unsustainable, which halts timely business decision making. The positive trend on this front is that this common problem has spawned many third-party providers offering services that can receive data sets from different origins, whether through preestablished API connections or emailed data sets, empowering operations teams to aggregate the data themselves and stand on more stable footing.
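The core of what those third-party services do can be sketched in a few lines: take exports from different origins with different column names, map them onto one canonical schema, and roll the results up. The vendor exports, column names, and figures below are all invented for illustration; real connectors pull from APIs rather than inline strings.

```python
# Minimal sketch of operations-side aggregation: two hypothetical CSV
# exports with differing column names, normalized into one roll-up.
import csv
import io
from collections import defaultdict

# Simulated exports from two hypothetical origins (all names are made up).
vendor_a = "campaign,impressions,spend\nspring_sale,1000,50\nbrand,2000,80\n"
vendor_b = "Campaign Name,Imps,Cost\nspring_sale,500,20\nbrand,1500,60\n"

# Map each vendor's headers onto one canonical schema.
SCHEMAS = {
    "a": {"campaign": "campaign", "impressions": "impressions", "spend": "spend"},
    "b": {"Campaign Name": "campaign", "Imps": "impressions", "Cost": "spend"},
}

def normalize(raw_csv, schema):
    """Yield rows from a raw CSV export with columns renamed canonically."""
    for row in csv.DictReader(io.StringIO(raw_csv)):
        yield {schema[col]: value for col, value in row.items()}

# Roll up totals per campaign across both origins.
totals = defaultdict(lambda: {"impressions": 0, "spend": 0.0})
for raw, key in [(vendor_a, "a"), (vendor_b, "b")]:
    for row in normalize(raw, SCHEMAS[key]):
        totals[row["campaign"]]["impressions"] += int(row["impressions"])
        totals[row["campaign"]]["spend"] += float(row["spend"])

print(dict(totals))
```

The point is not the code itself but who owns it: once the schema mapping lives with operations rather than a scarce technologist, the roll-up survives staff turnover and shifting vendor formats.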
With emerging data aggregation toolsets in the hands of operations teams it is promising that they will be able to easily roll up what is important and evolve more rapidly as the industry changes or requires flexibility. This can shift management focus away from resolving the negative outcomes of TMAT and toward evaluating whether frontline talent is trained and capable of providing enriching insights and analyses with readily accessible data. Once operations and management leaders have adequately removed ambiguity from success metrics and equipped their teams with the tools to aggregate data autonomously, businesses can develop a deeper relationship with their own health through higher quality insights.