Analytics Project Tips
John (name changed) is a marketer for a leading apparel eTailer based in New York. John joined his current company recently and is struggling to make sense of the reports in his inbox. The #s and findings in the reports just don’t add up. Their analytics vendor sends out 10-15 reports each week, yet no one on his team uses them, and no one even knows who requested so many reports in the first place. The vendor blames poor data quality for the unreliable numbers and the low usage. John wonders how he can improve the quality of his analytics projects and whether there is a solution.
This is a typical scenario encountered by many a marketer / e-marketer. The root cause is a set of common mistakes that analytics teams (internal and external) make. What are these common but critical mistakes, and how can you make sure you avoid them?
1. Unused Reports / Unimplemented Models = No Perceived Value
Is your team / vendor doing work that is eventually not getting used?
You should first inventory your analytics projects and identify the key stakeholders for each. Identify the team members who will benefit most from each project, get them to review the work and use it to make decisions, and encourage them to tell you, sooner rather than later, if they no longer need it.
Sit with your team and your vendors every quarter to cut redundant work, so that only useful projects remain. A similar review before any major project launch ensures you invest only in work that will actually be used.
As a vendor, you need to take ownership of tracking the usage of each project you work on. Most BI tools today come with usage-tracking features, and Excel or other reports sent by email can be instrumented with email/link tracking tools to measure usage. Share the findings with your project sponsor to showcase the impact of your work. A more dramatic option for a suspect report is to pause it for 1-2 weeks and see if anybody complains; if no one does, you can retire it for good.
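As a minimal sketch of what that tracking loop might look like, the snippet below aggregates a hypothetical usage log (report_usage_log.csv with report_name and opened_at columns, the kind of export many BI tools or link trackers can produce) and flags reports with no opens in the last four weeks as candidates for the pause test. The file name, columns and report inventory are all illustrative assumptions.

```python
# A minimal sketch, assuming a hypothetical usage log ("report_usage_log.csv")
# exported from your BI tool or link-tracking service, with columns
# report_name and opened_at. The report inventory below is also made up.
import pandas as pd

usage = pd.read_csv("report_usage_log.csv", parse_dates=["opened_at"])
all_reports = ["Weekly Sales", "Channel Funnel", "Returns Deep-Dive"]

# Count opens per report over the last 4 weeks.
cutoff = pd.Timestamp.now() - pd.Timedelta(weeks=4)
opens = usage[usage["opened_at"] >= cutoff].groupby("report_name").size()

# Reports with zero recent opens are candidates for the pause test.
unused = [r for r in all_reports if opens.get(r, 0) == 0]
print("Candidates to pause:", unused)
```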
2. Erroneous Work = No Credibility
Is your team following due procedure when it comes to QA of the reports / analytics projects that are going out?
Are there a standard set of checks required before any report / analytics recommendation / model goes out? Is the process documentation detailed enough to avoid errors due to personnel changes? Are there enough QA check-points within a project? E.g., ensure that the # of customers in your modeling data set is in the ballpark.
QA is an essential ingredient for an error-free deliverable in any analytics project. You need to confirm that advanced analytics projects and presentations make business sense, and that the model itself is sound: validated on data it was not trained on and stable over time.
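As a minimal sketch of what such check-points could look like in code, the snippet below runs three sanity checks before a hypothetical modeling data set (modeling_dataset.csv with customer_id and revenue columns) goes out. The baseline figures and tolerances are illustrative assumptions that should come from your own historical numbers.

```python
# A minimal sketch of pre-delivery QA checks. The file, columns, baselines
# and tolerances below are illustrative assumptions, not prescriptions.
import pandas as pd

df = pd.read_csv("modeling_dataset.csv")

# 1. Customer count is in the ballpark of the known customer base.
expected_customers = 250_000                       # assumed baseline
n_customers = df["customer_id"].nunique()
assert 0.9 * expected_customers <= n_customers <= 1.1 * expected_customers, \
    f"Customer count looks off: {n_customers}"

# 2. Key fields do not have unexpected gaps.
assert df["revenue"].isna().mean() < 0.01, "Too many missing revenue values"

# 3. Totals reconcile with the source system within a tolerance.
reported_revenue = 12_400_000                      # assumed figure from finance
gap = abs(df["revenue"].sum() - reported_revenue) / reported_revenue
assert gap < 0.02, f"Revenue is {gap:.1%} away from the reported figure"

print("All QA checks passed")
```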
3. Data Discrepancy / Data Quality = Unreliable Deliverable
Who do you blame as an analytics team for poor data?
The Data Team (if you were not part of it) or just the stars (if you were). Data will never be perfect. You need to budget enough time in a project for data treatment and call out the assumptions made.
When multiple data sources are available, use the more recent, more reliable and less error-prone one. It can also happen that the data is clean for some channels and not so clean for others (e.g., online). In that case, a hybrid strategy might work better. A good first step is to gain access to all key data sources and compare the key metrics (KPIs) across them.
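A minimal sketch of that comparison is below, assuming two hypothetical daily extracts (one from web analytics, one from the order management system) that both carry orders and revenue; the file names, columns and the 5% tolerance are assumptions for illustration only.

```python
# A minimal sketch of a cross-source KPI comparison. File names, columns
# and the 5% tolerance are illustrative assumptions.
import pandas as pd

web = pd.read_csv("web_analytics_daily.csv")   # date, orders, revenue
oms = pd.read_csv("order_system_daily.csv")    # date, orders, revenue

merged = web.merge(oms, on="date", suffixes=("_web", "_oms"))
for kpi in ["orders", "revenue"]:
    rel_gap = (merged[f"{kpi}_web"] - merged[f"{kpi}_oms"]).abs() / merged[f"{kpi}_oms"]
    flagged = merged.loc[rel_gap > 0.05, "date"]
    print(f"{kpi}: {len(flagged)} day(s) differ by more than 5%")
```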
A website audit by a digital analytics vendor would be a good first step. This would help identify issues around usability, analytics implementation and reporting. Repeat this audit at least 2-4 times a year to ensure that new changes do not break the site or the data; a website that undergoes frequent changes might need more frequent audits.
4. Conflicting Stories = Poor Credibility
Does your report tell a story, a single story?
Getting a deliverable ready to present / share with your client is only the first step. Making sure that the #s and recommendations make sense is just as important. It is easy to get lost in the numbers and lose sight of the reason for the request. The “So-what” or the “5-Whys” technique will help you get to the bottom of what the numbers mean, and will also help you frame recommendations the client can act on.
Remember that “correlation is not causality” and that “averages are misleading” when making recommendations. You might need to drill down further to check that the recommendations you are making are valid.
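As a toy illustration of why the drill-down matters, the snippet below uses made-up numbers to show how a blended conversion rate can fall even while every individual channel improves, simply because traffic shifted toward the lower-converting channel (Simpson's paradox).

```python
# A toy drill-down with made-up numbers: the blended conversion rate drops
# between periods even though each channel's own rate improves.
import pandas as pd

data = pd.DataFrame({
    "channel": ["email", "email", "paid_search", "paid_search"],
    "period":  ["before", "after", "before", "after"],
    "visits":  [10_000, 2_000, 2_000, 10_000],
    "orders":  [500, 110, 40, 220],
})

# Blended rate across all channels.
overall = data.groupby("period")[["orders", "visits"]].sum()
overall["conversion_rate"] = overall["orders"] / overall["visits"]

# Rate by channel and period.
by_channel = (data.assign(conversion_rate=data["orders"] / data["visits"])
                  .pivot(index="channel", columns="period", values="conversion_rate"))

print(overall["conversion_rate"])  # blended rate falls from 4.5% to 2.75%
print(by_channel)                  # yet each channel's own rate went up
```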
5. Projects that don’t drive decisions / actions = No Value
Is your project making an impact?
You feel that you are doing some great work. But is it making an impact by driving decisions? Is your model implemented in the market? If the answer is “No”, you might not have given the project enough thought at the start. Invest time at the start of a project to ensure that you are solving the right problem and structuring it in the right way. A well-structured problem helps more than a cutting-edge tool or even a sophisticated technique.
Ensure that every report includes recommendations based on your interpretation of the data, and encourage users to provide feedback on those recommendations. Starting a conversation draws more stakeholders in to use and get value from the reports. Even if your findings get corrected, it is likely to be because of some information the team was missing. Request that extra information and use it to improve the recommendations; it could include campaign calendars, creatives, market dynamics or product launch schedules.
Author Profile: Randhir Hebbar is an entrepreneur and one of the founders of Convergytics – Asia’s leading analytics brand in 2015-16 (as per UK-based Global Brands Magazine). He heads the Digital Analytics and BI Practice at Convergytics and is also the Account Lead for several key accounts. He has consulted with dozens of leading Fortune 500 Retail, e-Tail, Technology, Telecom and Media companies over the past 15 years, and as an Analytics Leader within the organizations he has been a part of, has encouraged team members to strive to avoid the mistakes listed in this article.
The day analytics companies are rewarded for the outcomes/impact they achieve for a client, these issues will get automatically mitigated.
Of course, Prem. Outcome-based pricing/rewards might help avoid some of these.
But think of an internal analytics team. The client has a team doing all of his analytics and is still not getting results because of poor data quality, focus on unimportant problems, talent deficits, lack of guidance / mentoring of the team, lack of domain and technical knowledge, and many other issues that plague analytics teams today. Will outcome-based pricing solve all of those? Worth pondering.