
In a recent study[1], 29 teams of statistical experts were asked to answer a correlation question based on a single dataset. Twenty of them found a statistically significant correlation; nine did not. And across the teams, the reported findings varied enormously, from a ‘slight (and non-significant) correlation’ to a ‘strong trend’.
Why? Because each team used a different analytical approach to interpret the data.
It’s a scary idea in a world where companies are increasingly relying on research and data analytics to help them make ‘evidence-based’ decisions. How do you ensure the ‘evidence’ being used is reliable? The answer lies in combining the science of statistical analysis with the art of business understanding.
How to make the most of experimental data
For example, when you’re dealing with experimental data, how the experiment is set up in the first place shapes the analysis that follows. If a credit card concept test is being launched via telemarketing, the statistician may use customers’ disposition towards receiving marketing calls as a ‘balancing’ variable.
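To make the ‘balancing’ idea concrete, here is a minimal sketch in Python of stratified (block) randomization. The customer table and the call_disposition field are entirely hypothetical stand-ins, not part of any real test:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical customer table: 'call_disposition' buckets customers by how
# receptive they have been to past marketing calls.
customers = pd.DataFrame({
    "customer_id": np.arange(1_000),
    "call_disposition": rng.choice(["low", "medium", "high"], size=1_000),
})

# Stratified (block) randomization: assign test/control *within* each
# disposition stratum so the two groups stay balanced on this variable.
def assign_within_stratum(group: pd.DataFrame) -> pd.DataFrame:
    arms = np.tile(["test", "control"], len(group) // 2 + 1)[: len(group)]
    out = group.copy()
    out["arm"] = rng.permutation(arms)
    return out

assigned = (
    customers
    .groupby("call_disposition", group_keys=False)
    .apply(assign_within_stratum)
)

# Sanity check: the test/control split should be ~50/50 in every stratum.
print(assigned.groupby(["call_disposition", "arm"]).size())
```

Because test and control are split within each disposition stratum, any difference in outcomes can’t be explained away by one group simply containing more call-friendly customers.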
Good experimental designers understand which variables need to be controlled, which may influence an outcome, and how to avoid proxy effects. This in turn requires your researchers to have a strong understanding and appreciation of marketing actions, consumer decision making and anticipated outcomes. Look for teams of market researchers and statisticians who work together to understand the business issues before they set up your experiment.
How to minimize interpretation errors
In market research, it rarely makes a material difference if you misinterpret ‘confirmatory’ analysis. If the marketing team strongly believes that Pack A is the one to launch over Pack B, research that refutes that belief is likely to be heavily scrutinized. If, on the other hand, the superiority of Pack A is not clear, the analysis is less likely to be challenged. But if the packs are that similar to start with, misreading the results and launching B instead of A is unlikely to make a significant dent in your business.
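For illustration, a confirmatory pack test often reduces to a single significance test on purchase counts. The numbers below are invented, and the test (a standard chi-square from scipy) is just one reasonable choice, but they show why a near-tie is low-stakes:

```python
from scipy.stats import chi2_contingency

# Invented pack-test results: purchases vs. non-purchases per pack,
# from 1,000 exposures each. The two packs perform almost identically.
observed = [
    [132, 868],  # Pack A: bought, did not buy
    [118, 882],  # Pack B: bought, did not buy
]

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"Pack A conversion: {132 / 1000:.1%}")  # 13.2%
print(f"Pack B conversion: {118 / 1000:.1%}")  # 11.8%
print(f"p-value: {p_value:.3f}")               # not significant here

# When the packs are this close, even calling the 'winner' wrong costs
# little, which is the point made above about confirmatory analysis.
```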
In contrast, ‘exploratory’ analysis carries a higher risk of misinterpretation – or at least of missing the real value in the data. When you’re using research to explore, the starting point is not ‘which pack is better?’ but ‘what can we learn from the pack test?’ Getting this right is harder than it sounds: when you don’t know what you don’t know, how do you know what to pay attention to?
In India in the late 90s, the growth of Kit Kat was posing a strong challenge to Cadbury. The confectionery giant wanted to segment its market to identify white-space opportunities for growth. We tried segmenting by occasion-based needs, but the statistical outputs were frustrating. Most of the reasons for buying chocolate skewed heavily towards a single large segment – clearly not a successful outcome for statisticians tasked with ‘breaking’ the data into multiple segments.
However, having double- and triple-checked the data – and still found no discernible segmentation possibilities – we went back and thought about what else the data might be telling us. What creative interpretation could we draw? Eventually we realized that our inability to segment revealed a vital insight: people don’t need a special occasion to buy chocolate – they just love it!
This ‘moment of truth’ was one of the triggers for Cadbury’s very successful ‘Khane walon ko khane ka…’ campaign, which was a powerful force in extending the brand’s relevance from children to adults.
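For the statistically minded, here is what that kind of ‘failed’ segmentation can look like in diagnostics. This is a sketch on synthetic data, using k-means and silhouette scores from scikit-learn, not the original analysis:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the occasion-needs data: 1,000 respondents rating
# six reasons for buying chocolate. One shared pattern underlies all the
# answers, i.e. there is no real segment structure to find.
ratings = rng.normal(loc=3.0, scale=0.5, size=(1_000, 6))

# Ask k-means to 'break' the data into segments anyway, then check
# whether the segments it returns are actually distinct.
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(ratings)
    score = silhouette_score(ratings, labels)
    print(f"k={k}: silhouette = {score:.2f}")

# Uniformly low silhouette scores (well below the ~0.5 typical of clearly
# separated clusters) suggest the 'segments' are arbitrary slices of one
# homogeneous population. That null result is itself the insight.
```

The tooling is incidental; the point is that a segmentation that refuses to break apart is data too, and worth interpreting rather than forcing.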
In another example, a technology player that had worked hard to become the ‘first recommended’ brand in retail stores couldn’t understand its research data. All the analysis pointed to the fact that a competitor, the second recommended brand, was making equal – and often better – sales. When we looked closely at the in-store sales situation, we found a classic sales dynamic at play. Customers saw the first recommended brand as a ‘retail push’ and tended to opt for the second product the sales staff recommended. What the data was really showing was that the coveted ‘first recommendation’ spot didn’t offer the advantage the client was hoping for. The client dropped its focus on recommendation and started looking more closely at consumer behaviour.
The takeaway is clear. Data analytics can help organizations make much smarter decisions – but only if the people working with the data understand the business and consumer environments and can make the insightful leap of creative interpretation.
[1] R. Silberzahn (IESE, Spain) & E. L. Uhlmann (INSEAD, Singapore), ‘Crowdsourced research: Many hands make tight work’, Nature 526, 189–191 (2015), http://bit.ly/1FVTsLc