Better Decisions with Data: Asking the Right Question
July 5, 2024
In this Nano Tool for Leaders, Stefano Puntoni and Bart De Langhe, authors of “Decision-Driven Analytics,” explain how to get the most out of your data.
Nano Tools for Leaders® — a collaboration between Wharton Executive Education and Wharton’s Center for Leadership and Change Management — are fast, effective tools that you can learn and start using in less than 15 minutes, with the potential to significantly impact your success and the engagement and productivity of the people you lead.
To get the most from your data (and collect the data you really need), understand what you are asking for.
Nano Tool
Most businesses rightly see data as a source for making better decisions. But the conventional data-driven approach often falls short because of common errors made by the decision-makers. Many leaders excessively rely on existing data that may or may not address the issue at hand, or they pass key decisions to data scientists who don’t really understand the business dilemma they are trying to solve. Decision-makers are also prone to leading with a preference, arriving at a solution, and then finding the data to back it up.
Alternatively, decision-driven analytics puts decision-makers at the center, resolving the common mismatch between analytics and actual business decisions. It starts from the decision that needs to be made and works backward toward the data that is needed. But it also requires more from leaders, who must shift the focus from getting answers to asking the right questions. This approach highlights the strategic importance of what we don’t know, underscoring the importance of intellectual humility. In fact, crafting the right questions is an essential and foundational step in the data-analysis process.
Action Steps
1. Clarify the decision.
When determining the decision(s) to be made, focus on options within your control and pertinent to your organizational position. Since most decision-makers gravitate toward familiar options, broaden yours by engaging with diverse perspectives: ask trusted colleagues or gather your team for further clarification. The choice set should include options that are both feasible and impactful, and that can genuinely enhance desired business outcomes. It should exclude options with prohibitive costs or risks. During this critical first step, data, especially existing data, should not be a consideration. If analytics starts from the data that is available instead of the decision to be made, it increases the chances of asking the wrong type of question. That's not to say that data mining and exploratory data analysis have no role in business; they are valuable, but they serve a different goal. Decision-driven analytics assumes that the first responsibility of business executives is to maintain a razor-sharp focus on their key responsibilities, and that this works best when they start from the decision at hand and work backward toward the data that is needed.
2. Ask a “factual” question.
Ask a “factual” question when you are seeking a prediction. Businesses need to answer this type of question frequently. For example, the smooth running of manufacturing operations rests on predictions about when a machine is likely to start failing. That information can then be used to create a maintenance plan. For retailers, understanding the cost of dealing with returns is similarly important: It would be incredibly valuable to identify which products have a higher propensity to be returned. With this knowledge, companies can strategize proactively either by adjusting the product pricing, considering additional measures to reduce returns, or even deciding against offering such products entirely.
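A factual question like "which products are returned most often?" can be answered directly from historical records. As a minimal illustration (with hypothetical data and a hypothetical threshold), a retailer might estimate each product's observed return rate and flag those above some tolerance:

```python
# Illustrative sketch with hypothetical data: estimating which products
# have a higher propensity to be returned, using historical return rates.
from collections import defaultdict

def return_propensity(orders):
    """orders: list of (product_id, was_returned) pairs.
    Returns {product_id: observed return rate}."""
    counts = defaultdict(lambda: [0, 0])  # product -> [returns, total orders]
    for product, was_returned in orders:
        counts[product][0] += int(was_returned)
        counts[product][1] += 1
    return {p: r / n for p, (r, n) in counts.items()}

def high_return_products(orders, threshold=0.3):
    """Flag products whose observed return rate exceeds the threshold
    (the 0.3 cutoff is an arbitrary example, not a recommendation)."""
    rates = return_propensity(orders)
    return sorted(p for p, rate in rates.items() if rate > threshold)

orders = [
    ("shoes", True), ("shoes", True), ("shoes", False),
    ("book", False), ("book", False), ("book", True), ("book", False),
]
print(high_return_products(orders))  # "shoes" returned 2/3, "book" only 1/4
```

In practice a company would predict return propensity from product and customer features rather than raw historical rates, but the shape of the question is the same: a prediction about what will happen, not about what an intervention would change.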
3. Ask a “counterfactual” question.
Ask a “counterfactual” question when you need to compare outcomes with and without a particular intervention. These questions involve hypothetical scenarios and demand causal inference, making them inherently more intricate than factual questions. For example, during the 2012 U.S. presidential campaign, Barack Obama’s team gained a crucial edge against Mitt Romney’s by asking “Who is most susceptible to being persuaded to vote for Obama?” This counterfactual question allowed the team to identify swing voters most inclined to change their voting behavior when visited by a campaign worker, rather than targeting those most likely to vote for Obama or hard-core Romney supporters — both of which would have squandered valuable resources. Their approach revolutionized the way political campaigns were conducted, emphasizing precision and efficiency in mobilizing support and persuading undecided voters.
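The difference between the two question types can be made concrete with a toy sketch. Using invented numbers for illustration, ranking voter segments by predicted support (factual) points to one group, while ranking by the predicted effect of contact (counterfactual) points to another:

```python
# Illustrative sketch with hypothetical numbers: a factual ranking
# (who is most likely to vote for the candidate) versus a counterfactual
# ranking (whose behavior a campaign contact would most likely change).
segments = {
    # segment: (P(vote for candidate), estimated effect of being contacted)
    "core supporters":    (0.90, 0.01),
    "leaning supporters": (0.65, 0.04),
    "swing voters":       (0.45, 0.12),
    "opposition base":    (0.10, -0.02),  # contact can even backfire
}

by_support = max(segments, key=lambda s: segments[s][0])
by_uplift = max(segments, key=lambda s: segments[s][1])
print(by_support)  # "core supporters" -- where the factual question points
print(by_uplift)   # "swing voters" -- where the counterfactual question points
```

Targeting by the factual ranking would spend resources on people who will vote for the candidate anyway; the counterfactual ranking directs effort where it changes the outcome.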
How An Organization Might Use It
One of Hewlett Packard’s (HP’s) business strategies is the “Instant Ink” subscription that lets customers pay a monthly fee to receive printer ink delivered to their home and cancel the subscription at any time. Imagine that HP wants to use data to proactively address customer attrition, intervening with incentives such as discounts that they hope would stop customers from canceling. But because they cannot offer those incentives to everyone, they need to know whom to target.
If they ask a factual question, "Who is most likely to cancel?", and then target those customers, they would get an answer that could be useful in many ways, but it would not inform the decision about whom to target: it ignores the possibility that incentives might only work for certain customers. The question HP really needs to ask is a counterfactual one: What effect are incentives likely to have on specific customers? Answering it requires collecting and analyzing new data.
The best approach is a randomized experiment. HP would randomly divide a group of customers into two groups, offer an incentive to one, and then monitor the attrition rates of both. If attrition were lower in the incentivized group, HP could conclude that the incentive is effective, and then look further at which customers responded. This randomized experiment is a dramatic departure from the "best practice" of focusing only on customers at the highest risk of canceling their subscriptions. Instead, it reveals whether incentives could work to reduce cancellations, and if so, which type of customer to target with those incentives.
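The experiment described above can be sketched in a few lines. This is a simulation with invented attrition rates (20% without the incentive, 12% with it), not HP's data; the point is only the mechanics of random assignment and comparing group outcomes:

```python
# Illustrative sketch with simulated data: a randomized experiment that
# estimates an incentive's effect on attrition, as described above.
import random

def run_experiment(customers, attrition_prob, seed=0):
    """Randomly assign each customer to 'treatment' (gets the incentive)
    or 'control' (does not), simulate churn using the group's assumed
    attrition probability, and return the observed attrition rate per group."""
    rng = random.Random(seed)
    outcomes = {"treatment": [], "control": []}
    for _ in customers:
        group = rng.choice(["treatment", "control"])
        churned = rng.random() < attrition_prob[group]
        outcomes[group].append(churned)
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

# Hypothetical numbers: the incentive cuts attrition from 20% to 12%.
rates = run_experiment(range(10_000), {"control": 0.20, "treatment": 0.12})
uplift = rates["control"] - rates["treatment"]
print(f"control={rates['control']:.3f} treatment={rates['treatment']:.3f} "
      f"estimated effect={uplift:.3f}")
```

Because assignment is random, the difference between the two groups' attrition rates is an unbiased estimate of the incentive's causal effect; a real analysis would also report a confidence interval and, as the text notes, break the effect down by customer type.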
Contributors to this Nano Tool
Stefano Puntoni, PhD, Sebastian S. Kresge Professor of Marketing; Professor of Marketing; Co-Director, Wharton Human-Centered Technology Initiative, the Wharton School; Bart De Langhe, PhD, Professor of Marketing, KU Leuven and Vlerick Business School; founder of Behavioral Economics and Data Analytics for Business (BEDAB); co-authors of Decision-Driven Analytics: Leveraging Human Intelligence to Unlock the Power of Data (Wharton School Press, 2024).