Read & Repeat
 

AI is no Magic Wand for Data-Driven Business Decisions

Data analytics can provide great insights and improve how a business is run. But it's important to understand the side effects. Otherwise, you could end up improving one metric a little while actually increasing overall costs. 🤷‍♂️

  • Paper: Why You Aren’t Getting More from Your Marketing AI. Chances are, you haven’t asked the right questions.
  • Summary: "Fewer than 40% of companies that invest in AI see gains from it, usually because of one or more of these errors: (1) They don’t ask the right question [...]. (2) They don’t recognize the differences between the value of being right and the costs of being wrong [...]. (3) They don’t leverage AI’s ability to make far more frequent and granular decisions [...]. If marketers and data science teams communicate better and take steps to avoid these pitfalls, they’ll get much higher returns on their AI efforts."
  • By: Eva Ascarza, Michael Ross & Bruce G.S. Hardie, 2021 (via Harvard Business Review)

Published in May 2023

While reading HBR’s 10 Must Reads 2023: The Definitive Management Ideas of the Year from Harvard Business Review (on Amazon) at the gym, I thought that the 2021 article “Why You Aren’t Getting More from Your Marketing AI” addresses a basic issue in how many people approach business problems and AI – or, more accurately, Machine Learning and other statistical methods and tools.

I only highlight a few quotes here, but the whole paper is well worth reading.

The managers had made a fundamental error: They had asked the algorithm the wrong question. While the AI’s predictions were good, they didn’t address the real problem the managers were trying to solve.

I found it particularly striking that the models might work as intended and the quality of their output might be great – yet they are still only as good as the questions and inputs they are given.

Alignment: Failure to Ask the Right Question
The real concern of the managers at our telecom firm should not have been identifying potential defectors; it should have been figuring out how to use marketing dollars to reduce churn.

Even with the most advanced tools, it is still easy to focus on what seems obvious instead of on the underlying problem you actually want to solve.

By giving the AI the wrong objective, the telecom marketers squandered their money on swaths of customers who were going to defect anyway and underinvested in customers they should have doubled down on.
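To make that concrete, here is a minimal sketch with made-up numbers – nothing from the paper or the telecom firm – of the difference between ranking customers by how likely they are to churn and ranking them by how much a retention offer is actually expected to change their behaviour:

```python
# Toy illustration (invented numbers, not the firm's actual models): ranking
# customers by churn risk vs. by the expected *effect* of a retention offer.

customers = [
    # (name, churn prob. without offer, churn prob. with offer, margin if retained)
    ("A", 0.90, 0.88, 100),   # very likely to leave, offer barely helps
    ("B", 0.60, 0.30, 100),   # at risk, but responds strongly to the offer
    ("C", 0.20, 0.15, 100),   # fairly safe either way
]

OFFER_COST = 10

def value_of_offer(p_churn_no_offer, p_churn_with_offer, margin):
    """Expected incremental profit from making the offer to this customer."""
    saved_prob = p_churn_no_offer - p_churn_with_offer
    return saved_prob * margin - OFFER_COST

# "Wrong question": who is most likely to churn?
by_risk = sorted(customers, key=lambda c: c[1], reverse=True)
print("Ranked by churn risk:   ", [c[0] for c in by_risk])

# "Right question": where does a marketing dollar reduce churn the most?
by_impact = sorted(customers, key=lambda c: value_of_offer(c[1], c[2], c[3]), reverse=True)
print("Ranked by offer impact: ", [c[0] for c in by_impact])
```

The customer with the highest churn risk is also the one the offer helps least, so targeting purely by risk spends the budget exactly where it achieves the least.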

Something I had also never really thought much about, but a question you should definitely ask when supporting data-driven decisions: how costly is each kind of inaccuracy? Maybe it’s safer and better for the business to err on one side rather than the other – and the team analysing the data should know about this too.

Asymmetry: Failure to Recognize the Difference Between the Value of Being Right and the Costs of Being Wrong
A bad forecast can be extremely expensive in some cases but less so in others; likewise, superprecise forecasts create more value in some situations than in others. Marketers—and, even more critically, the data science teams they rely on—often overlook this.

Inventory management and stocking decisions are probably the most straightforward example of this kind of misjudgement, which might not be fully built into a forecasting algorithm. The models and algorithms themselves are not really smart: they only operate within the parameters provided, and if the input is incomplete, the output will likely reflect that.

Unfortunately, in improving the system’s overall accuracy, they increased its precision with low-margin products while reducing its accuracy with high-margin products. Because the cost of underestimating demand for the high-margin offerings substantially outweighed the value of correctly forecasting demand for the low-margin ones, profits fell when the company implemented the new, “more accurate” system. […] data science teams that build prediction models, who then assume all errors are equally important, leading to expensive mistakes.
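A toy calculation – with numbers I made up, not taken from the paper – shows how this can happen: a forecast that wins on raw accuracy can still lose money once each error is weighted by what being wrong actually costs for that product.

```python
# Toy numbers for two products: a high-margin and a low-margin one. A missed
# unit of demand costs the lost margin; each unit of overstock costs a small
# holding/markdown cost. The "better" model wins on raw error but loses on cost.

products = {
    #              actual demand, margin per unit, overstock cost per unit
    "high_margin": {"demand": 100, "margin": 50, "overstock_cost": 5},
    "low_margin":  {"demand": 100, "margin": 5,  "overstock_cost": 5},
}

forecasts = {
    # model A: sloppy on the low-margin item, careful on the high-margin one
    "model_A": {"high_margin": 98, "low_margin": 85},
    # model B: better overall accuracy, but the error moved to the high-margin item
    "model_B": {"high_margin": 90, "low_margin": 99},
}

for name, fc in forecasts.items():
    abs_error = 0
    cost = 0
    for prod, info in products.items():
        miss = info["demand"] - fc[prod]           # positive: understock, negative: overstock
        abs_error += abs(miss)
        if miss > 0:
            cost += miss * info["margin"]          # lost sales on units we didn't stock
        else:
            cost += -miss * info["overstock_cost"] # leftover units to hold or mark down
    print(f"{name}: total absolute error = {abs_error}, cost of errors = {cost}")

# model_B has the smaller total error but the larger cost, because its errors
# landed on the product where being wrong is expensive.
```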

Understanding the effect of such AI-driven – or, more generally, data-driven – optimisation on the business is crucial for achieving positive results overall, rather than just improving some metric that might not really be all that important.

[…] many marketers don’t exploit that capability and keep operating according to their old decision-making models.
[…]
[…] quantify the potential gains of making AI predictions more frequently or more granular or both.

This is a generic point that applies not only to AI but to many other situations as well. New capabilities can only be used effectively if the processes around them are also adapted to leverage the new insights appropriately.
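As a toy illustration of what “quantifying the potential gains” of more granular decisions could look like – again with invented numbers, not anything from the paper – compare one blanket discount for the whole customer base with a separate decision per segment:

```python
# Toy sketch: one blanket discount for everyone vs. choosing per segment.

# Expected profit per customer for each segment at each discount level.
profit = {
    #                   0%    10%   20%
    "bargain_hunters": [2.0,  4.0,  3.5],  # respond strongly to a discount
    "loyal":           [6.0,  5.0,  4.0],  # would have bought anyway
}
segment_sizes = {"bargain_hunters": 1000, "loyal": 1000}
discounts = [0, 10, 20]

def total_profit(choice_per_segment):
    return sum(profit[s][choice_per_segment[s]] * segment_sizes[s] for s in profit)

# One decision for the whole base: pick the discount with the best total profit.
blanket_choices = [{s: i for s in profit} for i in range(len(discounts))]
best_blanket = max(blanket_choices, key=total_profit)

# One decision per segment: pick the best discount for each segment separately.
granular = {s: max(range(len(discounts)), key=lambda i: profit[s][i]) for s in profit}

print("Best blanket discount:", {s: discounts[i] for s, i in best_blanket.items()},
      "-> profit", total_profit(best_blanket))
print("Per-segment discounts:", {s: discounts[i] for s, i in granular.items()},
      "-> profit", total_profit(granular))
```

The per-segment policy only beats the blanket one because the segments respond differently – and because the process actually allows a different decision per segment.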

Marketing managers have to do a better job of communicating and collaborating with their data science teams—and being clear about the business problems they’re seeking to solve

Besides asking the right questions – the issue the paper started with – sharing enough context across the different teams is crucial for coming up with a good solution.

(Prompt for Craiyon to generate the header image: “You have to ask your Marketing AI the right questions!” / Style: Drawing.)