Some decisions are narrow and can be modeled; others are wide and require stakeholder alignment. Understand this distinction to use the type of AI that will be most helpful.
Frontiers
An MIT SMR initiative exploring how technology is reshaping the practice of management.


Leaders should understand how the capabilities of different flavors of AI differ when applying the tools to decision-making. Analytical AI, such as traditional machine learning models, is useful for optimization problems that require a predictive recommendation. Generative AI is better applied around the decision-making process, where it can help users explore and understand a less precisely defined problem.
On a rainy Tuesday in London, the leadership team of a consumer goods company reviewed two business decisions: “Where should we open our next five stores?” and “Should we pivot the brand toward wellness?” Generative AI had been used to support the decision-making process for addressing both questions. The team ended up with plenty of plausible qualitative arguments for the proposed road map for store expansion — without data or analytics to support these recommendations. The tool had helped the team produce a polished narrative on the wellness pivot, along with a compelling deck advocating the strategic move, but stakeholder engagement was shallow, and there wasn’t a shared conviction that the organization was ready to move.
The meeting exposed the flawed assumption that all AI is the same and that every type of artificial intelligence supports decision-making equally. In reality, different decisions require fundamentally different AI roles. Some decisions are narrow: Objectives are clear, data is available, and outcomes can be measured quickly. Others are wide: Goals are contested, information is incomplete, and alignment matters as much as analysis. When leaders treat both decision types as the same, they predictably misapply AI technology, using generative tools where analytical engines are needed or where the real work is deliberation and commitment. The results are disappointing outputs that fail to support narrow decisions, and fragile buy-in and difficult execution for wide decisions that demand socialization and alignment.
That mismatch is showing up across industries. AI adoption is now widespread, yet many organizations still struggle to convert AI activity into measurable business impact. In its 2025 report on the state of AI, McKinsey describes this gap starkly: 88% of companies now use AI in at least one function, but only around 40% are able to see a positive impact on the bottom line. In our work with executive teams, the pattern behind that gap is consistent. The pressure to use AI — amplified by headlines about generative and agentic systems — often outruns the harder discipline of deciding where AI should lead, where it should support, and what kind of AI fits the decision at hand. As a result, teams build impressive decks for problems that require more time for internal alignment, and they use conversational generative tools for decisions that demand rigorous analytics.
The solution is to calibrate AI’s role in the decision.

