What if you could cut a week’s worth of supply chain analysis down to just a few minutes?
According to MIT Professor David Simchi-Levi and his coauthors in the recent article “How Generative AI Improves Supply Chain Management,” large language models (LLMs)—a form of generative AI—are poised to do just that. These tools can help companies optimize logistics, reduce costs, and adapt to market shifts with unprecedented speed and clarity.
New information technologies have already helped companies dramatically improve supply chains with automated, data-driven decision-making, resulting in more efficient and less expensive supply chain management. Still, leaders devote significant resources to understanding system recommendations, exploring scenarios, and running “what-if” analyses—processes that often require support from experts to explain results and update systems.
Fortunately, LLMs, a type of generative AI (GenAI), can help companies accomplish these tasks without extra assistance. They can quickly find data, generate insights, and analyze scenarios, enabling leaders to make decisions faster (cutting the time spent from days to minutes) and boosting productivity.
Drawing on Microsoft’s experience supplying servers and other hardware to more than 300 data centers worldwide, the authors identify four primary considerations for using LLMs strategically.
Data exploration and analysis
With LLMs, leaders can ask important questions in plain language, for example, “How much can we save shipping A to B via C instead of D?” The model translates the question into a technical query (in a language such as SQL) and runs it against the company’s own database, returning a clear answer. Because the underlying data is never shared with a third party, privacy is protected even when the LLM is accessed as a cloud service.
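As a rough illustration of this pattern, the sketch below assumes a hypothetical ask_llm helper that translates the question into SQL; the query then runs against a local database, so the underlying shipping data never leaves the company. The table, columns, and numbers are invented for the example.

```python
# Minimal sketch: the language model sees only the question and the table schema
# and returns a SQL query; the query itself runs against the company's own
# database, so no shipping data is shared with a third party.
import sqlite3

SCHEMA = "lane_costs(origin TEXT, destination TEXT, via TEXT, cost_per_unit REAL)"

def ask_llm(question: str, schema: str) -> str:
    """Hypothetical stand-in for an LLM call that translates a question into SQL.
    Hard-coded here so the sketch runs without any external service."""
    return (
        "SELECT via, SUM(cost_per_unit) AS total_cost "
        "FROM lane_costs WHERE origin = 'A' AND destination = 'B' "
        "GROUP BY via ORDER BY total_cost"
    )

def answer(question: str, db: sqlite3.Connection) -> list[tuple]:
    sql = ask_llm(question, SCHEMA)       # only the question and schema leave the firewall
    return db.execute(sql).fetchall()     # the data is queried locally

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute(f"CREATE TABLE {SCHEMA}")
    db.executemany(
        "INSERT INTO lane_costs VALUES (?, ?, ?, ?)",
        [("A", "B", "C", 4.20), ("A", "B", "D", 5.75)],
    )
    for via, cost in answer("How much can we save shipping A to B via C instead of D?", db):
        print(f"via {via}: ${cost:.2f} per unit")
```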
GenAI can also explain the reasoning behind decisions and offer extra insights and trend information. Here are a few examples:
- Responding to demand shifts. Cloud providers like Microsoft oversee massive server demand from services like Azure and Microsoft 365 by generating hardware deployment plans, optimizing costs, and tracking demand changes ("demand drift"). LLMs now automate this process, explaining supply chain decisions, flagging errors, and generating reports in minutes—tasks that previously took planners a week with manual analysis.
- Upholding contract obligations. In the automotive industry, original-equipment manufacturers (OEMs) manage thousands of supplier agreements detailing pricing, quality, lead times, and supply resilience. By analyzing these contracts with LLMs, one OEM uncovered overlooked volume-based discounts, recovering millions of dollars that manual review had missed.
Scenario-based queries
LLMs allow supply chain planners to ask complex what-if questions (e.g., the cost impact of demand spikes, factory shutdowns, or material price changes) and receive precise answers. For example, a planner might ask, “How much would shipping delays increase costs if we moved 30% of production to a new factory?” The LLM translates the question into adjustments to the underlying mathematical model and generates human-readable insights, streamlining analysis.
Microsoft uses this process to optimize server deployment plans and cut costs while assigning hardware types, shipping dates, and data center locations. Previously, planners struggled to interpret optimization outputs, requiring days of collaboration with engineers to explore scenarios. Now, LLMs provide instant answers, reducing analysis from days to minutes. (Microsoft’s open-source code is available on GitHub.)
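To make the mechanism concrete, here is a minimal sketch (not drawn from Microsoft’s repository) of the kind of adjustment an LLM might make behind the scenes: the planner’s 30% question becomes a parameter change to a small optimization model, which is re-solved and compared with the baseline. The factories, costs, and the use of the open-source PuLP library are all assumptions made for illustration.

```python
# Illustrative only: a toy shipping-cost model. In the workflow described above,
# the LLM would translate the planner's question into the parameter change below
# and phrase the resulting cost difference in plain language. Numbers are invented.
from pulp import LpMinimize, LpProblem, LpVariable, lpSum, value

DEMAND = 1000  # units to ship to the customer each period

def total_shipping_cost(share_new_factory: float) -> float:
    """Minimize shipping cost given the fraction of production at the new factory."""
    cost_per_unit = {"old_factory": 2.0, "new_factory": 3.5}  # new site ships farther
    supply = {
        "old_factory": DEMAND * (1 - share_new_factory),
        "new_factory": DEMAND * share_new_factory,
    }
    prob = LpProblem("shipping", LpMinimize)
    ship = {f: LpVariable(f"ship_{f}", lowBound=0) for f in supply}
    prob += lpSum(cost_per_unit[f] * ship[f] for f in supply)  # objective: total cost
    prob += lpSum(ship.values()) == DEMAND                     # meet demand
    for f in supply:
        prob += ship[f] <= supply[f]                           # factory capacity
    prob.solve()
    return value(prob.objective)

baseline = total_shipping_cost(0.0)
scenario = total_shipping_cost(0.3)  # "move 30% of production to a new factory"
print(f"Shipping cost rises by ${scenario - baseline:,.0f} "
      f"(${baseline:,.0f} -> ${scenario:,.0f})")
```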
Real-time supply chain management
With LLM technology, supply chain planners can dynamically adjust mathematical models to reflect real-world disruptions, such as factory outages or supplier delays, without relying on IT teams. For example, during a facility shutdown, planners can directly instruct the LLM to rerun optimization models, generating revised plans that highlight unmet demand, cost impacts, and alternative solutions like expedited shipping or inventory transfers—all communicated in plain language.
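A minimal sketch of that interaction might look like the following, where a hypothetical llm_parse_disruption call stands in for the LLM that turns the planner’s sentence into a structured model edit. The facilities, capacities, and costs are invented, and a real system would re-run a full optimization rather than the simple arithmetic shown here.

```python
# Sketch of the "planner instructs the model directly" idea: the (hypothetical)
# LLM maps a plain-language disruption description to a structured edit, and the
# rest is ordinary planning code. Names, capacities, and the JSON format are
# invented for illustration.
import json

CAPACITY = {"factory_A": 600, "factory_B": 500}  # units per week
WEEKLY_DEMAND = 1000
EXPEDITE_COST_PER_UNIT = 8.0                     # cost to cover a shortfall by air freight

def llm_parse_disruption(message: str) -> dict:
    """Placeholder for an LLM call that returns a structured model edit."""
    return json.loads('{"facility": "factory_B", "capacity": 0, "weeks": 2}')

def revised_plan(message: str) -> None:
    edit = llm_parse_disruption(message)
    capacity = dict(CAPACITY)
    capacity[edit["facility"]] = edit["capacity"]  # apply the disruption
    shortfall = max(0, WEEKLY_DEMAND - sum(capacity.values()))
    print(f"Weeks affected: {edit['weeks']}")
    print(f"Unmet demand per week: {shortfall} units")
    print(f"Expedited shipping to cover it: ${shortfall * EXPEDITE_COST_PER_UNIT:,.0f}/week")

revised_plan("Factory B is offline for the next two weeks due to a power outage.")
```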
Looking ahead, LLMs are poised to enable end-to-end decision-making by allowing users to describe problems in natural language (e.g., production scheduling or inventory allocation). While current LLMs can generate mathematical models and recommendations, challenges remain in validating their accuracy against real business conditions. Future advancements aim to streamline this process, making supply chain planning faster, more adaptive, and accessible to non-technical users.
LLM adoption and integration
Companies implementing LLMs for supply chain management must address key challenges in order to use the technology effectively and maximize its benefits:
- Clear communication and training. LLMs require precise language to deliver accurate results, so users must be trained to ask clear, specific questions. Organizations must also educate managers on what the technology can and cannot do to ensure proper use.
- Accuracy via confirmation and controls. Because LLMs can produce inaccurate or fabricated output, companies need safeguards, such as domain-specific examples and fallback responses, to catch errors (one such guardrail is sketched after this list). That said, verifying complex AI-generated models is an ongoing concern that requires further research.
- Shifting towards collaboration. Automation through LLMs allows planners and executives to focus less on routine tasks and more on interpreting AI insights and partnering across teams. As a result, leaders need to break down silos and shift processes to support new ways of working.
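As noted above, here is a minimal sketch of one possible guardrail, assuming the LLM generates SQL as in the earlier example: queries that are not read-only, or that reference unknown tables, trigger a fallback response instead of being executed. The table names and rules are illustrative assumptions, not a complete control framework.

```python
# One possible guardrail for LLM-generated SQL: reject anything that is not a
# read-only query over known tables and return a fallback answer instead of
# executing it. The rules here are illustrative, not a complete defense.
import re

KNOWN_TABLES = {"lane_costs", "contracts", "deployment_plans"}
FORBIDDEN = re.compile(r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE)\b", re.IGNORECASE)

def safe_to_run(sql: str) -> bool:
    if not sql.strip().upper().startswith("SELECT"):
        return False
    if FORBIDDEN.search(sql):
        return False
    matches = re.findall(r"\bFROM\s+(\w+)|\bJOIN\s+(\w+)", sql, re.IGNORECASE)
    names = {name.lower() for pair in matches for name in pair if name}
    return names <= KNOWN_TABLES

def guarded_answer(sql: str) -> str:
    if not safe_to_run(sql):
        # Fallback response instead of a wrong or destructive query.
        return "I couldn't answer that reliably; please rephrase or contact a planner."
    return f"(would execute) {sql}"

print(guarded_answer("SELECT via, SUM(cost_per_unit) FROM lane_costs GROUP BY via"))
print(guarded_answer("DROP TABLE contracts"))
```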
David Simchi-Levi will be teaching Supply Chain Strategy and Management with Thomas Roemer on Nov 13-14, 2025, live in Cambridge, MA.