In a recent LinkedIn Live webinar, my MIT Sloan colleagues Rob Dietel, Senior Director of Partnership Programs, and George Westerman, Senior Lecturer and Principal Research Scientist, delved into a topic that deserves more frequent and open discussion at the highest levels of leadership: the difference between adopting technology and achieving transformation. The latter is usually tougher because people don’t change on command. People change when leaders create clarity, build confidence, and thoughtfully reinforce new ways of working.
In the discussion, George offered a practical way to reframe the work ahead: AI-era leadership is less about controlling a plan and more about steering an organization through an emergent journey.
See more: Watch the LinkedIn Live discussion.
Start with a “law” leaders ignore at their peril: tech changes fast; organizations don’t
George framed a simple dynamic that many leaders feel but rarely name. If Moore’s Law describes technology’s exponential growth, he argues there’s an equally important counterforce: organizations change much more slowly.
In an AI-saturated environment, that mismatch creates two predictable traps. The first is over-indexing on tools (buying, building, launching) while under-investing in adoption, behaviors, and the operating model. The second is confusing activity with progress: pilots and proofs of concept are visible, while shifts in mindset and capability take longer and are harder to see.
The implication is clear: you can’t outsource AI transformation to “the AI team.” The real work is to make the business better because the technology exists, not to grow the technology for its own sake.
Replace the five-year plan with “directed emergence”
George emphasized that today’s environment makes traditional planning feel increasingly outdated. When technologies, customer expectations, and competitive pressures are all changing rapidly, a rigid multi-year plan can become an expensive way to be confidently wrong.
Instead, he described a leadership posture he calls “directed emergence”: set a direction and a vision, then help the organization “move its way” in the right direction without pretending you can predict what the world will look like in three years. I’ve been calling this strategy by direction vs. strategy by destination.
It’s a subtle but profound shift. It asks leaders to do two things at once: provide enough steering so the organization isn’t “a herd of cats,” and create enough room for teams to learn their way forward through experimentation, feedback, and iteration. This is where culture stops being a “soft” factor and becomes the operating system for transformation.
Ask a better first question: “What problem are we solving?”
When leaders feel pressure to “do something with AI,” George offered a reset that should be printed and posted in every executive meeting room. Don’t start with “What are we doing with AI?” Start with: “What problems do we want to solve now, and how might AI help?”
This matters because problem-first framing does three things immediately:
- It anchors investment in outcomes, not novelty.
- It forces clarity about users (customers, employees, partners) who must actually experience the change.
- It creates a natural discipline for risk: not everything should be automated, and not everything should be scaled.
That last point came through strongly when the conversation turned to risk. The most sophisticated organizations aren’t racing to hand over high-stakes processes wholesale to generative systems. They are sequencing adoption in ways that build capability and confidence without creating avoidable exposure.
In fact, this is a very MIT kind of approach to decision making in the face of bewildering complexity. We frequently ask ourselves, each other, and everyone we work with the same question, especially when we’re getting bogged down: What problem are we trying to solve?
Build transformation as a portfolio: the “mountain and foothills” approach
I really appreciated George’s metaphor about scale: Think of a giant mountain with foothills leading upward. The first foothill, he said, is simply enabling people with safe tools, clear policies, and permission to learn and innovate so capability spreads and the organization becomes more fluent. From there, organizations move into transforming well-managed functions where risk can be controlled, often keeping a person in the loop.
He cited visible traction in areas like software development (where copilots can improve productivity) and contact centers (where models can support or coach agents), because the work is measurable and the operating conditions are comparatively bounded.
This is a useful antidote to the “all-or-nothing” thinking that AI hype can induce. Leaders don’t need a single monolithic transformation. They need a sequenced pathway that builds strength over time. It is only when the biggest transformations begin to emerge that the organization has earned the right to scale.
Make learning non-optional: close the loop with real reviews
Finally, George underscored a discipline many organizations talk about but don’t consistently practice: returning to the work after delivery to ask what actually changed. If goals are measurable, there’s a chance you can measure outcomes. If they’re amorphous, you never arrive. And once an initiative is complete, leaders should invest in a genuine review focused on learning, not blame.
In the AI era, this becomes even more important: costs can be high, risks can be real, and scaling prematurely can create long shadows. The learning loop isn’t a bureaucratic step—it’s how directed emergence stays directed.
A final thought
AI-powered transformation is here, and it is accelerating. But the core leadership work remains stubbornly human: aligning on outcomes, shaping culture, building confidence through sequencing, and creating the feedback systems that let your organization learn faster.
In that sense, the “rules of leadership” aren’t so much changing as they’re becoming clearer. The leaders who thrive won’t be those who chase the next tool. They’ll be the ones who can help their organizations change intentionally.
One final note: it’s telling that the MIT Sloan Executive Education AI portfolio spans many entry points—strategy, leadership, and applied domains—because the real challenge is rarely a single skill gap. It’s building the integrated capability to lead, decide, and adapt in a fast-moving world of intelligent machines.