That was the focus of a recent LinkedIn Live talk with Paul McDonagh-Smith, Visiting Senior Lecturer. In the last decade-plus of his work with MIT Sloan Executive Education, Paul has collaborated with many faculty and experts across the Institute to build innovative technology-focused courses.
In this LinkedIn Live, Paul argued that while models matter, they are only the starting point. To capture real value, leaders must shift their thinking, engineer for the “last mile,” and activate their workforce to embrace AI with confidence. He invited the audience to step back from “shiny objects” like models and consider what really drives transformation. His insights offered a clear call to action for executives: if you want to capture the value of AI, you need to engineer the conditions for adoption at every level of your organization.
See more: Paul McDonagh-Smith shares insights on AI Adoption in a LinkedIn Live webinar, presented by Courtney (Burt) Reed, Associate Director of Marketing, MIT Sloan.
From models to mindset to metrics
AI’s history is one of constant evolution. Rule-based systems gave way to machine learning, which in turn was surpassed by deep learning and today’s generative and agentic tools. Each advance emerged from fixing the limitations of what came before.
But Paul reminded us that while models are important, they are only the beginning! Real impact requires leaders to embrace a mindset of exploration, experimentation, and adaptation. Organizations, he suggested, can be thought of as living organisms: the ones that thrive are those that adapt, evolve, and incorporate new patterns into their DNA.
Just as important are AI-native metrics. Traditional KPIs, designed for an earlier era, often miss the distinctive contributions of AI. Instead, leaders should measure:
- Augmentation: How well humans and machines amplify one another
- Autonomy: The share of work that AI can perform safely and reliably
- Velocity: How quickly ideas move from conception to implementation
The point is not to discard legacy measures but to expand the toolkit. Without new metrics, organizations risk overlooking the very value AI is meant to create.
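For readers who want to make these measures concrete, here is a minimal sketch, in Python, of how augmentation, autonomy, and velocity could be operationalized as numbers on a dashboard. The field names and formulas are illustrative assumptions of ours, not definitions Paul provided; the point is simply that AI-native value can be expressed as measurable quantities alongside traditional KPIs.

```python
from dataclasses import dataclass

@dataclass
class AINativeMetrics:
    """Hypothetical AI-native KPIs; names and formulas are illustrative, not a standard."""
    assisted_task_minutes: float   # average time per task when humans work with AI
    solo_task_minutes: float       # average time per comparable task without AI
    ai_completed_tasks: int        # tasks AI finished safely with no human rework
    total_tasks: int               # all tasks in scope for the period
    idea_to_ship_days: float       # elapsed time from concept to implementation

    @property
    def augmentation(self) -> float:
        # How much humans and machines amplify one another (above 1.0 means faster together)
        return self.solo_task_minutes / self.assisted_task_minutes

    @property
    def autonomy(self) -> float:
        # Share of work AI performs safely and reliably on its own
        return self.ai_completed_tasks / self.total_tasks

    @property
    def velocity(self) -> float:
        # Ideas shipped per 30-day month, derived from cycle time
        return 30.0 / self.idea_to_ship_days


# Example: AI-assisted work runs 40% faster, AI handles a quarter of tasks, ideas ship in 15 days
m = AINativeMetrics(assisted_task_minutes=100, solo_task_minutes=140,
                    ai_completed_tasks=25, total_tasks=100, idea_to_ship_days=15)
print(f"Augmentation {m.augmentation:.2f}x | Autonomy {m.autonomy:.0%} | Velocity {m.velocity:.1f} ideas/month")
```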
Engineering the last mile
Paul introduced a powerful new idea: “Last Mile AI Engineering.” The metaphor comes from the telecommunications industry, where the last mile is the final stretch of cable that actually reaches the customer. In a nod to his (and my!) British background, he likened it to “minding the gap”: the space between AI’s potential and its real-world impact. Closing that gap depends on more than technical fixes; it requires intentional design.
He outlined five guiding principles:
- Decision-first value targeting: Start with the moments that matter. Which problems, if solved with AI, would make the greatest difference?
- Human-centered co-design: Bring together frontline employees, technical experts, and business leaders in “three-in-a-box” teams.
- Context over compute: Ensure systems are grounded in organizational knowledge, not just raw processing power.
- Ship small, learn fast, scale: Move beyond endless proofs of concept; iterate quickly and expand what works.
- Trust by design: Build transparency, oversight, and risk management into the system from the start.
In Paul’s view, Last Mile AI Engineering is where human capabilities—creativity, empathy, collaboration—meet machine intelligence. It is the bridge that turns technical potential into organizational performance.
Adoption: the human dimension
Of all the themes Paul explored, adoption may be the most critical. Application is easy; adoption is hard. The difference comes down to people.
AI is embraced when it clearly outperforms humans in tasks where empathy is not essential. It is resisted when trust is lacking or when human judgment is central. Leaders must therefore orchestrate adoption, not just deploy technology.
Practical strategies:
- Earn trust with proof, not promises. Show transparent metrics, sources, and confidence levels.
- Clarify human–machine roles. Make it clear who does what, and allow people to opt out when necessary.
- Reduce cognitive load. Use AI to simplify, not to add layers of oversight.
- Share the dividend. Return productivity gains to employees through time, incentives, or targeted upskilling.
- Governance by design. Define acceptance criteria for accuracy, fairness, and safety before deployment; a sketch of how such criteria might be codified appears below.
One audience member asked if AI adoption is only as strong as its weakest link. Paul’s answer reframed the issue: adoption is not just about the workforce. It extends to customers, clients, and partners. Success requires building a system of shared literacy and trust.
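To make “governance by design” tangible, here is a minimal sketch, assuming a hypothetical pre-deployment gate, of acceptance criteria written down as explicit, testable thresholds. The metrics and numbers are illustrative assumptions, not a standard Paul prescribed; the value is that the criteria are agreed and checked before launch rather than debated after it.

```python
# Illustrative pre-deployment gate; metric names and thresholds are assumptions, not a standard.
ACCEPTANCE_CRITERIA = {
    "accuracy": {"description": "task accuracy on a held-out evaluation set", "minimum": 0.95},
    "fairness": {"description": "largest accuracy gap across customer segments", "maximum": 0.03},
    "safety":   {"description": "rate of harmful outputs found in red-team testing", "maximum": 0.001},
}

def ready_to_deploy(results: dict) -> bool:
    """Return True only if every measured result clears its agreed threshold."""
    for name, rule in ACCEPTANCE_CRITERIA.items():
        value = results[name]
        if "minimum" in rule and value < rule["minimum"]:
            return False
        if "maximum" in rule and value > rule["maximum"]:
            return False
    return True

# Example: strong accuracy and safety, but the fairness gap exceeds the agreed limit
print(ready_to_deploy({"accuracy": 0.97, "fairness": 0.05, "safety": 0.0004}))  # False
```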
Lessons from the audience
The webinar’s Q&A revealed just how pressing these issues are!
When asked about research showing that 95% of organizations aren’t seeing value from generative AI, Paul cautioned against despair. The challenge is not AI’s capability but its fit to context and the immaturity of many workflows.
In response to a question about “broken processes,” he agreed that AI should not be layered on top of flawed systems. Instead, AI adoption is a chance to rethink and re-engineer work itself.
On the prospect of another “AI winter,” Paul was cautiously optimistic. Unlike past eras, today’s investment scale and broad integration make a collapse less likely—but only if organizations focus on sustainable practices and responsible governance.
For those struggling to convince senior executives, Paul emphasized communication: not one-way broadcasts, but two-way dialogue that surfaces concerns, builds trust, and grounds decisions in data.
These exchanges highlighted a consistent theme: adoption is not a technical exercise but a human one.
What leaders can do now
Paul closed with three actionable steps every leader can take today:
- Define AI-native metrics that measure the right kinds of value.
- Develop an adoption playbook that codifies successful patterns and roles.
- Dedicate regular time to experimentation. Not once a quarter, but weekly! Touch the technology, test it, and learn from it.
His message was unambiguous: AI will not replace people. But people who ignore AI may well be replaced by those who embrace it.
Paul’s talk was a stark reminder that the frontier of AI is not just about algorithms. It is about leadership. Models will continue to evolve, but the organizations that thrive will be those that cultivate the mindset, metrics, and adoption strategies to turn technology into impact.
At MIT Sloan Executive Education, we see this across industries. The future belongs to leaders who are curious enough to experiment and disciplined enough to scale what works. As Paul concluded, we are not passengers in this journey—we are the crew!
Dive deeper: If you are interested in learning more about how machine learning and data-driven tools are reshaping business, take a look at our roster of courses on Artificial Intelligence, including those led by Paul.