Key Takeaways from: When AI Agents Join the Platform: Implications for Strategy, Pricing, and Governance
To hear the full discussion, watch the webinar recording here.
What Makes a Platform Different?
Platforms serve two distinct functions. First, they act as a technology architecture: a shared foundation that others build on through standard interfaces. Second, they act as a market intermediary, creating mechanisms that reduce the frictions preventing participants from interacting.
Platforms are often conflated with network effects, which, as Azoulay notes, assume a certain level of proven success. Instead, he defines platforms in a way that depends not on success, but on structure.
A critical test is identifying the “sides”: who are you bringing together, and who is going to interact? The goal is to create value for both. As Azoulay explains, “this is usually a very clarifying moment; if you can’t name the sides, you’re probably not a platform.”
Understanding Network Effects
Network effects are best understood as statements of willingness to pay; the value a participant derives from a platform depends on how many others have joined.
These effects take two primary forms. Same-side (direct) network effects occur when value increases with the number of similar users. Cross-side (indirect) network effects arise when value increases through complementary participants on the other side of the platform.
While platforms often aim to cultivate both, network effects should not define the platform itself. Instead, they are a result of effective design and interaction.
Reducing Friction: Search and Transaction Costs
A core role of platforms is to reduce friction: specifically, search costs and transaction costs.
Search costs occur before the match, as participants identify and evaluate potential interactions. Transaction costs arise after the match, shaping how safely, efficiently, and reliably those interactions are completed.
AI is reshaping both of these, but not in a straightforward way. While many assume AI will reduce search costs and increase transaction complexity, Azoulay challenges this view: “my sense is that both categories of cost are in motion under AI and in both directions depending on your situation.”
The implication is not simplification, but reconfiguration, requiring platforms to continuously reassess where friction is being reduced or introduced.
From Experimentation to Impact
AI is accelerating experimentation across platforms, but experimentation alone does not create value. The challenge for leaders is translating increased activity into meaningful outcomes. As AI tools lower the barrier to entry for participation, platforms may see a surge in interactions, options, and potential matches. Yet more activity does not necessarily mean better outcomes.
Instead, the focus shifts to evaluation, selection, and execution. As Azoulay highlights, the real constraint may no longer be generating possibilities, but identifying which ones matter. In some cases, AI may even increase the burden on users, requiring them to navigate more options, assess credibility, and make higher-stakes decisions in less certain environments.
The implication for platform strategy is clear: success depends not just on enabling experimentation, but on structuring pathways from experimentation to impact, through better filtering, stronger signals, and clearer mechanisms for decision-making.
Designing Platforms Before They Exist
Azoulay introduces the concept of coring, the work of designing a platform before any participants have joined. This includes defining the platform’s architecture, rules, access points, and control mechanisms.
Coring sets the foundation for network effects, enabling growth on one side to drive growth on another. It also determines whether the platform can capture the value it creates.
As Azoulay emphasizes, “coring decisions are extremely sticky.” Once a platform is launched, changing its core becomes both costly and risky, making early design choices critical to long-term success.
Managing Control and Openness
One of the most difficult challenges in platform strategy is determining how open or closed a system should be. Too much openness can erode value capture, while too much control can limit growth and innovation. The goal is not to maximize either, but to carefully calibrate both.
As Azoulay illustrates, successful platforms often project openness while retaining control over critical components. This allows them to encourage participation and ecosystem growth, while still protecting the elements that differentiate the platform and sustain its competitive advantage.
The strategic tension lies in recognizing that control is not inherently good or bad; it is context-dependent. Platforms must continuously assess where openness creates value and where control is necessary to preserve it. Getting this balance wrong can either stifle the ecosystem or allow value to leak beyond the platform.
Governance and the Role of the Platform Leader
Platforms operate as mini economies. As Azoulay explains, platform leaders “are a judge, a regulator, and a policeman all wrapped into one.”
These roles are essential to managing market failures. Too much competition on one side can erode willingness to pay, requiring active intervention to maintain balance.
If platforms do not perform these functions, others will, whether competitors or regulators, often at the platform's expense.
AI, Data, and the Myth of the Data Moat
Many AI strategies rely on the assumption of a data moat. In Azoulay’s view, most of these assumptions are overstated.
Data is increasingly easy to replicate or generate synthetically, making sustainable advantage harder to maintain. As he notes, “data governance in the AI era is really orders of magnitude harder than it was.”
Data moats still exist, but only under specific conditions: continuous interaction with data unique to the platform, data that can be legally and ethically protected in regulated environments, or data requiring specialized expertise to collect, clean, and structure.
Platforms in the Age of AI Agents
The introduction of AI agents adds a new layer of complexity to platform design. Platforms must now consider not only human participants, but also autonomous actors that can search, transact, and interact on behalf of a user.
This shift introduces a new set of strategic decisions. Leaders must determine whether to allow external agents, restrict participation to proprietary systems, or adopt hybrid models. Questions of identity, trust, and liability also become central: how are agents authenticated, how do they build reputation, and who is responsible when transactions fail?
As Azoulay notes, these decisions are not easily reversible. Early choices around agent access and governance create long-term constraints, shaping how the platform evolves and how value is created and captured.
Ultimately, platforms are no longer just coordinating human interaction; they are increasingly orchestrating ecosystems of humans and machines, requiring a new level of strategic clarity and foresight.
What This Means for Platform Leaders
Platforms are not new, but AI is reshaping how they function. While core principles remain intact, the decisions surrounding strategy, governance, and design are becoming more complex.
For leaders looking to explore these challenges in greater depth, Pierre Azoulay's course, Platform Strategy: Designing for Humans and AI Agents at MIT Sloan Executive Education, offers the tools and frameworks needed to navigate this evolving landscape.