MCP: Big Tech’s Unlikely Alliance and the Path to an AI-Native OS
Freestyle is where we examine the changing tides of technology from our front-row seats. These are raw, evolving thoughts—half-baked ideas meant to spark conversation. The real refinement happens when you reply, challenge, and build on what we put out there. 🤝
The past year has seen generative AI move from flashy demos to real workloads, and a new layer of infrastructure is emerging to support it. It’s not a bigger model or an API from a single company – it’s a protocol. Model Context Protocol (MCP) is quickly becoming the de facto way to connect AI assistants with the rest of the software world. In an AI industry first, almost every major player (and even some rivals) has rallied behind MCP.
What is MCP (no, it is not an API)
MCP is an open standard introduced by Anthropic in late 2024. In essence, it provides a universal interface for AI agents (like GPT-based assistants) to interact with external tools, services, and data. Crucially, MCP is not an API. It’s more like a language or protocol that any app or service can speak, rather than a proprietary endpoint owned by one vendor. A useful analogy from Anthropic: “Think of MCP like a USB-C port for AI applications.” Just as USB-C standardized how we connect devices, MCP standardizes how AI models connect to different data sources and tools.
Under the hood, MCP uses a client-server architecture. An AI application (the host, e.g. a desktop assistant or IDE plugin) can load multiple MCP clients that talk to various MCP servers. Each MCP server is a lightweight program exposing some capability or data source in a standardized way. For example, one server might expose a database, another might provide access to a SaaS app’s data, and another could wrap a local file system. The AI agent, as the host, can discover and invoke these servers as needed, exchanging JSON-RPC messages to pass commands and data. In simpler terms, MCP lets developers “build once” and connect their AI to any tool that implements the spec, rather than writing custom integrations for every single app.
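Concretely, those messages are plain JSON-RPC 2.0. A minimal sketch of the request an MCP client might send to invoke a tool – the `tools/call` method name comes from the MCP spec, while the `query_database` tool and its arguments are invented here purely for illustration:

```python
import json

# A JSON-RPC 2.0 request an MCP client could send to an MCP server.
# "tools/call" is MCP's standard method for tool invocation; the tool
# name "query_database" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

# Serialize to the wire format the transport (stdio, HTTP, etc.) carries.
wire_message = json.dumps(request)
print(wire_message)
```

Because every server speaks this same envelope, a host can talk to a database wrapper, a SaaS connector, or a filesystem server without knowing anything about their internals beyond the tool names they advertise.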
This approach solves the “N × M integration” nightmare that plagued earlier AI projects. Instead of writing one-off code for each of the N × M pairings of AI system and external tool, both sides meet in the middle via MCP. The result is a more composable, plug-and-play ecosystem for AI capabilities. An AI agent can gain new skills (reading emails, querying analytics, updating a calendar, you name it) just by installing the appropriate MCP server module, provided someone has built one for that tool. And many have – thousands of MCP servers have already popped up on GitHub in the past few months.
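To make the server side tangible, here is a deliberately toy dispatcher showing the shape of what an MCP server does: receive a JSON-RPC request, route it to a registered capability, and return a response. A real server would use an official MCP SDK and implement the full protocol (initialization, tool listing, transports); the `word_count` tool here is a made-up stand-in:

```python
import json

# Toy illustration of the server side of MCP (not the official SDK).
# The server's real value is the capability it wraps:
def word_count(text: str) -> int:
    return len(text.split())

# Registry of tools this server exposes; a real server would also
# advertise these via the protocol's tool-listing method.
TOOLS = {"word_count": word_count}

def handle(raw: str) -> str:
    """Dispatch one JSON-RPC request string, return the response string."""
    req = json.loads(raw)
    if req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool(**req["params"]["arguments"])
        resp = {"jsonrpc": "2.0", "id": req["id"], "result": result}
    else:
        resp = {
            "jsonrpc": "2.0",
            "id": req["id"],
            "error": {"code": -32601, "message": "method not found"},
        }
    return json.dumps(resp)

print(handle(json.dumps({
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "word_count", "arguments": {"text": "hello mcp world"}},
})))
```

The “build once” promise falls out of this symmetry: the tool author writes one server like this, and every MCP-speaking agent can call it.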
A Rapid Alignment: Anthropic, OpenAI, Google, Microsoft, and Shopify
Perhaps the most astonishing aspect of MCP’s rise is how quickly and unanimously the industry lined up behind it. In just half a year, we went from one startup’s proposal to a who’s-who of tech endorsing the standard. Consider this timeline of events, which is virtually unheard of in our space:
Nov 2024 – Anthropic’s opening move: Anthropic open-sources the initial MCP specification and reference implementation, planting a flag for a common “AI assistant ↔ tool” protocol. Early adopters in the dev community begin writing MCP servers for everything from GitHub to Slack.
Mar 2025 – OpenAI jumps on board: In a plot twist, OpenAI – Anthropic’s biggest rival – announces support for MCP. OpenAI integrates MCP into its Agents SDK, with support for the ChatGPT desktop app and Responses API to follow, and Sam Altman calls out the benefit of letting LLMs perform tasks in external systems. This move was notable not only for the technical merit but also because it signaled OpenAI’s willingness to embrace a standard invented elsewhere.
Apr 2025 – Google says “us too”: Just weeks after OpenAI, Google DeepMind’s CEO Demis Hassabis declares that Google’s upcoming Gemini AI models will support MCP as well. He publicly praises MCP as “a good protocol” that’s “rapidly becoming an open standard for the AI agentic era”. Google folding MCP into its Gemini SDK underscored that this wasn’t going to be a one-company spec.
May 2025 – Microsoft joins the party: At Build 2025, Microsoft (and its GitHub subsidiary) formally join the MCP steering committee. They commit to native MCP support in Windows 11 and Azure, allowing developers to expose Windows features (file system, window management, etc.) as MCP-accessible tools. In effect, Microsoft threw its weight behind MCP and signaled it would make its own software MCP-friendly, accelerating adoption.
Summer 2025 – Shopify goes live with MCP: It’s not just AI companies and platform vendors. Shopify became one of the first major end-user product companies to leverage MCP. They rolled out Shopify’s “Storefront MCP” for developers, enabling any AI shopping assistant to securely connect to Shopify store data and perform actions like product search, cart additions, and checkout. In other words, a merchant can now plug an AI agent into their storefront and have it interact with orders and products via MCP, no custom integration required. This kind of real-world use case demonstrates MCP’s versatility beyond developer tools – it’s powering consumer-facing experiences, too.
people love MCP and we are excited to add support across our products.
available today in the agents SDK and support for chatgpt desktop app + responses api coming soon!
— Sam Altman (@sama)
6:02 PM • Mar 26, 2025
love the feedback! - to MCP it is!
— Sundar Pichai (@sundarpichai)
9:29 PM • Apr 9, 2025
Five events. Six months. MCP went from an obscure idea to a de facto standard connecting AI agents to software. It’s extremely rare in tech to see competitors and incumbents align on something this quickly and comprehensively. Usually, we’d expect years of competing “standards” or proprietary approaches before one wins out. Remember the long wars over connector cables, video formats, or web browser protocols? VHS or Betamax? Alternating current or direct current?
In MCP’s case, the dominoes fell almost in unison. Anthropic lit the spark, and remarkably, everyone from open-source devs to OpenAI, Google, and Microsoft decided it was in their interest to fuel the same flame. Even major software players like Shopify and Stripe are embracing it to stay ahead. This rapid coalition around MCP gives it serious momentum – and a bit of drama, considering these companies aren’t exactly in the habit of singing Kumbaya together.
MCP in Windows Today (and Tomorrow)

It’s telling that Microsoft is weaving MCP into the fabric of Windows itself. At Build 2025, Microsoft and GitHub joined the MCP Steering Committee and announced first-class support for MCP across Windows 11 and Azure. Today, this integration means AI copilots on Windows can access core OS features via MCP. For instance, a Windows-based AI agent can read and write files, manage windows on the desktop, or even invoke Linux commands through WSL (Windows Subsystem for Linux) using MCP. This is a far cry from the browser-bound chatbots of last year – your AI assistant could actually open applications, sift through your directories, or execute scripts on your machine if you allow it, all through standardized MCP interfaces.
What’s important is that current MCP capabilities on Windows are at the system level (files, windows, OS subsystems), not deep inside individual applications. Over time, Microsoft’s vision is clearly to enable richer automation within apps like Office. We might eventually see, say, Excel exposing an MCP server so an AI agent can manipulate spreadsheets natively, or Outlook exposing an MCP interface for email. Third-party software vendors could do the same – imagine Salesforce or Slack providing MCP endpoints so an AI helper could perform complex operations in those tools. That’s the endgame, but it’s not the state of play today. Right now, MCP-on-Windows is about giving AI basic superpowers on your PC, not magically running your Salesforce org. The deeper application-specific integrations will require those app developers to adopt MCP (or to write “MCP servers” for their apps), which will likely come with time. Microsoft is seeding the soil by baking the standard into Windows; the ecosystem still has to grow.
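To picture what such app-level adoption would look like, here is a sketch of the tool listing an imagined Excel MCP server might return. The envelope shape – a list of tools, each with a name, description, and JSON Schema input – follows the MCP spec’s `tools/list` result; the `set_cell` and `get_cell` tools themselves are entirely hypothetical:

```python
import json

# Hypothetical tools/list response from an imagined Excel MCP server.
# The tools/name/description/inputSchema structure is the spec-level
# shape; the specific tools are invented for illustration.
tools_list_result = {
    "tools": [
        {
            "name": "set_cell",
            "description": "Write a value into a cell of the active sheet",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "cell": {"type": "string"},   # e.g. "B2"
                    "value": {"type": "string"},
                },
                "required": ["cell", "value"],
            },
        },
        {
            "name": "get_cell",
            "description": "Read the value of a cell",
            "inputSchema": {
                "type": "object",
                "properties": {"cell": {"type": "string"}},
                "required": ["cell"],
            },
        },
    ]
}

print(json.dumps(tools_list_result, indent=2))
```

An agent discovering this listing could then drive the spreadsheet through ordinary `tools/call` requests – which is exactly the depth-vs-breadth tension the next section takes up.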
The Benedict Evans Critique
Not everyone is a true believer in MCP’s inevitability. Tech analyst Benedict Evans, for one, has raised a thoughtful critique of the “AI middleware” vision behind MCP. In his view, there are two structural challenges:
A shallow, one-size-fits-all interface: By trying to abstract very different software into one standard layer, MCP could end up only exposing the generic capabilities common to all systems, and not the unique magic of each. Evans warns that this kind of middleware faces an inherent “lowest common denominator” problem – the unified protocol can never support every specialized feature of every underlying tool. For example, if Excel had an MCP server, an agent might manipulate cells and formulas through a standard interface, but could it tap into Excel’s more complex analytics or formatting options? The risk is that MCP becomes a convenient baseline for integrations, but for truly advanced functionality you’d still need custom code outside the protocol. In short, a universal connector might sacrifice depth for breadth.
The “dumb API call” dilemma: Even if MCP works technically, will companies play ball? Evans points out that many platforms won’t want to be reduced to someone else’s back-end. His memorable phrasing: “why would Instacart want to become a dumb API call for someone else’s trillion-dollar company?” In other words, if AI agents become the new interface that users interact with, then services like Uber, Amazon, or Salesforce risk being commoditized – just data providers with no control over the user experience (and no opportunity to upsell or differentiate). Big consumer brands invest heavily in their apps and UX to avoid exactly that fate. There’s a real question whether certain companies will willingly expose everything via MCP if it means ceding the customer interface to a third-party AI. No one in tech wants to be “just a plugin” in someone else’s ecosystem, especially if that someone else is a potential competitor or toll-collector.
These are valid points grounded in tech history. Middleware efforts have often hit these walls: too generic to be useful, or too threatening to incumbent business models. Yet there are reasons to think this time might play out differently. For one, today’s AI agents are far more capable of handling nuance than past middleware. An AI agent can dynamically figure out how to use a tool (reading its docs on the fly, adapting to errors) in a way earlier “dumb” middleware couldn’t. That raises the ceiling on how much of an app’s functionality can be exposed through a standard interface – the agent can flex to handle the weird edge cases.
As for the business model resistance: that’s a harder nut to crack. It’s true that initial MCP integrations are coming from willing participants – notice that Shopify’s MCP endpoints expose commerce data in a way that benefits Shopify merchants, and tools like Slack or GitHub have developer-focused APIs that translate well into MCP servers. The more strategically guarded platforms (think Facebook’s social graph or Google’s core Search) are not rushing to hand over the keys to an AI agent. We will certainly see a split where certain domains eagerly embrace MCP (enterprise software, developer tools, open data), while others hold out or create their own parallel standards.
That said, the early alignment of heavy hitters behind MCP suggests a strong coalition of interest. When OpenAI, Microsoft, and Google are all championing a standard, it puts pressure on everyone else to go along or risk incompatibility. No big bank or retailer wants to be the one ecosystem that can’t plug into the new AI assistants circulating everywhere. MCP has inertia on its side, but it will have to prove it can be both useful enough (for developers) and profitable enough (for data owners) to truly become ubiquitous.
The Road Ahead: Why MCP Matters
We’re witnessing a kind of birth of a universal connector for software, purpose-built for the age of AI. In previous eras, integration was tedious and fragmented – developers wrangled countless APIs, SDKs, and webhooks to stitch together services. If MCP succeeds, much of that glue can be standardized. An AI agent (or any application, really) will be able to fluidly tap into multiple systems and coordinate tasks across them, without bespoke code for each integration. This is a big deal for software innovation. It lowers the barrier to combine functionalities and data from different sources, which in turn enables new product ideas. Startups can focus on novel user experiences or business logic, knowing that connecting to the existing software world is relatively plug-and-play.
The new battlefield at the AI app layer is integration and execution. The apps that win will be the ones that can seamlessly act on data wherever it lives and orchestrate across services to deliver value. MCP is not the only approach to that (for example, Google is pushing an Agent-to-Agent protocol), but it has the early lead and the widest support. It’s the closest thing we have to an app store for AI skills, with the crucial difference that it’s open and cross-platform.
In the next year or two, success for MCP would look like this: dozens of common apps and services offering MCP interfaces (either officially or via third-party adapters), AI copilots becoming truly agentic (able to perform multi-step tasks across apps) thanks to those integrations, and a robust governance model to handle security and compatibility across the ecosystem. We’ll also find out where MCP’s limits are – perhaps certain high-security or highly specialized systems just won’t open up via a general protocol, and that’s fine. I suspect we’ll see infrastructure startups, especially those building for enterprise agentic use cases, build on top of MCP and continuously push the spec forward so it doesn’t become a lowest-common-denominator straitjacket.
🤙
The opinions expressed in this newsletter are my own, subject to change without notice, and do not necessarily reflect those of Timeless Partners, LLC (“Timeless Partners”). Nothing in this newsletter should be interpreted as investment advice, research, or valuation judgment. This newsletter is not intended to, and does not, relate specifically to any investment strategy or product that Timeless Partners offers. Any strategy discussed herein may be unsuitable for investors depending on their specific objectives and situation. Investing involves risk and there can be no assurance that an investment strategy will be successful. Links to external websites are for convenience only. Neither I, nor Timeless Partners, is responsible for the content or use of such sites. Information provided herein, including any projections or forward-looking statements, targets, forecasts, or expectations, is only current as of the publication date and may become outdated due to subsequent events. The accuracy, completeness, or timeliness of the information cannot be guaranteed, and neither I, nor Timeless Partners, assume any duty to update this newsletter. Actual events or outcomes may differ significantly from those contemplated herein. It should not be assumed that either I or Timeless Partners has made or will make investment recommendations in the future that are consistent with the views expressed herein. We may make investment recommendations, hold positions, or engage in transactions that are inconsistent with the information and views expressed herein. Moreover, it should not be assumed that any security, instrument, or company identified in the newsletter is a current, past, or potential portfolio holding of mine or of Timeless Partners, and no recommendation is made as to the purchase, sale, or other action with respect to such security, instrument, or company. 
Neither I, nor Timeless Partners, make any representation or warranty, express or implied, as to the accuracy, completeness or fairness of the information contained in this newsletter and no responsibility or liability is accepted for any such information. By accessing this newsletter, the reader acknowledges its understanding and acceptance of the foregoing statement.