Your AI Startup Might Be a Wrapper in Denial
Darren Mowry, VP of Google Cloud's global startup organization, said something last week that a lot of founders are going to want to ignore: LLM wrappers and AI aggregators have their "check engine light" on.
That's not abstract criticism. He told TechCrunch that AI aggregators, platforms routing queries across multiple models through a unified interface (the article pointed to companies like Perplexity and OpenRouter as examples), "typically aren't seeing growth" because users want "some intellectual property built in" to get directed to the right model for their actual needs. And on pure LLM wrappers, he was blunter: "If you're really just counting on the back-end model to do all the work and you're almost white-labeling that model, the industry doesn't have a lot of patience for that anymore."
My read: he's mostly right, but the frame is slightly off. The death sentence isn't being a wrapper. It's staying one.
The Cloud Reseller Playbook, Repeated
Mowry's analogy is worth sitting with. In the late 2000s, as AWS started taking off, a crop of startups built businesses reselling cloud infrastructure. The pitch was simple: we make Amazon's stuff easier to buy, manage, and support. For a moment, that was a real value proposition.
Then Amazon built its own enterprise tools, customers got comfortable managing cloud services directly, and most of those resellers vanished. The survivors? The ones that had layered genuine services on top: security practices, migration expertise, DevOps consulting. The commodity layer disappeared. The differentiation didn't.
We're watching the same film with AI. In 2023, putting a clean chat interface over GPT-4 was a product. In 2026, it's a demo.
What Actually Died (and Why)
Jasper AI is the clearest case study. Once valued at $1.5 billion, Jasper had real revenue (a reported $120M in 2023) and a real user base. Then OpenAI shipped ChatGPT, which did 80% of what Jasper did, for free or a $20 subscription. Revenue reportedly dropped to roughly half its 2023 level in 2024, though exact figures for the private company vary across sources. The company responded with enterprise pivots, layoffs, and custom model work, which looks less like strategy and more like a company trying to un-wrapper itself under pressure.
The pattern: Jasper's core value was "GPT, but easier for marketers." When GPT got easier for everyone, the moat vanished. There was no proprietary data, no workflow depth, no network effect that would make a user think twice before switching.
Builder.ai is a different variety of the same failure. $450M+ raised, Microsoft backing, and claims of $220M in annual revenue that turned out to be $55M. The company went bankrupt in May 2025. This one wasn't purely a wrapper failure; it was also fraud. But the underlying business, a low-code app builder sitting on top of commodity AI, shared the same structural weakness: no real moat, just a story about market size.
The Survivors Bought Time by Going Deep
Cursor is the interesting counterpoint. On paper, it started as a wrapper: VS Code, but with GPT integrated more aggressively than GitHub's Copilot team was willing to move. If you squinted, it looked like a thin interface play.
But Cursor didn't stay there. The team built their own context retrieval, their own codebase indexing, their own model fine-tuning pipeline, and an IDE experience so tight that switching costs started to accumulate. The product became hard to replicate not because the underlying models were exclusive, but because the team's understanding of the developer workflow was encoded into every design decision. By the time GitHub was shipping competitive features, Cursor users were so embedded that "just switch back" wasn't simple.
Harvey is the other example Mowry specifically points to. The company built what's essentially a legal AI platform, but what makes it defensible has nothing to do with which LLM powers the responses. Harvey partnered with LexisNexis to integrate proprietary case law databases directly into its retrieval system. The AI is grounded in verified legal truth, which is the specific thing legal professionals can't tolerate being wrong about. Harvey is now valued at approximately $8 billion with over $190M ARR. The moat isn't the model. It's the proprietary data and the professional trust that data enables.
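The grounding pattern described above is worth making concrete. The sketch below is a toy illustration of the general technique (retrieval-augmented generation over a verified corpus), not Harvey's actual system; the corpus, scoring function, and prompt format are all hypothetical. The point it demonstrates: the retrieval and citation layer is independent of whichever model consumes the prompt, which is why the data layer, not the model, is the moat.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source: str  # citable reference, e.g. a case name (hypothetical examples below)
    text: str

# Hypothetical verified corpus standing in for a licensed legal database.
VERIFIED = [
    Passage("Smith v. Jones (1998)",
            "a landlord must provide written notice before entry"),
    Passage("Doe v. Acme Corp (2004)",
            "an employer may be liable for negligent supervision"),
]

def retrieve(query: str, corpus: list[Passage], k: int = 1) -> list[Passage]:
    """Rank passages by keyword overlap with the query (toy scoring,
    standing in for a real embedding or legal-search index)."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q & set(p.text.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(query: str, corpus: list[Passage]) -> str:
    """Build a prompt that constrains the model to answer only from
    retrieved, citable passages, regardless of which LLM runs it."""
    hits = retrieve(query, corpus)
    context = "\n".join(f"[{p.source}] {p.text}" for p in hits)
    return ("Answer using ONLY the passages below, citing the bracketed source.\n"
            f"{context}\n\nQuestion: {query}")
```

Swap the model behind `grounded_prompt` and the citations still come from the verified corpus; improve the model and the same data layer simply gets used better. That asymmetry is the structural difference between a wrapper and a product.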
The Test I Actually Use
When I'm evaluating whether something is a real product or a wrapper in denial, I ask one question: what happens to this product if the underlying model gets 50% better?
For a true wrapper, better models are a catastrophe. As the base model catches up to what the wrapper was doing, the use case gets absorbed into the base product and the wrapper's value proposition disappears.
For a defensible product, better models are fuel. Cursor gets better when the models improve because the team's work on context, indexing, and workflow integration amplifies the model improvement. Harvey gets better because the proprietary data layer becomes more useful as the model reasoning improves. The depth compounds.
A secondary check: could a technically competent user replicate your product's core value in a weekend? If yes, you're a wrapper. If no, explain why not. The explanation is your moat.
Why This Is Hard to See From the Inside
The wrapper trap is seductive because the early product-market fit is real. Jasper genuinely solved a problem for marketers who didn't know how to prompt GPT-3. Early Cursor users were legitimately more productive than Copilot users. The issue isn't that the initial value was fake. It's that initial value based purely on model access has no ceiling and no floor: no ceiling because you can't build above what the model does, and no floor because the model providers can always eat your use case.
I think about this with MakerPulse. Without the AgentPulse research layer, this is just another AI news site. There are hundreds of those. The content would be derivative because it would be based on the same sources everyone else uses. What makes the publication defensible is original data: benchmark results, pricing histories, and a methodology anyone can audit but nobody else has published. If I strip out the research, the writing is the wrapper. With the research, the writing becomes the delivery mechanism for something proprietary.
That's the distinction worth internalizing. Wrappers deliver someone else's capability. Products deliver something the underlying model can't produce alone.
What Mowry Gets Slightly Wrong
His framing implies the judgment is made at founding. Build an aggregator and you're done. That's too deterministic.
Plenty of the most defensible AI products started as wrappers and built their way out. The question isn't "are you a wrapper today?" It's "are you actively building depth, or are you betting the product on the model staying ahead of what users can get directly?"
If you're still doing what you were doing twelve months ago, the model providers have almost surely caught up. If you've built something in the last year that would survive the underlying model getting commoditized, you're probably fine.
If you can't answer that question clearly, that's the check engine light.