Beyond LLMs: Moats, Distribution, and the Value Lifecycle
What makes one AI product defensible against another? Data, specialization, compound moats, and how the whole AI product stack maps to Moore's technology lifecycle.
This is Part 3 of a three-part series exploring AI products beyond the model layer. Part 1 covered the product framework. Part 2 went through the technical anatomy of agentic systems. This final part connects everything to defensibility and the value lifecycle.
We covered the product framework. We covered the technical stack. Now the question is: everyone is building with the same tools, APIs, and models. What actually makes one AI product defensible against another?
Connecting Systems to Products
At the center of every AI product sits the model, the brain. Around it, you build the compound system: context, knowledge, gateways, tools, evaluation, operations. These are your features. Then you add activities: testing, legal review, QE, support. Together they form the whole product.
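The compound-system idea can be sketched in code. This is a minimal illustration, not a real framework: every class, field, and function name here is invented to show the shape of the composition, with stub lambdas standing in for the model and retrieval layers.

```python
# Minimal sketch of a compound system: the model is one component among
# several, and the product is the composition. All names are illustrative.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CompoundSystem:
    model: Callable[[str], str]         # the "brain": prompt in, text out
    retrieve: Callable[[str], str]      # knowledge layer: fetch relevant context
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)
    evaluate: Callable[[str, str], bool] = lambda q, a: True  # evaluation gate

    def answer(self, question: str) -> str:
        context = self.retrieve(question)   # ground the model in your data
        draft = self.model(f"Context:\n{context}\n\nQuestion: {question}")
        # Operations layer: refuse to ship answers that fail evaluation.
        return draft if self.evaluate(question, draft) else "Escalating to a human."

# Stub components show the shape without a real model behind them.
system = CompoundSystem(
    model=lambda prompt: "42",
    retrieve=lambda q: "internal knowledge base entry",
)
print(system.answer("What is the meaning of life?"))  # → 42
```

The point of the sketch: swapping the `model` stub for a different provider changes one field; the retrieval, tools, and evaluation layers, the parts you built, stay put.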
What Is a Moat?
Warren Buffett popularized the "economic moat" concept. Hamilton Helmer's *7 Powers* framework (2016) lays out the types of competitive advantage that endure: scale economies, network effects, switching costs, counter-positioning, branding, cornered resources, and process power.
Everyone can build with AI right now. According to McKinsey's 2024 "State of AI" report, 72% of organizations had adopted AI in at least one function. So the model isn't a moat. If everyone has access to the same intelligence, what sets you apart?
Data — Proprietary data that nobody else has. A healthcare company with patient data approved for AI training. A financial firm with decades of transaction patterns.
Specialization and workflow depth — Domain expertise will become increasingly valuable. A Kubernetes troubleshooting agent built by people who've been doing Kubernetes operations for a decade will outperform one built by a team that just learned Kubernetes.
Network effects — More users means more data. More data means better products. Better products attract more users.
Distribution advantage — How you get your product to customers matters. ChatGPT is on people's phones. Whoever controls the entry point controls the relationship with the user.
Compound Moats
Individual moats are useful but fragile. What really works is stacking them.
Switching costs alone are a weak moat. Compliance alone is a weak moat. But combine switching costs with compliance requirements with specialized data with network effects with distribution, and you get a compound moat. Much harder to break through.
Harness engineering as a moat deserves its own mention. The moat lives in the model-sensitive and use-case-specific components: the system prompts tuned through hundreds of iterations, the orchestration policies shaped by domain expertise, the tool configurations built around your customers' workflows.
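The split between model-sensitive and use-case-specific components can be made concrete. A hedged sketch, with every file name, field, and value invented for illustration; the structure, not the specifics, is the point:

```python
# Illustrative harness configuration. The defensible parts live in tuned
# data like this, not in the model call itself. All fields are hypothetical.
HARNESS = {
    # Model-sensitive: re-examine these on every model upgrade.
    "system_prompt": "You are a Kubernetes troubleshooting assistant...",
    "max_tool_calls_per_turn": 4,
    # Use-case-specific: shaped by domain expertise, survives model swaps.
    "tools": {
        "kubectl_describe": {"timeout_s": 10, "allowed_namespaces": ["prod"]},
        "fetch_runbook": {"index": "sre-runbooks-v3"},
    },
    # Orchestration policy: when to stop, retry, or hand off to a human.
    "escalate_if": ["pod_crashloop_unresolved", "permission_denied"],
}
```

A competitor with the same model API still has to rediscover every value in a file like this, iteration by iteration.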
On the builder side: Meta acquired Manus for roughly $2B, not for the model but for the harness. On the user side: every AGENTS.md rule, every custom linter, every test suite tuned to produce LLM-readable errors is an investment that compounds.
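What a test suite "tuned to produce LLM-readable errors" might look like, sketched with an invented helper: instead of a bare `AssertionError`, the failure message names the invariant, the observed values, and the likely causes, so an agent reading the log can act without re-running the test.

```python
# Hypothetical assertion helper with failure messages written for an agent,
# not just a human scanning CI output. Helper name and wording are invented.
def assert_status(actual: int, expected: int, endpoint: str) -> None:
    if actual != expected:
        raise AssertionError(
            f"HTTP status mismatch on {endpoint}: expected {expected}, "
            f"got {actual}. Likely causes: route not registered, or auth "
            f"middleware rejecting the request before the handler runs. "
            f"Check the router config first."
        )

assert_status(200, 200, "/healthz")  # passes silently
```

Each message like this is a small, compounding investment: it encodes debugging knowledge once, and every future agent run benefits from it.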
Anthropic's engineering blog puts it well: "When a new model lands, re-examine the harness, stripping away pieces that are no longer load-bearing and adding new ones. The space of interesting harness combinations does not shrink as models improve. It moves."
OpenAI's internal team built roughly one million lines of production software over five months, orchestrated entirely through their Codex agent across approximately 1,500 pull requests. The model alone could not have sustained that kind of coherent output. The harness made it possible.
Case Studies
[NVIDIA](https://www.nvidia.com/) — Core product is GPUs and hardware. Market cap crossed $3 trillion in 2024. CUDA is the platform layer with over 4 million developers, creating near-lock-in.
[OpenAI](https://openai.com/) — First mover advantage with ChatGPT, reaching 100 million weekly active users by early 2025. $6.6 billion raised at a $157 billion valuation in October 2024. Distribution through mobile changed the game.
[Anthropic](https://www.anthropic.com/) — They didn't have first mover advantage, so they play a different game. $2 billion from Google in 2023, $4 billion total from Amazon. Constitutional AI, safety-first branding, enterprise trust through strategic partnerships. Claude 3.5 Sonnet carved out a reputation for reliability in coding and analysis.
Notice the pattern: the enablers are roughly the same for everyone. Where these companies diverge is in their differentiators.
The Value Lifecycle
The AI product evolution follows a recognizable pattern. You start with models (enough for innovators). Add data pipelines, retrieval, tools (early adopters). Add QE, testing, security, compliance (early majority). Add differentiation and moats (late majority).
This cycle is still in its early stages. Most AI products today are somewhere between the innovator and early adopter phase. The companies that get this right will define the next era.
One More Thing
Are we heading toward a "Kubernetes moment" for AI, a standardization layer that prevents monopolization and enables interoperability?
Open source models are one such layer. The APIs are beginning to standardize. OpenAI published the Open Responses spec in 2025. Linux Foundation AI published the Model Openness Framework in 2024.
But businesses still have to build. Standards create the floor, not the ceiling. On top of the standards, you differentiate. The standard prevents lock-in. The differentiation generates revenue.
That's the whole picture. Start with the model. Build the compound system. Apply constraints for your customers. Stack your moats. Ship the whole product.