January 20, 2026

The Moat Just Moved: What Defensible AI-Native Apps Look Like Now

by Phil Bronner
Industry Analysis

Three years ago, my biggest fear as an AI investor was funding a “thin wrapper” company. Apps that were a nice UI on top of a prompt. Easy to build, easy to copy, no moat.

Our thesis was that defensible AI-native apps required deep domain expertise, persistent memory, and workflows spanning structured and unstructured data. The infrastructure to do that didn’t exist, so the ability to build it yourself was part of the moat. That thesis served us well.

What qualified as a thin wrapper just shifted.

The Infrastructure Just Caught Up

Claude Code gives engineers an agent that reads codebases, executes commands, spins up subagents for parallel work, and connects to dozens of external tools through MCP. Cowork, released last week, extends that same architecture to knowledge workers. You grant it access to a folder, describe the outcome you want, and it autonomously plans and executes — sorting files, generating spreadsheets from receipt images, compiling reports from fragmented notes. The interface is chat; the capability is agentic.

This raises the bar.

Where the Moat Moved

A generalist with Cowork can build memory-enabled, agentic workflows in a weekend. Persistence and workflows used to differentiate you. They won’t anymore.

The companies that win in vertical AI-native software will be those that:

1. Capture unwritten domain rules

Automate the task the way experts actually think. The edge cases. The exceptions. The judgment calls that never make it into documentation.

2. Integrate data in ways only insiders would know to combine

Consider a legal tech company that doesn’t just process contracts but connects them to case precedents, billing history, and client communication patterns. That’s years of knowing what matters.

3. Rethink interfaces around multimodal interaction

Today, AI typically gets incorporated into applications as a chatbot bolted onto a SaaS platform, or as a standalone chat interface. Both force users into a single modality, even though human communication adapts to context.

Input should match the friction: voice for speed and context, video for demonstration, text for precision, GUI for constraints. Output should match complexity: a sentence when the answer is simple, a generated interface when the result requires interaction, an approval card when the system already knows what to do.

This is generative UI: the interface is constructed based on intent, context, and the response. The AI builds the optimal interface for the specific task at hand.
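As a sketch, the output side of this idea is a dispatch on the shape of the response. The types and renderer names below are hypothetical, not from any real generative-UI framework:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class AgentResponse:
    text: str                            # plain-language answer
    data: Any = None                     # structured result, if any
    proposed_action: Optional[dict] = None  # an action the system is ready to take

def render(response: AgentResponse) -> dict:
    """Pick the output surface that matches the response's complexity."""
    if response.proposed_action is not None:
        # The system already knows what to do: show an approval card.
        return {"kind": "approval_card", "action": response.proposed_action}
    if response.data is not None:
        # The result requires interaction: generate an interface around the data.
        return {"kind": "generated_ui", "data": response.data}
    # The answer is simple: a sentence is enough.
    return {"kind": "sentence", "text": response.text}
```

The point is that the renderer, not the user, chooses the modality; a real implementation would generate the interface itself rather than return a descriptor.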

4. Build memory that learns vertical-specific patterns

Most products treat memory as a nice-to-have: saved preferences and conversation history. AI-native products treat memory as the engine.

When memory is designed well, context compounds. Each input becomes more valuable. Prior context helps interpret it, and storing it well makes every future task smarter. Once the memory architecture is right, the bottleneck shifts. You’re no longer bound by what the agent can do — you’re bound by what it gets exposed to.

This creates two design challenges. First, knowing what to capture and how to interpret it for a given industry and workflow. Second, expanding the surface area of what the agent sees. An agent that only learns from direct interactions will plateau. One that absorbs context from adjacent workflows — meetings, Slack threads, documents — compounds faster.

When you build memory well, the interface collapses. You don’t need 30 configuration options if the system remembers how you approve, what you prioritize, what you ignore, and what outcomes you care about. Memory turns software from a tool you configure into a partner that adapts.

Memory is also a liability. AI-native apps must get serious about permissioning, inspectability (“what does the system believe about me?”), editability (users must be able to correct memory), and audit trails.
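The four requirements above map onto a small interface. This is a minimal sketch; the class and method names are illustrative, not drawn from any existing product:

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    key: str
    belief: str
    source: str  # provenance: where this belief came from (a permissioning hook)

@dataclass
class MemoryStore:
    records: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)  # every write/delete is traceable

    def remember(self, key: str, belief: str, source: str) -> None:
        self.records[key] = MemoryRecord(key, belief, source)
        self.audit_log.append((time.time(), "write", key, belief))

    def inspect(self) -> dict:
        """Answer 'what does the system believe about me?'"""
        return {r.key: r.belief for r in self.records.values()}

    def correct(self, key: str, belief: str) -> None:
        """Users must be able to overwrite a belief directly."""
        self.remember(key, belief, source="user_correction")

    def forget(self, key: str) -> None:
        self.records.pop(key, None)
        self.audit_log.append((time.time(), "delete", key, None))
```

Real systems would add per-record permissions and retention policies, but the shape is the same: beliefs are first-class, visible, editable, and logged.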

5. Be genuinely agentic

In AI-native software, the product isn’t a set of screens. The buttons and drop-down menus that let a human execute each step of a task are no longer needed; the agent executes them. The new paradigm is a continuous loop: intent, plan, execute, observe, remember, and improve. The user’s role shifts from executing a workflow via clicks to supervising an autonomous process.
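The loop reads as code almost directly. Below is a hedged sketch of one pass, with the human supervisor modeled as an `approve` callback; `Step` and the stand-in plan are assumptions, not a real agent API:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Step:
    name: str
    run: Callable[[], str]       # execute this step, return an observation
    needs_approval: bool = False

def agent_loop(steps: List[Step], memory: List[Tuple[str, str]],
               approve: Callable[[Step], bool] = lambda step: True):
    """One pass of the loop: plan -> execute -> observe -> remember.

    `steps` stands in for the plan; in a real agent, observations would
    also feed back into replanning (the 'improve' stage).
    """
    for step in steps:
        if step.needs_approval and not approve(step):
            memory.append((step.name, "skipped"))   # supervisor vetoed
            continue
        observation = step.run()                    # execute and observe
        memory.append((step.name, observation))     # remember
    return memory
```

The user never clicks through the workflow; they only see the approval gates and the accumulated record of what was done.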

Our portfolio company Superposition is an example: an agentic application that leverages memory and a multimodal interface. It offers an AI-native recruiting platform for engineering and sales talent at early- and growth-stage companies.

Gathering requirements for the job description is essential. The subtle details of the role enable Superposition to identify non-obvious but well-matched candidates. The company has trained a voice agent to converse with employers and uncover those subtleties, just as a world-class recruiter would. Once it has a job profile, the agent searches online for profiles that match it. Employers can approve or reject these profiles, refining the search criteria. After refinement, the agent continues its work, sharing interested candidates in Slack. The agent then arranges interviews with those approved by the employer and schedules them on the employer’s calendar. As candidates progress through the recruitment funnel, the employer provides feedback to the agent. This continuous interaction allows the agent to learn the employer’s specific preferences.

What does the employer see? A daily update on candidates and their status in the hiring process, along with suggested actions to approve or modify right inside the tool they already use to do their work. Edge cases are flagged for human judgment. The employer makes high-leverage decisions, and over time, the system becomes simpler because it learns what matters.

The New Litmus Test

If you’re building or evaluating AI-native products, add these questions to your checklist:

  • Can the user achieve outcomes without opening the app?
  • Does the product get simpler over time?
  • Does it safely take actions across systems with appropriate approvals?
  • Does memory change defaults in ways users can inspect and edit?
  • Could a generalist builder with Cowork replicate this in a weekend?