February 17, 2026

The Moat Just Moved: Areas of Opportunity in AI Native Software

by Phil Bronner

I’ve written about the relentless push of model companies into the application tier. Traditional moats in enterprise software are weakening as models enable workflow automation, memory management, and multi-agent orchestration.

For two decades, enterprise software defensibility followed a predictable pattern. Build a product, lock in customer data, make switching painful, and watch annual recurring revenue compound. Once embedded in a customer’s workflows, inertia kept you there.

Today, that old playbook is breaking. Consider what happened when we built a custom portfolio-tracking system at Ardent using Claude Projects and Lovable. In three weeks, two people replaced months of manual work with a first-generation system that ingests meeting notes, tracks action items across portfolio companies, generates pre-meeting briefs and follow-up tasks, and maintains context across quarters without configuration. It’s not production-grade software. It’s a working prototype that a human validates before it acts. And it works.

We didn’t need to buy software. We didn’t need a dev team. Two people with AI tools built exactly what we needed. This is the future of enterprise software: custom solutions that cost 10% of off-the-shelf products and fit workflows perfectly.

If we can do this, so can your customers. The question isn’t whether AI will commoditize traditional SaaS. The question is: what moats survive this shift?

What Opportunities Survive the Shift

Opportunity 1: AI Applications that solve complex workflows with deep domain knowledge

Better models don’t solve every problem. We still believe opportunity remains at the application tier and wrote about it here. The bar for what qualifies as defensible has risen — basic memory, workflows, and agent orchestration are now table stakes — but applications that encode unwritten domain rules, integrate structured and unstructured data in ways only insiders would know to combine, and handle the corner cases that never make it into documentation still require expertise that model providers can’t ship as a feature. Consider construction project management: a generalist could build a task tracker with AI in a weekend, but they wouldn’t know that change order disputes require cross-referencing original bid documents against daily field reports and subcontractor communication logs in a specific sequence that mirrors how claims actually get resolved. That kind of workflow knowledge takes years of industry experience to accumulate. The startup advantage is vertical depth and speed.
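
To make "encoding workflow knowledge" concrete, here is a minimal sketch of what that sequencing might look like in code. Everything below is hypothetical: the document sources, the fetch helper, and the step names stand in for the kind of domain logic a generalist would not know to write.

```python
# Hypothetical sketch: encoding an unwritten change-order resolution sequence
# as an explicit, ordered workflow. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class Evidence:
    source: str   # e.g. "original_bid", "daily_field_reports"
    excerpt: str
    date: str

def resolve_change_order(dispute_id: str, fetch) -> list[Evidence]:
    """Cross-reference documents in the order claims actually get resolved.

    `fetch(source, dispute_id)` is an assumed retrieval helper returning a
    list of Evidence objects from the named document store.
    """
    evidence: list[Evidence] = []
    # 1. Start from the original bid: it defines the contractual baseline.
    evidence += fetch("original_bid", dispute_id)
    # 2. Then daily field reports: they establish what happened on site.
    evidence += fetch("daily_field_reports", dispute_id)
    # 3. Finally subcontractor communications: who knew what, and when.
    evidence += fetch("subcontractor_comms", dispute_id)
    # The ordering is the domain knowledge: reverse it and the output no
    # longer mirrors how claims are adjudicated, so reviewers reject it.
    return evidence
```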

Opportunity 2: Data Integrity

Agents need trustworthy facts to act. A bad decision made by a human is one failure. A bad decision made by an agent operating at machine speed across distributed systems is catastrophic. As agents gain autonomy, the systems that keep data correct over time become essential infrastructure.

Most enterprise value resides in structured data — databases, ERP systems, financial records — but LLMs struggle to reason reliably over it without a translation layer. They misinterpret column relationships, lose context across joins, and hallucinate when schema complexity increases. The gap between agent demo and agent deployment stems from this mismatch: context fragmentation, semantic inconsistencies, governance gaps, and state management are each “almost solved” yet never quite resolved. The deployments that succeed today are narrow, carefully scoped, and heavily supervised.

This gap creates a significant opportunity for companies that bridge the structured-unstructured divide. The winning products won’t sit on one side. They’ll integrate both, combining the semantic richness of unstructured data with the precision of structured records in domain-specific ways that require genuine expertise to get right.

Trust at the data layer requires several components working together (a rough code sketch follows this list):

Semantics: standardized definitions so agents understand what data means across systems. When an agent reads “revenue” in three different databases, does it mean gross revenue, net revenue, or recognized revenue? Semantic layers enforce consistency.

Validation and provenance: knowing where data came from and whether it’s correct. Agents must trace decisions back to source data and flag when quality degrades.

Drift resilience: detecting when data quality erodes over time. Systems that worked in January may produce garbage by June if no one is monitoring data drift.

Workflow distribution with write-back: data that flows into action and back. Agents don’t just read data; they take actions that generate new data. The loop must be trustworthy at every stage.
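
Here is a rough sketch of how those four components could hang together. The registry, field names, and drift threshold are all invented for illustration, not a real product's API:

```python
# Illustrative trust layer for agent-readable data. All names are assumptions.

from dataclasses import dataclass
from statistics import mean

# Semantics: one canonical definition per term, mapped per source system.
SEMANTIC_MAP = {
    ("billing_db", "revenue"): "gross_revenue",
    ("erp", "revenue"): "recognized_revenue",
    ("crm", "revenue"): "net_revenue",
}

@dataclass
class Record:
    system: str
    field: str
    value: float
    source_id: str  # provenance: the upstream system and row this came from

def canonical_field(system: str, field: str) -> str:
    """Resolve a raw field to one canonical meaning before an agent reads it."""
    return SEMANTIC_MAP.get((system, field), f"UNMAPPED:{system}.{field}")

def validate(record: Record, history: list[float], drift_tolerance: float = 0.5) -> bool:
    """Drift resilience: flag values that deviate sharply from recent history."""
    if not history:
        return True
    baseline = mean(history)
    return abs(record.value - baseline) <= drift_tolerance * abs(baseline)

def write_back(record: Record, history: list[float]) -> None:
    """Write-back: agent-generated data re-enters validation before it lands."""
    if validate(record, history):
        history.append(record.value)
    else:
        raise ValueError(f"Drift flagged for {record.source_id}; route to a human.")
```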

Incumbents like Salesforce and Microsoft own massive data ecosystems, but their data layers were designed for human consumption, not agent consumption. Rebuilding semantic layers and validation infrastructure across decades of accumulated technical debt is a multi-year project that legacy platforms struggle to prioritize against quarterly earnings pressure. Startups building greenfield, agent-native data infrastructure don’t carry that burden.

Why this compounds: Every workflow executed generates more data. More data improves validation rules and strengthens the semantic layer. Better validation enables more complex workflows. The structured-unstructured bridge becomes more accurate with each transaction as the system learns domain-specific patterns: which fields correlate, which data relationships are meaningful, and which edge cases break standard mappings. A competitor starting from scratch would need to relearn everything.

Opportunity 3: High-Stakes Decisions, Regulated Industries

In domains that involve moving money, making medical decisions, or taking legal action, humans must remain in the loop. This isn’t a limitation to work around. It’s a structural moat that creates defensible positions.

Fintech provides the clearest example. If there’s a 0.01% chance an agent hallucinates and turns a $1,000 payment into a $1,000,000 payment, humans won’t allow full automation. And to move money, the law requires a human. You can automate the analysis, the recommendation, and even the approval workflow. But the final authorization must come from a person. Compliance isn’t an obstacle to automation. It’s what makes the product problem worth solving.
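
A minimal sketch of that structural gate, assuming hypothetical types and names: the agent drafts the proposal and the rationale, but the execution path refuses to run without a named human approver.

```python
# Hypothetical human-in-the-loop gate for money movement. The agent automates
# analysis and recommendation; execution waits for a person.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PaymentProposal:
    payee: str
    amount_cents: int
    rationale: str                     # agent-generated analysis shown to the approver
    approved_by: Optional[str] = None  # only a human ever sets this

def approve(proposal: PaymentProposal, human_approver: str) -> PaymentProposal:
    """The one step an agent cannot take: a named person signs off."""
    proposal.approved_by = human_approver
    return proposal

def execute_payment(proposal: PaymentProposal) -> None:
    # Structural guardrail: no human approver, no money moves,
    # no matter how confident the agent is.
    if proposal.approved_by is None:
        raise PermissionError("Final authorization must come from a person.")
    print(f"Sending ${proposal.amount_cents / 100:,.2f} to {proposal.payee}")
```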

The same pattern holds across other regulated industries. Utah recently became the first state to allow AI to autonomously renew prescriptions — and the immediate backlash from medical associations underscores how contested this boundary remains. Even that narrow pilot requires malpractice insurance, physician review of the first 250 prescriptions per drug class, and exclusions for controlled substances. In every regulated domain — healthcare, legal, financial services, insurance — AI can accelerate analysis, but a human with domain authority makes the final call.

Will AI eventually practice medicine or law autonomously? In narrow domains, over a long enough timeline. But “eventually” is doing a lot of work in that sentence. The regulatory, liability, and trust barriers aren’t just technical problems waiting for better models. They’re institutional structures that move on decade-long timescales. For the foreseeable investment horizon, the human-in-the-loop requirement creates a last-mile complexity problem and a real opportunity for product design. Agentic systems reduce complexity in some ways but increase it in others. When that’s the case, creating an elegant product that understands the domain and is easy to use is a moat.

Companies that encode deep domain expertise into systems that learn from every transaction build defensibility that compounds over time. The right audit trails, the right interfaces for the expert in the loop, the right exception-routing logic — these aren’t features you copy. They’re design choices that emerge from years of understanding how decisions actually get made in a domain.

Why this compounds: More transactions generate better risk models. Better risk models enable more nuanced exception routing. Smarter exception routing builds deeper trust with users and regulators, which drives more transactions through the system. Meanwhile, the regulatory landscape is getting more complex, not simpler. Over 70% of banking firms now use agentic AI to some degree, but regulators are responding by tightening requirements rather than loosening them. The FCA is issuing new guidance on audit trails and human-in-the-loop protocols in 2026. The EU AI Act is layering risk-tiered requirements onto financial services. The compliance moat isn’t eroding. It’s deepening as the regulatory surface area expands alongside AI adoption.

Opportunity 4: Security Perimeter

Security for AI agents can’t be solved by better models alone — it requires external controls that sit outside the model itself.

The opportunity here is larger than “AI security tools.” As Will Quist and Yoni Rechtman argue at Slow Ventures, AI is forcing a cybersecurity architectural reset.

The dominant cybersecurity companies (Palo Alto at $130B, CrowdStrike at $115B, Okta at $16B, Zscaler at $35B) were built under a shared worldview: attacks are discrete events, malicious behavior follows identifiable patterns, humans can review and decide in time, and network, device, and identity boundaries are meaningful. These assumptions produced systems optimized for visibility, alerting, investigation, and after-the-fact response.

AI-native threats break every one of those assumptions. The marginal cost of attack approaches zero. Threats probe continuously and adapt in real time. There are no stable or reusable patterns to detect. AI exploits workflows, trust relationships, and identity, operating inside the perimeters these companies were designed to enforce. And the threats learn across attempts and environments.

The result: the architectures that were right for the prior era now create structural limits. Palo Alto enforces policy at network boundaries, but AI-driven threats don’t respect network edges. They operate inside trusted workflows. CrowdStrike built massive device visibility with human-led response, but response latency and human arbitration are exactly the bottlenecks AI-driven attacks exploit. Okta treats identity as the core trust primitive, but AI excels at operating as a legitimate user. These systems observe and explain attacks. AI-native threats require systems that constrain and contain behavior in real time.

OpenClaw — the open-source agent that went viral in recent weeks — illustrates the identity problem vividly. It runs locally, inherits the user’s full permissions, and operates autonomously inside Slack, WhatsApp, and other platforms. It’s not a user and not a service account — it’s something existing identity frameworks weren’t built to govern. Users have already leaked API keys and exposed cloud environments because the blast radius of agent access is fundamentally different from human access. Organizations want to deploy agents at scale but lack the security infrastructure to govern them safely. The most capable agents are bottlenecked not by intelligence but by trust.

Incumbents are responding by layering AI onto existing workflows. But adding intelligence to legacy systems improves efficiency, not resilience. When attack cost goes to zero and learning is automatic, alerting, investigation, and human judgment become bottlenecks, not solutions. The new security primitives look different from what exists today. Defense becomes continuous rather than incident-based. Detection and response merge into a single control loop. Systems act autonomously within tight constraints. Control moves closer to execution, not observation. Success is measured in speed of containment, not alert accuracy.
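
To make "control closer to execution" concrete, here is a hypothetical sketch of an execution-time guard: every agent action passes through policy checks before it runs, and violations are contained on the spot rather than queued as alerts. The allowlist and limits are invented for illustration.

```python
# Hypothetical sketch: constraining agent behavior at the point of execution.
# Policy names and limits are illustrative, not a real product's API.

ALLOWED_ACTIONS = {"read_file", "query_db"}   # tight allowlist, not broad trust
RATE_LIMIT_PER_SESSION = 30

class ContainmentError(Exception):
    """Raised when an agent action is stopped at the point of execution."""

class ExecutionGuard:
    """Sits between the agent and the systems it touches. Every call is
    checked against policy before it runs; violations are contained
    immediately instead of becoming alerts for later investigation."""

    def __init__(self) -> None:
        self.calls = 0

    def run(self, action: str, execute, *args):
        if action not in ALLOWED_ACTIONS:
            raise ContainmentError(f"'{action}' is outside policy; action blocked.")
        self.calls += 1
        if self.calls > RATE_LIMIT_PER_SESSION:
            raise ContainmentError("Rate anomaly; agent session frozen.")
        return execute(*args)   # control lives at execution, not observation
```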

Why this compounds: Every attack blocked teaches new attack vectors. Every containment action refines the autonomous response model. The security layer gets smarter with usage in ways that legacy detect-and-respond systems cannot, because the learning loop feeds forward into prevention rather than backward into investigation.

How Moats Compound: The Multipliers

The best companies don’t just occupy a durable opportunity area; they also have mechanisms that create exponential separation over time.

When evaluating any company across the four opportunity areas, we assess three core compounding factors.

1. Intelligence Flywheel

This is the first thing we look for once a company fits one of the opportunity areas. If a founder can’t articulate their intelligence flywheel, it’s a disqualifier. Every user interaction must make the product smarter. Data compounds into personalization that competitors can’t match without equivalent usage history.

The architecture: user action → captured data → model/system improvement → better UX → more usage → more data. A data integrity company gains stronger validation rules as usage increases. A compliance company builds better risk models with more transactions. A security company improves threat detection with more attack data. A coordination platform strengthens multi-party workflows with more participants. A marketplace orchestration layer improves matching as more transactions occur.
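
As a toy sketch, with every name invented for illustration, the loop can be as simple as logging each user judgment and letting it update the rules that shape the next interaction:

```python
# Toy flywheel: user action -> captured data -> system improvement -> better UX.
# Field names and update factors are assumptions, not a real system.

validation_thresholds: dict[str, float] = {}   # field -> learned anomaly threshold

def capture(field: str, deviation: float, user_flagged_error: bool) -> None:
    """Every user judgment becomes labeled data that updates the rules."""
    threshold = validation_thresholds.get(field, 1.0)
    if user_flagged_error:
        # A miss: tighten so similar deviations are caught next time.
        validation_thresholds[field] = min(threshold, deviation) * 0.9
    else:
        # A confirmed-good value: relax slightly to cut false positives.
        validation_thresholds[field] = threshold * 1.02

def check(field: str, deviation: float) -> bool:
    """Fewer false flags means better UX, more usage, and more labeled data."""
    return deviation <= validation_thresholds.get(field, 1.0)
```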

The key question: Does the product architecture capture learning loops? Is there a clear path from user action to data to improvement to better UX to more usage?

2. Domain Expertise Density

Founders understand the unwritten rules, edge cases, and judgment calls that never make it into documentation. They know what data matters before customers tell them.

This is especially critical for AI Native Applications, Data Integrity, and Trust & Compliance. You can’t build a system that keeps data correct over time without knowing what “correct” means in that domain. You can’t build compliance infrastructure without understanding the unwritten rules that auditors actually care about.

Domain expertise shows up in the depth of workflow design: how you implement the human-in-the-loop, not just that it exists. The accumulated knowledge of where humans need to intervene, what they need to see, and how to route exceptions to the right people. This is a compounding advantage — every transaction teaches the system more about how decisions actually get made in a domain, and that knowledge gets encoded into the product. Edge case awareness, workflow intimacy, decision-making context, customer trust — these compound into design choices that a technical team without years in the trenches simply won’t make.

3. Velocity

Speed of iteration compounds. A team that ships weekly and learns from each cycle accumulates more edge cases, more data, and more flywheel turns than a competitor shipping monthly. Over 24 months, that gap isn’t linear. And in AI specifically, velocity matters more than in traditional software — the rate of model improvement means the window to accumulate proprietary learning before the next capability leap is shorter.

The key question: How fast does the team move from customer signal to shipped improvement — days or quarters? Velocity without a flywheel is just speed. Velocity feeding a flywheel is how the lead compounds.

The Updated Litmus Test

The strongest investments lie in a durable opportunity area where all three compounding factors work together. A company in a durable area without them has only a temporary advantage that erodes as competitors catch up.

We used to ask: “Could a generalist builder with modern tools replicate this in a weekend?”

Now we ask: “If foundation models improve 10x in the next 24 months, does this company’s moat get stronger or weaker?”

At Ardent, we’re actively investing in companies building in these four opportunity areas. If you’re a founder working on AI-native applications deep in the domain, data integrity systems, trust and compliance infrastructure, or security perimeters for AI, we’d love to hear from you.