Most institutional investors approach artificial intelligence the way they approach any new technology cycle: with a carefully calibrated mix of curiosity and caution, a small percentage allocation, and the expectation that the dust will settle and the winners will reveal themselves. We think this is the wrong posture. AI is not a sector to allocate to. It is a substrate that will, within a decade, sit underneath every business we already own — and the question is not whether to participate but where, in that substrate, the cash flows accrue.

What we mean by an AI thesis

A thesis is not a position. A thesis is a coherent set of beliefs about how a market will evolve, rigorous enough to be wrong about, and specific enough to act on. A family office that does not have one — that says only 'we are interested in AI' or 'we have selectively invested in AI' — is at the mercy of whatever pitch crosses its desk.

We have spent two years building ours. It has three core beliefs.

Belief one: the model layer will commoditise faster than the market expects

The frontier model layer is technically remarkable, capital-intensive, and ultimately a place where economics are unkind to all but the very largest participants. The cost of delivering a given level of model capability is falling on the order of 70–80% per year. The ability to fine-tune open-weight models for narrow applications is improving at a similar rate. We do not see this as a market that will sustain durable economics for more than a small handful of platforms — and even those will be commoditised relative to the value above them.
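The arithmetic behind that claim is worth making explicit. The sketch below is purely illustrative — the 70% and 80% figures are the endpoints of the range cited above, and the $100 starting cost is a hypothetical unit, not a number from our underwriting:

```python
def remaining_cost(initial_cost: float, annual_decline: float, years: int) -> float:
    """Unit cost remaining after `years` of decline at a constant annual rate."""
    return initial_cost * (1.0 - annual_decline) ** years

# Hypothetical $100 unit cost, eroded at the two ends of the cited range.
for decline in (0.70, 0.80):
    after_three = remaining_cost(100.0, decline, 3)
    print(f"At {decline:.0%}/yr decline, $100 of unit cost falls to ${after_three:.2f} in three years")
```

At a constant 70% annual decline, only $2.70 of every $100 of unit cost remains after three years; at 80%, $0.80. A business whose pricing power rests on that cost base has very little time to build something else.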

This is not a controversial view. It is, however, a view that our own underwriting takes more seriously than most. We have not made a single direct investment in a frontier model company at the valuations being offered. We may yet, but only at structures and terms that price the commoditisation we believe is coming.

Belief two: applied AI in regulated industries is where the next decade of returns lives

The most interesting investment opportunities we are seeing are not at the model layer but at the layer immediately above: applied AI in industries where regulation, distribution, and accumulated proprietary data create durable advantages. Financial services. Healthcare. Defence. Legal. Insurance. Pharmaceutical research.

These are industries where the value of an AI capability is not the model itself but the integration into a regulated workflow that the incumbent already controls. The model is a fungible input. The distribution and the data and the regulatory permission are not. This is where we have been concentrating our diligence and our capital.

"The model is fungible. The distribution, the data, and the regulatory permission are not. That is where the cash flows accrue."

Belief three: the infrastructure layer is more attractive than the consumer layer

The compute, data centre, networking, and inference infrastructure that powers AI is, in our analysis, more attractive on a risk-adjusted basis than most consumer applications. Why? Because it has the economics of utilities — long-duration contracted revenues, capital-intensive but defendable, rewarded for scale and operational excellence — combined with end-market demand growth that is unlike anything we have observed in adjacent infrastructure cycles.

We have written separately about why inference economics will look like utilities. The short version is that the marginal cost of generating intelligence is collapsing at the model layer but the total cost of delivering it at scale — compute, power, networking, latency optimisation, geographic distribution — is not. We believe this is where a meaningful fraction of the long-term value capture sits.

The framework: three filters

For every AI-related opportunity that crosses our desk, we apply three filters before we take the meeting seriously.

Filter one: durable advantage

What does this business own that another business — funded equally well — cannot replicate within twenty-four months? If the answer is 'a better model', we are not interested. Models will be replicated. If the answer is 'a regulated relationship', or 'fifteen years of proprietary training data', or 'distribution into ninety percent of an industry', then we are interested.

Filter two: economics that compound

Does this business have unit economics that improve as it grows? AI businesses are particularly susceptible to looking like they have favourable economics during early growth, then revealing scale-related cost structures that erode them later. We model carefully for this. The businesses we like have customer acquisition that gets cheaper, gross margin that expands, and net revenue retention that compounds — not just in year one but in year five.
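The compounding in filter two is easy to understate. The following is illustrative arithmetic with hypothetical cohort numbers — a $1.0m starting cohort and 120% versus 90% net revenue retention are assumptions for the example, not figures from any company we have diligenced:

```python
def cohort_revenue(start_arr: float, nrr: float, years: int) -> float:
    """Annual revenue from a single customer cohort after `years`,
    assuming a constant net revenue retention (NRR) rate."""
    return start_arr * nrr ** years

# Hypothetical $1.0m cohort, held for five years.
expanding = cohort_revenue(1.0, 1.20, 5)  # 120% NRR: each year's base grows 20%
eroding = cohort_revenue(1.0, 0.90, 5)    # 90% NRR: each year's base shrinks 10%
print(f"120% NRR after five years: ${expanding:.2f}m; 90% NRR: ${eroding:.2f}m")
```

Under these assumptions the expanding cohort is worth roughly $2.49m a year by year five while the eroding one is worth roughly $0.59m — a more than fourfold gap from a single recurring parameter, before any new customer acquisition is counted. This is why we insist on seeing the year-five trajectory, not the year-one snapshot.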

Filter three: management that has built before

This filter is mundane, and it is the one we are strictest about. Many AI companies are run by people who are technically brilliant and operationally inexperienced. We have a strong preference for founders and operators who have built and scaled at least one previous business — even, and especially, a business that failed for instructive reasons. The work of running an AI company is in many respects identical to the work of running any technology company. Doing it well requires the skills that come from having done it before.

What an AI portfolio looks like for us

Our AI exposures sit across three buckets, in roughly equal weights.

What we are avoiding

Three categories of AI investment that we currently regard as systematically overpriced relative to the durable economics they will produce:

Frontier model companies at headline valuations. The compute economics, the talent dynamics, and the commoditisation curve all argue against the multiples being offered. We are happy to be wrong here, but we are not happy to pay to find out.

Vertical AI tooling without proprietary data or distribution. Many of the apparent winners of the current cycle are wrappers around the model layer. As the model layer commoditises, the wrappers commoditise faster.

Consumer AI applications without subscription persistence. The economics of consumer AI are characterised by extreme experimentation costs, low switching costs, and rapid product iteration. We have not yet identified a consumer category where the long-term economics look defensible.

The bigger point

What we are arguing for is not aggression but discipline. AI is, in our view, a once-in-a-generation technology shift. The temptation is to allocate broadly to ensure participation. The discipline is to allocate narrowly, to the precise places where durable advantage exists, and to avoid the much larger surface area where the cycle has temporarily inflated economics that will not hold.

A family office without an AI thesis is, by default, taking a position. It is taking the position that the cycle will pass it by, or that it will pick up the winners later at higher prices. We do not believe either is consistent with stewarding generational capital.

The right posture, we believe, is the harder one: develop a thesis, hold it with conviction, and act on it with the patience that distinguishes capital that compounds from capital that merely participates.

Maximus Rogers, Portfolio Manager