This is my first newsletter, or indeed article of any kind. I'm not much of a writer, and I've always been a bit tentative about putting my thoughts out into the world, not least because, given today's surplus of digital content, I doubt their originality. I mostly thought this exercise would be useful for sharpening a thesis that's been forming in my head over the past few weeks, so consider this an attempt at thinking out loud rather than polished analysis.
Where we've been
Over the last six years at Blue Wire, I’ve spent a decent amount of time ‘overthinking’ defensibility, at least when measured against other investors’ judgements, or against missed opportunities that have generated great outcomes for others (e.g. I still fundamentally don’t understand the defensibility of Legora or Harvey, and, not that I had the chance to invest in them, I’d imagine these are two of many companies that would have easily been fund-returners for us).
This mostly meant focussing on a relatively narrow set of things I believe make businesses durable: chief among them the most oft-mentioned of moats, network effects and economies of scale. I've also rarely believed that technology itself constitutes a moat in the long term; an early proprietary tech build can give you a good headstart, but in most markets it never really buys you permanence. If the prize is big enough, someone will rebuild it, or the platform underneath you will shift, or the whole game will change. (Side note: I've also long felt, since my brief stints at Goldman Sachs / SIG, that terminal values in SaaS have been fundamentally overvalued. When you look at the actual dividend schedules these companies are likely to produce, the multiples become very hard to justify.)
This isn’t meant to be a pat on the back or a pretence that I was unique in focussing on these elements when investing. Now that the world seems to be converging on a similar thesis, with software stocks getting hammered, I’m simply asking myself what’s next.
The consensus “next”
The smart conversation right now is about what else protects a business from competition in the age of AI beyond network effects and scale, and two candidates keep coming up.
The first is regulatory complexity: industries where the compliance burden is so heavy, so jurisdictionally fragmented, and so constantly shifting that AI can't simply leapfrog the incumbents. Healthcare, financial services, even defence - the argument is that the regulatory surface area itself becomes a moat. In time you might be able to rebuild the Monzo or Revolut apps with Claude, but you can’t generate the licences, sponsor-bank relationships, and so on. Shoutout to our friends at Form Ventures, who have been focussing on this since 2019.
The second is physical-world presence. “Atoms, not bits”. If your business requires warehouses, fleets, equipment, or human beings standing in specific places at specific times, the argument goes, you're harder to disrupt than a pure-software company. How would AI-empowered competition build a 10x better coworking space, or shipping fleet?
I feel you could almost group both of these under a single heading: ‘structural inefficiencies’ from AI's standpoint. These are things that are hard to automate not because they're intellectually difficult, but because the world is messy. Regulations are inconsistent. Physical infrastructure takes years to build. Human relationships can't be serialised into a training set.
These are real sources of protection. But they're also becoming consensus. And I think there's a third form of defensibility emerging that is more interesting and less discussed.
Competition enabling value accrual
Peter Thiel's famous line (“competition is for losers”) has long resonated with how we invest at Blue Wire. We've always looked for n=1 companies building in non-obvious or non-consensus markets, where the execution risk is manageable precisely because the competitive threat is low. If you're the only one doing something, you don't need to be the fastest or the best-capitalised; you mostly just need to be right about the market existing.
This has been a good framework, but I've started to wonder whether, in an AI-disrupted world, the inverse might also hold.
AI is exceptionally good at solving optimisation problems - systems with stable inputs, clear objectives, and a single correct answer. Given enough data and a well-defined goal, it will converge on a solution faster and more cheaply than any human team. This is precisely why it's so threatening to businesses that compete on efficiency, speed, or analytical rigour within a stable problem space (a decent example here is an “AI personal tax accountant”, where there should be an objectively correct way to optimise and file an individual’s taxes).
But AI is much less suited to adversarial problems - environments where the optimal move depends on what your opponent does, where the landscape shifts beneath you in real time, and where there is no stable equilibrium to converge on. Game theory, not optimisation theory; the dynamics of a duel, not of a production line.
Take the public stock market, as an illustrative counter to the AI tax accountant case above: despite extensive and exceptional use of AI and ML systems, there is clearly no all-powerful recommender that will tell you the ‘optimal’ trade - and if there were, its widespread adoption would render the whole system useless.
Companies operating in genuinely zero-sum or adversarial markets - where you win precisely because someone else loses, and both sides are constantly adapting - may carry a kind of built-in immunity to AI disruption. Not because they're using better technology, or because they're protected by regulation, or because they have physical infrastructure. But because the competitive environment itself is inherently unsolvable, and the problem keeps changing. The moat isn't a wall, it's the chaos itself.
To be clear, I'm not arguing that AI won't be used extensively in these markets - of course it will. I'm arguing that AI is unlikely to settle them. In a market where the game is fundamentally adversarial, deploying AI doesn't end the competition; it just raises the stakes. Both sides adopt it, and you're back to needing human judgement, timing, relationships, and risk appetite to win. The tool becomes table stakes, the duel continues.
The expanding universe of duels
I believe the number of markets with these adversarial, zero-sum dynamics is growing.
As new infrastructure makes previously illiquid or opaque markets tradeable, those markets begin to develop the competitive characteristics I'm describing. Consider what's happening in compute - companies like Compute Desk are building infra that allows compute capacity to be bought and sold dynamically, turning what was an opaque procurement process into something that looks much more like a trading market. I believe there will be many $XBn companies built on top of this layer, given the new duels it enables. Or look at what Dirac is doing in excess stock trading, opening up new dimensions of tradeability in markets that were previously locked up in point-to-point contracts, and where more players arguably increases the size of the opportunity. I have a generally sceptical view of the current state of prediction markets, but they play into the same trend too. Perhaps there is some overlap between this thesis and a belief in the increasing ‘financialisation’ of previously untradeable assets.
More liquidity means more participants. More participants mean more adversarial price discovery. More adversarial price discovery means more game-theoretic complexity. The very infrastructure that makes these markets more efficient also makes them more "duel-like" - and therefore harder for AI to simply solve.
And there are other (likely more numerous) non-financial examples, where a new framing or north star can rule a company into the conversation here, or at least pique my interest where previously I might have been uninterested. When I picture, say, a down-the-middle agentic marketing early-stage pitch, I imagine either a workflow optimisation tool or a one-size-fits-all package - not a company selling a super-competitive future where its system will be benchmarked and judged on its ability to outperform other similar systems. Or an M&A dealmaking tool, where actually getting the more favourable outcome may depend on having your AI ‘beat’ the AI on the other side (as opposed to converging on the true fair value). I suspect the direction of travel here is towards ‘build your own system’ if it’s core enough to your business, but perhaps start-ups can sell the dream of limited seats and exclusive use of their systems.
Weirdly, I also think we at Blue Wire have been almost subconsciously investing in a form of this thesis for a while. Perhaps the beehiiv case (you should ‘own your audience’ instead of renting distribution rails on a platform like Substack, putting your best foot forward in the competition for attention), or the Ignota Labs thesis (use AI systems to buy and rescue promising but failing drugs ahead of others), are examples of this in action before I’d even articulated the concept.
This trend is only going to accelerate. As the tools for market creation improve, I expect we'll see tradeable & adversarial markets emerge in categories that don't look like markets today, and aggressively competitive dynamics come to the fore in sectors that have previously been about enablement. And as they do, the companies facilitating and operating within those markets may turn out to have a form of defensibility that we haven't been paying enough attention to.
What this means
This doesn't overturn our existing framework. Network effects and economies of scale still matter enormously, and we'll keep looking for them. But it adds a new lens: when evaluating a company, we should be asking not just whether the business is defensible, but whether the competitive environment itself is a source of defence.
It's a slightly uncomfortable idea for us, because it cuts against the Thiel-influenced instinct to avoid competitive markets entirely. But I think the honest answer is that some of the messiest, most competitive, most adversarial markets might turn out to be the most durable places to build in this age. Not despite the competition - because of it.
This is early thinking. I'm sure there are holes in it, and I'd welcome anyone poking at them. But writing this down has helped me see the shape of the idea more clearly, which was partly the point.