Efficiency First

Start with what AI actually improves.

Markets are information systems. The edge comes from processing speed, pattern recognition, and execution efficiency. AI pushes all three forward.

It scans data faster than any human desk. It identifies discrepancies across assets, venues, and timeframes. It executes without hesitation. Traders don't need to "see" the market anymore. The model does.

In that sense, AI pulls markets closer to the textbook version of efficiency. Prices adjust faster. Mispricings close quicker. Information gets embedded almost instantly. Risk systems tighten. Fraud detection sharpens. Credit decisions become more data-driven. Customer service scales without headcount.

In normal conditions, this looks like progress. And it is.

But a tighter system is not necessarily a stronger one.

Where It Breaks

The problem starts where AI's competence ends.

AI learns from data. Financial data is overwhelmingly drawn from stable regimes. Orderly markets, small shocks, repeated patterns. But markets are defined by their discontinuities, and discontinuities are exactly what stable data cannot teach.

Crises live in the tails. And the tails are exactly where AI has no training data.

Scene from Margin Call (2011)

So the system acquires a structural blind spot. Strong in the centre of the distribution, weak at the edges where it matters most.

AI does not understand uncertainty. It interpolates within what it has seen. It cannot price what it has never experienced. Its confidence is highest precisely where its exposure to genuine disorder is lowest.
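
A minimal sketch of that blind spot, under toy assumptions (synthetic calm-regime returns and a simple mean-and-volatility fit, nothing drawn from a real market):

```python
import numpy as np

# Hypothetical calm-regime history: small daily returns, no crises in sample.
rng = np.random.default_rng(0)
calm_returns = rng.normal(loc=0.0003, scale=0.008, size=2500)  # ~10 years of quiet days

# The "model": estimate mean and volatility from what it has seen.
mu, sigma = calm_returns.mean(), calm_returns.std()

# A crisis-sized move that never appears in the training window.
crisis_move = -0.10  # a -10% day

# How many "sigmas" away the model thinks this is.
z = (crisis_move - mu) / sigma
print(f"Model sees a -10% day as a {abs(z):.1f}-sigma event")
# At that distance the fitted distribution assigns it effectively zero probability:
# the model is most confident exactly where it has no evidence.
```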

From Efficiency to Crowding

Once AI is widely adopted, the issue is no longer individual model failure. It becomes system behavior.

Most firms are not building fundamentally different models. They are using similar datasets, training on similar histories, optimizing for similar objectives. That produces alignment: mechanical similarity, even without intentional coordination.

Positions begin to converge. Same signals. Same trades. Same risk triggers.
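
One way to see the mechanics, as a purely illustrative sketch: ten hypothetical "firms" each fit the same simple objective to bootstrap samples of one shared history, and their signals line up without any coordination.

```python
import numpy as np

rng = np.random.default_rng(1)

# One shared market history (hypothetical daily factor returns).
history = rng.normal(0.0002, 0.01, size=(1000, 5))

signals = []
for firm in range(10):
    # Each "firm" trains on its own bootstrap sample of the same history...
    sample = history[rng.integers(0, len(history), size=800)]
    # ...and optimizes the same objective: go long factors with positive risk-adjusted mean.
    score = sample.mean(axis=0) / sample.std(axis=0)
    signals.append(np.sign(score))

signals = np.array(signals)
# Agreement across firms, per factor: 1.0 means every model takes the same side.
print("Cross-firm agreement per factor:", np.abs(signals.mean(axis=0)))
```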

This is not new. LTCM was built on convergence trades. Everyone saw the same relative value and crowded into it. 2008 was built on correlated leverage. Everyone held assets assumed to be uncorrelated, until they weren't.

AI follows the same logic. The difference is scale and speed.

When every participant's model ingests the same regime and outputs the same positioning, the market doesn't become more efficient. It becomes more brittle. Diversity of opinion is how stress gets absorbed. AI erodes that diversity through convergence alone.

The Compression of Time

Human traders introduce friction. Hesitation, disagreement, delayed execution. These are inefficiencies in the textbook sense, but they serve a structural purpose. They slow the transmission of shocks. They create space for reassessment.

By Kena Betancur/Getty Images.

AI removes all of that. Reactions become instantaneous, automated, synchronised. What used to unfold over days now unfolds in minutes.

Feedback loops tighten. Prices move. Models update. Trades execute. Prices move further. And because the models are similar, they move together.
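
A toy version of that loop, with invented impact and similarity parameters: models sell into the last price move, the selling becomes the next move, and the cascade from the same initial shock grows with how closely their reactions align.

```python
import numpy as np

def simulate(n_models: int, similarity: float, steps: int = 20) -> float:
    """Toy feedback loop: models sell in proportion to the last price move.

    `similarity` scales how closely the models' reactions line up
    (purely illustrative, not calibrated to any real market).
    """
    rng = np.random.default_rng(42)
    price = 100.0
    last_move = -1.0                          # an initial shock
    for _ in range(steps):
        # Each model's reaction is a common component plus an idiosyncratic one.
        common = similarity * last_move
        idio = (1 - similarity) * rng.normal(0, abs(last_move), size=n_models)
        flow = (common + idio).sum() * 0.02   # price impact per unit of flow
        price += flow
        last_move = flow                      # the next round reacts to this move
    return price

# Same shock, same number of models; only the degree of similarity changes.
for s in (0.2, 0.6, 0.9):
    print(f"similarity={s}: final price {simulate(50, s):.1f}")
```

With low similarity, the idiosyncratic reactions largely cancel and the shock is absorbed; as similarity rises, the same shock compounds round after round.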

That is how local efficiency turns into global instability.

The Illusion of Liquidity

AI changes how liquidity looks, and how it behaves.

In calm markets, spreads tighten, depth increases, execution improves. Everything appears healthy. But that liquidity is conditional. It exists as long as models are willing to provide it.

The moment volatility spikes, models widen or withdraw. Market-making capacity doesn't thin gradually. It vanishes. The system flips from liquid to illiquid without transition.

That gap between apparent liquidity and realized liquidity is where instability lives. The liquidity was always algorithmic. And algorithms have exit conditions.
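
A stylised sketch of those exit conditions (the thresholds are invented, not taken from any real market-making system): quotes stay tight while realised volatility is low, then disappear entirely once a cutoff is breached.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Quote:
    bid: float
    ask: float

def make_market(mid: float, realised_vol: float,
                base_spread: float = 0.01,
                vol_cutoff: float = 0.03) -> Optional[Quote]:
    """Toy market-making rule with an explicit exit condition.

    Below the cutoff, quotes are tight and widen smoothly with volatility.
    At the cutoff, the model withdraws: liquidity does not thin, it vanishes.
    """
    if realised_vol >= vol_cutoff:
        return None                      # exit condition: no quote at all
    spread = base_spread * (1 + realised_vol / vol_cutoff)
    return Quote(bid=mid - spread / 2, ask=mid + spread / 2)

# Calm market: tight, continuous quotes.
print(make_market(100.0, realised_vol=0.01))
# Volatility spike: the quote is simply gone.
print(make_market(100.0, realised_vol=0.05))
```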

The Tail Problem

When the world behaves in ways that are not in the data, models break.

You don't need theory for this. You can see it in practice. When geopolitical risk becomes path-dependent, when policy becomes inconsistent, when outcomes are driven by individuals rather than distributions, models struggle. The structure of the problem changes underneath them.

A statistical model cannot price something that has no statistical precedent. If outcomes are driven by events that do not repeat, the model has nothing to learn from.

AI cannot know what it has never seen. And when that gap matters, it matters all at once, because every model discovers the same blind spot at the same time.

Chart: distribution of outcomes, normal regime vs. AI regime, showing tail concentration. Fewer small dislocations. Higher probability of large ones. Efficiency compresses the centre and fattens the tails.
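
The same shift as a crude numerical sketch, using a deliberately simple rule on synthetic returns: dampen moves inside a band, amplify those outside it.

```python
import numpy as np

rng = np.random.default_rng(7)
baseline = rng.standard_t(df=5, size=100_000) * 0.01   # synthetic daily returns

# Crude "AI regime" rule (purely illustrative): moves under 2% are absorbed,
# moves beyond 2% are amplified by crowded unwinds and vanishing liquidity.
ai_regime = np.where(np.abs(baseline) < 0.02, baseline * 0.5, baseline * 1.5)

for name, r in [("baseline", baseline), ("AI regime", ai_regime)]:
    quiet = np.mean(np.abs(r) < 0.01)    # share of small, quiet days
    large = np.mean(np.abs(r) > 0.03)    # share of large dislocations
    print(f"{name:9s}  quiet days: {quiet:.1%}   moves > 3%: {large:.2%}")
```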

Stability That Breeds Instability

Put it together and the pattern is clear.

AI improves execution, pricing, and efficiency. But it also aligns behavior, compresses reaction time, and amplifies feedback loops.

So the system evolves toward a state that looks safe but isn't. Calm on the surface, pressure building underneath.

In normal times, volatility is suppressed. Markets appear stable. Risk looks manageable.

Under stress, positions unwind simultaneously. Liquidity disappears. Moves become discontinuous.

The system doesn't fail gradually. It fails collectively.

The Real Trade-Off

AI does not make markets safer or riskier in any simple sense. It changes the distribution of outcomes. Fewer small dislocations, higher probability of large ones.

That is the trade. And it is not new.

Every major financial innovation has followed the same path. Improved efficiency, increased interconnectedness, greater systemic exposure. Derivatives did it. Securitisation did it. High-frequency trading did it. AI just accelerates the cycle.

Final Thought

AI doesn't destabilise markets by making bad decisions.

It destabilises them by making the same decision, at scale, at speed.

That only shows up when the system is tested. Which is exactly when the models stop working.