01

What DeepSeek Actually Did

DeepSeek claimed its final training run took about two months and under $6 million in compute, producing a model that punches at GPT-4o and o1 levels, using Nvidia's H800 chips, the downgraded parts built for the China market under U.S. export controls.

The efficiency angle is real: smarter architectures, better data curation, mixture-of-experts tricks — all squeezing more performance per flop.
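The "mixture-of-experts tricks" mentioned above come down to one idea: each token activates only a few expert sub-networks instead of the whole model, so most parameters sit idle on any given forward pass. A minimal sketch of top-k expert routing (shapes, names, and linear experts are all illustrative stand-ins; real MoE layers use learned feed-forward experts plus load-balancing losses):

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_forward(x, experts, gate_w, k=2):
    """Route each token to its top-k experts and mix their outputs.

    x: (tokens, d) activations; experts: list of (d, d) matrices standing
    in for expert FFNs; gate_w: (d, n_experts) router weights.
    Only k of n_experts run per token -- the source of the compute savings.
    """
    logits = x @ gate_w                        # (tokens, n_experts)
    topk = np.argsort(logits, axis=1)[:, -k:]  # indices of the top-k experts
    sel = np.take_along_axis(logits, topk, axis=1)
    # softmax over just the selected logits to get mixing weights
    w = np.exp(sel - sel.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for j in range(k):
            out[t] += w[t, j] * (x[t] @ experts[topk[t, j]])
    return out

d, n_experts, tokens = 8, 4, 3
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d, n_experts))
x = rng.standard_normal((tokens, d))
y = moe_forward(x, experts, gate_w, k=2)
print(y.shape)  # (3, 8)
```

With k=2 of 4 experts active, roughly half the expert compute is skipped per token; at production scale the ratio is far more lopsided, which is where the per-flop savings come from.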

No wonder the knee-jerk reaction was "If China can do this cheap and fast without our best chips, do we really need trillions in data center buildout?"

02

The Sell-Off: Fear Over Fundamentals

January 27 was brutal. NVIDIA led the rout, but it dragged everything down: AMD, Broadcom, ASML, TSMC, even the hyperscalers. Roughly a trillion dollars in market value was erased across tech at the lows, with NVIDIA alone shedding close to $600 billion, the largest single-day loss for any U.S. stock.

Add in geopolitics — U.S. export controls were supposed to keep China behind, yet here's DeepSeek closing the gap on downgraded hardware. That stung.

But here's where the reaction feels overdone. Scaling laws haven't been repealed. Frontier performance still demands massive training runs.
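For the record, "scaling laws" here refers to the empirical power-law relationship between compute and model quality. In the Chinchilla-style parameterization (Hoffmann et al.; cited as background, not a claim from this piece), loss falls predictably with parameter count N and training tokens D:

```latex
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Efficiency work like DeepSeek's shrinks the constants, but the power-law shape is why each further step in frontier performance still demands much larger N and D, and the compute to match.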

03

Why the Infrastructure Thesis Isn't Dead

Efficiency gains are awesome — they've been happening since day one. But DeepSeek didn't invent a magic shortcut around physics; they optimized ruthlessly within constraints.

Hyperscalers aren't slowing down — Microsoft, Amazon, Google all reaffirmed huge AI spend. NVIDIA's full stack — CUDA, software ecosystem, Blackwell roadmaps — is years ahead.

Jensen Huang called it straight: this isn't a threat — it's validation that AI is becoming more accessible, which only accelerates adoption.

04

Bottom Line

DeepSeek's release was a genuine shock, a reminder that innovation can come from anywhere. The sell-off was painful and exposed how concentrated the AI trade had become.

But calling this the peak of the infrastructure cycle feels premature. We've been adding on weakness. NVIDIA looks mispriced here — the multi-year runway is longer than ever.