How Far Behind Is China in the AI Race?
By Ben Lee | 28 Apr, 2026
China's biggest hurdle is production inefficiency rooted in its lack of access to Dutch chipmaking equipment.
(Image by ChatGPT)
One number tells us almost everything we need to know about the AI hardware gap between the US and China: 40%.
That's the current yield rate on Huawei's most advanced AI chip, the Ascend 910C: for every ten chips that come off the production line, six must be scrapped. Nvidia's equivalent chips, made at Taiwan Semiconductor Manufacturing Company (TSMC) using the world's most advanced lithography, yield upward of 90%. That single statistic encapsulates a manufacturing disadvantage that's wider, deeper, and more structurally entrenched than most coverage of the "AI race" lets on.
China isn't losing the AI race because it lacks ambition or engineers. It's losing — at least on hardware — because it can't get its hands on a machine made by a Dutch company called ASML.
The Machine that Changes Everything
ASML's extreme ultraviolet (EUV) lithography systems are the linchpin of advanced chip manufacturing. These machines use light with a wavelength of just 13.5 nanometers to etch transistors onto silicon with extraordinary precision. Without them, you can't reliably manufacture chips at the 4nm or 5nm process nodes that define today's most powerful AI accelerators. TSMC uses EUV to build Nvidia's H100, H200, and the latest Blackwell B200. Samsung uses it too.
China's top foundry, Semiconductor Manufacturing International Corporation (SMIC), doesn't have access to EUV machines. The Dutch government, under sustained pressure from Washington, has blocked ASML from exporting its newest systems to China. SMIC is therefore stuck using older deep ultraviolet (DUV) equipment, which limits the company to a functional equivalent of around 7nm — and even then, it achieves that node with lower precision and at far greater cost than TSMC achieves at 4nm.
This isn't a temporary setback. EUV machines take years to build, cost roughly $380 million each, and require a global supply chain of over 5,000 components that China has no near-term path to replicate domestically. The Dutch export ban isn't just a trade restriction — it's a structural ceiling on what Chinese chipmakers can produce.
The Numbers Behind China's Best Chips
Huawei's HiSilicon division is China's closest equivalent to Nvidia's chip design operation, and its Ascend series is the country's flagship AI accelerator line. The most capable chip currently in production is the Ascend 910C, which entered mass shipment in mid-2025.
On raw compute, the 910C delivers approximately 800 teraflops of FP16 performance, which sounds impressive until you note that this puts it roughly on par with Nvidia's H100, a chip that debuted in 2022. Nvidia's current-generation B200 delivers around 2,250 teraflops of FP16, making it about 2.3 times more powerful than the H100 and nearly three times more powerful than China's best. In real-world AI inference benchmarks, the 910C achieves only about 60% of the H100's throughput, well short of the roughly 80% that the near-parity TFLOPS figures would suggest. Architecture, memory bandwidth, and software efficiency all widen the practical gap beyond what raw specs imply.
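The ratios above are simple arithmetic on the cited figures. A minimal sketch, using only the specs stated in this article (not independently measured benchmarks):

```python
# Raw-compute ratios from the FP16 figures cited above (article's numbers).
ascend_910c_tflops = 800.0    # Huawei Ascend 910C
b200_tflops = 2250.0          # Nvidia B200

raw_gap = b200_tflops / ascend_910c_tflops
print(f"B200 vs 910C on paper: {raw_gap:.1f}x")   # ~2.8x

# Measured inference tells a harsher story: ~60% of H100 throughput,
# versus the ~80% the near-parity spec sheet would suggest (assumption:
# treating the spec-implied ratio as 0.80).
spec_ratio_vs_h100 = 0.80
measured_ratio_vs_h100 = 0.60
shortfall = 1 - measured_ratio_vs_h100 / spec_ratio_vs_h100
print(f"Practical shortfall vs spec expectation: {shortfall:.0%}")  # 25%
```

In other words, roughly a quarter of the chip's paper performance evaporates in practice before the generational gap to Blackwell is even counted.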
On memory, the 910C offers 96 to 128 gigabytes of HBM2e or HBM3 memory with roughly 3.2 terabytes per second of bandwidth. The H100 provides 80GB of HBM3 at 3.35 TB/s — so the 910C is actually comparable here. But the B200 leaves both behind with 192GB of HBM3e at a staggering 8.0 TB/s. China's best chip is competitive with America's chip of two years ago and is well behind what's being deployed today.
What's more, the 910C is itself a workaround: it packages two Ascend 910B dies together rather than being fabricated as a single, larger chip. This dual-die approach delivers more performance, but the link between the dies carries only one-tenth to one-twentieth the bandwidth of Nvidia's integrated designs. It's a clever engineering answer to a manufacturing constraint, not a substitute for access to better fabrication.
A Vast Production Gap
Even if China's chips were more competitive in specs, the volume gap would still be damning. In 2025, Nvidia is projected to ship somewhere between 6.5 and 7 million data center GPUs — a figure that includes both its legacy Hopper chips and the ramping Blackwell line. Huawei, in the most optimistic estimates, expects to ship 700,000 to 1 million Ascend chips across the 910B and 910C. That's roughly one-seventh of Nvidia's volume.
The low yield rate makes this worse than it first appears. At a 40% yield, SMIC has to process roughly 2.25 times as many wafers as TSMC would at 90%+ yield to produce the same number of working chips: more than double the silicon consumption, the manufacturing time, and the cost, for a chip that's still two generations behind. Huawei's production line turned profitable for the first time only in early 2025, after yield rates climbed from 20% to 40%. The company's target is 60%, which is still well below industry norms for leading-edge manufacturing.
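The wafer overhead follows directly from the two yield figures. A sketch of the arithmetic, using an arbitrary illustrative batch size (this is not actual fab economics, just the ratio the yields imply):

```python
# Die-start overhead implied by the yield gap described above.
yield_smic = 0.40   # SMIC on the Ascend 910C (article's figure)
yield_tsmc = 0.90   # TSMC leading-edge (article's figure)

target_good_chips = 10_000                    # illustrative batch, any size works
dies_started_smic = target_good_chips / yield_smic   # 25,000 dies
dies_started_tsmc = target_good_chips / yield_tsmc   # ~11,111 dies

overhead = dies_started_smic / dies_started_tsmc     # 0.90 / 0.40 = 2.25
print(f"SMIC must start {overhead:.2f}x as many dies for the same output")
```

Note that the ratio is independent of batch size: it is simply the TSMC yield divided by the SMIC yield.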
SMIC's total output capacity sits at around 50,000 wafers per month. TSMC's capacity — across all nodes — is an order of magnitude larger. China simply doesn't have the manufacturing infrastructure to compete at scale, and building it requires equipment it can't currently buy.
China's Compensating Strategy
None of this means China's AI ambitions are stalled. Beijing has responded to hardware constraints with a combination of system-level engineering, software optimization, and sheer national will.
Huawei's answer to the single-chip deficit is to deploy more chips in larger clusters. Its CloudMatrix 384 system links 384 Ascend 910C chips into a single super node — and by some measures, that system outperforms Nvidia's GB200 NVL72 rack at the system level, even though each individual chip is weaker. This is the "strength in numbers" strategy: compensate for inferior silicon with superior interconnect design and cluster architecture.
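The aggregate arithmetic behind the "strength in numbers" claim is straightforward on dense FP16 alone. A sketch using the per-chip figures cited earlier (the 72-GPU composition of the GB200 NVL72 is Nvidia's published rack design; real system comparisons also hinge on interconnect, memory, power, and software, which this deliberately ignores):

```python
# System-level compute: chip count times per-chip FP16 throughput.
cloudmatrix_chips = 384        # Ascend 910C chips per CloudMatrix 384 node
ascend_910c_tflops = 800.0     # per the article
nvl72_gpus = 72                # Blackwell GPUs in Nvidia's GB200 NVL72 rack
b200_tflops = 2250.0           # per the article

cloudmatrix_total = cloudmatrix_chips * ascend_910c_tflops   # 307,200 TFLOPS
nvl72_total = nvl72_gpus * b200_tflops                       # 162,000 TFLOPS

print(f"CloudMatrix 384: {cloudmatrix_total:,.0f} TFLOPS FP16")
print(f"GB200 NVL72:     {nvl72_total:,.0f} TFLOPS FP16")
```

On this crude measure the Huawei system leads simply because it throws more than five times as many chips at the problem, which is exactly the strategy's point and its cost: far more silicon, power, and floor space per unit of compute.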
Chinese AI labs have also gotten remarkably good at working around hardware limitations. DeepSeek's V3 and R1 models shocked the industry not just with their capabilities but with their efficiency — trained at a fraction of the compute cost of comparable US models, partly through innovations in model architecture that reduce the hardware burden. This software-layer adaptability is real and shouldn't be dismissed.
Chinese cloud providers — Alibaba, Baidu, Tencent, ByteDance — have committed heavily to Ascend-based infrastructure, in part because US export controls have cut off access to Nvidia's most capable chips. The H100 and H200 are effectively unavailable in China. Even the downgraded H20, designed specifically to comply with export rules, was banned from sale to China in April 2025. China's tech industry is being forced to build on domestic silicon, and it's managing — but the ceiling is lower.
How Many Years to Close the Gap?
Estimating China's catch-up timeline requires separating chip design from chip fabrication, because the two have very different trajectories.
On chip design, China's gap is probably two to three years. Huawei's Ascend 910C matches the H100 of 2022. Its next chip, the 910D, is targeting a 5nm process with mass production expected in 2026. If it achieves its targets, it would roughly correspond to where Nvidia was in 2023–2024 — still behind, but closing. HiSilicon's engineers are skilled, and design iteration is constrained more by fabrication access than by engineering talent.
On chip fabrication, the gap is far more daunting — and here's the hard truth: without EUV, SMIC can't realistically get below 5nm at competitive yields for the foreseeable future. China's domestic EUV program exists but remains far behind ASML's technology, and analysts broadly estimate that China is 10 to 15 years behind the global frontier in lithography. Even if export controls were relaxed tomorrow, building the manufacturing ecosystem to use EUV at scale would take the better part of a decade.
On volume and ecosystem, the gap may be the most persistent of all. Nvidia's CUDA software platform has a 15-year head start and millions of developers writing code for it. Huawei's CANN framework is functional but thin. Replacing CUDA lock-in isn't a hardware problem — it's a decade-long developer adoption challenge.
Put it together, and a realistic assessment is this: China's chip design capability will approach parity with the current US frontier somewhere around 2028 to 2030, assuming continued domestic investment and no major breakthroughs in domestic lithography. But fabrication parity — the ability to make those chips at scale, at competitive yields, on leading-edge nodes — is more likely a 2035 problem at the earliest, and only if China makes significant and currently uncertain progress on domestic semiconductor equipment.
The AI race isn't just about who has the best model or the most data. It's about who can manufacture the silicon to run those models, and at what cost, and at what volume. On that dimension, China's deficit isn't a gap — it's a chasm. And the Dutch machine standing between China and the frontier is, for now, holding the line.
