The Strategic Calculus Behind Selling Chips to China
What began as a policy of strategic denial has quietly become managed, monetised access. The H200 saga reveals the central contradiction of American technology policy — and the cost of resolving it.
The architecture of American semiconductor export controls is undergoing its most significant transformation since the restrictions were first introduced in October 2022. What began as a policy of strategic denial — blocking China's access to advanced AI chips — has quietly evolved into something different: a framework of managed, taxed access.
On the same day that federal prosecutors in Texas unsealed documents revealing a $160 million chip smuggling network funnelling Nvidia H100 and H200 GPUs to China through shell companies and falsified shipping labels, the U.S. administration announced that Washington would now legally permit Nvidia to sell those same chips to Chinese buyers, provided a 25 percent surcharge was paid to the U.S. Treasury.
Two months later, in February 2026, Nvidia CEO Jensen Huang's visit to China culminated in Beijing's approval for the sale of 400,000 H200 chips to China's leading AI firms. Chinese technology companies have reportedly placed orders exceeding 2 million H200 processors at approximately $27,000 per unit, far surpassing Nvidia's available inventory of roughly 700,000 chips. TSMC is expected to begin additional H200 production in the second quarter of 2026 to meet demand.
From Presumption of Denial to Case-by-Case Licensing
The Biden administration's semiconductor export controls operated on a straightforward principle: advanced AI chips should not reach China. The Bureau of Industry and Security imposed a presumption of denial on license applications for high-performance AI chips destined for the People's Republic. Nvidia was compelled to design downgraded variants — the A800, H20, and others — specifically to comply with performance thresholds set by Washington.
In January 2026, the Department of Commerce published a new regulation that upended this framework. License applications for specific AI chips, including the Nvidia H200 and AMD MI325X, are now evaluated on a case-by-case basis, subject to supply, security, and testing conditions. A parallel 25 percent tariff was imposed under Section 232 of the Trade Expansion Act.
The Council on Foreign Relations described the resulting framework as "strategically incoherent." The regulation openly acknowledges that exporting advanced AI chips to China poses serious national security risks while simultaneously creating a legal pathway for their sale. If implemented strictly, it would block most exports; if implemented loosely, it would address none of the concerns that motivated the controls in the first place.
Analysts estimate that the regulation would permit the export of up to 1 million H200 chips and potentially another million H100s. For context, 1 million H200s would be sufficient to build the largest AI data centre in the world — larger than xAI's Colossus facility in Memphis, Tennessee.
What H200 Exports Mean in Practice
The significance of the policy shift becomes clearer when measured against China's domestic compute capacity. Without imported AI chips, China's infrastructure relies on Huawei's Ascend accelerators and a handful of emerging chipmakers, whose output is growing but remains limited.
Huawei plans to manufacture approximately 600,000 Ascend 910C chips in 2026 — as much as twice its 2025 output — with total Ascend production reaching up to 1.6 million chips. Cambricon, another Chinese chipmaker, is targeting 500,000 AI accelerators in the same period.
Yet these figures face two critical bottlenecks. The first is manufacturing quality. China's leading foundry, SMIC, produces advanced chips with a yield rate of just 5 to 20 percent — meaning that for every hundred chips manufactured, only five to twenty work correctly. By comparison, TSMC achieves yields of 60 to 80 percent. The second bottleneck is memory. Modern AI chips require a specialised component called High-Bandwidth Memory (HBM) — stacked memory modules soldered directly onto the processor that allow it to handle the massive data flows required for AI training. China has almost no domestic HBM production.
The United States and allied nations currently hold a roughly 70x advantage over China in HBM production capacity: global HBM shipments are projected at approximately 30 billion gigabits in 2026, virtually all produced by Samsung, SK Hynix, and Micron; China's sole significant HBM producer, CXMT, is expected to fabricate around 2.2 million HBM stacks in 2026 — equivalent to roughly 0.42 billion gigabits. This memory shortage caps the number of functional AI chips China can deploy, regardless of how many processor units SMIC produces.
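To see how that roughly 70x figure falls out of the shipment numbers, the back-of-the-envelope arithmetic below converts CXMT's projected stack count into gigabits. The per-stack capacity of 24 GB is an assumption (typical of current HBM3 stacks) rather than a figure stated in the sources.

```python
# Back-of-the-envelope comparison of 2026 HBM capacity (illustrative assumptions).

GLOBAL_HBM_GIGABITS = 30e9   # projected 2026 shipments from Samsung, SK Hynix, Micron
CXMT_STACKS = 2.2e6          # CXMT's projected 2026 output, counted in HBM stacks
GB_PER_STACK = 24            # assumed capacity per stack (typical HBM3 stack)

cxmt_gigabits = CXMT_STACKS * GB_PER_STACK * 8         # gigabytes -> gigabits
print(f"CXMT output:      ~{cxmt_gigabits / 1e9:.2f} billion gigabits")   # ~0.42
print(f"Allied advantage: ~{GLOBAL_HBM_GIGABITS / cxmt_gigabits:.0f}x")   # ~71x
```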
Against this backdrop, the impact of H200 and H100 imports would be transformative, substantially increasing the total AI compute installed in China in 2026 relative to what domestic chips alone could deliver.
[TABLE: U.S.–China AI Chip Landscape: Key Metrics (2026 Estimates)]
The compute-advantage figures are perhaps the most revealing metric in this debate. Without AI chip exports to China, the U.S. would hold a 21–49x advantage in 2026-produced AI compute. If H200 exports proceed at the volumes currently permitted, the Institute for Progress estimates this advantage could compress to as little as 1.2x — a figure derived from modelling the additional compute capacity that 1 million H200 chips would deliver to Chinese data centres, measured against projected U.S. Blackwell deployments and adjusted for FP4 and FP8 performance assumptions. That is the distance between strategic dominance and effective parity.
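The mechanism behind that compression can be shown with a toy calculation. The sketch below is not the Institute for Progress model: the per-chip throughputs are rough FP8 figures, and the fleet totals are hypothetical placeholders chosen only to illustrate how adding a couple of million exported accelerators to a much smaller domestic base collapses the ratio.

```python
# Toy model of how chip exports compress a compute-advantage ratio.
# Fleet totals are hypothetical placeholders, not sourced estimates.

def advantage_ratio(us_tflops: float, china_tflops: float) -> float:
    """Ratio of U.S. to Chinese AI compute added in a given year."""
    return us_tflops / china_tflops

H200_TFLOPS = 2_000          # rough dense FP8 throughput per H200-class GPU
H100_TFLOPS = 2_000          # H100 FP8 throughput is in the same ballpark

US_NEW_COMPUTE = 5.0e9       # placeholder: U.S.-deployed TFLOPS added in 2026
CHINA_DOMESTIC = 2.0e8       # placeholder: domestic Ascend/Cambricon TFLOPS added

# Export volumes the regulation is estimated to permit (1M H200s + 1M H100s).
exported = 1_000_000 * H200_TFLOPS + 1_000_000 * H100_TFLOPS

print(f"No exports:   ~{advantage_ratio(US_NEW_COMPUTE, CHINA_DOMESTIC):.0f}x")
print(f"With exports: ~{advantage_ratio(US_NEW_COMPUTE, CHINA_DOMESTIC + exported):.1f}x")
```

With these placeholder fleets the ratio falls from roughly 25x to roughly 1.2x, which is the qualitative pattern the Institute for Progress analysis describes.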
Beijing's Double Bind
China's response to the H200 opening has been more cautious than many anticipated. Beijing set out conditions to be satisfied before greenlighting purchases: only commercial-sector buyers are eligible; military and state-owned enterprises remain barred; and regulators considered requiring that, for every H200 imported, buyers also purchase a set number of Chinese-made chips — a protectionist measure designed to ensure domestic chipmakers are not sidelined by the influx of American silicon.
To understand Beijing's reasoning, it helps to compare the chips currently available in China with the H200. The H20 is a downgraded processor that Nvidia designed specifically to sell legally in China; it delivers significantly lower performance than Nvidia's mainstream products, with the H200 roughly six times more powerful. The Ascend 910C, Huawei's most advanced AI chip, also falls well short of the H200 in both raw processing power and memory bandwidth. According to Huawei's own public roadmap, the company will not produce a chip matching the H200's performance until the Ascend 960 arrives in late 2027 at the earliest. More strikingly, Huawei's next-generation chips planned for 2026 — the Ascend 950PR and 950DT — will actually deliver lower performance than the current 910C.
Chinese strategists have framed the H200 offer as a potential "strategic trap": if Chinese AI firms become dependent on imported American silicon, the urgency driving China's domestic semiconductor self-sufficiency campaign diminishes. Beijing's compromise — a "two-track" approach using imported H200s for high-end training and domestic chips for inference workloads — attempts to navigate this strategic tension, but senior officials have signalled that the risk of renewed dependency on American technology remains a primary concern.
The DeepSeek Factor
The H200 policy reversal arrives at the precise moment when Chinese AI labs have demonstrated that frontier-competitive results are achievable under severe compute constraints.
DeepSeek's V4 model, built substantially on domestic hardware and optimised for efficiency through architectural innovations such as the Engram Conditional Memory system, challenged a foundational assumption of Western AI strategy: that frontier performance requires unrestricted access to the most advanced silicon. The model's competitive benchmarks suggested that algorithmic innovation can partially offset hardware disadvantage.
U.S. export controls were originally justified by the argument that restricting China's access to advanced compute would slow its AI development; the constraints appear instead to have spurred a wave of efficiency-focused architectural innovation that may prove more durable than hardware advantages alone.
By reopening the flow of high-performance chips, Washington now risks providing Chinese labs with the best of both worlds: the efficiency techniques developed under scarcity, combined with the raw compute power to deploy them at scale.
The Enforcement Gap
According to a bipartisan group of members of Congress, enforcement remains the central unresolved challenge for export controls. Their argument is direct: entity-specific controls are ineffective because, once equipment crosses the Chinese border, the U.S. government has virtually no ability to enforce end-use restrictions.
The smuggling data supports this concern. The Center for a New American Security estimates that many thousands of AI chips were smuggled into China in 2024. Nvidia GPUs were relabelled as "adapter modules" and shipped through intermediaries in Singapore and Malaysia. In this environment, the proposition that case-by-case licensing will provide adequate oversight requires considerable faith in bureaucratic capacity.
The group also identified a troubling trend in equipment sales: Dutch exports of advanced lithography tools to China doubled from 2022 to 2023, and doubled again from 2023 to 2024. Each tool sold expands Chinese manufacturing capacity that cannot be recalled, which the group characterised as a permanent loss of American leverage.
What to Watch
The H200 saga encapsulates the central contradiction of American technology policy in the AI era. The United States simultaneously seeks to maintain its compute advantage, satisfy its most valuable technology companies, generate revenue from chip exports, and prevent China from achieving AI parity. These objectives are not all compatible.
The variables that will determine the trajectory are clear: the pace and volume of actual H200 shipments; Beijing's domestic-chip bundling requirements; SMIC's yield trajectory and CXMT's HBM production ramp; and, most critically, whether the case-by-case licensing framework is extended to Blackwell — Nvidia's latest and most advanced AI chip architecture, currently restricted from export to China — and its successors.
The chip war is no longer a story of simple denial. It has become something more complex and more ambiguous: a managed, monetised form of technological competition where the stakes grow with every new generation of silicon.
This is part of Tech Cold War's analysis of the AI competition between the United States and China. Subscribe to receive analysis directly in your inbox.
Sources
U.S. Department of Justice, "U.S. Authorities Shut Down Major China-Linked AI Tech Smuggling Network," December 2025 (justice.gov)
The Diplomat, "Nvidia's H200 Chips Re-enter China — But Beijing Isn't Giving up on Huawei," February 2026 (thediplomat.com)
Reuters, "Nvidia aims to begin H200 chip shipments to China by mid-February," December 2025 (reuters.com)
Bloomberg, "Nvidia's Return to China Faces Major Risks," February 2026 (bloomberg.com)
Bloomberg, "Huawei to Double Output of Its Advanced AI Chip Ascend," September 2025 (bloomberg.com)
Council on Foreign Relations, "The New AI Chip Export Policy to China: Strategically Incoherent and Unenforceable," January 2026 (cfr.org)
Council on Foreign Relations, "China's AI Chip Deficit: Why Huawei Can't Catch Nvidia," December 2025 (cfr.org)
Council on Foreign Relations, "The Consequences of Exporting Nvidia's H200 Chips to China," December 2025 (cfr.org)
Institute for Progress, "Should the US Sell Blackwell Chips to China?" November 2025 (ifp.org)
The Substrate, "Where Will China Get Its Compute in 2026?" February 2026 (the-substrate.net)
SemiAnalysis, "Huawei Ascend Production Ramp: Die Banks, TSMC Continued Production, HBM is The Bottleneck," September 2025 (semianalysis.com)
Bureau of Industry and Security, Final Rule on Advanced Computing Semiconductors, January 2026 (commerce.gov)
The Register, "Lawmakers Want Tighter Chip Tool Curbs on China," February 2026 (theregister.com)
CNBC, "How $160 million worth of export-controlled Nvidia chips were allegedly smuggled into China," December 2025 (cnbc.com)
CNBC, "U.S. approves Samsung, SK Hynix chipmaking tool shipments to China for 2026," December 2025 (cnbc.com)
Tom's Hardware, "Cambricon targets 500,000 AI chips in 2026," December 2025 (tomshardware.com)
Asia Times, "America's chip export controls are working," January 2026 (asiatimes.com)
Semiconductor Industry Association, Global Sales Projections, 2025–2026 (semiconductors.org)