China has brought its most powerful scientific AI computing infrastructure to full operation, deploying 60,000 AI accelerator cards at the core node of its national supercomputing network in Zhengzhou. The chips were produced by Sugon, a supercomputer developer affiliated with the Chinese Academy of Sciences. State broadcaster CCTV reported the milestone on Tuesday, describing the node as the country’s most advanced intelligent computing infrastructure for scientific research.
The cluster’s chip count doubled from 30,000 units at the start of trial operations in early February 2026 to 60,000 at full activation, a scale-up achieved in roughly two months. The expansion is notable both for its speed and for the complete absence of US-origin components, a deliberate posture amid tightening American export controls that have restricted China’s access to advanced processors from companies including NVIDIA.
CCTV framed the development in strategic terms, calling it a breakthrough that would help China “seize the commanding heights of AI industrial applications.” The language reflects Beijing’s broader push to build sovereign AI infrastructure that is insulated from foreign supply chain dependencies, a priority that has intensified following successive rounds of US chip export restrictions since 2022.
Chinese researchers working in the “AI for science” domain, which applies machine learning to fields including drug discovery, materials science, climate modelling, and physics simulation, have faced documented constraints: computing shortages, software limitations, and reliance on foreign toolchains. The Zhengzhou cluster is positioned as a national resource to address those gaps, centralizing high-performance AI compute for scientific workloads under domestic control.
The Sugon accelerator cards powering the cluster are part of a growing portfolio of Chinese-designed AI chips developed in response to US restrictions. Sugon, formally known as Dawning Information Industry, has longstanding ties to the Chinese Academy of Sciences and has been on a US Entity List since 2019, restricting its access to American technology and components. The company has continued developing AI hardware independently since that designation.
The wider competitive context is significant. Independent analysts have noted that the China-US gap in AI model development has narrowed considerably over the past year, driven by domestic talent, rapid iteration cycles, and growing inference infrastructure. The performance gap in underlying chip hardware remains larger, with Chinese accelerators generally trailing NVIDIA’s current-generation Blackwell products in raw computational throughput.
Beijing’s response has been to compensate through scale and integration, building clusters large enough to offset per-chip performance differences, and investing in software optimization to extract maximum efficiency from available domestic hardware.
The Zhengzhou node’s designation as a core node in China’s national supercomputing network signals that this infrastructure is intended as a shared national resource rather than a facility for a single institution. The network’s architecture is designed to distribute compute access to research institutions across the country, reducing the dependence of any single lab or university on procuring its own hardware.
No independent verification of the cluster’s benchmark performance has been published at the time of writing. The Chinese government has not released technical specifications beyond chip count and the institutional affiliations involved.