Chinese tech giant Huawei has broken its silence on future AI silicon plans, revealing an ambitious three-year roadmap that positions the company as a serious domestic alternative to NVIDIA's dominance in artificial intelligence computing. The announcement, made at Huawei Connect 2025 by rotating chairman Xu Zhijun, marks the first official long-range strategy for the company's Ascend chip family.
First Self-Built HBM Technology Debuts in 2026
The centerpiece of Huawei's roadmap is the Ascend 950PR, scheduled for release in Q1 2026. The chip marks a significant milestone as Huawei's first processor to use entirely self-developed High Bandwidth Memory (HBM). The 950PR will incorporate Huawei's HiBL 1.0 HBM, delivering 128GB of capacity at 1.6TB/s of bandwidth. Aimed at inference, the chip targets prefill and recommendation workloads and offers 1 PFLOPS of FP8 compute alongside 2 PFLOPS at FP4.
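To put those figures in perspective, a simple roofline-style calculation using only the compute and bandwidth numbers quoted above suggests why the chip is framed around prefill and recommendation work. The sketch below is illustrative only, not a vendor benchmark; sustained performance depends on utilization, interconnect, and software maturity.

```python
# Rough roofline-style sketch for the Ascend 950PR, using the figures quoted
# above (1 PFLOPS FP8, 1.6 TB/s HiBL 1.0 bandwidth). Illustrative only;
# real sustained throughput depends on utilization and the software stack.

peak_fp8_flops = 1e15          # 1 PFLOPS at FP8
hbm_bandwidth = 1.6e12         # 1.6 TB/s

# Arithmetic intensity (FLOPs per byte read from memory) at which the chip
# shifts from bandwidth-bound to compute-bound.
crossover_intensity = peak_fp8_flops / hbm_bandwidth
print(f"Roofline crossover: {crossover_intensity:.0f} FLOPs per byte")  # ~625

# Prefill reuses weights across many tokens (high arithmetic intensity), so it
# can approach the compute roof; token-by-token decode re-reads the weights per
# token and tends to sit on the memory-bandwidth roof instead.
```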
Huawei's In-House HBM Technology Generations
HiBL 1.0 (First Generation)
- Capacity: 128GB
- Bandwidth: 1.6TB/s
- Target: Ascend 950PR
HiZQ 2.0 (Second Generation)
- Capacity: 144GB
- Bandwidth: 4TB/s
- Target: Ascend 950DT and later models
Training-Focused Variant Follows Later in 2026
Complementing the 950PR, Huawei plans to launch the Ascend 950DT in Q4 2026. This training-oriented processor will feature the company's second-generation HiZQ 2.0 HBM, raising capacity to 144GB and bandwidth to 4TB/s. The enhanced memory subsystem positions the 950DT as Huawei's answer to high-performance AI model training requirements.
Roadmap Extends Through 2028 with Massive Performance Gains
Looking further ahead, Huawei's Ascend 960 is slated for Q4 2027, featuring substantial improvements including 2.2TB/s interconnect bandwidth, 288GB of memory capacity, and 9.6TB/s memory bandwidth. The chip will deliver 2 PFLOPS FP8 and 4 PFLOPS FP4 compute performance. The roadmap culminates with the Ascend 970 in 2028, promising even more significant upgrades in memory and compute capabilities.
Huawei Ascend Chip Roadmap Specifications
| Model | Release Date | Memory Type | Memory Capacity | Memory Bandwidth | Compute Performance | Focus |
|---|---|---|---|---|---|---|
| Ascend 950PR | Q1 2026 | HiBL 1.0 HBM | 128GB | 1.6TB/s | 1 PFLOPS FP8, 2 PFLOPS FP4 | Inference |
| Ascend 950DT | Q4 2026 | HiZQ 2.0 HBM | 144GB | 4TB/s | Not specified | Training |
| Ascend 960 | Q4 2027 | HiZQ 2.0 HBM | 288GB | 9.6TB/s | 2 PFLOPS FP8, 4 PFLOPS FP4 | General AI |
| Ascend 970 | 2028 | Not specified | Not specified | Not specified | Not specified | General AI |
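A quick script makes the generational scaling in the table easier to read: announced memory bandwidth grows considerably faster than disclosed compute. The figures are copied from the table above; the Ascend 970 is omitted because its specifications have not been disclosed.

```python
# Generation-over-generation scaling implied by the roadmap table above.
# Figures are the announced specs only; the Ascend 970 is omitted because
# its numbers have not been published.

roadmap = {
    # model: (memory capacity GB, memory bandwidth TB/s, FP8 PFLOPS or None)
    "Ascend 950PR": (128, 1.6, 1),
    "Ascend 950DT": (144, 4.0, None),   # compute not specified
    "Ascend 960":   (288, 9.6, 2),
}

baseline = roadmap["Ascend 950PR"]
for model, (cap, bw, fp8) in roadmap.items():
    line = (f"{model}: {cap / baseline[0]:.1f}x capacity, "
            f"{bw / baseline[1]:.1f}x bandwidth")
    if fp8 is not None:
        line += f", {fp8 / baseline[2]:.0f}x FP8 compute"
    print(line)
```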
Manufacturing Challenges and Market Reality
Despite the ambitious specifications, Huawei faces substantial manufacturing hurdles. Under U.S. sanctions, the company cannot access TSMC's advanced nodes or CoWoS packaging technology that NVIDIA relies on for its Hopper and Blackwell GPUs. Working with domestic foundries like SMIC may result in yield and bandwidth limitations that could impact real-world performance.
Scale Ambitions Meet Software Reality
Alongside the chip roadmap, Huawei unveiled plans for massive supernodes housing thousands of Ascend processors. The Atlas 950 and 960 systems aim to rival NVIDIA's GB200 NVL72 configurations, supporting up to 15,488 Ascend accelerators in a single deployment. However, matching NVIDIA's performance requires more than raw chip count – it demands optimized software stacks and interconnect technologies that keep large clusters efficiently utilized across complex AI workloads.
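A rough calculation shows why sustained utilization, rather than raw chip count, becomes the decisive variable at this scale. The per-chip FP8 figure and utilization rates below are assumptions chosen for illustration, not published Atlas 950 or 960 specifications.

```python
# Illustrative scaling math for a 15,488-accelerator supernode. The per-chip
# FP8 figure and the utilization values are assumptions for illustration,
# not disclosed Atlas 950/960 specifications.

num_chips = 15_488
per_chip_fp8_pflops = 2.0       # assumed, roughly Ascend 960-class FP8 peak

raw_exaflops = num_chips * per_chip_fp8_pflops / 1000
print(f"Raw pool: {raw_exaflops:.1f} EFLOPS FP8")

# Effective throughput is whatever the software stack and interconnect actually
# sustain; a modest utilization gap erases a large slice of the raw pool.
for utilization in (0.30, 0.45, 0.60):
    print(f"  at {utilization:.0%} sustained utilization: "
          f"{raw_exaflops * utilization:.1f} EFLOPS")
```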
The roadmap arrives as the Chinese government pushes for domestic silicon production and bans procurement of NVIDIA components. While Huawei's plans demonstrate technical ambition, success will ultimately depend on delivering a proven end-to-end platform that can match NVIDIA's ecosystem in training efficiency and model throughput.
