Nvidia continues to dominate the AI chip market with record-breaking financial performance and an ambitious roadmap for its next-generation GPUs. The tech giant has not only posted extraordinary earnings but has also confirmed its product pipeline extending through 2026 and beyond, signaling continued innovation in the rapidly evolving AI computing landscape.
Record Financial Performance
Nvidia reported an astounding USD $39.3 billion in revenue for fiscal Q4 2025, representing a 78% year-on-year increase. Blackwell GPU sales were particularly impressive, accounting for USD $11 billion of that total. CEO Jensen Huang attributed the results to surging demand for AI chips, especially in the data center and machine learning sectors. Despite some production challenges with the initial Blackwell rollout, the architecture has achieved what Huang described as the fastest product ramp in Nvidia's history.
Nvidia Financial Highlights (Fiscal Q4 2025)
- Total Revenue: USD $39.3 billion (78% year-on-year increase)
- Blackwell GPU Sales: USD $11 billion
- Gaming Revenue: USD $2.54 billion (22% sequential decline, 11% year-over-year decline)
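Taken together, those headline figures imply a few numbers the report doesn't state directly. The sketch below is a back-of-the-envelope check only, assuming the rounded values above are exact; the actual filing may differ slightly.

```python
# Back-of-the-envelope check of the reported figures (assumes the rounded
# numbers above are exact; the actual filing may differ slightly).
q4_fy2025_revenue = 39.3   # USD billions, total revenue
yoy_growth = 0.78          # 78% year-on-year increase
blackwell_revenue = 11.0   # USD billions attributed to Blackwell GPUs

implied_q4_fy2024 = q4_fy2025_revenue / (1 + yoy_growth)
blackwell_share = blackwell_revenue / q4_fy2025_revenue

print(f"Implied Q4 FY2024 revenue: ~${implied_q4_fy2024:.1f}B")        # ~$22.1B
print(f"Blackwell share of Q4 FY2025 revenue: {blackwell_share:.0%}")  # ~28%
```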
Gaming Revenue Decline
While Nvidia's AI business flourishes, its gaming division saw a notable decline. Gaming revenue reached only USD $2.54 billion, down 22% sequentially and 11% year-over-year. This downturn isn't unexpected, as GPU purchases typically slow significantly once a new generation is announced. The recent launch of the GeForce RTX 50 Series has also been accompanied by various issues, including shipment delays and reports of some cards shipping with fewer ROPs (Render Output Units) than specified, affecting high-end models like the RTX 5090, 5080, and 5070 Ti.
Blackwell Ultra Coming in 2025
Despite earlier design flaws that delayed the initial Blackwell rollout for data centers, Nvidia has confirmed that its mid-cycle refresh, codenamed Blackwell Ultra (B300-series), remains on track for the second half of 2025. These enhanced GPUs will be paired with new networking hardware and processors, and will feature significantly upgraded memory configurations built on 12-Hi HBM3E stacks, providing up to 288GB of onboard memory. Unofficial reports suggest these improvements could deliver approximately 50% better performance than the current B200-series products.
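The 288GB figure lines up with simple stack arithmetic, assuming eight HBM3E stacks per GPU and 24Gb (3GB) DRAM dies per layer; neither assumption is stated here, but both are widely reported for the B300 series.

```python
# Rough sanity check of the 288GB figure for Blackwell Ultra (B300).
# Assumptions (not stated in the article): eight HBM3E stacks per GPU and
# 24Gb (3GB) DRAM dies per layer, as widely reported for 12-Hi HBM3E.
stacks_per_gpu = 8
layers_per_stack = 12   # "12-Hi" stacks
gb_per_layer = 3        # 24Gb die = 3GB

gb_per_stack = layers_per_stack * gb_per_layer   # 36GB per stack
total_memory = stacks_per_gpu * gb_per_stack     # 288GB per GPU
print(f"{gb_per_stack} GB per stack x {stacks_per_gpu} stacks = {total_memory} GB")
```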
Vera Rubin Architecture and Beyond
Looking further ahead, Nvidia has confirmed that its next-generation Vera Rubin architecture is scheduled for 2026. This entirely new GPU design promises what Huang described as a "big, big, huge step up" in performance. The Rubin platform will incorporate eight stacks of HBM4E memory (up to 288GB), Vera CPUs, NVLink 6 switches operating at 3600 GB/s, CX9 network cards supporting 1600 Gb/s, and X1600 switches.
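Note that the quoted interconnect figures mix units: NVLink 6 is given in gigabytes per second, while the CX9 card is given in gigabits per second. A minimal conversion, taking the quoted numbers at face value, puts both on the same scale.

```python
# The Rubin figures above mix units: NVLink 6 is quoted in gigabytes per
# second, the CX9 NIC in gigabits per second. Convert at 8 bits per byte.
nvlink6_GBps = 3600      # GB/s, as quoted
cx9_Gbps = 1600          # Gb/s, as quoted

cx9_GBps = cx9_Gbps / 8  # 200 GB/s
print(f"CX9: {cx9_Gbps} Gb/s = {cx9_GBps:.0f} GB/s")
print(f"NVLink 6 vs CX9: {nvlink6_GBps / cx9_GBps:.0f}x")  # 18x
```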
Nvidia GPU Roadmap
- 2025 (H2): Blackwell Ultra (B300-series) - 12-Hi HBM3E memory, up to 288GB memory
- 2026: Vera Rubin Architecture - 8 stacks of HBM4E memory (up to 288GB), NVLink 6 switches (3600 GB/s)
- 2027: Potential "Rubin Ultra" - 12 stacks of HBM4E memory
Future Roadmap Teased
In a surprising move, Huang also mentioned that Nvidia plans to discuss post-Rubin products at the upcoming GPU Technology Conference (GTC) in March. While details remain scarce, he hinted at a potential Rubin Ultra that could arrive in 2027 with an impressive 12 stacks of HBM4E memory. This would require Nvidia to master the use of 5.5-reticle-size CoWoS interposers and 100mm × 100mm substrates manufactured by TSMC.
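To give a rough sense of scale: a standard lithography reticle is roughly 26mm × 33mm (about 858mm²), an assumption not stated in Nvidia's remarks, so a 5.5-reticle interposer would approach 4,700mm² sitting on a 10,000mm² substrate.

```python
# Rough scale of the packaging Huang alluded to. Assumption (not in the
# article): a standard lithography reticle is ~26mm x 33mm (~858 mm^2).
reticle_mm2 = 26 * 33                 # ~858 mm^2 per reticle
interposer_mm2 = 5.5 * reticle_mm2    # ~4,719 mm^2 CoWoS interposer
substrate_mm2 = 100 * 100             # 10,000 mm^2 package substrate

print(f"~{interposer_mm2:,.0f} mm^2 interposer on a {substrate_mm2:,} mm^2 substrate")
```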
AI Driving Future Growth
The explosive growth in AI applications continues to fuel Nvidia's success. As Huang noted, "Demand for Blackwell is amazing as reasoning AI adds another scaling law: increasing compute for training makes models smarter, and increasing compute for long thinking makes the answer smarter." With the rise of agentic AI and physical AI applications, Nvidia is positioning itself to remain at the forefront of what it sees as the next revolutionary wave in artificial intelligence technology.