The GPU market is witnessing a potential disruption as startup Bolt Graphics emerges with bold claims about its new RISC-V-based Zeus GPU platform. In an industry dominated by three major players—Nvidia, AMD, and Intel—this California-based newcomer is promising unprecedented performance in specific workloads, particularly path tracing rendering, where it claims a tenfold advantage over Nvidia's flagship GeForce RTX 5090.
Revolutionary Architecture Based on RISC-V
Bolt Graphics has taken a fundamentally different approach to GPU design. Unlike the proprietary architectures of the established players, the Zeus GPU is built on the open-source RISC-V instruction set architecture. It pairs an out-of-order RVA23 scalar core with FP64 ALUs and the RISC-V Vector Extension (RVV) 1.0, supporting data types from 8-bit to 64-bit. The company has also implemented proprietary extensions designed to accelerate scientific workloads.
What makes Zeus particularly interesting is its chiplet-based design. The entry-level Zeus 1c26-032 features a single processing unit, while more advanced configurations like the Zeus 2c26-064/128 and Zeus 4c26-256 incorporate two and four processing units respectively. This modular approach allows Bolt to scale performance significantly while maintaining power efficiency.
Specialized for Path Tracing and Scientific Computing
Zeus appears to be purpose-built for path tracing and compute-intensive scientific applications rather than traditional gaming. Unlike conventional GPUs, Zeus omits fixed-function graphics hardware such as texture mapping units (TMUs) and raster operation units (ROPs), relying instead on compute shaders for texture sampling and graphics output. This design choice frees up silicon real estate for compute elements, optimizing the GPU for its target workloads.
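To make the trade-off concrete: without TMUs, texture filtering becomes ordinary arithmetic executed by compute code. The sketch below is a minimal CPU-side bilinear sampler in Python — purely an illustration of the principle, not Bolt's actual shader code.

```python
# Illustrative sketch: with no fixed-function TMUs, texture filtering is done
# in compute code. Minimal bilinear sampler over a 2D grid of floats
# (an assumption about how compute-based sampling works, not Bolt's code).

def bilinear_sample(texture, u, v):
    """Sample `texture` (list of rows of floats) at normalized (u, v) in [0, 1]."""
    h = len(texture)
    w = len(texture[0])
    # Map normalized coordinates into texel space.
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding texels.
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

tex = [[0.0, 1.0],
       [1.0, 2.0]]
print(bilinear_sample(tex, 0.0, 0.0))  # corner texel: 0.0
print(bilinear_sample(tex, 0.5, 0.5))  # center blend of all four texels: 1.0
```

On a conventional GPU this interpolation is done by dedicated TMU hardware per texture fetch; doing it in shader ALUs costs compute cycles but removes the fixed-function silicon.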
To leverage its hardware capabilities, Bolt has developed its own path tracing rendering engine called Glowstick. The company claims this in-house solution offers up to 2.5x faster performance on single-chip variants compared to existing solutions, with even greater performance scaling across multiple GPUs.
Performance Claims and Specifications
According to Bolt Graphics, even the entry-level single-chiplet Zeus 1c26-032 significantly outperforms Nvidia's GeForce RTX 5090 in FP64 compute (5 TFLOPS vs. 1.6 TFLOPS) and path tracing (77 Gigarays vs. 32 Gigarays). Zeus also features a larger on-chip cache (128 MB vs. 120 MB) while consuming substantially less power (120W vs. 575W).
However, the RTX 5090 maintains a clear advantage in AI workloads, with 105 FP16 TFLOPS and 1,637 INT8 TFLOPS against Zeus's 10 FP16 TFLOPS and 614 INT8 TFLOPS. In traditional FP32 rendering workloads, Zeus's 10 TFLOPS likewise trails the RTX 5090's 105 TFLOPS by a wide margin.
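The claimed head-to-head numbers imply specific per-metric ratios. The quick arithmetic below uses only the vendor figures quoted above (none of them independently verified):

```python
# Per-metric ratios for Zeus 1c26-032 vs RTX 5090, computed from the
# vendor-supplied figures quoted in the article (not independently verified).

zeus = {"fp64_tflops": 5.0, "fp32_tflops": 10.0, "gigarays": 77.0, "watts": 120.0}
rtx  = {"fp64_tflops": 1.6, "fp32_tflops": 105.0, "gigarays": 32.0, "watts": 575.0}

fp64_ratio = zeus["fp64_tflops"] / rtx["fp64_tflops"]  # ~3.1x in Zeus's favor
ray_ratio  = zeus["gigarays"] / rtx["gigarays"]        # ~2.4x in Zeus's favor
fp32_ratio = rtx["fp32_tflops"] / zeus["fp32_tflops"]  # ~10.5x in the 5090's favor

# Path-tracing performance per watt (Gigarays/W) amplifies Zeus's edge.
zeus_eff = zeus["gigarays"] / zeus["watts"]  # ~0.64 Gigarays/W
rtx_eff  = rtx["gigarays"] / rtx["watts"]    # ~0.056 Gigarays/W
eff_ratio = zeus_eff / rtx_eff               # ~11.5x per-watt advantage

print(f"FP64 {fp64_ratio:.2f}x, rays {ray_ratio:.2f}x, "
      f"FP32 {fp32_ratio:.1f}x (5090), perf/W {eff_ratio:.1f}x")
```

Notably, the raw Gigaray figures work out to about a 2.4x advantage; it is the performance-per-watt comparison that pushes the gap past an order of magnitude.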
The most powerful Zeus configuration, the quad-chiplet 4c26-256, is designed for server implementation rather than as a discrete card. It integrates four processing units, four I/O chiplets, 256 GB of LPDDR5X memory, and supports up to 2 TB of DDR5 memory. This variant is specifically optimized for electromagnetic field modeling, photonics research, and FFT calculations.
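Memory capacity is the binding constraint for the FFT-heavy solvers mentioned above: a 3D field stored as complex double-precision values needs 16 bytes per grid point before any workspace. The grid sizes below are illustrative assumptions, not figures from Bolt Graphics:

```python
# Why large memory matters for FFT-based field solvers: a 3D grid of N^3
# complex-double (FP64) points needs 16 bytes per point, before workspace.
# Grid sizes here are illustrative, not from Bolt Graphics.

def grid_bytes(n, bytes_per_point=16):  # complex128 = 2 x 8-byte floats
    return n ** 3 * bytes_per_point

for n in (512, 1024, 2048):
    gib = grid_bytes(n) / 2**30
    print(f"{n}^3 grid: {gib:.0f} GiB")
```

A 2048³ grid alone occupies 128 GiB — far beyond the RTX 5090's 32 GB, but comfortably inside the 4c26-256's 256 GB of LPDDR5X, before even touching the optional DDR5 pool.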
Zeus GPU Configurations
Model | Processing Units | Memory | Power Consumption |
---|---|---|---|
Zeus 1c26-032 | 1 | 32 GB LPDDR5X + up to 128 GB DDR5 | 120W |
Zeus 2c26-064/128 | 2 | 64/128 GB LPDDR5X | Not specified |
Zeus 4c26-256 | 4 | 256 GB LPDDR5X + up to 2 TB DDR5 | Less than 575W |
Performance Comparison (Single-chiplet Zeus vs RTX 5090)
Metric | Zeus 1c26-032 | RTX 5090 |
---|---|---|
Path Tracing | 77 Gigarays | 32 Gigarays |
FP64 Compute | 5 TFLOPS | 1.6 TFLOPS |
FP32 Compute | 10 TFLOPS | 105 TFLOPS |
FP16 Compute | 10 TFLOPS | 105 TFLOPS |
INT8 Compute | 614 TFLOPS | 1,637 TFLOPS |
On-chip Cache | 128 MB | 120 MB |
Power Consumption | 120W | 575W |
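To put the 77 Gigarays figure in context, here is a back-of-envelope estimate of theoretical peak frame rate for a 4K path-traced frame. The samples-per-pixel and bounce counts are illustrative assumptions, not numbers from Bolt:

```python
# Back-of-envelope: what 77 Gigarays/s could mean for a 4K path-traced frame.
# Samples-per-pixel and bounce depth are illustrative assumptions; real
# renderers use far higher sample counts plus denoising.

def rays_per_frame(width, height, samples_per_pixel, bounces):
    # One camera ray per sample, plus one secondary ray per bounce.
    return width * height * samples_per_pixel * (1 + bounces)

frame_rays = rays_per_frame(3840, 2160, samples_per_pixel=2, bounces=3)
zeus_rays_per_s = 77e9
fps = zeus_rays_per_s / frame_rays
print(f"{frame_rays / 1e6:.0f} Mrays/frame -> {fps:.0f} fps (theoretical peak)")
```

Peak Gigaray throughput is an upper bound: shading cost, incoherent rays, and memory stalls mean sustained rates are typically well below it.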
Memory and Connectivity Innovations
Zeus takes an unconventional approach to memory architecture, prioritizing capacity over bandwidth to handle larger datasets. The entry-level model combines 32 GB of LPDDR5X memory at 273 GB/s with up to 128 GB of DDR5 memory via two SO-DIMMs at 80 GB/s. This hybrid memory system could provide significant advantages for large-scale simulations and rendering tasks.
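The two-tier arrangement trades bandwidth for capacity, and the cost of spilling out of the fast tier is easy to quantify from the stated figures. The sketch below models a single best-case streaming pass; real achievable bandwidth depends heavily on access patterns:

```python
# Entry-level Zeus's two-tier memory, using the figures stated above.
# Simple best-case totals only; real bandwidth depends on access patterns.

tiers = [
    {"name": "LPDDR5X (on-board)", "capacity_gb": 32,  "bandwidth_gbps": 273},
    {"name": "DDR5 (2x SO-DIMM)",  "capacity_gb": 128, "bandwidth_gbps": 80},
]

total_capacity = sum(t["capacity_gb"] for t in tiers)  # 160 GB addressable

# Time to stream a dataset that spills out of the fast tier: the first 32 GB
# read at 273 GB/s, the remaining portion at 80 GB/s (illustrative best case).
dataset_gb = 128
fast_gb = min(dataset_gb, tiers[0]["capacity_gb"])
slow_gb = dataset_gb - fast_gb
stream_seconds = fast_gb / tiers[0]["bandwidth_gbps"] + slow_gb / tiers[1]["bandwidth_gbps"]
print(f"{total_capacity} GB total, one full pass over {dataset_gb} GB: "
      f"{stream_seconds:.2f} s")
```

For comparison, streaming the same 128 GB purely from a single fast tier at 273 GB/s would take about 0.47 s — the DDR5 tier is roughly 3.4x slower per byte, but it is what makes the dataset fit at all.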
Another distinctive feature is Zeus's built-in networking capabilities. Each GPU includes an I/O chiplet with a QSFP-DD port supporting 400GbE/800GbE, two PCIe Gen5 x16 slots with CXL 3.0 (enabling efficient memory sharing across multiple cards), and a GbE port for BMC. These networking features clearly position Zeus for data center applications where multiple GPUs need to communicate efficiently.
Software Ecosystem Challenges
Despite impressive hardware specifications, Bolt Graphics faces significant challenges in software support. Unlike Nvidia's mature CUDA ecosystem or AMD's ROCm, Zeus lacks an established software platform. While its RISC-V foundation could potentially leverage existing open-source tools and libraries, widespread adoption will depend on Bolt's ability to provide strong developer support.
It remains unclear whether Zeus will support industry-standard frameworks such as OpenCL, Vulkan, or CUDA-translation layers—essential components for gaining traction in professional and scientific computing markets. The company's in-house Glowstick path tracing engine shows promise, but broader software compatibility will be crucial for success.
Market Positioning and Availability
Bolt Graphics is targeting professional rendering and scientific computing markets rather than consumer gaming. The company plans to release developer kits in late 2025, with full production scheduled for late 2026. This timeline gives software developers time to adapt their applications to the new architecture.
While Zeus may not challenge Nvidia in the gaming market, its specialized focus on path tracing and scientific computing could carve out a valuable niche. If the company delivers on its performance promises and develops adequate software support, Zeus could become a compelling alternative for specific high-performance computing applications, particularly those involving rendering farms and scientific simulations.