Rust CUDA Project Reboots After Years of Dormancy, Faces Competition from Cudarc

BigGo Editorial Team

The Rust CUDA Project, an ambitious initiative aimed at making Rust a tier-1 language for GPU computing using NVIDIA's CUDA toolkit, has been rebooted after years of dormancy. This development comes at a time when the Rust community has been seeking reliable solutions for GPU programming, with mixed results across different projects.

The project aims to provide tools for compiling Rust to PTX code and libraries for using existing CUDA libraries. However, community discussions reveal significant challenges and competing solutions that have emerged during its inactive period.

Project History and Current Status

The Rust CUDA Project has had a rocky history according to user comments. For years, it remained in what users describe as an unusable and unmaintained state, requiring specific, several-years-old versions of both the Rust compiler (rustc) and CUDA to function properly. The recent reboot announcement signals an attempt to revive the project, though there appears to be no official release yet that works with current versions of Rust and CUDA.

This dormant period created a gap in the ecosystem that other projects have attempted to fill. The project's structure is quite broad, encompassing multiple crates including rustc_codegen_nvvm (a rustc backend targeting NVVM IR), cuda_std (for GPU-side functions), cudnn (for deep neural networks), and cust (for CPU-side CUDA features), among others.

Competition from Cudarc

While the Rust CUDA Project has been inactive, another library called Cudarc has gained significant traction in the community. Multiple users report successfully using Cudarc in professional environments, praising its compatibility with recent Rust and CUDA versions.

One commenter, who uses CUDA with Rust in several projects, summarized it this way: "The Cudarc library is actively maintained, and works well. It does not, however, let you share host and device data structures; you will [de]serialize as a byte stream, using functions the lib provides. Works on any (within past few years at least) CUDA version and GPU."

The key difference appears to be that Cudarc requires serialization between host and device data structures, while the Rust CUDA Project aims to allow shared types between host and GPU. This distinction represents a fundamental tradeoff between immediate usability and a more seamless programming experience.

Platform Independence Concerns

A significant debate within the community centers around the project's exclusive focus on NVIDIA's CUDA. Some users argue that tying Rust GPU programming to a single vendor's technology creates a dead end that limits broader adoption across different hardware platforms.

Proponents of CUDA point to its superior tooling ecosystem, including IDE integration, graphical debugging, and extensive libraries. They argue that alternatives like OpenCL, Vulkan compute shaders, and SYCL lack the polyglot support and developer experience that CUDA provides.

Others advocate for platform-independent approaches that would work across NVIDIA, AMD, Intel, and Apple hardware, suggesting that Rust should target an intermediate representation that could then be compiled to various GPU architectures. This approach would prioritize cross-platform compatibility over the specialized optimizations that CUDA offers.

Key Rust GPU Computing Options

  • Rust CUDA Project

    • Status: Recently rebooted after years of dormancy
    • Goal: Allow shared data structures between host and GPU
    • Components: rustc_codegen_nvvm, cuda_std, cudnn, cust, gpu_rand, optix
    • License: Dual-licensed under Apache 2.0 and MIT
  • Cudarc

    • Status: Actively maintained
    • Compatibility: Works with recent Rust and CUDA versions
    • Limitation: Requires serialization between host and device data
    • GitHub: https://github.com/coreylowman/cudarc
  • Other Related Projects

    • rust-gpu: Compiler backend to compile Rust to SPIR-V for shaders
    • glassful (2016): Subset of Rust that compiles to GLSL
    • inspirv-rust (2017): Experimental Rust MIR -> SPIR-V Compiler
    • nvptx (2018): Uses LLVM PTX backend
    • accel (2020): Higher-level library using nvptx mechanism
    • risl (2020): Experimental Rust -> SPIR-V compiler

Industry Adoption and Future Prospects

The community discussion reveals interesting insights about NVIDIA's potential interest in Rust. One user mentioned a conversation with someone from the CUDA Core Compute Libraries team who hinted that NVIDIA could support Rust as a language for programming CUDA GPUs within the next five years. Another noted that NVIDIA is already using Rust in Dynamo, its high-throughput, low-latency inference framework, though the public API is Python-based.

The question of why NVIDIA hasn't invested more heavily in the Rust ecosystem remains open, with some suggesting that the company might be waiting to see sufficient business value before committing resources.

As the project reboots, its maintainers are actively seeking contributors, acknowledging that there's a lot of work ahead and that they all have day jobs. The success of this revival will likely depend on building sufficient community momentum to overcome the technical challenges that previously stalled the project.

For developers needing GPU computing capabilities in Rust today, the community consensus seems to favor Cudarc for practical applications, while keeping an eye on the Rust CUDA Project's progress toward its more ambitious goals of seamless host-device integration.

Reference: The Rust CUDA Project