Ampere Quietly Launches 192-Core AmpereOne M CPUs with Enhanced Memory Architecture

BigGo Editorial Team

Ampere Computing has quietly expanded its server processor lineup with a new series of high-performance Arm-based CPUs. The company, recently acquired by SoftBank, released the AmpereOne M family without the usual press announcements, bringing significant memory improvements aimed at data center and AI workloads.

This image highlights the AMPERE component, embodying the technological advancements introduced by Ampere Computing in their new processor lineup

Enhanced Memory Architecture for Data-Intensive Applications

The standout feature of the new AmpereOne M processors is their 12-channel DDR5 memory subsystem, a substantial upgrade from the 8-channel configuration found in previous models. This enhancement allows the CPUs to support up to 3TB of addressable DDR5-5600 memory with one DIMM per channel. The memory architecture incorporates robust error correction capabilities through SECDED and Symbol ECC protection, making these processors particularly suitable for cloud datacenter environments where memory reliability is crucial.
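As a rough sanity check of those figures, the theoretical peak bandwidth of a 12-channel DDR5-5600 configuration can be worked out from the standard 64-bit data path per channel. The sketch below does the arithmetic; it is a back-of-the-envelope estimate, not a measured or Ampere-published number.

```python
# Back-of-the-envelope peak memory figures for a 12-channel DDR5-5600 socket
# (illustrative arithmetic only; sustained real-world throughput is lower).

CHANNELS = 12           # memory channels per socket
TRANSFER_RATE_MTS = 5600  # DDR5-5600: million transfers per second
BUS_WIDTH_BYTES = 8     # 64-bit data path per channel
TOTAL_CAPACITY_TB = 3   # addressable memory at one DIMM per channel

# Theoretical peak bandwidth per socket, in GB/s
peak_bandwidth_gbs = CHANNELS * TRANSFER_RATE_MTS * BUS_WIDTH_BYTES / 1000
print(f"Peak DDR5 bandwidth: {peak_bandwidth_gbs:.1f} GB/s")  # 537.6 GB/s

# Implied capacity per DIMM at one DIMM per channel
dimm_capacity_gb = TOTAL_CAPACITY_TB * 1024 / CHANNELS
print(f"Capacity per DIMM: {dimm_capacity_gb:.0f} GB")        # 256 GB
```

In other words, the 3TB ceiling implies 256GB modules in each of the twelve slots, with roughly 537 GB/s of theoretical peak bandwidth per socket.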

Core Specifications and Performance

The AmpereOne M family includes six different processor models, all featuring Ampere's custom single-threaded Armv8.6+ cores. The lineup ranges from 96 to 192 cores, with clock speeds reaching up to 3.6 GHz. Each core is equipped with 2MB of L2 cache, and the processors feature a shared 64MB system-level cache. The flagship model, the AmpereOne A192-32M, packs 192 cores running at 3.2 GHz with a 348W TDP, positioning it as a direct competitor to AMD's 192-core EPYC 9965 processors.
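For context on what those per-core figures add up to, the short sketch below tallies the aggregate on-chip cache and hardware thread count of the flagship part, using only the numbers quoted above; it is a back-of-the-envelope summary, not an Ampere-published breakdown.

```python
# Quick tally for the flagship A192-32M, based on the figures quoted above
# (2 MB private L2 per core, 64 MB shared system-level cache, single-threaded cores).

CORES = 192
L2_PER_CORE_MB = 2
SYSTEM_LEVEL_CACHE_MB = 64

total_l2_mb = CORES * L2_PER_CORE_MB                  # 384 MB of private L2
total_cache_mb = total_l2_mb + SYSTEM_LEVEL_CACHE_MB  # 448 MB of on-chip cache overall
hardware_threads = CORES                              # one thread per core (no SMT)

print(total_l2_mb, total_cache_mb, hardware_threads)  # 384 448 192
```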

Table of specifications for the AmpereOne M processor family, showcasing various models and their performance attributes

Advanced I/O Capabilities

On the connectivity front, the AmpereOne M processors offer 96 PCIe 5.0 lanes with bifurcation capabilities down to x4 configurations. The chips include 24 device controllers designed to connect various high-performance components such as accelerators, SSDs, network cards, and other peripherals essential for AI and cloud deployments. This robust I/O architecture ensures that the processors can handle the demanding data movement requirements of modern workloads.
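To see how the lane count and controller count line up, the sketch below splits the 96 lanes into x4 slices and estimates per-device PCIe 5.0 bandwidth. The mapping is illustrative arithmetic based on standard PCIe 5.0 signalling rates, not Ampere's documented topology.

```python
# Rough PCIe 5.0 arithmetic for a 96-lane I/O complex bifurcated down to x4.

TOTAL_LANES = 96
LANES_PER_DEVICE = 4             # finest bifurcation granularity (x4)
GT_PER_LANE = 32                 # PCIe 5.0 raw signalling rate, GT/s
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line encoding

max_x4_devices = TOTAL_LANES // LANES_PER_DEVICE  # 24, matching the 24 device controllers
per_device_gbs = GT_PER_LANE * LANES_PER_DEVICE * ENCODING_EFFICIENCY / 8

print(max_x4_devices)                                   # 24
print(f"{per_device_gbs:.2f} GB/s per x4 device, each direction")  # ~15.75 GB/s
```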

Manufacturing and Power Efficiency

Despite the enhancements, the AmpereOne M processors continue to use TSMC's N5 process technology, the same as their predecessors. To manage their considerable power consumption of up to 348W, these CPUs implement several advanced power management features, including dynamic voltage and frequency scaling, adaptive voltage control, and fine-grained thermal sensors. These technologies help maintain efficiency while delivering the performance needed for high-performance computing applications.
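Dynamic voltage and frequency scaling is ultimately exposed to the operating system, so an operator can observe it through Linux's generic cpufreq interface. The snippet below is a minimal sketch using the stock kernel sysfs paths (scaling_governor, scaling_cur_freq); it is not an Ampere-specific tool, and exact attribute availability depends on the platform's cpufreq driver.

```python
# Minimal sketch: inspect DVFS state on a Linux host via the generic cpufreq
# sysfs interface. Paths are the standard kernel ones, not Ampere-specific.

from pathlib import Path

def read_cpufreq(cpu: int, attr: str) -> str:
    """Read a cpufreq attribute (e.g. scaling_governor, scaling_cur_freq) for one CPU."""
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/{attr}")
    return path.read_text().strip()

if __name__ == "__main__":
    # Report the governor and current frequency for the first few cores.
    for cpu in range(4):
        governor = read_cpufreq(cpu, "scaling_governor")
        cur_khz = int(read_cpufreq(cpu, "scaling_cur_freq"))
        print(f"cpu{cpu}: governor={governor}, freq={cur_khz / 1e6:.2f} GHz")
```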

Future Roadmap

The release of the AmpereOne M series appears to be setting the stage for Ampere's next-generation processors. The company has already announced plans for AmpereOne MX processors, which will feature up to 256 cores and retain the 12-channel DDR5 memory subsystem. These upcoming CPUs will be built on TSMC's more advanced N3 process, with Ampere promising a 40 percent performance improvement over competing models alongside better power efficiency.

Market Positioning

While Ampere's processors don't support simultaneous multi-threading like AMD's EPYC chips (which can handle up to 384 threads on their 192-core models), the company is clearly targeting market segments where memory capacity and bandwidth are paramount. The quiet launch following SoftBank's acquisition suggests a strategic repositioning in the competitive server CPU market, with a particular focus on memory-intensive AI workloads where the Arm architecture's efficiency can provide advantages.