Up to 819 GB/s and an operating voltage lowered to 1.1 V.
JEDEC has published the official standard for HBM3 DRAM (High Bandwidth Memory). According to the organization, HBM3 brings gains in three areas: bandwidth, capacity and power consumption.
JEDEC lists several key attributes. In terms of bandwidth, HBM3 DRAM doubles the per-pin data rate compared to HBM2: it now reaches 6.4 Gbit/s, or 819 GB/s per device (418 GB/s for HBM2).
HBM3 also doubles the number of channels, from 8 to 16, and thus supports up to 32 virtual channels.
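As a quick sanity check on these figures, a minimal sketch of the arithmetic (assuming the 1024-bit total interface width that HBM has used since its first generation, split here into sixteen 64-bit channels; the channel width is not stated in the article) reproduces the 819 GB/s number:

```python
# Back-of-the-envelope check of the HBM3 bandwidth figure.
# Assumption (not stated above): 16 channels x 64 bits = 1024 data pins in total.
CHANNELS = 16
BITS_PER_CHANNEL = 64
PIN_RATE_GBPS = 6.4                               # per-pin data rate in Gbit/s

total_pins = CHANNELS * BITS_PER_CHANNEL          # 1024 data pins
bandwidth_gbit_s = total_pins * PIN_RATE_GBPS     # aggregate rate in Gbit/s
bandwidth_gb_s = bandwidth_gbit_s / 8             # convert to GB/s

print(f"{total_pins} pins x {PIN_RATE_GBPS} Gbit/s = {bandwidth_gb_s:.1f} GB/s")
# -> 1024 pins x 6.4 Gbit/s = 819.2 GB/s
```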
This memory supports TSV (through-silicon via) stacks of 4, 8 and 12 layers, and JEDEC does not rule out a future extension allowing 16-high stacks.
Density also improves, with per-die capacities now ranging from 8 to 32 Gb, for devices holding from 4 GB (8 Gb dies, 4 layers) to 64 GB (32 Gb dies, 16 layers; the 64 GB configuration therefore remains hypothetical at this stage).
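The capacity range follows directly from the per-die density multiplied by the stack height; a short sketch of that calculation, using the smallest and largest configurations quoted above (the 16-high stack being the hypothetical extension mentioned by JEDEC):

```python
# Device capacity = per-die density (Gbit) x number of stacked dies, divided by 8 for GB.
configs = [
    (8, 4),    # 8 Gb dies, 4-high  -> smallest device
    (32, 16),  # 32 Gb dies, 16-high -> largest (still hypothetical) device
]

for density_gbit, layers in configs:
    capacity_gb = density_gbit * layers / 8
    print(f"{density_gbit} Gb x {layers} layers = {capacity_gb:.0f} GB")
# -> 8 Gb x 4 layers = 4 GB
# -> 32 Gb x 16 layers = 64 GB
```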
Finally, JEDEC specifies a reduced operating voltage of 1.1 V (down from 1.2 V for HBM2) and low-swing signaling (0.4 V) on the host interface. HBM3 also benefits from strengthened ECC (Error Correction Code), which, according to JEDEC, qualifies this memory as an RAS (Reliability, Availability, Serviceability) solution.
Statements
The standard's publication is accompanied by statements from representatives of SK Hynix, Micron, Synopsys and NVIDIA. Here are a few.
“With its improved performance and reliability, HBM3 will support new applications requiring significant bandwidth and memory capacity,” said Barry Wagner, director of technical marketing at NVIDIA and chairman of the JEDEC HBM subcommittee.
Mark Montierth, vice president and general manager of high-performance memory at Micron: “HBM3 will enable the industry to achieve even higher performance thresholds with improved reliability and lower power consumption.”
Sources: TechPowerUp, JEDEC