NVIDIA Hopper: a GH100 GPU with an area of 1000 mm²?

The current GA100 has an area of 826 mm².

While NVIDIA continues to expand its Ampere lineup (in addition to the recent mobile GeForce RTX 3080 Ti, we are still waiting for the GeForce RTX 3090 Ti, and, though this is only conjecture, the gap between the recent RTX 3050 and the RTX 3060 potentially leaves room for a GeForce RTX 3050 Ti), there is already plenty of speculation surrounding Hopper. According to kopite7kimi, a usually well-informed source, the GH100 GPU would have a die area of around 1000 mm². Since the current GA100 GPU measures 826 mm², this would represent an increase of roughly 21%. Combined with TSMC's 5 nm process node, the number of cores should therefore rise sharply.
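As a quick sanity check on that figure, here is a minimal sketch of the arithmetic behind the ~21% increase; the 1000 mm² value is only the leaked estimate, not a confirmed specification.

```python
# Rough check of the rumored die-area increase (leaked figure, not a confirmed spec).
ga100_area_mm2 = 826    # current GA100 die area
gh100_area_mm2 = 1000   # rumored GH100 die area (approximate)

increase = (gh100_area_mm2 - ga100_area_mm2) / ga100_area_mm2
print(f"Die area increase: {increase:.0%}")  # -> Die area increase: 21%
```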

Moreover, according to kopite7kimi, Hopper would retain a monolithic design. Back in December 2020, however, the same leaker claimed that Hopper would benefit from an MCM (Multi-Chip Module) design. Some believe NVIDIA could nevertheless develop an MCM variant of the GH100, dubbed GH102.


Ada Lovelace for the RTX 4000s

As you will have understood, the GH100 GPU would succeed the GA100 GPU used in the current A100 accelerators. You may also remember the first mentions of Ada Lovelace back in 2020; at the time, it was thought that this architecture would slot in between Ampere and Hopper. While NVIDIA's plans remain unclear for now, we learned last December that Ada Lovelace would instead power consumer GPUs, i.e. the RTX 4000 series, while Hopper would be reserved for data-center GPUs.

According to earlier leaks, the AD102 GPU would pack 18,432 CUDA cores, versus 10,752 for the GA102 GPU. Finally, on the power side, rumors mention up to 1000 W for cards equipped with a GH100 GPU, and up to 650 W for the very high-end RTX 4000 cards.

GPU                                   TU102              GA102             AD102
Architecture                          Turing             Ampere            Ada Lovelace
Process node                          TSMC 12 nm FFN     Samsung 8 nm      TSMC 5 nm
Graphics Processing Clusters (GPC)    6                  7                 12
Texture Processing Clusters (TPC)     36                 42                72
Streaming Multiprocessors (SM)        72                 84                144
CUDA cores                            4,608              10,752            18,432
FP32 performance                      16.1 TFLOPS        37.6 TFLOPS       ~90 TFLOPS?
Memory type                           GDDR6              GDDR6X            GDDR6X
Memory bus                            384-bit            384-bit           384-bit
Memory capacity                       11 GB (2080 Ti)    24 GB (3090)      24 GB (4090?)
Flagship card                         RTX 2080 Ti        RTX 3090          RTX 4090?
TGP                                   250 W              350 W             450-650 W?
Launch                                Sep. 2018          Sep. 2020         2022 (TBC)
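The FP32 figures in the table follow directly from the CUDA core counts: peak throughput is cores × 2 FMA operations per clock × boost clock. Below is a minimal sketch of that calculation; the boost clocks are assumptions chosen to reproduce the table's figures, and the AD102 clock in particular is pure speculation.

```python
# Theoretical peak FP32 throughput: cores * 2 ops/clock (FMA) * boost clock.
# Boost clocks below are assumptions; the AD102 value is pure speculation.
def fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    return cuda_cores * 2 * boost_clock_ghz / 1000

print(f"TU102: {fp32_tflops(4608, 1.75):.1f} TFLOPS")    # ~16.1
print(f"GA102: {fp32_tflops(10752, 1.75):.1f} TFLOPS")   # ~37.6
print(f"AD102: {fp32_tflops(18432, 2.45):.1f} TFLOPS")   # ~90 (speculative clock)
```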

Source: VideoCardz
