2023-06-12 16:18:45
Managing PCI-Express has always been a bit complicated, and when Apple announced the Mac Pro 2023 with its six slots, one question came up: how can Apple drive so many? The answer is unfortunately simple: the bandwidth is heavily shared.
The problem of PCI-Express lanes
In PCI-Express, the basic unit is the lane. A 1x connector uses one lane and a 16x connector uses 16 (yes, that makes sense). The total number of lanes depends mainly on the processor, which has integrated the PCI-Express controller for years, and Apple’s chips are quite limited on this point.
Take the 2019 Mac Pro: its Xeon processors offer 64 PCI-Express 3.0 lanes (about 1 GB/s per lane), which can be configured in various ways. Apple’s M2 Ultra is limited to 16 PCI-Express 4.0 lanes (about 2 GB/s per lane) per block, or 32 lanes in total (the M2 Ultra is an assembly of two M2 Max). Is that enough? No.
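As a quick sanity check, here is the raw arithmetic with the approximate per-lane figures above (roughly 1 GB/s for PCIe 3.0 and 2 GB/s for PCIe 4.0): the two totals come out similar, so the real shortage is in lane count rather than raw bandwidth.

# Back-of-the-envelope comparison of the two lane budgets (approximate figures).
XEON_LANES, PCIE3_GBPS = 64, 1.0      # Mac Pro 2019: 64 PCIe 3.0 lanes, ~1 GB/s each
ULTRA_LANES, PCIE4_GBPS = 32, 2.0     # Mac Pro 2023: 32 PCIe 4.0 lanes, ~2 GB/s each

print(f"Xeon W:   {XEON_LANES} lanes x {PCIE3_GBPS:.0f} GB/s = {XEON_LANES * PCIE3_GBPS:.0f} GB/s")
print(f"M2 Ultra: {ULTRA_LANES} lanes x {PCIE4_GBPS:.0f} GB/s = {ULTRA_LANES * PCIE4_GBPS:.0f} GB/s")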
A Mac Pro has big needs
First, one block of lanes is reserved for storage: 8 of the 32 lanes are used only for this purpose and are not visible to the OS. The Mac Pro then needs lanes for its built-in connectivity: 1 for the internal USB connector, 2 for the internal SATA connectors, 4 for the I/O card (which handles Thunderbolt), 2 for the two 10 Gb/s Ethernet interfaces and 1 for Wi-Fi. If you have been counting, that is 10 lanes just for basic connectivity. Except that, in practice, the machine also has six PCI-Express slots that are physically 16x, four of which are limited to 8 lanes, for a total of 64 lanes. Do you see the problem? Only 24 lanes are actually accessible, yet Apple would need to allocate 74.
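A back-of-the-envelope tally of the figures above makes the mismatch explicit (a minimal sketch, using the slot widths and internal allocations listed in this article):

# Lane budget of the Mac Pro 2023 as described above.
total_lanes = 32
storage_lanes = 8                                   # reserved for internal storage, invisible to the OS
available = total_lanes - storage_lanes             # 24 lanes actually usable

internal = {"USB": 1, "SATA": 2, "Thunderbolt I/O card": 4, "10 GbE x2": 2, "Wi-Fi": 1}
slots = [16, 16, 8, 8, 8, 8]                        # six physical 16x slots, four wired as 8x

needed = sum(internal.values()) + sum(slots)        # 10 + 64 = 74
print(f"Available: {available} lanes / Needed without sharing: {needed} lanes")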
And Apple’s solution, according to Hector Martin (one of the developers of Asahi Linux), is to share the lanes. The first block of 8 lanes handles all the built-in connectors of the Mac Pro plus one of the 8x PCI-Express slots. Those controllers must therefore share 16 GB/s of bandwidth, which can be a problem in some cases. Let’s be honest, though: it will rarely be one, and some chips handle sharing well. Wi-Fi does not need the bandwidth of a full lane (2 GB/s), and neither do the Ethernet ports. Even with all the internal controllers running in parallel, it seems impossible to approach the limit… unless you plug in an expansion card. A single PCI-Express SSD (which can reach around 7 GB/s) can indeed reduce the performance of all the other components.
Similarly, all the other PCI-Express slots share 16 lanes. The two 16x slots and three 8x slots can therefore only deliver 32 GB/s combined. And that is not insignificant: anyone using the PCI-Express slots for storage can quickly hit the ceiling. Without even resorting to high-end cards that carry several PCI-Express SSDs, five SSDs in five slots will be throttled. The example given by Apple is telling: an OWC Accelsior 8M2 card can reach 26 GB/s, but probably only when it is alone. With two such cards, the combined throughput cannot physically exceed 32 GB/s.
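Here is a minimal sketch of that second pool, assuming a naive proportional split of the 32 GB/s ceiling when demand exceeds it (real PCIe arbitration is more complex, and the device figures are simply the peak rates quoted above):

# Naive model: once combined peak demand exceeds the pool's ceiling,
# every device is scaled down proportionally.
def effective_throughput(devices_gbps, pool_gbps):
    demand = sum(devices_gbps)
    scale = min(1.0, pool_gbps / demand) if demand else 1.0
    return [round(d * scale, 1) for d in devices_gbps]

POOL_GBPS = 32.0                                     # 16 PCIe 4.0 lanes shared by five slots

print(effective_throughput([7.0] * 5, POOL_GBPS))    # five NVMe SSDs: ~6.4 GB/s each instead of 7
print(effective_throughput([26.0, 26.0], POOL_GBPS)) # two Accelsior 8M2: ~16 GB/s each instead of 26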
An “Extreme” M2 would have solved the problem
Switching to an “Extreme” M2 chip (that is, an assembly of four M2 Max) would probably have solved most of the problem. Apple’s plan A would have made it possible to dedicate 16 lanes to each 16x connector, share 16 lanes between the three 8x connectors and keep the same configuration for the rest, which constrains the whole machine far less.
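For comparison, the same tally with that hypothetical four-die chip (assuming each M2 Max die keeps its 16 PCIe 4.0 lanes and the allocation described above) balances out exactly:

# Hypothetical "M2 Extreme": four M2 Max dies, 16 PCIe 4.0 lanes each.
total_lanes = 4 * 16                                # 64 lanes
allocation = {
    "internal storage block": 8,
    "first 16x slot (dedicated)": 16,
    "second 16x slot (dedicated)": 16,
    "three 8x slots (shared)": 16,
    "internal controllers + remaining 8x slot": 8,
}
print(total_lanes, sum(allocation.values()))        # 64 available, 64 allocated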
Bear in mind, however, that lane sharing remains quite common in workstations, if only because some controllers do not really need the bandwidth of a full lane. Ethernet and Wi-Fi are the most obvious cases: Wi-Fi 6 or 1 Gb/s Ethernet does not require 2 GB/s. But in the case of the Mac Pro 2023, the total bandwidth still seems too low for the number of slots.