Intel has announced the next family of Xeon processors that it plans to ship in the first half of next year. The new parts represent a substantial upgrade over current Xeon chips, with up to 48 cores and 12 DDR4 memory channels per socket, in systems with up to two sockets.
These processors will likely be the top-end Cascade Lake processors; Intel is labelling them “Cascade Lake Advanced Performance,” with a higher level of performance than the Xeon Scalable Processors (SP) below them. The current Xeon SP chips use a monolithic die, with up to 28 cores and 56 threads. Cascade Lake AP will instead be a multi-chip processor with multiple dies contained within a single package. AMD is using a similar approach for its comparable products; the Epyc processors use four dies in each package, with each die having 8 cores.
The switch to a multi-chip design is likely driven by necessity: as dies grow larger, it becomes increasingly likely that they'll contain a defect. Using several smaller dies helps avoid these defects. Because Intel’s 10nm manufacturing process isn’t yet good enough for mass-market production, the new Xeons will continue to use a version of the company’s 14nm process. Intel hasn’t yet revealed what the topology within each package will be, so the exact distribution of those cores and memory channels between chips is as yet unknown. The enormous number of memory channels will demand an enormous socket, currently believed to be a 5903-pin connector.
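The yield argument can be made concrete with the standard Poisson defect model, under which the probability that a die has zero defects falls off exponentially with its area. The defect density and die areas below are hypothetical illustration values, not Intel's actual figures:

```python
import math

def die_yield(area_mm2, defects_per_mm2=0.001):
    """Poisson yield model: probability that a die of the given area
    contains zero defects, assuming defects land independently and
    at random across the wafer. Yield = e^(-D * A)."""
    return math.exp(-defects_per_mm2 * area_mm2)

# Hypothetical comparison: one 700 mm^2 monolithic die versus
# 350 mm^2 dies delivering the same silicon in two pieces.
monolithic = die_yield(700)   # ~0.50: half the big dies are defective
smaller    = die_yield(350)   # ~0.70: far more of the small dies survive
```

Because defective small dies can be discarded before packaging, the multi-chip approach turns more of each wafer into sellable product, at the cost of cross-die interconnect within the package.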
Intel, notably, is listing only a core count for these processors, instead of the usual core count/thread count combination. It’s not clear whether this means that the new processors won’t have hyperthreading at all, or whether the company prefers to emphasize physical cores and avoid some of the security concerns that hyperthreading can present in certain usage scenarios. Cascade Lake silicon will contain hardware fixes for most variants of the Spectre and Meltdown attacks.
Overall, the company is claiming about a 20 percent performance improvement over the current Xeon SPs and 240 percent over AMD’s Epyc, with bigger gains coming in workloads that are particularly memory bandwidth intensive. The new processors will include a number of new AVX512 instructions designed to enhance the performance of running neural networks; Intel reckons that these will make image matching algorithms as much as 17 times faster than on the current Xeon SP family. The fine print for the performance comparisons notes that hyperthreading/simultaneous multithreading is disabled on both the Xeon SP and Epyc systems.
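The new instructions are Intel's AVX-512 VNNI (Vector Neural Network Instructions); the key one, vpdpbusd, fuses what previously took several instructions into one: multiplying unsigned 8-bit activations by signed 8-bit weights in groups of four and accumulating the products into 32-bit sums. The pure-Python sketch below illustrates the per-lane semantics of that operation, not real SIMD code; the function name and hardware details beyond the basic u8×s8→s32 behavior are illustrative:

```python
def vnni_dot_accumulate(acc, a_u8, b_s8):
    """Sketch of the vpdpbusd-style operation: for each 32-bit lane,
    multiply four unsigned 8-bit values from a_u8 by four signed 8-bit
    values from b_s8, sum the four products, and add the sum into the
    corresponding 32-bit accumulator lane."""
    assert len(a_u8) == len(b_s8) and len(a_u8) % 4 == 0
    assert len(acc) == len(a_u8) // 4
    out = list(acc)
    for lane in range(len(out)):
        products = (a_u8[4 * lane + i] * b_s8[4 * lane + i] for i in range(4))
        out[lane] += sum(products)  # hardware accumulates in 32-bit lanes
    return out

# Two lanes, eight byte pairs: lane 0 gets 1+2+3+4, lane 1 gets 5+6+7+8.
vnni_dot_accumulate([0, 0], [1, 2, 3, 4, 5, 6, 7, 8], [1] * 8)  # [10, 26]
```

Collapsing this multiply-sum-accumulate chain into a single instruction is what drives the claimed speedups on quantized (8-bit) neural network inference workloads such as image matching.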
At the other end of the performance spectrum, Intel said that its latest crop of Xeon E-2100 processors is shipping today. These are single-socket chips intended for small servers, offering up to 6 cores and 12 threads per chip. Functionally, they’re Xeon-branded versions of the mainstream Core processors, with the only notable differences being that they support ECC memory and use a server variant of the chipset.