Zen 3 vs Zen 4
8/7/2023

The best defense is a good offense, and as it turns out, the best offense is also a good offense.

So while AMD is all polite-like in its presentations, rest assured that with the ever enwidening and embiggening Epyc server chip lineup, AMD is absolutely meaning to bring offense to Intel, the Arm collective, and any RISC-V upstarts that think it is a pushover.

Six new CPUs in the "Zen 4" family of chips – that is the fourth generation of the Zen core used in the Epyc processors – were unveiled this week, three with a streamlined Zen 4c core aimed at hyperscalers and cloud builders and three with 3D V-Cache to boost the L3 cache on the devices and thereby boost certain HPC workloads by around 1.7X.

The Epyc family of server CPUs using the Zen 4 core types will have four distinct instantiations, compared to one for the "Naples" Epyc 7001 generation from 2017, one for the "Rome" Epyc 7002 generation from 2019, and two for the "Milan" Epyc 7003 generation that started rolling out in 2021. This is what Lisa Su, chairman and chief executive officer at AMD, called the company's "Epyc journey" in her keynote opening up the Data Center & AI Technology Premiere hosted by the company in San Francisco yesterday.

From this point on, we can expect every Epyc generation to have at least four variations, and if AMD makes an Instinct MI300C, as is rumored, that would make five. We have four at this point, starting with the general purpose "Genoa" chips launched in November 2022 and moving into the "Bergamo" and "Genoa-X" chips launched this week, aimed at hyperscalers and cloud builders and at technical computing workloads, respectively. We will see the "Siena" Zen 4 server chips later this year, aimed at telcos and other service providers and aimed straight at the Xeon D that is one of the bright spots in Intel's Network and Edge Group, or NEX.

Sierra Forest is Intel's answer to Bergamo, and those future server CPUs will use its "energy efficient" E-cores rather than the "high performance" P-cores used in its general purpose Xeon SPs. And we have no doubt that many of us will mix up the "Siena" and "Sierra Forest" codenames from AMD and Intel, respectively.

To a certain extent, the heftiness of the X86 core is due to the need to support fatter chunks of code in legacy Windows Server and Linux workloads. For newer cloud native architectures that are based on microservices – meaning smaller chunks of code that are linked over service busses and other mechanisms – having a big beefy core does not help improve system performance. These jobs tend to be less sensitive to L3 cache as well.

Who knows if the Instinct MI300C will come to pass? It might be too much of a niche product for AMD to go for it. But if it does, that would make five variations in an Epyc generation. Such an Instinct MI300C – or an MI400C based on the future Zen 5 cores – would certainly give the current HBM version of the "Sapphire Rapids" Xeon SP, known as the Max Series CPU, a run for the HPC and possibly AI host money.

The rumored MI300C would have only Genoa compute chiplets, likely based on the Zen 4 cores, not the Zen 4c cores, married to somewhere between 128 GB and 256 GB of HBM3 memory, depending on whether AMD used four-high or eight-high memory stacks and on whether anywhere from four to eight stacks were activated on the package. The HBM3 specification allows for 16-high DRAM stacks running at 6.4 GT/sec data transfer rates at a density of 64 GB per stack, so in theory one could build a monster MI300 series package with 512 GB of memory and 8.3 TB/sec of aggregate bandwidth – all against a potential twelve chiplets with eight Zen 4 cores per chiplet for a total of 96 cores.
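The HBM3 stack arithmetic above can be sketched in a few lines of Python. This is a back-of-the-envelope illustration, not anything from AMD: the 1024-bit per-stack data interface is an assumption taken from the JEDEC HBM3 specification, and the pin rate, stack density, and eight-stack maximum are the figures cited in the article.

```python
# Back-of-the-envelope HBM3 math for a hypothetical eight-stack package.
PIN_RATE_GTS = 6.4       # data transfer rate per pin, GT/s (cited above)
BUS_WIDTH_BITS = 1024    # assumed per-stack interface width (JEDEC HBM3)
STACK_CAPACITY_GB = 64   # 16-high stack density (cited above)
STACKS = 8               # maximum stacks activated on the package

# Total capacity is simply stacks times per-stack density.
capacity_gb = STACKS * STACK_CAPACITY_GB

# Peak per-stack bandwidth: pin rate times bus width, converted to bytes.
per_stack_gbs = PIN_RATE_GTS * BUS_WIDTH_BITS / 8

print(f"capacity: {capacity_gb} GB")                 # 512 GB
print(f"per-stack bandwidth: {per_stack_gbs} GB/s")  # 819.2 GB/s
```

Aggregate package bandwidth then scales with however many stacks are actually activated, which is why the four-high versus eight-high and four-stack versus eight-stack choices swing the capacity and bandwidth so widely.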