Intel is rolling out a new form of memory that requires no changes to existing hardware or software, is usable by the competition, and promises up to a 40% increase in memory performance for certain workloads.
The product is called Multiplexed Rank Dual Inline Memory Module, or MRDIMM. It uses the same memory chips as standard DDR5 memory and the same DIMM form factor, so it fits into existing motherboards.
And unlike Intel’s ill-fated Optane memory, MRDIMM will be an open standard that Intel turns over to the JEDEC memory standards body, so AMD servers can also use it. And AMD has signaled it will. “AMD Epyc processors support current JEDEC standards of DDR memory. As JEDEC standardizes on the JEDEC MRDIMM specification, we will absolutely consider adding it as AMD has a long history of being a strong supporter of standards-based technologies,” an AMD spokesman told me via email.
Intel is rolling out MRDIMM with its new generation of Xeon 6 high-performance servers, which have up to 288 cores. Keeping them fed with data was becoming a challenge, said Ronak Singhal, senior fellow and developer of MRDIMM at Intel.
“As you’re growing the core count, it’s obviously critical to be able to feed those cores with data. And so, we have many use cases [with] an insatiable demand for higher memory bandwidth,” he said. “All of these kinds of workloads, particularly on the HPC side or on the AI side, drive a heavy use of data. And because of that, their bandwidth needs are high.”
Thanks to one extra chip on the DIMM, the top speed of MRDIMM reaches 8800 megatransfers per second (MT/s). The best standard DDR5 memory tops out at 6400 MT/s, and not all memory runs even at that speed. That’s a 37.5% increase in memory bandwidth over existing hardware, and it shows the DIMM socket and memory channel have far more throughput headroom than standard DDR5 actually uses.
Already, chipmaker Renesas Electronics has announced it is working on its second-generation MRDIMM, targeting 12,800 MT/s, double what the best standard DDR5 memory can do. The next-gen memory is expected to ship in the first half of 2025.
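To put those data rates in perspective, here is a quick back-of-the-envelope sketch of peak per-module bandwidth. It assumes a conventional 64-bit (8-byte) data path per DIMM and ignores ECC bits and protocol overhead, so treat the results as rough theoretical ceilings rather than measured figures.

```python
# Back-of-the-envelope peak-bandwidth math for the data rates quoted above.
# Assumes a standard 64-bit (8-byte) DDR5 data path per module and ignores
# ECC bits and real-world efficiency losses -- a rough sketch, not a spec.

RATES_MT_S = {
    "DDR5-6400 (standard)": 6400,
    "MRDIMM-8800 (first gen)": 8800,
    "MRDIMM-12800 (second gen)": 12800,
}

BYTES_PER_TRANSFER = 8  # 64-bit data bus = 8 bytes moved per transfer
BASELINE = RATES_MT_S["DDR5-6400 (standard)"]

for name, rate in RATES_MT_S.items():
    peak_gb_s = rate * 1e6 * BYTES_PER_TRANSFER / 1e9
    gain = (rate - BASELINE) / BASELINE * 100
    print(f"{name}: ~{peak_gb_s:.1f} GB/s peak per DIMM ({gain:+.1f}% vs DDR5-6400)")
```

The output works out to roughly 51 GB/s, 70 GB/s, and 102 GB/s of peak bandwidth per module, which lines up with the 37.5% and 2x figures above.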
Singhal believes that AMD and other server hardware vendors will adopt MRDIMM over time. The chief memory makers – Samsung, SK Hynix, and Micron – are on board with MRDIMM and plan to release products, as is Rambus, which designs and licenses memory products to the industry.
“We worked closely with the memory vendors to develop this, to validate it, and make sure that everything could work here. So, we’re the first ones to do this, but we’re also working with the memory suppliers to take this through the standards bodies like JEDEC for memory. So, this will be a standard memory technology that anybody can use,” Singhal said.
Intel says classical HPC applications, such as weather and computational fluid dynamics simulations, typically have the biggest demand for memory bandwidth. Depending on the workload, Intel has seen between a 10% and 20% performance increase with MRDIMM for things like database applications, and up to a 40% improvement in AI applications.
In the big picture, hyperscale data center operators are the most likely to adopt MRDIMM for their servers first.
Jim Handy, president of semiconductor market research firm Objective Analysis, believes MRDIMM will succeed where Optane failed. “Optane was driven by Intel. This is being driven by the data centers. You know, it’s always a good idea to let the customers call the shots,” he said.
Handy notes that this is how CXL got its start. Big data center operators wanted DDR4 memory to work in a DDR5 system and found a way to make the DDR5 system communicate with DDR4 by using a subset of CXL. “And then they said, hey, this is good stuff. Let’s turn it into an industry standard,” he said.
Neither Intel nor Handy expects to see MRDIMM make its way to the desktop client. There’s no real benefit to the desktop user like there is for servers.
Source: Network World