Intel lands Dept of Energy contract to develop memory tech

The US Department of Energy’s Sandia National Labs believes that novel memory tech may be the secret to faster, more accurate nuclear weapon simulations.

That’s why this week the agency awarded a research and development contract to Intel – an outfit that has systematically dismantled its memory business over the past few years.

The multi-year Advanced Memory Technology (AMT) program is being funded by the DoE's National Nuclear Security Administration (NNSA), which is tasked with maintaining the reliability and extending the lifespan of the US strategic arsenal by using supercomputers to simulate the weapons' designs, degradation, and destructive potential.

And it’s not just nuclear weapon physics and materials analysis NNSA is interested in. For example, the agency has developed models to simulate the flow of turbulent air over a hypersonic missile delivering bad news in the form of a warhead to a city. Because these simulations often require countless parameters to accurately predict the physics at play, Sandia believes they’re likely to benefit from improved memory performance.

With the help of Los Alamos and Lawrence Livermore National Labs, the program will explore the use of “several technologies that have the potential to deliver more than 40x the application performance of our forthcoming NNSA exascale systems,” said Thuc Hoang, director of Advanced Simulation and Computing, in a statement.

Why Intel?

The decision to work with Intel on the project is an interesting one, to say the least. The chipmaker is no stranger to novel memory architectures, with several generations of Xeon Scalable processors now supporting tiered and persistent memory by way of the Optane line.

Funnily enough, though, Intel killed that division this summer, just a couple of years after its partner Micron stopped producing the 3D XPoint memory modules used in the products. Chipzilla doesn’t have a NAND flash business, either. It sold that division to SK hynix in 2020.

To be fair to Intel, one doesn’t need a family of commercial products to be able to perform R&D in that area.

According to Intel Fellow Josh Fryman, much of the program will be spent exploring ways to extract more performance out of standard DRAM.

“Our goal with the AMT program is to change how DRAM is organized and to help the DRAM vendors to design and deliver superior products,” he told The Register. “The growth of parallelism of compute devices exceeds the parallelism growth of DRAM structures. That should change for current and future platforms to deliver greater performance and be more energy efficient.”
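
To make that mismatch concrete, here's a minimal back-of-the-envelope sketch – not anything Intel or Sandia has published – assuming memory requests spread evenly across a fixed pool of independent DRAM banks: once core counts outstrip bank counts, additional cores just queue behind the same banks.

```python
# Hypothetical illustration (not from the article): if memory requests spread
# evenly over B independent DRAM banks, bank-level parallelism caps how many
# accesses can proceed at once, no matter how many cores issue requests.

def effective_parallelism(cores: int, banks: int) -> int:
    """Best-case number of memory accesses that can be serviced concurrently."""
    return min(cores, banks)

for cores in (32, 128, 512):
    banks = 32  # assumed per-channel bank count, purely for illustration
    print(f"{cores} cores, {banks} banks -> "
          f"{effective_parallelism(cores, banks)} concurrent accesses")
```

Lifting that cap – whether through more banks, finer-grained access, or a smarter organization – is the kind of change Fryman appears to be describing.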

The DoE program gets underway just as the industry prepares to make the jump from DDR4 to DDR5 DRAM, with the launch of the first compatible datacenter-grade processors from AMD this fall and Intel early next year. In addition to supporting much larger capacities – potentially as high as 768GB per DIMM – DDR5 is also significantly faster than the previous generation.

Clocking in at 4,800 megatransfers per second at a minimum, this memory has been on the consumer market for more than a year, and its bandwidth per DIMM is at least 50 percent more than that of DDR4. Memory vendors including Micron expect to boost that eventually as high as 8,800MTps.
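
As a rough sanity check on those figures – back-of-the-envelope arithmetic, not vendor numbers – a DIMM moves eight bytes per transfer across its 64-bit data bus, so peak bandwidth scales directly with the transfer rate.

```python
# Back-of-the-envelope per-DIMM peak bandwidth: transfers/sec x 8 bytes.
# Ignores ECC bits and real-world efficiency; assumes the standard 64-bit bus.
def peak_gb_per_s(mega_transfers: float, bus_bytes: int = 8) -> float:
    return mega_transfers * bus_bytes / 1000

print(peak_gb_per_s(3200))  # DDR4-3200: 25.6 GB/s
print(peak_gb_per_s(4800))  # DDR5-4800: 38.4 GB/s, i.e. 50 percent more
print(peak_gb_per_s(8800))  # 8,800 MT/s, if vendors get there: 70.4 GB/s
```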

Intel plans to contribute findings from the research program back to the JEDEC industry consortium that oversees DRAM memory standards, Fryman said.

Beyond DRAM

Faster, lower-latency DRAM isn't the only technology that could play into the program. “We expect technologies resulting from our AMT work will be orthogonal to CXL,” Fryman said.

Intel has been instrumental to the development of the Compute Express Link (CXL) interconnect, the first applications of which include memory expansion, pooling, and tiered memory use cases.

For instance, CXL memory expansion modules from the likes of Astera Labs, Samsung, and others promise similar functionality to Intel's now-defunct Optane persistent memory modules – albeit at much lower latencies and higher bandwidth.

Intel is also among the first to bake high-bandwidth memory (HBM) onto a mainstream CPU. The corporation's recently announced Xeon Max CPUs attach up to 64GB of HBM2e directly to the package, giving them roughly 1TB/sec of memory bandwidth. The processors are being integrated into Argonne National Laboratory's Aurora supercomputer.
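
For a sense of scale – a rough comparison, with the DDR5 channel count assumed rather than taken from the article – that on-package figure works out to several times what a fully populated set of DDR5-4800 channels delivers.

```python
# Rough package-level comparison; only the ~1 TB/s HBM figure comes from the
# article, the DDR5 assumptions are for illustration.
hbm_bw = 1000             # GB/s, "roughly 1TB/sec" of HBM2e bandwidth
ddr5_per_channel = 38.4   # GB/s per channel at 4,800 MT/s on a 64-bit bus
channels = 8              # assumed channel count on a high-end server socket
print(hbm_bw / (ddr5_per_channel * channels))  # ~3.3x the DDR5 peak
```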

However, this doesn’t necessarily mean either of these technologies will actually see use under the DoE program. ®