Intel Launches Optane DIMMs Up To 512GB: Apache Pass Is Here!

by Ian Cutress & Billy Tallis on May 30, 2018 2:15 PM EST
Intel today announced the availability of their long-awaited Optane DIMMs, bringing 3D XPoint memory onto the DDR4 memory bus. The modules that have been known under the Apache Pass codename will be branded as Optane DC Persistent Memory, to contrast with Optane DC SSDs, and not to be confused with the consumer-oriented Optane Memory caching SSDs.
The new Optane DC Persistent Memory modules will be initially available in three capacities: 128GB, 256GB and 512GB per module. This implies that they are probably still based on the same 128Gb 3D XPoint memory dies used in all other Optane products so far. The modules are pin-compatible with standard DDR4 DIMMs and will be supported by the next generation of Intel's Xeon server platforms.
The Optane DC Persistent Memory modules Intel is currently showing off have heatspreaders covering the interesting bits, but they appear to feature ten packages of 3D XPoint memory. This suggests that the 512GB module features a raw capacity of 640GB and that Optane DC Persistent Memory DIMMs have twice the error correction overhead of ECC DRAM modules.
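The capacity and overhead arithmetic implied above can be checked with a quick sketch. The split of ten packages into eight data packages plus two for ECC/metadata is an inference from the visible package count, not a figure from an Intel datasheet:

```python
# Inferred layout: 10 packages total, 8 holding user data and 2 holding
# ECC/metadata (assumption based on the package count, not a datasheet).
packages_total = 10
packages_data = 8
usable_gb = 512

raw_gb = usable_gb * packages_total / packages_data
overhead = (packages_total - packages_data) / packages_data

print(raw_gb)    # 640.0 GB of raw 3D XPoint per 512GB module
print(overhead)  # 0.25, vs. 1/8 = 0.125 for a standard 9-chip ECC DIMM
```

The 25% overhead is exactly twice the one-extra-chip-per-eight (12.5%) overhead of a conventional ECC DRAM module.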
The Optane DC Persistent Memory modules are currently sampling and will be shipping for revenue later this year, but only to select customers. Broad availability is planned for 2019. In a similar strategy to how Intel brought Optane SSDs to market, Intel will be offering remote access to systems equipped with Optane DC Persistent Memory so that developers can prepare their software to make full use of the new memory. Intel is currently taking applications for access to this program. The preview systems will feature 192GB of DRAM and 1TB of Optane Persistent Memory, plus SATA and NVMe SSDs. The preview program will run from June through August. Participants will be required to keep their findings secret until Intel gives permission for publication.
Intel is not officially disclosing whether it will be possible to mix and match DRAM and Optane Persistent Memory on the same memory controller channel, but the 192GB DRAM capacity for the development preview systems indicates that they are equipped with a 16GB DRAM DIMM on every memory channel (twelve channels across a two-socket system with six channels per CPU). Also not disclosed in today's briefing: power consumption, clock speeds, specific endurance ratings, and whether Optane DC Persistent Memory will be supported across the Xeon product line or only on certain SKUs. Intel did vaguely promise that Optane DIMMs will be operational for the normal lifetime of a DIMM, but we don't know what assumptions Intel is making about workload.
Intel has been laying the groundwork for application-level persistent memory support for years through their open-source Persistent Memory Development Kit (PMDK) project, known until recently as NVM Library. This project implements the SNIA NVM Programming Model, an industry standard for the abstract interface between applications and operating systems that provide access to persistent memory. The PMDK project currently includes libraries to support several usage models, such as a transactional object store or log storage. These libraries build on top of existing DAX capabilities in Windows and Linux for direct memory-mapped access to files residing on persistent memory devices.
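The DAX model that the PMDK libraries build on can be sketched in a few lines: a file on a DAX-aware filesystem is memory-mapped and then accessed with ordinary loads and stores, with no page cache or block I/O in the data path. This minimal Python sketch uses a regular temp file (the path and size are illustrative); on a real DAX mount the same mmap would touch persistent memory directly, and PMDK would replace the msync call with user-space cache-line flushes:

```python
import mmap

# Illustrative path; on a real system this file would live on a
# DAX-mounted pmem filesystem such as /mnt/pmem (path is an assumption).
path = "/tmp/pmem_demo.bin"
size = 4096

# Create and zero the backing file.
with open(path, "wb") as f:
    f.write(b"\0" * size)

# Map the file; on a DAX filesystem this maps persistent memory
# directly into the process address space.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), size)
    mm[:5] = b"hello"   # ordinary store instructions, no write() syscall
    mm.flush()          # msync(); PMDK uses user-space CLWB + fence instead
    mm.close()
```

PMDK's higher-level libraries (such as the transactional object store) add crash-consistent allocation and transactions on top of this raw load/store access.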
Optane SSD Endurance Boost
The existing enterprise Optane SSD DC P4800X initially launched with a write endurance rating of 30 drive writes per day (DWPD) for three years, and when it hit widespread availability Intel extended that to 30 DWPD for five years. Intel is now preparing to introduce new Optane SSDs with a 60 DWPD rating, still based on first-generation 3D XPoint memory. Another endurance rating increase isn't too surprising: Intel has been accumulating real-world reliability information about their 3D XPoint memory and they have been under some pressure from competition like Samsung's Z-NAND that also offers 30 DWPD with a more conventional flash-based memory.
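As a rough illustration of what a DWPD rating implies, here is the total-bytes-written arithmetic for the smallest P4800X capacity (375GB is assumed here; Intel's official petabytes-written figures may be rounded differently):

```python
capacity_bytes = 375e9   # smallest P4800X capacity (assumed for this example)
dwpd = 30                # rated drive writes per day
years = 5                # warranty period

total_written = dwpd * capacity_bytes * 365 * years
print(total_written / 1e15)  # ≈ 20.5 PB written over the rated lifetime
```

Doubling the rating to 60 DWPD over the same period would double that figure to roughly 41 PB for the same capacity.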
deil - Friday, June 1, 2018
It's an extension, NOT A REPLACEMENT. If your server needs more memory (and I know a lot of use cases that would benefit from more), the highest capacity I've found is around 64GB per slot. With 512GB DIMMs, even if they're not as fast, a lot of servers can keep the same old fast/cold storage config we know. We will have a lot more server RAM for real-time operations, while inactive, less important data stays inside Optane. This is great news.
peevee - Thursday, May 31, 2018
"The PCI-E 3.0 x4 bus would be limited to 2GB/s"

It is not.
repoman27 - Thursday, May 31, 2018
Achievable throughput after accounting for protocol overhead is largely dependent on the TLP maximum payload size, but for 128B it would be right around 3.24 GB/s.

At IDF back in 2015, Intel suggested ~6 GB/s per channel and ~250 ns latency for 3D XPoint DIMMs: https://www.kitguru.net/components/memory/anton-sh...
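For reference, a figure in that ballpark falls out of the standard PCIe efficiency arithmetic. The 28-byte per-TLP overhead used here is an assumption (4DW header plus framing, LCRC, and an optional ECRC); actual overhead varies with link configuration:

```python
lanes = 4
raw_rate = 8e9          # 8 GT/s per lane (PCIe 3.0)
encoding = 128 / 130    # 128b/130b line coding
payload = 128           # TLP maximum payload size in bytes
tlp_overhead = 28       # header + framing + LCRC + optional ECRC (assumed)

link_bytes_per_s = lanes * raw_rate * encoding / 8   # ≈ 3.94 GB/s raw
efficiency = payload / (payload + tlp_overhead)
print(link_bytes_per_s * efficiency / 1e9)           # ≈ 3.2 GB/s effective
```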
emvonline - Friday, June 1, 2018
That is a great article to look at for perspective! XPoint DIMMs are about 10x slower in latency compared to DRAM, which is fine since the CPU will not be writing directly to the DIMM. Latency is only one variable... DDR4 has higher latency than DDR3, but it's much faster overall.
tomatotree - Wednesday, May 30, 2018
From the previous page of that very same article, the P4800X can hit <10us latency in many scenarios -- and that still includes all of the overhead of the PCIe link, the filesystem, and the OS call/storage driver. Low-to-mid single-digit microsecond latencies are not unthinkable when those factors are removed.
As for bandwidth, that could likely be scaled out just by increasing the number of channels on the controller, just like NAND drives do.
In short, we don't know enough about it yet to make any claims about what performance will be. A fixed percentage boost from the current product might make sense if it were just a gen 2 SSD, but this is a very different product, with a different architecture on a different bus.
jordanclock - Wednesday, May 30, 2018
Right, but for ballpark numbers, we're still looking at a very large gulf in throughput and latency between even the best-case scenario for these XPoint modules and DDR4 modules.
sor - Wednesday, May 30, 2018
If I were to start guessing, I'd go back to Intel's original slides on XPoint, not an SSD implementation. Those slides admit there's a gap, but you know, there's that whole persistency thing. It's still meant as storage, not RAM.
This honestly has me interested because it's the first time we can actually see what it's capable of in the best scenario (short of being on-die with the processor).
Dr. Swag - Wednesday, May 30, 2018
If you look at this slide from the liveblog, it's supposed to be 2-3x faster and have around 2-3x lower latency than a P4800X.
frenchy_2001 - Thursday, May 31, 2018
This slide is about their hypervisor, used with the Optane SSD...
Nothing about the Optane Persistent Memory.
CheapSushi - Wednesday, May 30, 2018
It's not replacing DRAM, you idiots. It's supplemental.