At the Intel Developer Forum this week in San Francisco, Intel is sharing a few more details about its plans for Optane SSDs using 3D XPoint memory.

The next milestone in 3D XPoint's journey to becoming a real product will be a cloud-based testbed for Optane SSDs. Intel will be giving enterprise customers free remote access to systems equipped with Optane SSDs so that they can benchmark how their software runs with 3D XPoint-based storage and optimize it to take better advantage of the faster storage. By offering cloud-based access before even sampling Optane SSDs, Intel can keep 3D XPoint out of the hands of its competitors longer and perhaps make better use of limited supply, while still enabling the software ecosystem to begin preparing for the revolution Intel is planning. However, this won't do much for customers who want to integrate and validate Optane SSDs with their existing hardware platforms and deployments.

The cloud-based Optane testbed will be available by the end of the year, suggesting that we might not be seeing any Optane SSDs in the wild this year. At the same time, the testbed would only be worth providing if its performance characteristics are going to be pretty close to those of the final Optane SSD products. Having announced the Optane testbed like this, Intel will probably be encouraging its partners to share their performance findings with the public, so we should at least get some semi-independent testing results in a few months' time.

In the meantime, Intel and ScaleMP will be demonstrating a use that Optane SSDs will be particularly well-suited for. ScaleMP's vSMP Foundation software family provides virtualization solutions for high performance computing applications. One of their specialities is providing VMs with far more virtual memory than the host system has DRAM, by transparently using NVMe SSDs—or even the DRAM and NVMe storage of other systems connected via InfiniBand—to cache what doesn't fit in local DRAM. The latency advantages of 3D XPoint will make Optane SSDs far better swap devices than any flash-based SSDs, and the benefits should still be apparent even when some of that 3D XPoint memory is at the far end of an InfiniBand link.
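vSMP Foundation itself is proprietary, but the underlying idea—transparently backing a working set larger than DRAM with fast NVMe storage—can be sketched with an ordinary memory-mapped file. The file name and size below are placeholders for illustration; a real deployment would point the backing file (or raw swap device) at an Optane SSD rather than a generic filesystem path.

```python
import mmap
import os

# Placeholder path; in practice this would live on an NVMe/Optane device.
BACKING_FILE = "swap_extension.bin"
SIZE = 16 * 1024 * 1024  # 16 MiB stand-in for a much larger backing store

# Create a sparse backing file of the desired size.
with open(BACKING_FILE, "wb") as f:
    f.truncate(SIZE)

# Map it into the address space: accesses go through the page cache, so hot
# pages stay in DRAM while cold pages can be evicted to the SSD, which is
# exactly where low-latency 3D XPoint helps.
with open(BACKING_FILE, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)
    mem[0:5] = b"hello"          # write through the mapping
    assert mem[0:5] == b"hello"  # read back through the same mapping
    mem.close()

os.remove(BACKING_FILE)
```

The point of the sketch is only that the application sees ordinary memory accesses; the OS (or, in ScaleMP's case, the hypervisor layer) decides which pages actually occupy DRAM and which are paged out to the SSD.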

ScaleMP and Intel have previously demonstrated that flash-based NVMe SSDs can be used as a cost-effective alternative to building a server with extreme amounts of DRAM, and with a performance penalty that can be acceptably small. With Optane SSDs that performance penalty should be significantly smaller, widening the range of applications that can make use of this strategy.  

Intel will also be demonstrating Optane SSDs used to provide read caching for cloud application or database servers running on Open Compute hardware platforms.

Comments

  • zodiacfml - Tuesday, August 16, 2016 - link

    No expert, but something is amiss here. I saw somewhere that Samsung plans to compete using traditional NAND but with higher parallelism and probably SLC mode.
    The advantages seem to be coming from the interface and avoiding storage protocols, as I've read with NAND on DIMMs.
  • K_Space - Tuesday, August 16, 2016 - link

    "ScaleMP and Intel have previously demonstrated that flash-based NVMe SSDs can be used as a cost-effective alternative to building a server with extreme amounts of DRAM".
    To highlight cost-effectiveness as a punch line in this statement is odd, since Optane will cost more than NVMe SSDs? Intel is positioning Optane as some sort of medium between flash-based SSDs and DRAM; however, I very much suspect that price-wise it will lie closer to DRAM pricing...
  • Billy Tallis - Tuesday, August 16, 2016 - link

    The cost effectiveness comes primarily from not having to buy a ton of DRAM in order to get sufficient performance. Not only should you be able to save a lot of money on the DIMMs themselves, but there's potential for a lot of platform cost savings by not buying a server that is designed to hold a TB of DRAM and instead just needing a full set of PCIe lanes. Yes, Optane won't be as cheap as flash SSDs on a per-capacity basis, but in terms of application performance it might come out ahead by allowing the use of even less DRAM or fewer machines, or by making NVMe virtual memory viable for a wider range of big data tasks where flash might not be fast enough.
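The trade-off described in that reply can be illustrated with a back-of-the-envelope calculation. All per-GB prices below are invented placeholders, not real 2016 figures, and the 128 GB DRAM cache size is likewise an arbitrary assumption; the shape of the saving, not the numbers, is the point.

```python
# Hypothetical per-GB prices (placeholders, not market data).
DRAM_PER_GB = 8.00    # $/GB, server DDR4
OPTANE_PER_GB = 2.00  # $/GB, assumed to land between flash and DRAM

def config_cost(dram_gb, ssd_gb):
    """Memory-subsystem cost for a box with the given DRAM and SSD capacity."""
    return dram_gb * DRAM_PER_GB + ssd_gb * OPTANE_PER_GB

# A 1 TB working set held either entirely in DRAM, or mostly on an
# Optane-backed swap device with a 128 GB DRAM cache in front of it.
all_dram = config_cost(1024, 0)
tiered = config_cost(128, 1024 - 128)

print(f"all-DRAM: ${all_dram:,.0f}")  # $8,192
print(f"tiered:   ${tiered:,.0f}")    # $2,816
```

On top of the component saving, the tiered configuration avoids paying for a platform engineered to hold a terabyte of DIMMs, which is the second cost lever the comment mentions.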
  • iwod - Wednesday, August 17, 2016 - link

    I still don't grasp where this is heading. It is assumed it will have an advantage compared to DRAM on large datasets, but we will soon have stacked DRAM as well, giving us potentially 4-32x DRAM capacity, so why not use DRAM then?
  • Dmcq - Thursday, August 18, 2016 - link

    For servers, the advantage is that it should allow large amounts of data to be stored cheaper than DRAM and faster than current SSDs or disk.

    For mobile, the saving is that devices would be able to go to sleep and wake up faster, and yet use less power by using less DRAM. Or it could just allow them to use a lot more storage for running programs.

    In the longer term it may allow large databases to be stored without having to worry about fitting data into blocks but more like in DRAM. This would help both servers and mobiles.
  • doggface - Saturday, August 20, 2016 - link

    It's simple. Some databases need to be stored in memory to maximize access speed. Endurance is not a problem because it is more about read latency than write endurance. Buying a TB of DRAM, i.e. 1000 GB of DRAM, is very, very expensive. XPoint has lower latency than NAND and relatively high endurance, so it should be possible to put your DB in XPoint (especially XPoint DIMMs) and keep relatively fast performance, while having either larger databases and/or lower costs.

    On the other hand... This will not make Crysis perform significantly better.
