The Intel Optane SSD DC P4800X (375GB) Review: Testing 3D XPoint Performance
by Billy Tallis on April 20, 2017 12:00 PM EST
Intel's new 3D XPoint non-volatile memory technology, which has been publicly on the roadmap for the past couple of years, is finally hitting the market as the storage medium for Intel's new flagship enterprise storage product. The Intel Optane SSD DC P4800X is a PCIe SSD using the standard NVMe protocol, but by using 3D XPoint memory instead of NAND flash it delivers both high throughput and far lower access latency than any other NVMe SSD.
The potential significance of 3D XPoint memory is immense. First publicly announced by Intel and Micron in 2015, 3D XPoint is a fundamentally different storage technology from the flash memory that dominates the market, and it is the first truly mass-market, high-density solid-state storage medium to arrive since NAND flash itself. It comes at a time when the NAND market is booming like never before, but also at a time when we know there is a definite end of the line for NAND: the ongoing transition to 3D NAND flash is only a temporary postponement of the fundamental scaling limitations of flash memory. Once NAND can no longer scale in density and cost-per-bit, it will fall to next-generation memory technologies, one of which will be 3D XPoint, to carry the industry forward. Many other new memory technologies may compete alongside flash and 3D XPoint in the coming years, but 3D XPoint is the one that is ready to go mainstream now.
In the near term, 3D XPoint is important because it offers a new set of performance tradeoffs entirely unlike NAND; tradeoffs that, for the right applications, can deliver performance far in excess of today's NAND products. By being able to read and write at the bit or word level - and not the 4K+ page level of NAND - 3D XPoint has the potential to deliver excellent performance across a wide range of workloads, but especially in minimally parallel workloads, which are common in the consumer and enterprise spaces.
The drawback is cost: owing to the realities of ramping a brand-new technology into production, 3D XPoint is more expensive per bit than NAND. It is also less dense, a deliberate simplification that eases initial production but further adds to the cost per gigabyte. For now it cannot replicate the sheer capacity and cost effectiveness that have made NAND storage so popular across all market segments, so this first generation of 3D XPoint products is aimed at specialty, high-margin markets: enterprise performance, consumer caching, and the like. Intel has promised that future products will add non-volatile DIMMs to the mix, and later on, if everything goes to plan, 3D XPoint could become a wholesale replacement for NAND flash, or at least a strong competitor.
The Intel Optane SSD DC P4800X
The new storage drive, and the focus of today's review, is the Intel Optane SSD DC P4800X. It uses a new NVMe controller that Intel developed specifically for 3D XPoint memory. Where Intel's enterprise NVMe SSDs like the P3700 use a controller with 18 channels to their flash memory, the Optane SSD's controller has only 7 channels. To reach at least parity in peak performance, each of those channels must therefore provide much higher throughput than on a flash SSD, which implies that each 3D XPoint memory die delivers far more performance than a die of flash memory.
The first capacity of the Optane SSD DC P4800X to ship, and the model we've tested here, offers a usable capacity of 375GB from a total of 28 3D XPoint memory dies (four per channel), for a raw capacity of 448GB. 3D XPoint memory has better endurance than NAND flash, but not enough to do away with wear leveling entirely. Because 3D XPoint can be accessed at a much finer granularity, it avoids most of the wear-leveling and write-amplification headaches that stem from flash pages and erase blocks being larger than the sector sizes a drive exposes. The drive still needs spare area, however: for error correction overhead, for the metadata that maps logical blocks to physical addresses, and for the replacement of bad sectors, just like an ordinary SSD.
As with most NVMe SSDs, the Optane SSD DC P4800X supports a configurable sector size. Out of the box it emulates 512B sectors for the sake of compatibility, but using the NVMe FORMAT command it can be switched to emulate 4kB sectors. The larger sector size reduces the amount of metadata the SSD controller has to juggle, so it usually allows for slightly higher performance. The NVMe FORMAT command is also the mechanism for triggering a secure erase of the entire drive, and for flash SSDs the format usually consists of little more than issuing block erase commands to the whole drive. 3D XPoint memory does not have large multi-megabyte erase blocks, so a low-level format of the Optane SSD needs to directly write to the entire drive, which takes about as long as filling it sequentially. Thus, while a 2.4TB flash SSD can perform a low-level format in just over 13 seconds, the 375GB Optane SSD DC P4800X takes six minutes and 47 seconds. This is long enough that unsuspecting software tools or SSD reviewers will give up and assume that the drive has locked up.
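For reference, both the sector-size switch and the secure erase described above are driven through the same NVMe FORMAT command; with the common nvme-cli tool the invocations look roughly like the following. This is a sketch shown as a dry run (each command is prefixed with `echo`) because the real commands destroy all data; the device path and the `--lbaf` index are assumptions that vary from system to system.

```shell
# Hypothetical device path - adjust for your system.
DEV=/dev/nvme0n1

# List the namespace's supported LBA formats (512B vs 4096B data sizes):
echo nvme id-ns "$DEV" --human-readable

# Reformat with 4kB sectors; the --lbaf index that maps to 4kB is
# drive-specific, so check the id-ns output first (1 is a common choice):
echo nvme format "$DEV" --lbaf=1

# Secure erase the whole drive (--ses=1 requests a user-data erase); on the
# P4800X expect this to take minutes rather than seconds, since 3D XPoint
# has no fast block-erase operation:
echo nvme format "$DEV" --ses=1
```

Removing the `echo` prefixes runs the commands for real; note that a format also discards the existing sector-size setting, so `--lbaf` should be specified explicitly every time.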
| Intel Optane SSD DC P4800X Specifications | 375 GB | 750 GB | 1.5 TB |
|---|---|---|---|
| Form Factor | PCIe HHHL or 2.5" 15mm U.2 | (same) | (same) |
| Interface | PCIe 3.0 x4 NVMe | (same) | (same) |
| Memory | 128Gb 20nm Intel 3D XPoint | (same) | (same) |
| Typical Latency (R/W) | <10µs | (same) | (same) |
| Random Read (4kB) IOPS (QD16) | 550,000 | TBA | TBA |
| Random Read 99.999% Latency (QD1) | 60µs | TBA | TBA |
| Random Read 99.999% Latency (QD16) | 150µs | TBA | TBA |
| Random Write (4kB) IOPS (QD16) | 500,000 | TBA | TBA |
| Random Write 99.999% Latency (QD1) | 100µs | TBA | TBA |
| Random Write 99.999% Latency (QD16) | 200µs | TBA | TBA |
| Mixed 70/30 (4kB) Random IOPS (QD16) | 500,000 | TBA | TBA |
| Warranty | 5 years (3 years during early limited release) | (same) | (same) |
| Release Date (HHHL) | March 19 | Q2 2017 | 2H 2017 |
| Release Date (U.2) | Q2 2017 | 2H 2017 | TBA |
So far, Intel has only started shipping the 375GB Optane SSD DC P4800X to select customers, and they have not released detailed specifications for the larger capacities that will ship later this year.
It is worth noting that the performance specifications for the P4800X, as provided in the product specification sheets, cover a different set of metrics than Intel usually reports for their enterprise SSDs. Sequential performance is not mentioned at all, but the product brief has quite a bit to say about latency: average latency for QD1 reads and writes, and 99.999th percentile latency for both reads and writes at QD1 and QD16. The fact that Intel is publishing a five-nines QoS metric at all suggests that they plan to set a new standard for performance consistency.
The throughput claims are also remarkable: half a million IOPS or more for reads, writes, and a 70/30 read/write mix. There are already drives on the market that can deliver more than 550k random read IOPS, but those SSDs are far larger than 375GB and they require very high queue depths to get there. There are even a few multi-TB drives that can beat 500k random write IOPS, but they can't sustain that performance indefinitely. The Optane SSD DC P4800X promises an unprecedented level of storage performance, both in absolute terms and relative to its capacity, so it will be interesting to see whether Intel can deliver on these claims.
The P4800X will not really occupy the same niche as the multi-TB monsters that offer comparable throughput. With limited capacity but the highest level of performance, this Optane SSD most closely fits the role of SLC NAND based SSDs. SLC has disappeared from the SSD market as virtually all customers preferred to sacrifice a little bit of performance to double their capacity by using MLC NAND. One of the last high-performance SLC SSDs was the Micron P320h, a PCIe SSD from 2012 that slightly pre-dated NVMe and used 34nm SLC NAND flash. Anyone still using a P320h for its consistent low latency performance will be very interested in the P4800X. Outside of that niche, the Optane SSD will obviously be desirable for its raw throughput, but the low capacity may be problematic for some use cases.
One of the unique and most notable performance advantages of the Optane SSD DC P4800X is that it does not require extremely high queue depths to reach full throughput. Enterprise customers have long had to design their systems around the fact that getting full performance out of the fastest PCIe SSDs requires loading them down with queue depths of 128 or higher, sometimes requiring applications to use dozens of threads for I/O. In the client space achieving such queue depths is outright impossible, and in the enterprise space it doesn't happen for free. The P4800X's high performance at low queue depths makes it a much easier drive to get great real-world performance out of.
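As a sketch of how this low-queue-depth behavior is typically probed, the snippet below generates a fio job file that runs a 4kB random-read load at several queue depths in sequence. The device path, runtime, and the particular depths are illustrative assumptions, not our actual test configuration.

```shell
# Generate a fio job file sweeping queue depth for 4kB random reads.
# 'stonewall' keeps each job from starting until the previous one ends.
# The filename is a hypothetical device path - adjust before use.
cat > qd-sweep.fio <<'EOF'
[global]
filename=/dev/nvme0n1
ioengine=libaio
direct=1
rw=randread
bs=4k
time_based
runtime=60

[qd1]
iodepth=1
stonewall

[qd4]
iodepth=4
stonewall

[qd16]
iodepth=16
stonewall
EOF

echo "wrote qd-sweep.fio"
```

Running it with `fio qd-sweep.fio` then reports IOPS and latency percentiles per job; on a drive like the P4800X the interesting result is how little IOPS grows between QD1 and QD16 compared to a flash SSD.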
Intel originally introduced 3D XPoint memory as having far higher write endurance than NAND flash, on the order of 1000x higher. The Optane SSD DC P4800X is rated for 30 drive writes per day (DWPD) over a five-year warranty, although the models shipping during this early limited-availability period carry only a three-year warranty rather than the five years Intel expects to offer on the full retail models. Intel says it is being extremely conservative with a new and unproven technology, but doing the math shows that 30 DWPD provides no endurance advantage over the most heavily over-provisioned flash-based enterprise SSDs. In terms of total petabytes written, the P4800X is rated for only four-fifths the endurance of the SLC-based Micron P320h. Even allowing that Intel's original comparisons may have been against lower-endurance contemporary MLC or TLC flash, this first generation of 3D XPoint memory appears less durable than originally planned. The headline figure of 30 DWPD softens that issue, but for Intel to match its original claims, the second- and third-generation parts will have to be a real step up, and we look forward to testing them.
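For concreteness, a DWPD rating converts to total bytes written with simple arithmetic; here is a quick back-of-the-envelope check of the five-year figure for the 375GB model, using decimal units:

```shell
# Convert a drive-writes-per-day rating into total terabytes written
# over the warranty period (decimal units: 1000 GB = 1 TB).
dwpd=30        # rated drive writes per day
cap_gb=375     # usable capacity in GB
years=5        # full warranty period
total_tb=$(( dwpd * cap_gb * 365 * years / 1000 ))
echo "Rated endurance: ${total_tb} TB written"   # about 20.5 PB
```

Roughly 20.5 PB over five years: a very large number in absolute terms, but, as noted above, in the same ballpark as the highest-endurance flash-based enterprise SSDs rather than orders of magnitude beyond them.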
The MSRP for the 375GB P4800X is $1520, though it will be quite some time before it can readily be ordered from major online retailers. At slightly more than $4/GB, the P4800X will be almost twice as expensive per GB as Intel's next most pricey SSD, the P3608 (which is really two drives in one plus a PCIe switch). Compared to Intel's fastest single SSD (the P3700), the P4800X will be more than three times as expensive per GB. In the broader SSD market, $4/GB is not completely unprecedented, but most companies selling drives in this price range don't even pretend to have a retail price.
For this review of the Intel Optane SSD DC P4800X, we will first take a deeper look at what 3D XPoint memory actually is, and then run the drive through our enterprise test suite to put Intel's performance claims to the test.
It is worth noting that there is no such thing as a general-purpose enterprise SSD. Enterprise storage workloads are far more varied than client workloads, and it is impossible to make general statements about whether random or sequential performance is more important, what mix of reads and writes to expect, or what queue depth is appropriate to test with. Real-world application benchmarks are difficult to construct and typically end up being far more narrowly applicable than we would hope. Our strategy for this review is to provide a very broad range of synthetic tests, with the knowledge that not all results will be relevant to all use cases; enterprise customers must know and understand their own workload. Since this is our first time testing anything with 3D XPoint memory, this review includes some new benchmarks that would probably not be applicable to a review of flash SSDs, making for some interesting numbers.
Comments
Ninhalem - Thursday, April 20, 2017
At last, this is the start of transitioning from hard drive/memory to just memory.
ATC9001 - Thursday, April 20, 2017
This is still significantly slower than RAM....maybe for some typical consumer workloads it can take over as an all-in-one storage solution, but for servers and power users, we'll still need RAM as we know it today...and the fastest "RAM" if you will is on-die L1 cache...which has physical limits to its speed and size based on the speed of light!
I can see SSDs going away depending on manufacturing costs, but so many computers are still shipping with spinning disks that I'd say it's well over a decade before we see SSDs become the replacement for all spinning-disk consumer products.
Intel is pricing this right between SSDs and RAM, which makes sense. I just hope this will help the industry start to drive down prices of SSDs!
DanNeely - Thursday, April 20, 2017
Estimates from about 2 years back had the cost/GB of SSDs undercutting that of HDDs in the early 2020s. AFAIK those were business-as-usual projections, but I wouldn't be surprised to see it happen a bit sooner, if HDD makers pull the plug on R&D for a generation that would otherwise be overtaken, once its sales projections fall below the minimums needed to justify the cost of bringing it to market with its useful lifespan cut significantly short.
Guspaz - Saturday, April 22, 2017
Hard drive storage cost has not changed significantly in at least half a decade, while SSD prices have continued to fall (albeit at a much slower rate than in the past). This bodes well for the crossover.
Santoval - Tuesday, June 6, 2017
Actually it has, unless you regard HDDs with double the density at the same price every 2 - 2.5 years as not an actual falling cost. $ per GB is what matters, and that is falling steadily for both HDDs and SSDs (although the latter have lately spiked in price due to the flash shortage).
bcronce - Thursday, April 20, 2017
The latency specs include PCIe and controller overhead. Get rid of those by dropping this memory in a DIMM slot and it'll be much faster. Still not as fast as current memory, but it's going to be getting close. Normal system memory is in the range of 0.5µs. 60µs is getting very close.
tuxRoller - Friday, April 21, 2017
They also include context switching, ISR (pretty board specific), and block layer abstraction overheads.
ddriver - Friday, April 21, 2017
PCIe latency is below 1µs. I don't see how subtracting less than 1 from 60 gets you anywhere near 0.5.
All in all, if you want the best value for your money and the best performance, that money is better spent on 128 gigs of ecc memory.
Sure, xpoint is non volatile, but so what? It is not like servers run on the grid and reboot every time the power flickers LOL. Servers have at the very least several minutes of backup power before they shut down, which is more than enough to flush memory.
Despite intel's BS PR claims, this thing is tremendously slower than RAM, meaning that if you use it for working memory, it will massacre your performance. Also, working memory is much more write intensive, so you are looking at your money investment crapping out potentially in a matter of months. Whereas RAM will be much, much faster and work for years.
4 fast NVMe SSDs will give you like 12 GB/s bandwidth, meaning that in the case of an imminent shutdown, you can flush and restore the entire content of those 128 gigs of RAM in like 10 seconds or less. A totally acceptable trade-off for tremendously better performance and endurance.
There is only one single, very narrow niche where this purchase could make sense. Database usage, for databases with frequent low queue access. This is an extremely rare and atypical application scenario, probably less than 1/1000 in server use. Which is why this review doesn't feature any actual real life workloads, because it is impossible to make this product look good in anything other than synthetic benches. Especially if used as working memory rather than storage.
IntelUser2000 - Friday, April 21, 2017
ddriver: Do you work for the memory industry? Or hold stock in them? You have a personal gripe with the company that goes beyond logic.
PCI Express latency is far higher than 1us. There are unavoidable costs of implementing a controller on the interface and there's also software related latency.
ddriver - Friday, April 21, 2017
I have a personal gripe with lying, which is what Intel has been doing ever since it announced hypetane. If you find having a problem with lying a problem with logic, I'd say logic ain't your strong point.
Lying is also what you do. PCIE latency is around 0.5 us. We are talking PHY here. Controller and software overhead affect equally every communication protocol.
Xpoint will see only minuscule latency improvements from moving to dram slots. Even if PCIE has about 10 times the latency of dram, we are still talking ns, while xpoint is far slower in the realm of us. And it ain't no dram either, so the actual latency improvement will be nowhere nearly the approx 450 us.
It *could* however see significant bandwidth improvements, as the dram interface is much wider, however that will require significantly increased level of parallelism and a controller that can handle it, and clearly, the current one cannot even saturate a pcie x4 link. More bandwidth could help mitigate the high latency by masking it through buffering, but it will still come nowhere near to replacing dram without a tremendous performance hit.