AnandTech Storage Bench - Heavy

Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops down to steady state. This test is far more representative of a power user's day-to-day usage, and is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here. This test is run twice, once on a freshly erased drive and once after filling the drive with sequential writes.

ATSB - Heavy (Data Rate)

When the Heavy test is run on an empty drive, the large SLC cache of the Crucial P1 enables it to deliver an average data rate that is competitive with most high-end NVMe SSDs. When the drive is full and the SLC cache's size is greatly reduced, the performance drops to well below that of a typical mainstream SATA SSD. This behavior is essentially the same as that shown by the Intel 660p.
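The empty-versus-full gap described above comes down to how much of a write burst the SLC cache can absorb. A toy model makes the mechanism clear; all numbers below (cache sizes, SLC and QLC write speeds) are hypothetical illustrations, not measured values for the P1.

```python
# Toy model (hypothetical numbers, not measured values) of how a shrinking
# SLC cache drags down average write throughput on a QLC drive.

def avg_write_speed(workload_gb, slc_cache_gb, slc_mb_s, qlc_mb_s):
    """Average MB/s for a write burst: the cached portion runs at SLC speed,
    the remainder goes straight to QLC at its (much lower) native speed."""
    cached = min(workload_gb, slc_cache_gb)
    direct = workload_gb - cached
    total_time = cached * 1024 / slc_mb_s + direct * 1024 / qlc_mb_s
    return workload_gb * 1024 / total_time

# Empty drive: a large dynamic cache absorbs the whole burst.
empty = avg_write_speed(workload_gb=50, slc_cache_gb=100, slc_mb_s=1700, qlc_mb_s=100)
# Full drive: the cache shrinks to a small static allocation.
full = avg_write_speed(workload_gb=50, slc_cache_gb=5, slc_mb_s=1700, qlc_mb_s=100)

print(round(empty), round(full))  # → 1700 110
```

Because total time is dominated by whatever portion misses the cache, even a modest overflow collapses the average, which is why a full QLC drive can land below mainstream SATA speeds.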

ATSB - Heavy (Average Latency)
ATSB - Heavy (99th Percentile Latency)

The empty-drive run of the Heavy test doesn't push the latency of the Crucial P1 up any higher than is typical for high-end NVMe drives. Things get more interesting on the full-drive test run, where the average latency from the P1 increases by a factor of 12 and the 99th percentile latency increases by a factor of 58. The average latency from the P1 is only slightly worse than the Intel 660p, but the 99th percentile score is three times that of the 660p.

ATSB - Heavy (Average Read Latency)
ATSB - Heavy (Average Write Latency)

The average read latency of the Crucial P1 is significantly affected by whether the test is run on a full or empty drive, but even the worse of the two scores is still clearly better than what the Crucial MX500 manages. On the write side of things, filling the drive has an almost catastrophic effect on latency, driving the average up by a factor of 17, an even more severe impact than the Intel 660p shows.

ATSB - Heavy (99th Percentile Read Latency)
ATSB - Heavy (99th Percentile Write Latency)

The 99th percentile read latency from the Crucial P1 is typical for high-end NVMe SSDs for both the full and empty drive test runs. The high overall 99th percentile latency is due entirely to the write portion, where filling the drive increases 99th percentile write latency by almost two orders of magnitude.
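This divergence between average and 99th percentile latency is worth making concrete: a small fraction of very slow operations can balloon the tail metric while barely moving the mean. The sketch below uses made-up latency values (not measurements from this review) to illustrate the effect.

```python
# Illustration (made-up latencies, in ms) of why the 99th percentile can blow
# up while the average stays modest: a small fraction of slow cache-flush
# stalls dominates the tail without moving the mean much.
import statistics

# 980 fast writes at 0.1 ms, plus 20 stalls (2%) at 100 ms each.
latencies = [0.1] * 980 + [100.0] * 20

mean = statistics.fmean(latencies)
p99 = sorted(latencies)[int(0.99 * len(latencies)) - 1]  # nearest-rank percentile

print(round(mean, 2), p99)  # → 2.1 100.0
```

With just 2% of operations stalling, the mean only reaches ~2 ms while the 99th percentile sits at the full 100 ms stall time, which mirrors how a full-drive QLC SSD can post an acceptable average alongside a terrible tail.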

ATSB - Heavy (Power)

The total energy consumption of the Crucial P1 during the Heavy test is significantly higher than for the Intel 660p, and the difference between the empty and full drive test runs is larger for the P1 than for any other drive.


  • Oxford Guy - Thursday, November 8, 2018 - link

    "That's what I want ... not another race to the bottom."

    That's what consumers want: value.

    That's not what companies want. They want the opposite. Their wish is to sell the least for the most.
    Reply
  • Mikewind Dale - Thursday, November 8, 2018 - link

    "[Companies] want the opposite. Their wish is to sell the least for the most."

    Not true. Companies want to maximize net revenue, i.e. total revenue minus cost.

    Depending on the elasticity of demand (i.e. price sensitivity), that might mean increasing quantity and decreasing price.

    A reduction in quantity and an increase in price will increase net revenue only if demand is elastic.

    But given the existence of HDDs, it makes sense that demand for SSDs is elastic, i.e. price-sensitive. These aren't captive consumers with zero choice.

    Of course, nothing stops a company from catering to BOTH markets, i.e. high performance AND low cost markets.
    Reply
  • Mikewind Dale - Thursday, November 8, 2018 - link

    Sic:

    "A reduction in quantity and an increase in price will increase net revenue only if demand is elastic."

    That should be "inelastic."
    Reply
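The corrected claim above (cutting quantity and raising price only helps when demand is inelastic) is easy to verify with a toy constant-elasticity demand curve. The parameters below are hypothetical and have nothing to do with actual SSD market data.

```python
# Sketch of the elasticity argument above, using a constant-elasticity
# demand curve Q = k * P**(-e). All parameters are hypothetical.

def revenue(price, k, elasticity):
    quantity = k * price ** (-elasticity)  # demand falls as price rises
    return price * quantity

# Inelastic demand (e < 1): a price hike raises revenue despite lower volume.
inelastic_low  = revenue(price=100, k=1000, elasticity=0.5)
inelastic_high = revenue(price=150, k=1000, elasticity=0.5)

# Elastic demand (e > 1): the same hike loses more volume than the higher
# price recovers, so revenue falls.
elastic_low  = revenue(price=100, k=1000, elasticity=2.0)
elastic_high = revenue(price=150, k=1000, elasticity=2.0)

print(inelastic_high > inelastic_low)  # → True  (revenue rises)
print(elastic_high > elastic_low)      # → False (revenue falls)
```

If SSD buyers really are price-sensitive because HDDs remain a substitute, this is the elastic case, where cutting prices (the "race to the bottom") is the revenue-maximizing move.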
  • limitedaccess - Thursday, November 8, 2018 - link

The transition to TLC drives was also shortly followed by the transition to 3D NAND, which moved from a smaller planar litho process back to a larger one. While smaller litho allowed more density, it also came with the trade-off of worse endurance and faster charge decay. So the transition to 3D NAND effectively offset the issues of MLC->TLC, which is where we are today. What's the equivalent for TLC->QLC?

    Low-litho planar TLC drives were the ones that were poorly received and performed worse in reality than they did in reviews, due to decay. And decay is the real issue here with QLC, since no reviewer tests for it (it isn't the same as poor write endurance). Is that file I don't regularly access going to maintain the same read speeds, or have massively higher latency to access due to the need for ECC to kick in?
    Reply
  • 0ldman79 - Monday, November 12, 2018 - link

    I may not be correct on the exact numbers, but I think the NAND lithography has stopped at 22nm as they were having issues retaining data on 14nm, just no real benefit going to a smaller lithography.

    They may tune that in a couple of years, but the only way I can see that working, with my rudimentary understanding of the system, is to keep everything the same size as the 22nm (gates, gaps, fences, chains, roads, whatever, it's too late/early for me to remember the correct terms), same gaps, only on a smaller process. They'd have no reduction in cost as they'd be using the same amount of each wafer, though they might have a reduction in power consumption.

    I'm eager to see how they address the problem but it really looks like QLC may be a dead end. Eventually we're going to hit walls where lithography can't improve and we're going to have to come at the problem (cpu speed, memory speeds, NAND speeds, etc) from an entirely different angle than what we've been doing. For what, 40 years, we've been doing major design changes every 5 years or so and just relying on lithography to improve clock speeds.

    I think that is about to cease entirely. They can probably go farther than what we're seeing but not economically.
    Reply
  • Lolimaster - Friday, November 9, 2018 - link

    You're not expecting a drive limited to 500MB/s to be as fast as a PCIe 4x SSD with full support for it...

    TLC vs MLC all comes down to endurance and degraded performance when the drive is full or the cache is exhausted.
    Reply
  • Lolimaster - Friday, November 9, 2018 - link

    Random performance seems to be the land of Optane and similar. Even the 16GB Optane M10 absolutely murders even the top-of-the-line NVMe Samsung MLC SSD.
    Reply
  • PaoDeTech - Thursday, November 8, 2018 - link

    Yes, price is still too high. But it will come down. I think that the conclusions fail to highlight the main strength of this SSD: top performance per watt. For portable devices, this is the key metric to consider. In this regard it is far ahead of any SATA SSD and almost all PCIe drives out there.
    Reply
  • Lolimaster - Friday, November 9, 2018 - link

    Exactly. QLC should stick to big multi-terabyte drives for the average user or HEDT.

    Like 4TB+.
    Reply
  • 0ldman79 - Monday, November 12, 2018 - link

    I think that's where they need to place QLC.

    Massive "read mostly" storage. xx-layer TLC for a performance drive, QLC for massive data storage, i.e. all of my Steam games installed on a 10-cent-per-gig "read mostly" drive while the OS and my general use is on a 22-cent-per-gig TLC.

    That's what they're trying to do with that SLC cache, but I think they need to push it a lot farther, throw a 500GB TLC cache on a 4 terabyte QLC drive. That might let it fit into the mainstream NVMe lineup.
    Reply
