AnandTech Storage Bench - Heavy

Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops to steady state. This test is far more representative of a power user's day-to-day usage, and is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here. This test is run twice: once on a freshly erased drive, and once after filling the drive with sequential writes.

ATSB - Heavy (Data Rate)

When the Heavy test is run on an empty drive, the large SLC cache of the Crucial P1 enables it to deliver an average data rate that is competitive with most high-end NVMe SSDs. When the drive is full and the SLC cache's size is greatly reduced, the performance drops to well below that of a typical mainstream SATA SSD. This behavior is essentially the same as that shown by the Intel 660p.

ATSB - Heavy (Average Latency)
ATSB - Heavy (99th Percentile Latency)

The empty-drive run of the Heavy test doesn't push the latency of the Crucial P1 up any higher than is typical for high-end NVMe drives. Things get more interesting on the full-drive test run, where the average latency from the P1 increases by a factor of 12 and the 99th percentile latency increases by a factor of 58. The average latency from the P1 is only slightly worse than the Intel 660p, but the 99th percentile score is three times that of the 660p.

ATSB - Heavy (Average Read Latency)
ATSB - Heavy (Average Write Latency)

The average read latency of the Crucial P1 is significantly affected by whether the test is run on a full or empty drive, but even the worse of the two scores is still clearly better than what the Crucial MX500 manages. On the write side of things, filling the drive has an almost catastrophic effect on latency, driving the average up by a factor of 17, an even more severe impact than the Intel 660p shows.

ATSB - Heavy (99th Percentile Read Latency)
ATSB - Heavy (99th Percentile Write Latency)

The 99th percentile read latency from the Crucial P1 is typical for high-end NVMe SSDs for both the full and empty drive test runs. The high overall 99th percentile latency is due entirely to the write portion, where filling the drive increases 99th percentile write latency by almost two orders of magnitude.

ATSB - Heavy (Power)

The total energy consumption of the Crucial P1 during the Heavy test is significantly higher than for the Intel 660p, and the difference between the empty and full drive test runs is larger for the P1 than for any other drive.

Comments

  • Flunk - Thursday, November 8, 2018 - link

    MSRP seems a little high, I recently picked up an HP EX920 1TB for $255 and that's a much faster drive. Perhaps the street price will be lower.
  • B3an - Thursday, November 8, 2018 - link

    That latency is APPALLING and the performance is below par. If this was dirt cheap it might be worth it to some people, but at that price it's a joke.
  • DigitalFreak - Thursday, November 8, 2018 - link

    At this rate, by the time they get to H(ex)LC you'll only be able to write 1GB per day to your drive or risk having it fail.
  • PeachNCream - Thursday, November 8, 2018 - link

    Please don't give them any ideas! The last thing we need is NAND that generously handles a few dozen P/E cycles before dying. We've already gone from millions of P/E cycles to a few hundred in the last 15 years and data retention has dropped from over a decade to under six months. Sure you can get a lot more capacity for the price, but NAND needs to be replaced with something more durable sooner rather than later. (And no, I'm not advocating for Optane either, just something that lasts longer and has room for density improvements - don't care what that something is.)
  • MrCommunistGen - Thursday, November 8, 2018 - link

    I was expecting the extra DRAM to provide a more meaningful advantage over the Intel 660p... I guess it makes sense that Intel left it off to save on BOM.
  • Ratman6161 - Thursday, November 8, 2018 - link

    This could be a very good standard desktop drive if 1) the price is right and 2) you can accept that the 1 TB drive is really only good for up to 900 GB. You would just partition the drive such that there is 100 GB free (or make sure you always just keep that much space free) so you always have the maximum SLC cache available. For the price to be right, it has to be lower. Taking the prices from the article, the 1 TB P1 is only $8 cheaper than a 970 EVO. Now if they could get the price down to the same territory as the current MX500 they might have something.
  • Billy Tallis - Thursday, November 8, 2018 - link

    Leaving 10% of the drive unpartitioned won't be enough to get the maximum size SLC cache, because 1GB of SLC cache requires 4GB of QLC to be used as SLC. However, 10% manual overprovisioning would definitely reduce the already small chances of overflowing the SLC cache.
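    The arithmetic behind that 4:1 relationship can be sketched as follows. This is a minimal illustration (the function name and the exact cache-sizing policy of the P1's controller are assumptions, not vendor specifications); it only applies the ratio stated above, that each gigabyte of SLC cache consumes four gigabytes of QLC capacity.

    ```python
    # Illustrative sketch: SLC cache capacity backed by spare QLC flash,
    # assuming (per the comment above) 1 GB of SLC cache consumes 4 GB of QLC.
    QLC_GB_PER_SLC_GB = 4

    def slc_cache_from_spare(spare_qlc_gb, ratio=QLC_GB_PER_SLC_GB):
        """Return the SLC cache capacity (GB) that a given amount of
        spare QLC capacity can back, at the assumed 4:1 ratio."""
        return spare_qlc_gb / ratio

    # Leaving 100 GB of a 1 TB drive unpartitioned backs only 25 GB of SLC:
    print(slc_cache_from_spare(100))  # 25.0
    ```

    So under this assumption, 10% manual overprovisioning on a 1 TB drive buys roughly 25 GB of guaranteed SLC cache, well short of the much larger cache the drive can offer when mostly empty.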
  • mczak - Thursday, November 8, 2018 - link

    On that note, wouldn't it actually make sense to use a MLC cache instead of a SLC cache for these SSDs using QLC flash (and by MLC of course I mean using 2 bits per cell)? I'd assume you should still be able to get very decent write speeds with that, and it would effectively only need half as much flash for the same cache size.
  • Billy Tallis - Thursday, November 8, 2018 - link

    Cache size isn't really a big enough problem for a 2bpc MLC write cache to be worthwhile. Using SLC for the write cache has several advantages: highest performance/lowest latency, single-pass reads and writes (important for Crucial's power loss immunity features), and your SLC cache can use flash blocks that are too worn out to still reliably store multiple bits per cell. A slower write cache with twice the capacity would only make sense if consumer workloads regularly overflowed the existing write cache. Almost all of the instances where our benchmarks overflow SLC caches are a consequence of our tests giving the drive less idle time than real-world usage, rather than being tests representing use cases where the cache would be expected to overflow even in the real world.
  • idri - Thursday, November 8, 2018 - link

    Why don't you guys include the Samsung 970 PRO 1TB in your charts for comparison? It's one of the most sought after SSDs on the market for HEDT systems and for sure it would be useful to have your tests results for this one too. Thanks.
