Burst IO Performance

Our burst IO tests operate at queue depth 1 and perform several short data transfers interspersed with idle time. The random read and write tests consist of 32 bursts of up to 64MB each. The sequential read and write tests use eight bursts of up to 128MB each. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
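
As a rough illustration of the shape of such a test (a minimal sketch, not the suite's actual implementation), a QD1 burst random read pass could look like the following. The target path, block size, and idle interval are illustrative assumptions, and a real test would bypass the OS page cache (e.g., O_DIRECT with aligned buffers), which is omitted here for brevity.

import os
import random
import time

PATH = "/dev/nvme0n1"    # hypothetical test target; any large file also works
BLOCK = 4096             # 4kB random reads
BURST_BYTES = 64 << 20   # up to 64MB per burst
BURSTS = 32              # the random read/write tests use 32 bursts
IDLE_SECONDS = 2.0       # illustrative idle time between bursts

fd = os.open(PATH, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)   # target size, for picking offsets

for burst in range(BURSTS):
    start = time.perf_counter()
    for _ in range(BURST_BYTES // BLOCK):
        # Queue depth 1: each read must complete before the next is issued.
        offset = random.randrange(size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)
    elapsed = time.perf_counter() - start
    print(f"burst {burst}: {BURST_BYTES / elapsed / 1e6:.1f} MB/s")
    time.sleep(IDLE_SECONDS)          # idle gap between bursts

os.close(fd)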

[Charts: QD1 burst IO performance for random read, random write, sequential read, and sequential write]

For quite a while, NVMe SSDs with Silicon Motion controllers have delivered some of the best QD1 burst random read scores, and the Intel SSD 670p pushes this even further when the test is only hitting the SLC cache. When testing an 80% full drive, the burst random read performance is faster than most other QLC drives but slower than any good TLC drive.

For QD1 random writes, the 670p is actually slightly slower than the 660p when testing a mostly-full drive, though it is again competitive with higher-end TLC drives when writing to the SLC cache. For both sequential reads and sequential writes, the 670p offers very good QD1 throughput for a PCIe 3.0 drive and is much improved over the 660p, which is seriously bottlenecked by its low-end controller.

Sustained IO Performance

Our sustained IO tests exercise a range of queue depths and transfer more data than the burst IO tests, but still have limits to keep the duration somewhat realistic. The primary scores we report are focused on the low queue depths that make up the bulk of consumer storage workloads. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
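
A queue depth sweep can be approximated in a sketch by running several synchronous reader threads in parallel, with the thread count standing in for the queue depth (again illustrative, not the suite's actual tooling; os.pread releases the GIL during the syscall, so the reads genuinely overlap):

import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH = "/dev/nvme0n1"   # hypothetical test target
BLOCK = 4096            # 4kB random reads
IOS_PER_WORKER = 4096   # illustrative amount of work per worker

def worker(fd, size):
    # Each worker issues synchronous random reads back-to-back.
    for _ in range(IOS_PER_WORKER):
        offset = random.randrange(size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)

fd = os.open(PATH, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)

for qd in (1, 2, 4, 8, 16, 32, 64, 128):
    start = time.perf_counter()
    # N workers doing synchronous IO approximate queue depth N.
    with ThreadPoolExecutor(max_workers=qd) as pool:
        for _ in range(qd):
            pool.submit(worker, fd, size)
    elapsed = time.perf_counter() - start
    print(f"QD{qd}: {qd * IOS_PER_WORKER * BLOCK / elapsed / 1e6:.1f} MB/s")

os.close(fd)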

[Charts: sustained IO throughput and power efficiency for random read, random write, sequential read, and sequential write]

As with the burst IO scores, the longer sustained IO tests show the Intel 670p doing very well with sequential reads or writes: the performance doesn't betray the fact that it's using QLC NAND, and the power efficiency is typical of a last-generation controller. For random reads or writes, the performance at low queue depths is similarly great when testing the SLC cache, but testing across an 80% full drive knocks performance down to typical entry-level NVMe and mainstream SATA territory. Random writes in particular are disappointing on the mostly-full drive: it's slower than the 660p and the Phison E12-based Corsair MP400, though still several times faster than the DRAMless Mushkin Helix-L.

[Charts: performance vs. queue depth for random read, random write, sequential read, and sequential write]

The Intel 670p is fairly well-behaved through the sustained IO tests as the queue depth ramps up. Random reads saturate around QD32, random writes around QD8, and sequential transfers at QD2. Performance is very consistent after the drive reaches its full speed; the only big drop comes at the very end of the sequential write test on a mostly-full drive, when the SLC cache finally runs out while testing at QD128. This is pretty much never going to happen during ordinary consumer workloads.

Random Read Latency

This test illustrates how drives with higher throughput don't always offer better IO latency and Quality of Service (QoS), and that latency often gets much worse when a drive is pushed to its limits. This test is more intense than real-world consumer workloads and the results can be a bit noisy, but large differences that show up clearly on a log scale plot are meaningful. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
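
Numbers like these come from recording the completion time of every IO and then examining the tail of the distribution rather than just the mean. A minimal sketch of that bookkeeping (illustrative parameters; page cache effects again ignored):

import os
import random
import statistics
import time

PATH = "/dev/nvme0n1"   # hypothetical test target
BLOCK = 4096            # 4kB random reads
SAMPLES = 100_000       # illustrative sample count

fd = os.open(PATH, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)

latencies_us = []
for _ in range(SAMPLES):
    offset = random.randrange(size // BLOCK) * BLOCK
    start = time.perf_counter()
    os.pread(fd, BLOCK, offset)
    latencies_us.append((time.perf_counter() - start) * 1e6)
os.close(fd)

latencies_us.sort()
mean = statistics.fmean(latencies_us)
p99 = latencies_us[int(0.99 * len(latencies_us))]    # 99th percentile
p999 = latencies_us[int(0.999 * len(latencies_us))]  # 99.9th percentile
print(f"mean {mean:.0f} us, 99% {p99:.0f} us, 99.9% {p999:.0f} us")

A large gap between the mean and the 99.9th percentile is exactly the kind of QoS degradation this test is looking for.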

Taking a closer look at random read throughput and latency, the Intel 670p fares better than most of the other QLC drives, save for the 8TB Sabrent Rocket Q. The 670p has slightly worse performance than the DRAMless TLC Mushkin Helix-L. The throughput achieved by the 670p is similar to mainstream TLC SATA drives, but the latency is considerably higher: the SATA drives are bottlenecked by the SATA link itself, while the 670p's bottleneck is on the NAND side, which causes latency to spike as the drive is pushed to its limit.

Comments

  • superjim - Monday, March 1, 2021 - link

    The 1TB 665p has been going for $80-90 since last November. The increased performance of the 670 is nowhere near the relative price increase even with the current chip shortages. SSD prices have stagnated for nearly 2 years now. I bought a 2TB Sabrent Rocket for $220 back in July of 2019.
  • XacTactX - Monday, March 1, 2021 - link

    I agree with you, and the way I see it, QLC is supposed to be 2x denser than TLC, so manufacturers should be able to offer a significant discount for a QLC drive instead of a TLC drive. When Intel is selling the 665p for $80-90 it is reasonable for a person to buy QLC, but at $150 it's kinda crazy. At $150 I would recommend the Phison E12 or Samsung 970 Evo / Evo Plus; they have more consistent performance and higher write endurance than the 670p.
  • cyrusfox - Monday, March 1, 2021 - link

    They are going from 3 bits per cell to 4 bits per cell. That is not double the density, only 1/3 denser.

    That said, I would expect them to be priced competitively with the 660p. I feel like the 665p was on clearance; I also picked up a 1TB model for $90.

    Eventually they will be priced at whatever the market will bear, I guess. I was able to pick up a lot of 16GB Optane drives for $8-10 recently. I remember when they launched at $40 new...
  • XacTactX - Monday, March 1, 2021 - link

    Thanks for the information. This whole time I was under the impression that MLC has 4 bits per cell, TLC has 8 bits, and QLC has 16 bits; I thought the number of voltage states per cell was the same as the number of bits per cell. It turns out that MLC is 2 bpc, TLC is 3 bpc, and QLC is 4 bpc. I've been reading about SSDs since 2010, only took me 11 years to figure it out :P

    Yeah, I want to try Intel Optane as well. I noticed that the 16GB version is super cheap; I think it's because OEMs were buying them and now they are liquidating their supply. I want the 32GB version, but the price is too high: I can't justify $70 for a 32GB Optane drive.
  • Zizy - Monday, March 1, 2021 - link

    Nah, SLC needs to distinguish 2 voltage levels for 1 bit, MLC needs 4 levels for 2 bits, TLC needs 8 for 3 bits, and so on (the arithmetic is worked out after the thread). Density improvements shrink with each step while the complexity doubles, which is why each step took longer to make sense and succeed on the market. We are still in the TLC-to-QLC transition, and it seems it will take a while longer before that is done. Especially if such overpriced QLC products keep getting launched; you can get a pretty good TLC drive for the money.
  • kpb321 - Monday, March 1, 2021 - link

    Time will tell, but so far QLC hasn't really provided a benefit for consumers in most cases, except possibly at the largest capacities. If you remember back to planar TLC, it was not very popular and didn't work too well either. Going to 3D flash manufacturing, and subsequently back to much larger lithography as part of that process, is what really made TLC usable and popular. I don't see a similar transition coming to help out QLC, so we will have to see if it can slowly improve enough to make it worthwhile for the typical consumer.
  • ichaya - Tuesday, March 2, 2021 - link

    The 960 Evo did pretty well for TLC on PCIe 3.0, and a QLC drive that could do PCIe 4.0 speeds would do as well IMO. This endurance is about where the 960 Evo was... 370TBW vs 400TBW for 1TB. Just get those speeds up, and it would easily be worth the asking price or more. Maybe in another generation, or (I hope) not two.
  • ksec - Monday, March 1, 2021 - link

    Yes. I don't mind QLC or even OLC. I want cheaper and larger SSDs.
  • meacupla - Monday, March 1, 2021 - link

    So Intel's CPUs are a dud,
    then their SSDs turned into duds.

    If Intel somehow screws up their network and WiFi chips, that will be something to see.
  • Slash3 - Tuesday, March 2, 2021 - link

    *cough*

    https://www.pugetsystems.com/labs/support-hardware...
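
To make the bits-per-cell arithmetic from the thread above concrete (a worked summary, not part of the article): n bits per cell require 2^n distinguishable voltage states, so each added bit contributes a shrinking capacity gain while doubling the number of states the controller must resolve.

% n bits per cell require 2^n distinguishable voltage states:
\[ \text{states}(n) = 2^{n}: \quad \text{SLC} = 2,\ \text{MLC} = 4,\ \text{TLC} = 8,\ \text{QLC} = 16 \]
% The capacity gain per step shrinks while the state count doubles:
\[ \text{TLC} \to \text{QLC}: \quad \frac{4 - 3}{3} = \frac{1}{3} \approx 33\%\ \text{more bits per cell}, \qquad 8 \to 16\ \text{states} \]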
