Mixed IO Performance

For details on our mixed IO tests, please see the overview of our 2021 Consumer SSD Benchmark Suite.

[Charts: Mixed IO Performance; Mixed Random IO Performance Efficiency; Mixed Sequential IO Performance Efficiency]

The mixed random IO test is still a significant weakness for the Intel SSD 670p; it's clearly faster than the 660p, but still far slower than either of the Phison E12-based QLC SSDs shown here (Corsair MP400, Sabrent Rocket Q). Power efficiency is consequently also poor, and the 670p falls behind even the slower Samsung 870 QVO; at least when Samsung's SATA QLC drive is being so slow, it's not using much power.

The mixed sequential IO test is a very different story: the 670p's overall performance is competitive with mainstream TLC SSDs, and even slightly higher than the HP EX950 with the SM2262EN controller. Power efficiency is also decent in this case.

[Charts: Mixed Random IO; Mixed Sequential IO]

The Intel 670p's performance across the mixed random IO test isn't quite as steady as the 660p's, but there's still not much variation and only a slight overall downward trend in performance as the workload shifts to be more write-heavy. On the mixed sequential IO test the 670p shows a few drops where SLC cache space apparently started running low, but through most of the test the 670p maintains higher throughput than the 660p could deliver for any workload, even under ideal conditions.


Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.
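As a rough illustration of why idle draw dominates, consider the duty-cycle-weighted average power of a lightly used drive. All the numbers below are round, made-up figures for illustration, not measurements from this review:

```python
# Average SSD power for a light client workload that keeps the drive busy
# only a small fraction of the time. All values here are round illustrative
# numbers, not measurements from this review.

def average_power_w(active_w, idle_w, active_fraction):
    """Duty-cycle-weighted average power draw in watts."""
    return active_w * active_fraction + idle_w * (1 - active_fraction)

# At a 1% active duty cycle, idle draw dominates: compare a drive that
# idles at 25 mW with one that never drops below 1 W.
print(average_power_w(3.0, 0.025, 0.01))  # ~0.055 W average
print(average_power_w(3.0, 1.0, 0.01))    # ~1.02 W average
```

Even though both hypothetical drives burn 3 W while active, the one with poor idle power management averages nearly twenty times the power draw under this light workload.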

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Intel SSD 670p 2TB
NVMe Power and Thermal Management Features
Controller: Silicon Motion SM2265G
Firmware: 002C

NVMe Version | Feature                                       | Status
1.0          | Number of operational (active) power states   | 3
1.1          | Number of non-operational (idle) power states | 2
1.1          | Autonomous Power State Transition (APST)      | Supported
1.2          | Warning Temperature                           | 77 °C
1.2          | Critical Temperature                          | 80 °C
1.3          | Host Controlled Thermal Management            | Supported
1.3          | Non-Operational Power State Permissive Mode   | Supported

The Intel 670p supports the usual range of power and thermal management features. The only oddity is the exit latency listed for waking up from the deepest idle power state: 11.999 milliseconds sounds like the drive is trying to stay under some arbitrary threshold. This might be an attempt to work around the behavior of some operating system's NVMe driver and its default latency tolerance settings.

Intel SSD 670p 2TB
NVMe Power States
Controller: Silicon Motion SM2265
Firmware: 002C

Power State | Maximum Power | Active/Idle | Entry Latency | Exit Latency
PS 0        | 5.5 W         | Active      | -             | -
PS 1        | 3.6 W         | Active      | -             | -
PS 2        | 2.6 W         | Active      | -             | -
PS 3        | 25 mW         | Idle        | 5 ms          | 5 ms
PS 4        | 4 mW          | Idle        | 3 ms          | 11.999 ms (?!)

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
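Those advertised figures also show why the deepest state only pays off for longer idle periods. A rough energy break-even calculation, using PS4's entry and exit latencies from the table above and a made-up transition power of 0.5 W (the real figure isn't published, so this is purely illustrative):

```python
# Back-of-the-envelope estimate of when dropping from PS3 (25 mW) to
# PS4 (4 mW, 3 ms entry + 11.999 ms exit) actually saves energy.
# Assumption: the drive draws roughly 0.5 W while transitioning; that
# figure is not published, so treat the result as illustrative only.

def break_even_idle_s(p_shallow_w, p_deep_w, transition_s, p_transition_w):
    """Idle duration beyond which the deeper state wins.

    Solves p_shallow * t = p_transition * T + p_deep * (t - T) for t,
    where T is the total transition (entry + exit) time.
    """
    extra_joules = (p_transition_w - p_deep_w) * transition_s
    return extra_joules / (p_shallow_w - p_deep_w)

transition = 3e-3 + 11.999e-3  # PS4 entry + exit latency, in seconds

t = break_even_idle_s(p_shallow_w=0.025, p_deep_w=0.004,
                      transition_s=transition, p_transition_w=0.5)
print(f"PS4 pays off after roughly {t:.2f} s of idle time")  # ~0.35 s
```

Under these assumptions, idle periods shorter than a few hundred milliseconds are better served by the shallower PS3 state, which is exactly the kind of tradeoff the APST timeouts encode.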

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice about which power states to use may differ between desktops and notebooks, and depending on which NVMe driver is in use. Additionally, there are multiple degrees of PCIe link power savings possible through Active State Power Management (ASPM).
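The OS-side policy amounts to picking the deepest non-operational state whose latency cost fits the host's tolerance. A minimal sketch, using the 670p's two idle states from the tables in this review; the tolerance values in the example calls are arbitrary illustrations, not any OS's real defaults:

```python
# Hypothetical sketch of an APST-style policy decision: pick the deepest
# (lowest-power) non-operational state whose entry + exit latency fits
# within the host's latency tolerance. The states are the 670p's idle
# states as advertised in this review.

IDLE_STATES = [
    # (name, power in watts, entry latency in ms, exit latency in ms)
    ("PS3", 0.025, 5.0, 5.0),
    ("PS4", 0.004, 3.0, 11.999),
]

def pick_idle_state(latency_tolerance_ms):
    """Return the lowest-power state whose total latency fits the budget."""
    usable = [s for s in IDLE_STATES
              if s[2] + s[3] <= latency_tolerance_ms]
    return min(usable, key=lambda s: s[1]) if usable else None

print(pick_idle_state(100.0))  # ample budget: PS4 (4 mW) is usable
print(pick_idle_state(12.0))   # PS4's ~15 ms round trip no longer fits: PS3
print(pick_idle_state(5.0))    # nothing fits: stay active (None)
```

Seen through this lens, the 670p's suspicious 11.999 ms exit latency looks like an attempt to squeeze PS4 in under a 12 ms latency budget that some host policy might impose.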

We report three idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. Our Desktop Idle number represents what can usually be expected from a desktop system that is configured to enable SATA link power management, PCIe ASPM and NVMe APST, but where the lowest PCIe L1.2 link power states are not available. The Laptop Idle number represents the maximum power savings possible with all the NVMe and PCIe power management features in use—usually the default for a battery-powered system but rarely achievable on a desktop even after changing BIOS and OS settings. Since we don't have a way to enable SATA DevSleep on any of our testbeds, SATA drives are omitted from the Laptop Idle charts.

[Charts: Idle Power Consumption - No PM; Idle Power Consumption - Desktop; Idle Power Consumption - Laptop]

The active idle power of the 670p is clearly lower than the 660p with the SM2263 controller, but not quite as low as the Mushkin Helix-L with the DRAMless SM2263XT. So Silicon Motion has made some power optimizations with the SM2265, but it's still not in the same league as the controller SK hynix built for the Gold P31.

The desktop and laptop idle states we test have appropriately low power draw. However, when activating the laptop idle configuration (PCIe ASPM L1.2) the 670p would crash and not wake up from idle. This kind of bug is not unheard-of (especially with other Silicon Motion NVMe controllers), and the Linux NVMe driver has a list of drives that can't be trusted to work properly with their deepest idle power state enabled. Sometimes this can be narrowed down to a particular host system configuration or specific SSD firmware versions. But until now, this particular machine hasn't run into crashes with idle power modes on any of the drives we've tested, which is why we've trusted it as a good proxy for the power management behavior that can be expected from a properly-configured laptop. It's disappointing to see this problem show up once again with a new controller where the host system is almost certainly not at fault. Hopefully Intel can quickly fix this with a new firmware version.

[Chart: Idle Wake-Up Latency]

Comments

  • superjim - Monday, March 1, 2021 - link

    The 1TB 665p has been going for $80-90 since last November. The increased performance of the 670 is nowhere near the relative price increase even with the current chip shortages. SSD prices have stagnated for nearly 2 years now. I bought a 2TB Sabrent Rocket for $220 back in July of 2019.
  • XacTactX - Monday, March 1, 2021 - link

    I agree with you, and the way I see it, QLC is supposed to be 2x denser than TLC, so manufacturers should be able to offer a significant discount for a QLC drive instead of a TLC drive. When Intel is selling the 665p for $80-90 it is reasonable for a person to buy QLC, but for $150 it's kinda crazy. At $150 I would recommend the Phison E12 or Samsung 970 Evo / Evo Plus, they have more consistent performance and higher write endurance than the 670p.
  • cyrusfox - Monday, March 1, 2021 - link

    They are going from 3 bits per cell to 4 bits per cell. That is not double the density but only 1/3 denser.

    That said, I would expect them to be priced competitively, the way the 660p is priced. I feel like the 665p was on clearance, and I also picked up a 1TB model for $90.

    Eventually they will be priced retail to whatever the market will bear I guess. I was able to pick up a lot of optane 16GB drives for $8-10 recently. I remember when they launched at $40 new...
  • XacTactX - Monday, March 1, 2021 - link

    Thanks for the information, this whole time I was under the impression that MLC has 4 bits per cell, TLC has 8 bits, and QLC has 16 bits. I thought the number of phases per cell is the same as bits per cell. It turns out that MLC is 2 bpc, TLC is 3 bpc, and QLC is 4 bpc. I've been reading about SSDs since 2010, only took me 11 years to figure it out :P

    Yeah I want to try Intel Optane as well, I noticed that the 16 GB version is super cheap, I think it's because OEMs were buying them and now they are liquidating their supply. I want the 32 GB version but the pricing is too expensive, I can't justify $70 for a 32 GB Optane drive
  • Zizy - Monday, March 1, 2021 - link

    Nah, SLC needs to distinguish 2 voltage levels for 1 bit, MLC needs to distinguish 4 voltage levels to read those 2 bits, TLC needs to distinguish 8 for 3 bits etc. Density improvements are going up at a slow pace, while complexity doubles every next step. That's why every next step took longer before it made sense and became successful on the market. We are still at TLC->QLC transition and it seems it will take a while longer before we are close to done. Especially if such overpriced QLC products get launched - you can get a pretty good TLC for the money.
  • kpb321 - Monday, March 1, 2021 - link

    Time will tell but so far QLC hasn't really provided a benefit in most cases for consumers except at possibly the largest size. If you remember back to the planar TLC it was not very popular and didn't work too well either. Going to the 3d flash manufacturing and subsequently going back to much larger lithography as part of that process is what really made TLC usable and popular. I don't see a similar transition coming to help out QLC so we will have to see if it can slowly improve enough to make it worth while for the typical consumer.
  • ichaya - Tuesday, March 2, 2021 - link

    The 960EVO did pretty well for TLC and PCIE3, and a QLC drive that can do PCIE4 speeds would do as well IMO. This endurance is about where the 960EVO was... 370TBW vs 400TBW for 1TB. Just get those speeds up, and it would easily be worth the asking price or more. Maybe in another generation or I hope not two.
  • ksec - Monday, March 1, 2021 - link

    Yes. I don't mind QLC or even OLC. I want cheaper, larger SSDs.
  • meacupla - Monday, March 1, 2021 - link

    So Intel's CPUs are a dud
    Then their SSDs turned into dud

    If intel somehow screws up their network and wifi chips, that will be something to see.
  • Slash3 - Tuesday, March 2, 2021 - link

    *cough*

    https://www.pugetsystems.com/labs/support-hardware...
