Original Link: https://www.anandtech.com/show/6935/seagate-600-ssd-review



If you had asked me back in 2008 who I thought would be leading the SSD industry in 2013 I would’ve said Intel, Western Digital and Seagate. Intel because of its commanding early lead in the market, and WD/Seagate because as the leaders in hard drives they couldn’t afford to be absent from the long term transition to SSDs. The days of having to explain why SSDs are better than mechanical drives are thankfully well behind us, now it’s just a question of predicting the inevitable. I figured that the hard drive vendors would see the same future and quickly try to establish a foothold in the SSD market. It turns out I’m really bad at predicting things.

Like most converging markets (in this case, storage + NAND), the SSD industry hasn’t been dominated by players in the market that came before it. Instead, SSDs attracted newcomers to the client/enterprise storage business. Not unlike DRAM, owning a NAND foundry has its benefits when building a profitable SSD business. It’s no surprise that Intel, Micron and Samsung are some of the more frequently discussed SSD vendors - all of them own (either partially or fully) NAND foundries.

Whether or not ownership in a foundry will be a requirement for building a sustainable SSD business is still unclear, but until that question gets answered there’s room for everyone to play in the quickly growing SSD market. This year, Seagate re-enters the SSD market with a serious portfolio. Today it not only announces two 2.5” SATA drives, including its first client-focused SSD, but also a 2.5” SAS product and a PCIe SSD solution.

The products that we’re focusing on today are the two 2.5” SATA drives: Seagate’s 600 and 600 Pro.

Architecture

The 600 and 600 Pro are both based on Link A Media Devices' LM87800 controller. The LAMD controller is the same as the one used in Corsair’s Neutron and Neutron GTX. Previous Seagate SSDs actually used a two-chip solution, with Seagate’s custom silicon controlling the host interface while Link A Media provided a NAND interface chip. The LM87800 is apparently a single-chip integration of the earlier Seagate designs. The controller uses the drive chassis for cooling, with a thermal pad acting as an interface layer.

The firmware on the 600/600 Pro is unique to Seagate. It’s unclear whether or not Seagate has access to firmware source, but the solution is definitely custom (as you’ll see from the performance/consistency results). The LM87800 doesn't use any data de-duplication/compression and allegedly uses a DSP-like architecture.

The controller is paired with two DDR2-800 devices, with roughly 1MB of DRAM per GB of NAND storage. The high ratio of DRAM to NAND is common in drives with flat indirection tables, as we’ve come to see. It’s a more costly (and potentially more power hungry) design decision, but one that can have tangible benefits as far as performance consistency is concerned.

Seagate 600 NAND/DRAM Configuration
Capacity | # of NAND Packages | # of Die per Package | Total NAND On-board | DRAM
480GB | 8 | 8 | 512GB | 512MB
240GB | 8 | 4 | 256GB | 256MB
120GB | 8 | 2 | 128GB | 128MB
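
As a rough sanity check on that 1MB-per-GB ratio, here's the back-of-the-envelope math for a flat logical-to-physical map. The 4KB logical page and 4-byte map entry sizes below are typical assumptions for this kind of design, not confirmed LM87800 details:

```python
# Why a flat indirection (LBA -> NAND page) map needs roughly 1MB of DRAM per GB
# of NAND. Assumes 4KB logical pages and 4-byte map entries -- common values,
# not confirmed specifics of the LM87800.

def flat_map_dram_bytes(nand_bytes, page_size=4096, entry_size=4):
    """Size of a flat logical-to-physical map held entirely in DRAM."""
    entries = nand_bytes // page_size
    return entries * entry_size

for capacity_gb in (128, 256, 512):
    nand = capacity_gb * 1024**3
    dram_mb = flat_map_dram_bytes(nand) / 1024**2
    print(f"{capacity_gb}GB of NAND -> ~{dram_mb:.0f}MB of map data")
# 512GB of NAND works out to ~512MB, matching the DRAM paired with the 480GB drive.
```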

The LM87800 controller is a bit dated by modern standards, especially if we look at what is possible with Crucial’s M500. There’s no hardware encryption support and obviously no eDrive certification. Despite launching in 2013, the 600/600 Pro feature a controller that is distinctly 2012. Admittedly, that seems to be the case with most SSD makers these days. Everyone seems to be waiting for the transition to SATA Express before launching truly new controller designs.

Seagate has deals in place to secure NAND supply from both Samsung and Toshiba, although all of the 600 series will show up with 19nm Toshiba 2-bit-per-cell MLC NAND. The LM87800 controller features 8 NAND channels, and can access even more NAND die in parallel through interleaving.

 



The Seagate 600/600 Pro

Both the Seagate 600 and 600 Pro are 2.5” SATA drives. Their enclosures are completely screw-less, which makes getting in a bit of a pain but it’s not impossible. The 600 is available in 7mm and 5mm thicknesses; the latter is something we’ve only recently seen with Western Digital’s UltraSlim drive announcement. The 600 Pro is only available in a 7mm form factor.


Seagate 600 (left) vs. Seagate 600 Pro (right)

All 600/600 Pro designs that I’ve seen thus far use single-sided PCBs and 8 NAND devices. Seagate simply varies the number of NAND die per package to hit various capacity points.

Seagate 600

The Seagate 600 is available in 120GB, 240GB and 480GB capacities using 128GB, 256GB and 512GB of NAND, respectively. All of those drives have 8 NAND devices, and 2, 4 and 8 19nm NAND die per package, respectively.


480GB Seagate 600


480GB Seagate 600 (back)

The 600 Pro is available in the same capacities, but adds 100GB, 200GB and 400GB versions as well. The 100/200/400GB 600 Pros have 128/256/512GB of NAND, but are over provisioned to give the controller more spare area to work with. I like the idea of setting aside more spare area for the Pro drive, but the fact that not all 600 Pros are configured this way is bound to be confusing to customers.
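
To put the extra over provisioning in perspective, here's a quick sketch of the spare area math, comparing the advertised (decimal) capacities against the NAND totals (binary GiB). The (raw - user) / user convention used here is just one common way to express spare area, so treat the exact percentages as illustrative:

```python
# Rough spare area comparison between the standard and heavily over provisioned
# 600 Pro capacity points. User capacities are decimal GB, NAND totals are
# binary GiB; spare area is expressed as (raw - user) / user.

def spare_area_percent(user_gb, raw_gib):
    user = user_gb * 1000**3        # advertised capacity, decimal bytes
    raw = raw_gib * 1024**3         # NAND on board, binary bytes
    return (raw - user) / user * 100

for user_gb, raw_gib in ((480, 512), (400, 512), (240, 256), (200, 256), (120, 128), (100, 128)):
    print(f"{user_gb}GB on {raw_gib}GiB of NAND: ~{spare_area_percent(user_gb, raw_gib):.0f}% spare area")
```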

Seagate 600 Pro

Other than the availability of heavily over provisioned drives, the 600 Pro also separates itself from the client-focused Seagate 600 by including an array of capacitors for power loss protection. In the event of unexpected power loss, Seagate expects the 600 Pro will be able to commit all data received by the LM87800 controller to NAND.


400GB Seagate 600 Pro

The 600 carries a 3 year warranty and is rated for up to 40GB of writes per day throughout that warranty period (the 120GB model is rated for 20GB of writes per day). The 600 Pro uses better binned NAND and boasts higher endurance over the course of its longer 5 year warranty. As is typically the case with SSDs, endurance tends not to be an issue for client usage - in the enterprise, whether you can get by with the 600 or need the 600 Pro really depends on your workload.

Seagate isn’t announcing pricing other than to say that the 600/600 Pro will be priced in line with competing drives.



Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that work can result in higher peak performance at the expense of much lower worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as we run our steady state tests but long enough to give me a good look at drive behavior once all spare area fills up.
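
For readers who want to approximate this at home, the sketch below mimics the shape of the workload: 4KB writes of incompressible data at random aligned offsets, with 32 outstanding IOs approximated by a thread pool, against a scratch file on a POSIX system. It's a simplification of the methodology (the actual testing used Iometer against the raw drive), and the file name, span and run length are placeholders:

```python
# Simplified stand-in for the 4KB random write, QD32 workload. Not the actual
# Iometer setup -- a rough approximation against a scratch file. POSIX only
# (uses os.pwrite); point TARGET at something you can afford to overwrite.

import os, random, time
from concurrent.futures import ThreadPoolExecutor

TARGET = "testfile.bin"     # hypothetical scratch file
SPAN = 8 * 1024**3          # region covered by random writes (kept small here)
BLOCK = 4096
DURATION = 60               # seconds; the actual run was just over half an hour

def worker(deadline):
    buf = os.urandom(BLOCK)                               # incompressible data
    fd = os.open(TARGET, os.O_WRONLY)
    done = 0
    while time.time() < deadline:
        offset = random.randrange(SPAN // BLOCK) * BLOCK  # 4KB-aligned random offset
        os.pwrite(fd, buf, offset)
        done += 1
    os.close(fd)
    return done

# Reserve the span up front (the actual test sequentially filled every LBA first).
with open(TARGET, "wb") as f:
    f.truncate(SPAN)

deadline = time.time() + DURATION
with ThreadPoolExecutor(max_workers=32) as pool:          # rough stand-in for QD32
    totals = list(pool.map(worker, [deadline] * 32))
print(f"~{sum(totals) / DURATION:.0f} average IOPS")
```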

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 50K IOPS for better visualization of differences between drives.
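
Turning a per-second log into these scatter plots is straightforward. The sketch below assumes a plain two-column "seconds IOPS" text file (adjust the parsing to whatever your logging tool produces) and recreates both the log-scale and capped linear views:

```python
# Minimal sketch of the plots below: per-second IOPS scattered against time,
# once on a log scale and once on a linear scale capped at 50K IOPS. The
# two-column log file format is an assumption for illustration.

import matplotlib.pyplot as plt

times, iops = [], []
with open("iops_log.txt") as f:          # hypothetical log file
    for line in f:
        t, v = line.split()
        times.append(float(t))
        iops.append(float(v))

fig, (log_ax, lin_ax) = plt.subplots(1, 2, figsize=(12, 4))
log_ax.scatter(times, iops, s=4)
log_ax.set_yscale("log")
log_ax.set_title("4KB random write IOPS (log scale)")
lin_ax.scatter(times, iops, s=4)
lin_ax.set_ylim(0, 50000)
lin_ax.set_title("4KB random write IOPS (linear, 0-50K)")
for ax in (log_ax, lin_ax):
    ax.set_xlabel("time (s)")
    ax.set_ylabel("IOPS")
plt.tight_layout()
plt.show()
```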

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews however, I did vary the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the advertised user capacity had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here but not all controllers may behave the same way.
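
If you want to script the sizing step, the arithmetic is trivial. The convention below (shrinking the partition by a fraction of the advertised capacity) is just one reasonable way to do it; it isn't tied to the exact spare area points used in the charts:

```python
# Helper arithmetic for the partitioning approach described above: size the
# partition so the unallocated remainder acts as extra spare area. Numbers are
# illustrative, not the exact points used in the charts.

def partition_size_gb(user_capacity_gb, extra_spare_fraction):
    """Partition size that leaves a fraction of the advertised capacity unallocated."""
    return user_capacity_gb * (1 - extra_spare_fraction)

# e.g. leaving ~25% of a 480GB Seagate 600 unpartitioned as extra spare area:
print(f"create a ~{partition_size_gb(480, 0.25):.0f}GB partition, leave the rest untouched")
```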

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive chart: IOPS vs. time over the full 2000 second run - Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, Seagate 600 480GB, Seagate 600 Pro 400GB; Default / 25% Spare Area]

Now this is a bit surprising. I expected a tightly clustered group of IOs like we got with the LAMD based Corsair Neutron, but instead we see something entirely different. There's a clustering of IOs around the absolute minimum performance, but it looks like the controller is constantly striving for better performance. If there's any indication that Seagate's firmware is obviously different from what Corsair uses, this is it. If we look at the 400GB Seagate 600 Pro we get a good feel for what happens with further over provisioning. The 400GB Pro maintains consistently high performance for longer than the 480GB 600, and when it falls off the minimums are also higher as you'd expect.

[Interactive chart: IOPS vs. time at the start of steady state (t=1400s onward), log scale - same drives; Default / 25% Spare Area]

Zooming in, the Seagate 600 definitely doesn't look bad - it's far better than the Samsung or Crucial offerings, but still obviously short of Corsair's Neutron. I almost wonder if Seagate prioritized peak performance a bit here in order to be more competitive in most client benchmarks.

[Interactive chart: IOPS vs. time at the start of steady state, linear scale (0-50K IOPS) - same drives; Default / 25% Spare Area]

The situation here looks a lot worse than it actually is. The 600's performance isn't very consistent, but there's a clear floor at just above 5000 IOPS, which is quite respectable. Compared to the Crucial and Samsung drives, the 600/600 Pro offer much better performance consistency. I do wish that Seagate had managed to deliver even more consistent performance given that we know what the controller is capable of. For client usage I suspect this won't matter, but in random write heavy enterprise workloads with large RAID arrays it isn't desirable behavior.



AnandTech Storage Bench 2013 Preview - 'The Destroyer'

When I built the AnandTech Heavy and Light Storage Bench suites in 2011 I did so because we didn't have any good tools at the time that would begin to stress a drive's garbage collection routines. Once all blocks have a sufficient number of used pages, all further writes will inevitably trigger some sort of garbage collection/block recycling algorithm. Our Heavy 2011 test in particular was designed to do just this. By hitting the test SSD with a large enough and write intensive enough workload, we could ensure that some amount of GC would happen.

There were a couple of issues with our 2011 tests that I've been wanting to rectify however. First off, all of our 2011 tests were built using Windows 7 x64 pre-SP1, which meant there were potentially some 4K alignment issues that wouldn't exist had we built the trace on a system with SP1. This didn't really impact most SSDs but it proved to be a problem with some hard drives. Secondly, and more recently, I've shifted focus from simply triggering GC routines to really looking at worst case scenario performance after prolonged random IO. For years I'd felt the negative impacts of inconsistent IO performance with all SSDs, but until the S3700 showed up I didn't think to actually measure and visualize IO consistency. The problem with our IO consistency tests is that they are very focused on 4KB random writes at high queue depths and full LBA spans, not exactly a real world client usage model. The aspects of SSD architecture that those tests stress however are very important, and none of our existing tests were doing a good job of quantifying that.

I needed an updated heavy test, one that dealt with an even larger set of data and one that somehow incorporated IO consistency into its metrics. The new benchmark doesn't have a name, I've just been calling it The Destroyer (although AnandTech Storage Bench 2013 is likely a better fit for PR reasons).

Everything about this new test is bigger and better. The test platform moves to Windows 8 Pro x64. The workload is far more realistic. Just as before, this is an application trace based test - I record all IO requests made to a test system, then play them back on the drive I'm measuring and run statistical analysis on the drive's responses.

Imitating most modern benchmarks, I crafted the Destroyer out of a series of scenarios. For this benchmark I focused heavily on photo editing, gaming, virtualization, general productivity, video playback and application development. Rough descriptions of the various scenarios are in the table below:

AnandTech Storage Bench 2013 Preview - The Destroyer
Workload | Description | Applications Used
Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization | Run/manage VM, use general apps inside VM | VirtualBox
General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback | Copy and watch movies | Windows 8
Application Development | Compile projects, check out code, download code samples | Visual Studio 2012

While some tasks remained independent, many were stitched together (e.g. system backups would take place while other scenarios were taking place). The overall stats give some justification to what I've been calling this test internally:

AnandTech Storage Bench 2013 Preview - The Destroyer, Specs
  | The Destroyer (2013) | Heavy 2011
Reads | 38.83 million | 2.17 million
Writes | 10.98 million | 1.78 million
Total IO Operations | 49.8 million | 3.99 million
Total GB Read | 1583.02 GB | 48.63 GB
Total GB Written | 875.62 GB | 106.32 GB
Average Queue Depth | ~5.5 | ~4.6
Focus | Worst case multitasking, IO consistency | Peak IO, basic GC routines

SSDs have grown in their performance abilities over the years, so I wanted a new test that could really push high queue depths at times. The average queue depth is still realistic for a client workload, but the Destroyer has some very demanding peaks. When I first introduced the Heavy 2011 test, some drives would take multiple hours to complete it - today most high performance SSDs can finish the test in under 90 minutes. The Destroyer? So far the fastest I've seen it go is 10 hours. Most high performance drives I've tested seem to need around 12 - 13 hours per run, with mainstream drives taking closer to 24 hours. The read/write balance is also a lot more realistic than in the Heavy 2011 test. Back in 2011 I just needed something that had a ton of writes so I could start separating the good from the bad. Now that the drives have matured, I felt a test that was a bit more balanced would be a better idea.

Despite the balance recalibration, there's just a ton of data moving around in this test. Ultimately the sheer volume of data here and the fact that there's a good amount of random IO courtesy of all of the multitasking (e.g. background VM work, background photo exports/syncs, etc...) makes the Destroyer do a far better job of giving credit for performance consistency than the old Heavy 2011 test. Both tests are valid; they just stress/showcase different things. Now that the days of begging for better random IO performance and basic GC intelligence are over, I wanted a test that would give me a bit more of what I'm interested in these days. As I mentioned in the S3700 review - having good worst case IO performance and consistency matters just as much to client users as it does to enterprise users.

Given the sheer amount of time it takes to run through the Destroyer, and the fact that the test was only completed recently, I don't have many results to share. I'll be populating this database over the coming weeks/months. I'm still hunting for any issues/weirdness with the test so I'm not ready to remove the "Preview" label from it just yet. But the results thus far are very telling.

I'm reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the Destroyer workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric I've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
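
For the curious, here's roughly how the two metrics fall out of a played-back trace. The per-IO record format below is an assumption for illustration; the capture and analysis tooling behind the Storage Bench isn't public:

```python
# Sketch of how average data rate and average service time can be derived from
# per-IO records of a trace playback. The (bytes, service time) tuple format is
# an assumption, not the actual Storage Bench tooling.

def bench_metrics(ios, wall_clock_seconds):
    """ios: iterable of (bytes_transferred, service_time_us) tuples, one per IO."""
    total_bytes = total_service_us = count = 0
    for size, svc_us in ios:
        total_bytes += size
        total_service_us += svc_us
        count += 1
    avg_data_rate_mbs = total_bytes / 1e6 / wall_clock_seconds
    avg_service_time_us = total_service_us / count
    return avg_data_rate_mbs, avg_service_time_us

# Made-up example: one million 64KB IOs completed over an hour, 400us each.
rate, svc = bench_metrics([(65536, 400)] * 1_000_000, 3600)
print(f"{rate:.1f} MB/s average data rate, {svc:.0f} us average service time")
```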

AnandTech Storage Bench 2013 - The Destroyer (Average Data Rate)

Now we see what Seagate's balance of consistency and peak performance gives us: leading performance in our latest benchmark. The Destroyer does a good job of penalizing drives with poor IO consistency as the entire drive is written to more than once, but the workload is more client-like than a pure 4KB random write. The result is a test that rewards both peak performance and consistent behavior. Seagate's 600 delivers both. I purposefully didn't include a 120GB Seagate 600 here. This test was really optimized for 400GB+ capacities; at lower capacities (especially on drives that don't behave well in a full state) the performance dropoff can be significant. I'm not too eager to include 240/256GB drives here either, but I kept some of the original numbers I launched the test with.

OCZ's Vector actually does incredibly well here, giving us more insight into the balance of peak performance/IO consistency needed to do well in our latest test.

AnandTech Storage Bench 2013 - The Destroyer (Average Service Time)

The 600's average service time throughout this test is very good. OCZ's Vector does even better here, outperforming the 480GB 600 and falling short of the 400GB 600 Pro. The difference between the 600 and 600 Pro here gives you a good idea of how much performance can scale if you leave some spare area on the drive.



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
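
To see why the data pattern matters for SandForce drives, a quick illustration: a compressible versus an effectively incompressible 4KB buffer, with zlib standing in as a crude proxy for whatever compression the controller actually performs:

```python
# Why "pseudo random" vs "fully random" test data matters on SandForce drives:
# one compresses well, the other doesn't. zlib is only a stand-in for the
# controller's undisclosed compression scheme.

import os, zlib

block = 4096
repeating = (b"ANAND" * 1024)[:block]        # highly compressible pattern
random_data = os.urandom(block)              # effectively incompressible

for name, buf in (("repeating pattern", repeating), ("os.urandom", random_data)):
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name}: compresses to {ratio:.0%} of its original size")
```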

Desktop Iometer - 4KB Random Read (4K Aligned)

Random read performance is consistent across all capacity points. Performance here isn't as high as what Samsung is capable of achieving but it is very good.

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Low queue depth random write performance has just gotten insanely high on client drives over the past couple of years. Seagate doesn't lead the pack with the 600 but it does well enough. Note the lack of any real difference between the capacities in terms of performance.

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

Ramp up queue depth and we see a small gap between the 120GB capacity and the rest. The 600/600 Pro climb the charts a bit at higher queue depths. Note the lack of any performance difference between the 600 and 600 Pro at similar capacities.

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
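
As with the random write workload earlier, a simplified stand-in for this test is easy to sketch: one 128KB read outstanding at a time, averaged over a fixed window. Reading through the filesystem cache will inflate the numbers, so real testing goes through the raw device (Iometer in our case); the file name and duration here are placeholders:

```python
# Simplified stand-in for the 128KB sequential, queue depth 1 test. POSIX only
# (uses os.pread); results against a cached file will be optimistic.

import os, time

TARGET = "testfile.bin"          # hypothetical target; use a raw device for honest numbers
BLOCK = 128 * 1024               # 128KB transfers
DURATION = 60                    # seconds (the review's test ran for 1 minute)

fd = os.open(TARGET, os.O_RDONLY)
size = os.fstat(fd).st_size
offset = total = 0
start = time.time()
while time.time() - start < DURATION:
    data = os.pread(fd, BLOCK, offset)     # exactly one IO in flight = QD1
    total += len(data)
    offset += BLOCK
    if offset + BLOCK > size:              # wrap around at the end of the span
        offset = 0
os.close(fd)
print(f"~{total / 1e6 / DURATION:.0f} MB/s sequential read")
```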

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Here's how you tell that Seagate has client drive experience: incredible low queue depth sequential read performance. I'm not sure why the 240GB 600 does so well here, but for the most part all of the drives are clustered around the same values.

Desktop Iometer - 128KB Sequential Write (4K Aligned)

Low queue depth sequential writes are also good. The 240GB capacity does better than the rest for some reason. Only the 120GB capacity shows any sign of weakness compared to other class leaders.

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.

Incompressible Sequential Read Performance - AS-SSD

On the read side, at high queue depths we're pretty much saturating 6Gbps SATA at this point. The fastest drive here only holds a 3% advantage over the 600s.

Incompressible Sequential Write Performance - AS-SSD

Once again we see solid performance from the 600s. There's no performance advantage to the Pro, and the 120GB capacity is measurably slower.



Performance vs. Transfer Size

ATTO is a useful tool for quickly measuring the impact of transfer size on performance. You can get the complete data set in Bench. I pointed this out earlier but we see much better low queue depth sequential performance than on the Corsair drives, despite using the same controller (and presumably somewhat similar firmware). It's now very clear to me that Seagate had some degree of say in what happened at the firmware level on these drives.

Samsung maintains the best curve in these tests however. On the write side, the 120GB 600 is the only drive that tops out early.



AnandTech Storage Bench 2011

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size | % of Total
4KB | 28%
16KB | 10%
32KB | 10%
64KB | 4%

Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
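
Tallying stats like these is mechanical once you have per-IO records. The CSV column names below are assumptions for illustration; the trace capture and playback tools behind the Storage Bench aren't public:

```python
# Sketch of how an IO size breakdown and queue depth stats could be tallied
# from a trace export. The CSV layout (size_bytes, queue_depth) is assumed.

import csv
from collections import Counter

sizes = Counter()
qd_total = qd1 = n = 0
with open("trace.csv") as f:                 # hypothetical trace export
    for row in csv.DictReader(f):
        size = int(row["size_bytes"])
        qd = int(row["queue_depth"])
        sizes[size] += 1
        qd_total += qd
        qd1 += (qd == 1)
        n += 1

for size, count in sizes.most_common(4):
    print(f"{size // 1024}KB: {count / n:.0%} of all operations")
print(f"average queue depth: {qd_total / n:.3f}, {qd1 / n:.0%} of IOs at a queue depth of 1")
```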

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.

There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running in 2010.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

Seagate's 600 does reasonably well here, but if you don't take into account IO consistency then the 600/600 Pro are still behind Samsung's SSD 840 Pro.

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

The next three charts just represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idles; this is just how long the SSD was busy doing something:

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)



AnandTech Storage Bench 2011 - Light Workload

Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric).

The I/O breakdown is similar to the heavy workload at small IOs; however, you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size | % of Total
4KB | 27%
16KB | 8%
32KB | 6%
64KB | 5%

Light Workload 2011 - Average Data Rate

Performance in our light workload is competitive, albeit not industry leading. The 600s continue to do very well across the board.

Light Workload 2011 - Average Read Speed

Light Workload 2011 - Average Write Speed

Light Workload 2011 - Disk Busy Time

Light Workload 2011 - Disk Busy Time (Reads)

Light Workload 2011 - Disk Busy Time (Writes)

 



Power Consumption

Under load, Seagate's 600 and 600 Pro do a great job on power consumption. The combination of Toshiba's 19nm NAND and whatever firmware tweaks Seagate made results in very compelling power consumption under load. The problem is the 600 is measurably worse than Samsung's SSD 840 Pro at idle. For desktops this shouldn't matter, but notebook users looking to stay on battery as long as possible would be better off with an 840 Pro.

Drive Power Consumption - Idle

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write



Final Words

It's really good to see another SSD maker recognize the value of Link A Media's controller architecture and use it. Seagate's 600 SSD is a great drive, particularly thanks to how well it performs in a full drive scenario. The 600/600 Pro's peak performance is good, but combine it with great worst case scenario performance and you have the makings of a very good drive.

For client users I see no reason to consider the 600 Pro over the 600. If you need the power fail protection then the 600 Pro is your only option. Similarly if you need more endurance, the 600 Pro makes sense there as well. For everyone else, the 600 should do very well (in fact, it'll likely perform more consistently than many other drives I've seen branded as enterprise solutions).

There are two downsides to Seagate's 600/600 Pro: 1) idle power consumption and 2) no hardware encryption support. The first can be a deal breaker for notebook users. Unfortunately here Seagate is at the mercy of Link A Media. The LM87800 controller seems to have enterprise beginnings, where idle power just doesn't matter as much. Power consumption under load is great, but high idle power draw can really hurt in many light workload mobile applications. Desktop users won't be impacted. The lack of hardware encryption support and support for Microsoft's eDrive standard is less of an issue, but it's hard to not want those things after seeing what Crucial's M500 can do.

Long term I do wonder what will happen to the Seagate/LAMD relationship. Link A Media is now owned by Hynix, and last I heard Hynix didn't want drive makers using LAMD controllers without Hynix NAND. Obviously the 600/600 Pro were in development since before the acquisition so I wouldn't expect to see any issues here, but I get the impression that the successor to these drives won't be based on a Link A Media controller. It's no skin off of the 600's back for its successor to go a different route, but I worry that the best feature of the 600 may get lost in the process. What makes the 600 great is its balance of high peak performance with solid minimum performance. Performance consistency isn't as good as on Corsair's Neutron (another LM87800 drive) but it's far better than a lot of the drives on the market today. Ultimately what this means is you can use more of the Seagate 600's capacity than you could on other drives without performance suffering considerably. I usually recommend keeping around 20% of your drive free in order to improve IO consistency, but with LAMD based drives I'm actually ok shrinking that recommendation to 10% or below. There are obviously benefits if you keep more free space on your drive, but Seagate's 600 doesn't need the spare area as badly as others - and this is what I like most about the 600.

Of course the usual caveats apply. Although the LM87800 is a fairly well understood controller by this point, I'd still like to see how well Seagate's validation and testing hold up before broadly recommending the drive. I would assume the 600/600 Pro have been well tested given Seagate's experience in the HDD industry, but when it comes to SSDs I've learned to never take anything for granted. There's also the question of how regularly/quickly we should expect to see firmware updates for these drives, should issues arise. Again, I feel like Seagate will be better here than most first timers in the SSD market, but these are all caveats I've applied in the past when dealing with a relative newcomer.
