Seagate Announces PCIe x16 SSD Capable Of 10GB/s
by Billy Tallis on March 9, 2016 6:00 AM EST
At the Open Compute Project Summit this week in San Jose, Seagate will show off a pair of upcoming enterprise NVMe SSDs with impressive throughput specifications. The drives will have PCIe x16 and x8 interfaces and provide maximum throughput of 10GB/s and 6.7GB/s respectively. Seagate has provided few details so far, but it's safe to say those numbers are peak sequential read speeds.
The big question is what controllers these drives use. Most NVMe SSD controllers support at most four PCIe lanes, the notable exceptions being PMC-Sierra's 8-lane controllers. Seagate does have an in-house controller development team from its SandForce acquisition, but it's highly unlikely that team has produced such a large controller this soon, and these new SSDs are probably not based on an unannounced third-party controller either.
This means the 16-lane SSD from Seagate is almost certainly a multi-controller solution with an on-board PCIe switch, which is now common for top-of-the-line enterprise PCIe SSDs. A 10GB/s read speed suggests that the 16-lane drive is most likely based on four of Seagate's Nytro XM1440 M.2 SSDs, which advertise 2.5GB/s read speed for capacities of at least 800GB and use a Marvell controller. Seagate's blog shows CAD renderings that seem consistent with a layout of four Nytro XM1440 M.2 drives on one card, but the requisite PCIe switch chip isn't shown. (EDIT: Photographs of the board have surfaced showing that it does passively route the PCIe lanes from the x16 connector to the M.2 slots without a PCIe switch chip. This may prevent all four drives from being used on platforms like Intel's Xeon E3 where the 16 PCIe lanes can only be divided into x8+x8 or x8+x4+x4 configurations.)
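The bifurcation caveat above can be sketched numerically: on a passive carrier, each trained PCIe link can serve only one endpoint, so the number of usable M.2 slots equals the number of links in the finest bifurcation split the host supports. The splits below are illustrative assumptions, not a vendor compatibility list.

```python
# Sketch: passive x16 -> four-M.2 carrier (no PCIe switch).
# Each trained link hosts one drive (at up to x4), so usable slots
# = number of links in the finest supported bifurcation split.
# The splits listed per platform are illustrative assumptions.

def usable_m2_slots(supported_splits):
    finest = max(supported_splits, key=len)  # split with the most links
    return min(len(finest), 4)               # carrier has four M.2 slots

platforms = {
    "x4+x4+x4+x4 (full bifurcation)": [(16,), (8, 8), (4, 4, 4, 4)],
    "x8+x8 only":                     [(16,), (8, 8)],
    "x8+x4+x4 (e.g. Xeon E3)":        [(16,), (8, 8), (8, 4, 4)],
}

for name, splits in platforms.items():
    print(f"{name}: {usable_m2_slots(splits)} of 4 M.2 slots usable")
```

Under these assumptions, a Xeon E3 host limited to x8+x4+x4 would leave one of the four drives unreachable, consistent with the caveat in the edit above.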
Seagate claims that the 10GB/s speed of the 16-lane drive is 4GB/s faster than any competing drive. If Seagate's new drive is a single-controller solution then that's a fair comparison and an impressive accomplishment, but there are already multi-drive products on the market offering RAID0 speeds well in excess of 6GB/s. HP's Z Turbo Drive Quad Pro is a PCIe x16 card that provides connectivity and cooling for up to four M.2 PCIe SSDs. When ordered as part of a workstation it can be configured with four Samsung SM951 SSDs to provide an advertised 9GB/s sequential read speed, though when sold separately it only comes populated with two M.2 SSDs.
Meanwhile, the 8-lane drive is probably not based on two 4-lane controllers, even though that would be the most obvious design. Most products built on a single Intel, Samsung, Marvell, or Phison controller with a PCIe 3.0 x4 link advertise maximum read speeds of 2.2–2.8GB/s, so delivering 6.7GB/s from just two controllers would require roughly 20% higher performance than any PCIe 3.0 x4 NVMe controller has attained. Instead, the PCIe x8 SSD Seagate is announcing is probably another four-controller Marvell design whose sequential speeds are limited by the overhead of the drive's PCIe switch and the upstream PCIe link. The 1M IOPS claimed for the 8-lane drive is slightly higher than four times the rating of a single XM1440, but some capacities of the 2.5" XF1440 offer enough IOPS. The thermal constraints of the M.2 form factor, compared to 2.5" drives and add-in cards with large heatsinks, likely account for the discrepancy in IOPS ratings. According to Seagate, the 8-lane drive will offer some cost and power savings over the 16-lane drive, and it's not hard to imagine that it could also allow servers to attach a larger total capacity for the same number of PCIe lanes.
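The two-controller ruling-out above is simple arithmetic; a quick sketch using the article's figures (the per-controller numbers are the advertised peaks cited in the text, not measurements):

```python
# Back-of-the-envelope check using the figures from the article:
# best advertised sequential read for a single PCIe 3.0 x4 controller,
# versus Seagate's 6.7GB/s spec for the x8 drive.
best_x4_controller = 2.8   # GB/s, top of the 2.2-2.8 range cited
x8_drive_spec = 6.7        # GB/s, Seagate's claim

two_controllers = 2 * best_x4_controller      # 5.6 GB/s
shortfall = x8_drive_spec / two_controllers - 1

print(f"two controllers:  {two_controllers:.1f} GB/s")
print(f"needed uplift:    {shortfall:.0%}")   # ~20% beyond any x4 part

# Four XM1440-class drives at 2.5 GB/s each would have headroom to
# spare; the x8 upstream link and switch overhead would then set the
# 6.7 GB/s ceiling instead of the controllers.
four_controllers = 4 * 2.5
print(f"four controllers: {four_controllers:.1f} GB/s aggregate")
```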
Seagate's blog shows a rendering of the 8-lane card with the same heatsink layout as their Nytro XP series flash accelerator cards that use a RAID of SandForce SATA controllers to provide up to 4GB/s sequential read speeds. There's a good chance this is just a placeholder illustration, as Seagate says the 8-lane drive is still being finalized.
Both drives are intended for the Open Compute Project (OCP) hardware ecosystem founded by Facebook and now also supported by a variety of major companies in cloud computing, telecom, networking, and finance. The Open Compute Project focuses on datacenter hardware and infrastructure, with members contributing specifications and designs that are more detailed than industry standards like the ATX form factor. Seagate says their new drives will comply with OCP specifications, but the specific standards haven't been identified. Potentially relevant standards include a specification for thermal monitoring of PCIe add-in cards and a specification for M.2 SSDs that sets standards for things like minimum performance, the conditions under which thermal throttling is permitted, maximum power consumption and mandatory eDrive encryption support.
Based on the assumption that both drives are roughly equivalent to four Nytro XM1440 drives plus a PCIe switch chip, peak power consumption will probably be at least 29W for the 8-lane drive and could be nearly 40W for the 16-lane drive.
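One way those estimates could break down, as a rough sketch: the per-drive and per-switch wattages below are assumptions chosen to reproduce the article's figures, not Seagate specifications.

```python
# Rough power budget under the article's four-XM1440-plus-switch
# assumption.  All component figures are assumptions, not specs.
m2_drive_peak_w = 7.0   # assumed peak draw per M.2 drive
switch_x8_w     = 1.0   # assumed small switch on the x8 card
switch_x16_w    = 10.0  # assumed larger switch/margin on the x16 card

x8_card_w  = 4 * m2_drive_peak_w + switch_x8_w    # ~29 W
x16_card_w = 4 * m2_drive_peak_w + switch_x16_w   # approaching 40 W

print(f"x8 card:  ~{x8_card_w:.0f} W")
print(f"x16 card: ~{x16_card_w:.0f} W")
```

Both totals sit comfortably within the 75W a PCIe slot can deliver, though well above what a single M.2 drive could dissipate on its own.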
Seagate describes the PCIe x16 drive as production-ready, while the 8-lane drive is still being finalized. Samples of both have been made available to Seagate's customers, and the full product launch is planned for summer 2016. Capacities have not been announced but are likely to start at 3.2TB or 3.84TB for the highest-performing models.
SaolDan - Wednesday, March 9, 2016
Neat!

nathanddrews - Wednesday, March 9, 2016
So... will future iterations of mainstream CPUs and motherboards provide us with more lanes for all these magical toys? Between the demands of NVMe drives, GPUs, USB Type-C, Thunderbolt, multiple SATA3/SATA Express ports, and expansion cards, those lanes dry up very fast if you want to use a lot of them.

TemjinGold - Wednesday, March 9, 2016
This particular toy isn't mainstream, though, and won't be something the average Joe can afford.

nathanddrews - Wednesday, March 9, 2016
Obviously, but this isn't the only PCIe NVMe drive.

Flunk - Wednesday, March 9, 2016
Z170 has a lot more PCIe 3.0 lanes (20) than previous chipsets, so Intel is at least trying.

The_Assimilator - Wednesday, March 9, 2016
20 really isn't enough, considering a single graphics card will consume all but 4 of those. That leaves room for one x4 device like an M.2 drive. What we really need is 32 lanes minimum, but Intel probably won't implement that because it would cannibalize their HEDT platform.

extide - Wednesday, March 9, 2016
Well, it's 20 lanes from the chipset; there are still 16 from the CPU, although those 20 chipset lanes are all run through a single link equivalent to a PCIe 3.0 x4 link.

ShieTar - Wednesday, March 9, 2016
And if you need more, you just buy a board with a PLX switch on it. The EVGA Z170 Classified 4-Way has about 70 lanes implemented if you add up all the connections available on the board. They can't all shovel data to the CPU at the same time, but that shouldn't really be a problem.
And to be honest, if you can afford two GPUs, one or more NVMe SSDs, and dozens of fast USB and SATA devices on top, a 2011-3 system shouldn't break the bank either. On that platform, even a relatively simple board like the Gigabyte GA-X99-UD3 will give you 48 lanes for expansion cards on top of all the M.2, Thunderbolt, USB, and SATA Express connectivity.

DanNeely - Thursday, March 10, 2016
PLX switches are mostly gone from current-generation boards. The company that made them was bought by an enterprise product maker who promptly jacked the prices up about 4x, from $10-20 to $40-80, which has almost completely priced them out of the consumer market. At that point you might as well just go LGA2011 for the price.

DarkXale - Wednesday, March 9, 2016
This device isn't exactly aimed at regular consumer machines, though; it's a safe assumption that anyone buying a device like this is using an E-class system. And for regular consumers, IO performance is more important than sequential performance, and that doesn't require a lot of lanes.