Introduction: Enterprise Storage 101

Since the introduction of x86-based servers at the end of the 20th century, the cost of server hardware has declined rapidly while performance per watt and performance per dollar have increased just as quickly. This pushed the server market to evolve from closed, proprietary, and above all extremely expensive mainframe and RISC systems into today's highly competitive x86 server market. The professional storage market, however, is still ruled by proprietary legacy systems.

Today, you can get a very powerful server that can cope with most server workloads for something like $5000. Even better, you can run tens of workloads in parallel by virtualizing them. But go to the storage market with four or even five times that budget and you will likely return with a low-end SAN.

Worse yet, there is a good chance this expensive device will choke regularly under a storage-intensive application. To quote a market survey conducted in March 2013:

Forty-four percent of respondents said disproportionate storage-related costs were an obstacle preventing them from virtualizing more of their workloads. Forty-two percent said the same about performance degradation or the inability to meet performance expectations.

Note that the study does not mention the percentage of customers stuck in denial :-). The performance per dollar of the average SAN array is mediocre at best, and the storage capacity per dollar is simply awful.

You might think that the hardware inside a SAN is vastly superior to what can be found in your average server, but that is not the case. EMC (the market leader) and others have disclosed more than once that “the goal has always been to use as much standard, commercial, off-the-shelf hardware as we can”. So your SAN array is probably nothing more than a typical Xeon server built by Quanta with a shiny bezel. A decent professional 1TB drive costs a few hundred dollars. Place that same drive inside a SAN appliance and suddenly the price per terabyte is multiplied by at least three, sometimes even ten! When it comes to pricing and vendor lock-in, storage systems are still stuck in the “mainframe era” despite their use of cheap off-the-shelf hardware.

So why do EMC, NetApp, and the other giants of the storage market charge so much for what is essentially a Xeon-based server, an admittedly well designed and reliable storage backplane, and some unreliable, slow-performing hard drives? The reason is not some complex market situation that can only be explained by financial experts. No, the key is the basic component of every storage system: the unreliable and slow magnetic disk.

As we all know, the magnetic disk is the component that fails most often in the data center, and it is by far the slowest core component in modern computers. Building a reliable and reasonably performant storage system on top of such a mediocre component can only be done with complex software. And complex software is yet another prime reason why IT services fail. So only the few companies that were able to build a solid reputation gained enough trust to succeed in the storage market.

This is why EMC, NetApp, IBM, and HP rule the storage market today even though they are charging an arm and a leg for a few terabytes of capacity. Professional buyers trust the devices these vendors make and are willing to pay a huge premium just to be sure that they get good reliability and decent but hardly compelling performance and capacity.

Comments

  • WeaselITB - Tuesday, August 6, 2013 - link

    Fascinating perspective piece. I look forward to the CloudFounders review -- that stuff seems pretty interesting.

    Thanks,
    -Weasel
  • shodanshok - Tuesday, August 6, 2013 - link

    Very interesting article. It basically matches my personal opinion of the SAN market: it is an overpriced one, with much less performance per $$$ than DAS.

    Anyway, with the advent of thin pools / thin volumes in RHEL 6.4 and dm-cache in RHEL 7.0, a cheap commodity Linux distribution (CentOS costs 0, by the way) basically matches the feature set exposed by most low/mid-end SANs. This means that a cheap server with 12-24 2.5'' bays can be converted to SAN-like duty, with very good results.

    From this point of view, the recent Intel S3500 / Crucial M500 disks are very interesting: the first provides enterprise-certified, high-performance, yet (relatively) low-cost storage, and the second, while not explicitly targeted at the enterprise market, is available at an outstanding capacity/cost ratio (the 1TB version is about 650 euros). Moreover, it also has a capacitor array to prevent data loss in the case of power failure.

    Bottom line: for high-performance, low-cost storage, use a Linux server with loads of SATA SSDs. The only drawback is that you _have_ to know the vgs/lvs CLI interface, because good GUIs tend to be commercial products and, anyway, for data recovery the CLI remains your best friend.
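
    For reference, a minimal sketch of the thin-pool / dm-cache setup described above (all device names, VG/LV names, and sizes are placeholders; exact options vary with the lvm2 version shipped by the distribution):

      # Pool a couple of data disks into a volume group
      pvcreate /dev/sdb /dev/sdc
      vgcreate vg_data /dev/sdb /dev/sdc

      # Thin provisioning: a 1TB thin pool plus an over-provisioned 2TB thin volume
      lvcreate --type thin-pool -L 1T -n pool0 vg_data
      lvcreate --thin -V 2T -n vol0 vg_data/pool0

      # dm-cache: put a small SSD in front of a regular logical volume
      lvcreate -L 500G -n lv_store vg_data
      vgextend vg_data /dev/sdd                     # /dev/sdd = the SSD
      lvcreate --type cache-pool -L 100G -n cpool0 vg_data /dev/sdd
      lvconvert --type cache --cachepool vg_data/cpool0 vg_data/lv_store

      # Inspect everything with the vgs/lvs CLI mentioned above
      vgs
      lvs -a -o lv_name,lv_size,data_percent,pool_lv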

    A note on the RAID level: while most sysadmins continue to use RAID5/6, I think it is really the wrong choice in most cases. The R/M/W penalty is simply too much on mechanical disks. I've done some tests here: http://www.ilsistemista.net/index.php/linux-a-unix...

    Maybe on SSDs the results are better for RAID5, but the low-performance degraded state (and the very slow, dangerous reconstruction process) remains.
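
    For anyone who wants to reproduce this kind of test, a rough sketch using mdadm and fio (device names, array layout, and job parameters are placeholders; small 4KiB random writes are what expose the read-modify-write penalty):

      # Build a 4-disk RAID5 array from placeholder devices
      mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

      # Each small random write to RAID5 costs roughly four disk I/Os
      # (read old data, read old parity, write new data, write new parity)
      fio --name=raid5-rmw --filename=/dev/md0 --direct=1 --ioengine=libaio \
          --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based \
          --group_reporting

      # Degraded-state behaviour: fail and remove one member, then rerun the same job
      mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb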
  • Kyrra1234 - Wednesday, August 7, 2013 - link

    The enterprise storage market is about the value-add you get from buying from the big-name companies (EMC, NetApp, HP, etc.). All of them come with support contracts for replacement gear and for help fixing any problems you may run into with the storage system. I'd say the key reasons to buy from these big players are:

    * Let someone else worry about maintaining the systems (this is helpful for large datacenter operations where the customer has petabytes of data).
    * The data reporting tools you get from these companies will outshine any home-grown solution.
    * When something goes wrong, these systems will have extensive logs about what happened, and those companies will fly out engineers to rescue your data.
    * Hardware/Firmware testing and verification. The testing that is behind these solutions is pretty staggering.

    For smaller operations, rolling out an enterprise SAN is probably overkill. But if your data and uptime are important to you, enterprise storage will be less of a headache than JBOD setups.
  • Adul - Wednesday, August 7, 2013 - link

    We looked at the Fusion-io ioDrive and decided not to go that route, as the workloads presented by the virtualized desktops we offer would have killed those units in a heartbeat. We opted instead for a product by GreenBytes for our VDI offering.
  • Adul - Wednesday, August 7, 2013 - link

    See if you can get one of these devices for review :)

    http://getgreenbytes.com/solutions/vio/

    We have hundreds of VDI instances running on this.
  • Brutalizer - Sunday, August 11, 2013 - link

    These GreenBytes servers are running ZFS and Solaris (illumos):
    http://www.virtualizationpractice.com/greenbytes-a...
  • Brutalizer - Sunday, August 11, 2013 - link

    GreenBytes:
    http://www.theregister.co.uk/2012/10/12/greenbytes...

    Also, Tegile is using ZFS and Solaris:
    http://www.theregister.co.uk/2012/06/01/tegile_zeb...

    Who said ZFS is not the future?
  • woogitboogity - Sunday, August 11, 2013 - link

    If there is one thing I absolutely adore about real capitalism, it is these moments where the establishment goes down in flames. Just the thought of their jaws dropping and them stammering "but that's not fair!" when they themselves were making a mockery of fair prices with absurd profit margins... priceless. Working with computers gives you so very many of these wonderful moments of truth...

    On the software end it is almost as much fun as watching plutocrats and dictators alike try to "contain" or "limit" TCP/IP's ability to spread information.
  • wumpus - Wednesday, August 14, 2013 - link

    There also seems to be a disconnect between what Reed-Solomon can do and what they are concerned about (while RAID 6 uses Reed-Solomon, that is a specific application and not a general limitation).

    It is almost impossible to scale rotating disks (presumably magnetic, but don't ignore optical forever) to the point where Reed-Solomon becomes an issue. The basic algorithm scales (easily) to 256 disks (or however many you are striping across), of which you typically want about 16 (or fewer) to be parity disks. Any panic over "some byte of data was mangled while a drive died" just means you need to use more parity disks. Using up all 256 is silly (for rotating media), as few applications access data in groups of 256 sectors at a time (currently 1MB, possibly more by the time somebody might consider it).

    All this goes out the window if you are using flash (and can otherwise deal with the large page-clear requirement), but I doubt that many arrays are up to such large sizes yet. If extreme multilevel optical disks ever take over, things might get more interesting on this front (I would still expect Reed-Solomon to do well, but eventually things might reach a tipping point).
  • equals42 - Saturday, August 17, 2013 - link

    The author misunderstands how NetApp uses NVRAM. NVRAM is not a cache for the hottest data. Writes always go to DRAM. Writes are committed to NVRAM (which is mirrored to another controller) before being acknowledged to the host, but the write I/O and its commitment to disk or SSD via WAFL's sequential CP writes all happens from DRAM. While any data remains in DRAM it can be considered cached, but the contents of NVRAM do not constitute a cache, nor are they used for caching host reads.

    NVRAM exists only to make sure that no writes are ever lost due to a controller failure. This is important to recognize, since most mid-range systems (and all the low-end ones I've investigated) do NOT protect against write loss in the event of a failure. Data loss like this can lead to corruption in block-based scenarios and database corruption in nearly any scenario.
