Just as motherboard layout is important in keeping airflow unobstructed, case design is critical in facilitating excellent airflow.  If you're building a system with one or two hard drives, most cases work fine for a home file server - just make sure there is a fan near the hard drive(s) so they are not sitting in stagnant air.  However, if your server requires multiple disks, here are a few cases that work especially well for file servers.

Mini-ITX

Fractal's Array R2 is a nearly perfect home file server case.  At less than 14" deep by 10" wide by 8" tall, it occupies little volume.  It positions a removable hard drive cage immediately behind a quiet yet powerful 140mm intake fan.  The cage accommodates up to six hard drives on vibration-dampening silicone mounts, and there is also room for one 2.5" drive (either an SSD or HDD).  In my testing, when stuffed with six low-rpm 2TB mass storage HDDs and one SSD (with an Intel Pentium G620 CPU installed), HDD temperatures hover around 40C even with all drives under sustained artificial load (using Iometer).  The included PSU is a custom SFX form factor unit: an 80 Plus rated 300W model with ample amperage on its split 12V rails to power six HDDs.  It features seven SATA connectors and one legacy Molex connector, so there are enough SATA plugs without extraneous Molex cabling.  Furthermore, the cables are shorter than typical, so excess cabling does not interfere with airflow.  The case itself is constructed of aluminum, so it is lightweight, and its overall build quality is very high.  It does not have room for an optical drive, but I consider optical drives superfluous for a home file server; if necessary, you can always hook up an external USB optical drive.  The only drawback of this case and its PSU is the price: at just under $200, it is not cheap.  However, the aesthetics and functionality of the case and its custom PSU are worth the cost if you want a small but capacious home file server.

Micro-ATX

As I prefer home file servers that take up as little space as possible, Silverstone's TJ08B-E is a great choice: a smaller micro-ATX minitower less than 16" deep, 9" wide, and 15" tall that weighs less than 12 pounds.  It can accommodate up to five HDDs plus one SSD.  As with the Fractal Array R2, the hard drives sit immediately behind a front intake fan - though in this case, it's an even larger 180mm unit.  The TJ08B-E is flexible in that it can hold a couple of optical drives as well as a GPU, in case you want to repurpose or multipurpose it later.  When stuffed with four low-rpm green drives, hard drive temperatures don't exceed 45C during sustained transfers.  Overall build quality is very good, as with most Silverstone cases.

Silverstone's ST50F-P is a diminutive, fully modular PSU that makes working in smaller cases like the TJ08B-E, Lian Li PC-Q08, and others much easier.  Silverstone also offers a short cable kit, making the ST50F-P even better suited to SFF cases.  Finally, it's clear that Silverstone had smaller multi-HDD systems in mind when designing the CP06 SATA power extension cable: it connects to a single SATA power plug and provides four SATA power plugs spaced closer together than usual, further reducing cable clutter.  Though the cost of these accessories adds up, they make an ideal cabling solution very easy to implement.  Whichever PSU you decide to go with, if you use a split 12V rail model, make sure you don't load up one rail with all of your HDDs.  If you go with a single 12V rail model, you'll want that rail to be beefy - for example, don't try to put ten HDDs and four case fans on a budget PSU with a 20A 12V rail, as the rough sizing sketch below illustrates.
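To see why that 20A rail is marginal, here is a back-of-the-envelope sizing sketch.  The per-device currents are assumptions for illustration (a typical 3.5" desktop drive draws roughly 2A on the 12V rail during spin-up and well under 1A in normal operation; check your drives' datasheets), not measured values:

```python
# Rough 12V rail sizing sketch for a multi-HDD file server.
# Per-device currents are assumptions for illustration - check your
# drives' datasheets. Typical 3.5" HDD: ~2.0A at 12V during spin-up,
# ~0.6A during normal operation. Typical 120/140mm fan: ~0.2A.

HDD_SPINUP_A = 2.0
HDD_LOAD_A = 0.6
FAN_A = 0.2
HEADROOM = 1.25  # 25% safety margin

def rail_amps_needed(hdds: int, fans: int, staggered_spinup: bool = False) -> float:
    """Estimate the peak 12V draw. Without staggered spin-up, every
    drive spins up at once at power-on - the worst case."""
    if staggered_spinup:
        # One drive spinning up while the rest draw normal load current.
        peak = HDD_SPINUP_A + (hdds - 1) * HDD_LOAD_A + fans * FAN_A
    else:
        peak = hdds * HDD_SPINUP_A + fans * FAN_A
    return peak * HEADROOM

print(rail_amps_needed(10, 4))        # ~26A: too much for a 20A rail
print(rail_amps_needed(10, 4, True))  # ~10A: fine with staggered spin-up
```

The worst case is power-on, when every drive spins up simultaneously; staggered spin-up (a feature of many HBAs and RAID cards) dramatically lowers that peak.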

Full Tower

Very few cases can accommodate ten HDDs at stock (without adding adapters), and such cases are not at all small.  Full towers also typically offer excellent airflow, and cable management is not very difficult.  Fractal's Define XL is one of the least expensive 10 HDD bay full tower cases available.  It is well-built, and extra care has been taken to keep the case quiet, with sound-dampening panel insulation.  It is impossible to hear active HDDs inside this case even when you're sitting just a few feet from it (even the notoriously loud VelociRaptors).  Further, there are plenty of integrated niceties, like adjustable/flexible cable baffles that assist in cable management.  Seven of the ten HDD slots are immediately behind fans, with three slots one cage removed from the front intake fans.  Even so, the HDDs that aren't right behind the fans stay cool (between 35C and 40C).  At around $150, it is an excellent value.  Just make sure you don't pair it up with Silverstone's short cable kit!

We've saved the most important aspect of a home file server - the hard drives - for the next and last component page.

Comments

  • mino - Tuesday, September 6, 2011 - link

    For plenty of money :)

    Basically, a SINGLE decent RAID card costs ~$200+, which is enough to buy the rest of the system.

    And you need at least 2 of them for redundancy.

    Also, with a DEDICATED file server and open-source ZFS, who needs HW RAID? ...
  • alpha754293 - Tuesday, September 6, 2011 - link

    In most cases, the speed of the drives/controller/interface is almost immaterial because you're going to be streaming it over a 1 Gbps network at most.

    And if you actually HAVE 10GbE or IB or Myrinet or any of the others, I'm pretty sure that if you can afford the $5000 switch, you'd "splurge" on the $1000 "proper" HW RAID card.

    Amusing how all these people are like "speed speed speed!!!!", forgetting that the network will likely be the bottleneck. (And WiFi is even worse - 0.45 Gbps is the best you can do with 802.11n.)
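A quick back-of-the-envelope comparison makes the commenter's point; the 0.9 protocol-overhead factor and the ~130 MB/s figure for a single drive's sequential throughput are rough assumptions for illustration:

```python
# Convert link rates to usable MB/s and compare with a single HDD's
# sequential throughput. The 0.9 protocol-overhead factor and the
# ~130 MB/s HDD figure are rough assumptions for illustration.

LINKS_GBPS = {
    "Gigabit Ethernet": 1.0,
    "802.11n WiFi (3 streams)": 0.45,
    "10GbE": 10.0,
}
HDD_SEQ_MB_S = 130  # rough sequential rate of a single 2011-era HDD

for name, gbps in LINKS_GBPS.items():
    usable = gbps * 1000 / 8 * 0.9  # Gbps -> MB/s, minus overhead
    limit = "network" if usable < HDD_SEQ_MB_S else "disks"
    print(f"{name}: ~{usable:.0f} MB/s usable (bottleneck: {limit})")

# Gigabit Ethernet: ~113 MB/s usable (bottleneck: network)
# 802.11n WiFi (3 streams): ~51 MB/s usable (bottleneck: network)
# 10GbE: ~1125 MB/s usable (bottleneck: disks)
```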
  • DigitalFreak - Sunday, September 4, 2011 - link

    I've been using Dell PERC 5/i cards for years. You can find them relatively cheap on eBay, and they usually include the battery backup. I believe they're limited to 2TB drives though.
  • JohanAnandtech - Monday, September 5, 2011 - link

    "But there's the fact that software RAID (which is what you're getting on your main board) is utterly inferior to those with dedicated RAID cards"

    Hmm. I am not sure those entry-level firmware thingies that have an R in front of them are so superior. They offload most processing to the CPU anyway, and they tend to create problems if they break and you replace them with a new one running newer firmware. I would be interested to know why you feel that hardware RAID (except the high-end stuff) is superior.
  • Brutalizer - Monday, September 5, 2011 - link

    When you say that software RAID is inferior to hardware RAID, I hope you are aware that hw-raid is not safe against data corruption?

    You have heard about ECC RAM? Spontaneous bit flips can occur in RAM, and they are corrected by ECC memory sticks.

    Guess what: the same spontaneous bit flips occur in disks too, and hw-raid neither detects nor corrects them. In other words, hw-raid has no ECC-like correction functionality. Data might be silently corrupted under hw-raid!

    Nor do NTFS, ext3, XFS, ReiserFS, etc. correct bit flips. Read here for more information; there are also research papers on data corruption vs. hw-raid, NTFS, JFS, etc.:
    http://en.wikipedia.org/wiki/ZFS#Data_Integrity

    In my opinion, the only reason to use ZFS is that it detects and corrects such bit flips. No other solution does. Read the link for more information.
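For readers unfamiliar with what end-to-end checksumming buys you over parity, here is a toy sketch of the idea (a deliberately simplified illustration, not ZFS's actual implementation): store a checksum with every block, verify it on each read, and fall back to a redundant copy when data and checksum disagree.

```python
# Toy sketch of end-to-end checksumming a la ZFS - NOT ZFS's real code.
# Each block is written with a checksum; every read verifies it and
# falls back to a mirror copy if the data was silently corrupted.
import hashlib

class MirroredStore:
    def __init__(self):
        self.disks = [{}, {}]    # two mirrored "disks" (block_id -> bytes)
        self.checksums = {}      # per-block checksum, stored separately

    def write(self, block_id: int, data: bytes) -> None:
        self.checksums[block_id] = hashlib.sha256(data).digest()
        for disk in self.disks:
            disk[block_id] = data

    def read(self, block_id: int) -> bytes:
        for disk in self.disks:
            data = disk[block_id]
            if hashlib.sha256(data).digest() == self.checksums[block_id]:
                return data      # checksum verified: this copy is good
        raise IOError("all copies corrupt")

store = MirroredStore()
store.write(7, b"important data")
store.disks[0][7] = b"importBnt data"  # simulate a silent bit flip
print(store.read(7))  # corruption detected; good copy served from disk 1
```

Parity-only RAID can notice a mismatch during a scrub, but it has no independent checksum to tell it which copy is correct; that stored checksum is what lets a ZFS-style design pick the good copy and rewrite the bad one.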
  • sor - Monday, September 5, 2011 - link

    Many RAID solutions scrub disks, comparing the data on one disk to the other disks in the array. This is not quite as robust as filesystem-level checksumming, but as your own link points out, the chance of a hard drive flipping bits is something on the order of 1 in 1.6PB, so combined with a RAID that regularly scrubs the data, I don't see home users needing to even think about this.
  • Brutalizer - Monday, September 5, 2011 - link

    You are neglecting something important here.

    Say that you repair a raid-5 array of 2TB disks, with an error rate of 1 in 10^16 bits (just as stated in the article). To rebuild one disk you need to read 2,000,000,000,000 bytes, and every time you read a bit, an error can occur.

    The chance of at LEAST ONE error can be calculated with this well-known formula:
    1 - (1-P)^n
    where P is the probability of an error occurring, and "n" is the number of times the error can occur.

    If you insert those numbers, it turns out that during the repair there is something like a 25% chance of hitting at least one read error. You might even hit two errors, or three errors, etc. Thus, there is a 25% chance of getting read errors.

    If you repair a raid and then run into read errors, you have lost all your data if you are using raid-5.

    Thus, this silent corruption is a big problem. Say some bits in a video file are flipped - that is no problem; a white pixel might be red instead. But say your rar file is affected - then you cannot open it anymore. Or a database is affected. This is a huge problem for sysadmins:
    http://jforonda.blogspot.com/2006/06/silent-data-c...
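The formula above is easy to plug numbers into, and the answer is very sensitive to the assumed error rate: 1 in 10^16 bits over a single 2TB disk gives well under 1%, while the often-quoted consumer-drive rate of 1 in 10^14 bits over a multi-disk rebuild reaches the tens of percent. A minimal sketch:

```python
# Chance of at least one unrecoverable read error (URE) during a
# rebuild, using the formula above: 1 - (1 - P)^n. Computed with
# log1p/expm1 so tiny per-bit probabilities don't get rounded away.
import math

def p_at_least_one(bits_read: float, per_bit_rate: float) -> float:
    return -math.expm1(bits_read * math.log1p(-per_bit_rate))

bits_2tb = 2e12 * 8  # one full 2TB disk = 1.6e13 bits

print(p_at_least_one(bits_2tb, 1e-16))      # ~0.0016 -> ~0.16%
print(p_at_least_one(bits_2tb, 1e-14))      # ~0.15 -> ~15%
print(p_at_least_one(3 * bits_2tb, 1e-14))  # ~0.38 -> ~38%: rebuilding a
# 4x2TB raid-5 array (reading three surviving disks) at the consumer rate
```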
  • Brutalizer - Monday, September 5, 2011 - link

    PS. There is a 1 in 10^16 chance that the disk will not be able to recover a given bit. But there are more factors involved, such as current spikes (which no raid can protect against):
    http://blogs.oracle.com/elowe/entry/zfs_saves_the_...

    bugs in firmware, loose cables, etc. Thus, the real-world chance of corruption is much higher than 1 in 10^16.

    Also, raid does not scrub disks thoroughly; it only computes parity, which is not the same as checksumming data. See here about raid problems:
    http://en.wikipedia.org/wiki/RAID#Problems_with_RA...
  • alpha754293 - Tuesday, September 6, 2011 - link

    @Brutalizer
    Bit flips

    I think CERN tested this and found it was something like 1 bit in 10^14 bits (read/write). That works out (according to the CERN presentation) to 1 BIT in 11.6 TiB.

    If a) you're concerned about silent data corruption on that scale, and b) you're running ZFS - make sure you have tape backups, since there ARE no offline data recovery tools available. Not even at Sun/Oracle. (I asked.)
  • sor - Monday, September 5, 2011 - link

    Inferior how? I've been doing storage evaluation for years, and I can say that software RAID generally performs better, uses negligible CPU, and is easier to recover from failure (no proprietary hardware). The only reasons I'd want hardware RAID are ease of use and the battery-backed writeback cache.
