In our series on FreeNAS (now to be known as TrueNAS), veteran journalist Manek Dubash walked us through how to use open source software based on a truly ingenious filesystem that can turn your average PC into super-resilient Network Attached Storage (NAS).
UnRAID is different.
TrueNAS is based on RAID, a widely used method of sharing data across an array of commodity-class hard drives to attain enterprise-class reliability.
UnRAID, as the name comes right out and tells you, is none of that. It aims to bring you much of what RAID has to offer, but without any of the downsides. And it’s based on design ideas that should appeal to the budget-conscious.
Which is something that particularly attracts Tested Technology to it. And something we’re hoping you’ll want to explore with us.
Because these days we’re all awash with digital data—in the form of crucial business information, family photos, music we love and movies we want to watch. To keep it all safe and find it easily when you need it, the logically simplest solution and the easiest to maintain is a NAS.
TESTED TECHNOLOGY HAS WRITTEN UP several different network attached storage (NAS) solutions over the years. We’ve touched on Buffalo’s Terastation, Synology’s very basic, entry-level single drive devices and explored in some depth the possibility of creating your own heavy-weight multidrive NAS from an old PC. We’re also hoping to feature the Chinese storage manufacturer, TerraMaster, some time soon.
Our loudest shout has been to QNAP, with an 8-part series covering its 4-bay TS-451 and 451+. The length of this series is justified, we think, by the truly interesting evolution of QNAP’s reimagination of what a NAS is all about. And enabled, it has to be said, by QNAP’s generosity in donating not one but two very capable mid-range consumer devices.
We had hoped to cover QNAP’s rival, Synology, in similar depth, as their operating system, DSM, is evolving in much the same way. However, the two closely similar single drive products Synology has been able to donate so far are very basic entry-level devices. Fine for the absolute beginner but not suitable for exploring Synology’s very capable Linux-based operating system in any depth.
The invitation is still open to Synology to redress this balance. In the meantime, we’ve come to realise that a more fundamental balance needs to be explored in these pages.
We mentioned this in passing in our prelude to the Synology DS120j review. The question raised was about RAID, the ingenious late-1980s invention that substitutes an array of commodity, low-capacity hard disk drives for the costly enterprise-class large hard drives that businesses the world over had become used to paying through the nose for.
RAID, with its subsequent commercial variants, continues to be a crucial storage solution for big businesses. But its appropriateness for consumer use and mom-and-pop outfits is being increasingly questioned as drives get larger, more reliable, and, on a per-gigabyte basis, rapidly cheaper. The crucial point is that the repairability of a RAID array when a drive dies, as it invariably will, starts to fall off a cliff once drive size begins to be measured in terabytes.
Why Not RAID
You will hear that RAID stands for “Redundant Array of Independent Disks”. This is a fiction, a contrived backronym.
The “I” in the original acronym proudly stood for “inexpensive”, which was a key feature of the idea. Buy a bunch of the commodity hard drives that by the end of the 1980s were being churned out in their millions to feed the world’s then rampant personal computer habit, arrange them in a group and add mathematics in the form of software so that each drive could keep an eye on the reliability of the others, correcting errors as necessary.
As the RAID idea moved into big business, vendors found they needed to sustain their margins by fudging that awkward word “inexpensive”. That they chose “independent” as a substitute is a clue to how little they actually understood the technology they were selling. The drives in a RAID array (with the exception of RAID 1, mirroring, which is not really RAID at all) are far from “independent” as it is their mutual cooperation that delivers the reliability and resilience.
Two of the problems of RAID are highlighted in Manek Dubash’s masterly NAS adventure. Yes, his homemade rig turns a so-so old PC into an enviably reliable and resilient storage system. Low cost*? In a way. But at the heart of the system are the hard drives, each one now with at the very least 70,000 times the capacity envisaged by the progenitors of RAID, at a time when commodity drives with capacities measured in gigabytes were still a dream and PC drives in the mid-MB range were only just becoming a commercial reality.
Our QNAP TS-451, for example, is currently running a 6TB drive in each of its four drive bays. Simple maths suggests that the total capacity should be around 24TB. But the rather more complex maths of the RAID 10 solution we’re using brings that down to just eleven usable terabytes.
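The arithmetic behind those figures is easy to check. A minimal sketch, assuming four 6TB drives in mirrored RAID 10 pairs, plus the usual mismatch between the decimal terabytes drive makers advertise and the binary tebibytes the operating system reports:

```python
# Back-of-the-envelope check of the RAID 10 figures above.
# Assumptions: four 6TB drives arranged as mirrored pairs (RAID 10).

DRIVES = 4
DRIVE_TB = 6  # decimal terabytes, as marketed

raw_tb = DRIVES * DRIVE_TB  # 24 TB of raw capacity
usable_tb = raw_tb / 2      # RAID 10 mirrors everything: 12 TB

# Drive makers count 10**12 bytes to the terabyte; operating systems
# report in tebibytes of 2**40 bytes, which is where the "just eleven"
# comes from.
usable_tib = usable_tb * 10**12 / 2**40

print(f"raw: {raw_tb} TB, usable: {usable_tb:.0f} TB ≈ {usable_tib:.1f} TiB")
```

Half the raw space goes to the mirrors, and the binary accounting shaves off the rest: 12TB on the box becomes roughly 10.9TiB on the screen.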
This presents RAID with a huge challenge. Back in 1989, rebuilding the data on a replacement 380MB drive by checking it against data and metadata maintained on the remaining drives wasn’t time-intensive. But resilvering, as techies now tend to call the process, may take a matter of days with today’s high-capacity monsters. Until that work is finished, the reliability and resilience of the other RAID drives tends to be compromised.
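A crude best-case estimate shows why. The sketch below assumes a sequential rebuild writing the replacement drive flat out at a sustained speed; the speeds are illustrative assumptions, and real resilvers are slower still because the array usually stays in service:

```python
# Best-case rebuild time: total bytes to write divided by sustained
# write speed. All figures below are illustrative assumptions.

def rebuild_hours(capacity_tb: float, write_mb_per_s: float) -> float:
    total_mb = capacity_tb * 1_000_000  # decimal TB to MB
    return total_mb / write_mb_per_s / 3600

# A 1989-era 380MB drive, even at a leisurely 1 MB/s: minutes, not days.
print(f"380MB @ 1 MB/s: {380 / 1 / 60:.0f} minutes")

# Today's capacities at an optimistic 150 MB/s sustained:
print(f"6TB  @ 150 MB/s: {rebuild_hours(6, 150):.1f} hours")
print(f"14TB @ 150 MB/s: {rebuild_hours(14, 150):.1f} hours")
```

Even under these rosy assumptions a 14TB rebuild takes the better part of a day and a half; throttled by live workloads, it stretches into days.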
This need not be an issue for large enterprises that can afford to mirror their mission-critical data across multiple RAID arrays and back-up to off-premise sites. For a small business relying on a single 6-bay RAID NAS, the potential downtime and possible data loss becomes a serious consideration.
Couple the resilvering problem with the fact that, should the main device managing the drives also fail, the data on any single drive is unlikely to be cheaply and easily recoverable, and you’ve defined half the problem with RAID.
The other half is the logistics of expanding the RAID array, if—or almost certainly when—you come to need more storage space. Here again the deep pockets of a large enterprise ameliorate what is a fundamental problem. An enterprise 24-bay RAID array with a working capacity of 300TB can be taken off line, torn down and every drive replaced with larger drives while the data continue to be served from its mirrored twin with its belt-and-braces off-prem backup. Small and medium businesses won’t have this luxury.
As Manek discovered, replacing his original eight small drives with newer, much larger drives involved a lot more than simple swapping. The sophisticated RAID arrangement underpinning his ZFS filesystem and the FreeNAS operating system imposes strict rules about drives matching in size during the resilvering process.
How much more convenient if you could just throw in an extra drive when you needed more capacity, or pull out a small old drive and replace it with a bigger new one. And if you clocked into work one morning to find the machine in charge of the drives had simply died, how useful to be able just to take out the drives one by one, plug them into some other machine—a PC, perhaps, or a Raspberry Pi—and peel off the data as you needed it.
You can’t do any of that with a typical RAID array. But with UnRAID—yes, you can.
We began the QNAP series back in 2015 with the naive idea of starting with a single drive in the 4-bay TS-451 and increasing the drive count as the story evolved.
This was the basis on which we approached Western Digital, asking if they’d be good enough to donate a single 4TB drive. We’d add new drives in dribs and drabs as the budget permitted. As luck would have it, Western Digital decided that Tested Technology deserved to be among the first publications with their latest 6TB drives, and they sent us a full complement of four. We were able to get off to a running start on all four cylinders.
Technically, yes, that toe-in-the-water single drive by drive approach will work with RAID—as long as you’re prepared each time you add a drive to back up all the data onto some other device and restore everything again afresh to the newly incremented storage. RAID doesn’t really get going until it gets all its ducks in a row. And for RAID to work properly, if that first drive had a costly (at the time) state-of-the-art capacity of 6TB, subsequent drives would need to be the same size. A financially daunting proposition.
However, UnRAID laughs at these rules. It’s going to allow me to take my old Cosmos machine, the one we originally built back in 2007 as one of the first Hackintoshes, and kick it off as a single drive NAS. Then we’ll be able to expand it drive by drive, theoretically until all its twelve bays are stuffed with drives. They’ll be drives of whatever size and price Tested Technology has available or can afford. And never once will we have to copy data off that machine for safe storage while we tear down the drives and start over again.
That’s why UnRAID.
Why Not UnRAID?
Am I making this all sound too rosy? You’re right to ask.
I’m writing this prelude with the naive enthusiasm of one who is looking forward to an untried adventure. I have a broad idea about how UnRAID works but no experience of working with it.
Manek’s FreeNAS is open source software, freely downloadable over the Internet. QNAP and Synology charge nothing for their operating systems and their subsequent busy updates.
You have to pay for UnRAID.
Of course, the manufacturers of proprietary NAS operating systems get their money back on the obligatory hardware you’ll need. And FreeNAS, though genuinely available at no cost to enthusiasts, for enterprise customers comes bundled with the high-end NAS devices that iXsystems, the FreeNAS developer and maintainer, sells with costly support contracts.
UnRAID is based on open source software too—a stripped-down distribution of Linux called Slackware. This is the same Linux I first loaded onto an IBM Thinkpad from a set of 10 floppy disks back in the mid-1990s. Slackware was last revised in 2016, although the current version of UnRAID uses a much more recent Linux kernel.
The vendor and developer of UnRAID, Lime Technology, licenses the software in tiers, with prices depending on how many drives you will be using. If you understand the principle of open source software you’ll know that among its freedoms is the freedom to charge for its use, especially if the vendor has something valuable to add to the mix.
And Lime Technology has added something very useful, although that appears to be closed source. This is a similar layered arrangement to, for example, MacOS, which has open source BSD Unix as its core with a lot of proprietary Apple icing on the cake.
Over the course of 15 years, Lime Technology has put a lot of work into the development of a system that turns standard Intel hardware into a close to ideal network storage management system, easily tuneable to your precise needs. This on-going work is paid for through Lime Technology’s licensing arrangement. Tested Technology strongly favours fully open source software, but also understands that small and medium sized development companies need and deserve an income stream.
So, yes, you have to pay for this**. But not very much.
And let’s admit up front that this UnRAID NAS will probably never have the cast iron resilience and reliability of Manek’s FreeNAS. That’s not to say UnRAID can’t attain this, just that we won’t be taking it that far in this upcoming story.
Manek puts his trust in a 32GB RAM machine with ZFS as the filesystem governing his three 14TB Seagate IronWolf drives, all backed up onto a second NAS with the same configuration. UnRAID can cope with 32GB of RAM (but doesn’t need it) and—thanks to a user-created plug-in—can also make use of ZFS (but doesn’t have to).
The Cunning Plan
Part one of this story is going to be very modest. As modest, in fact, as the Synology DS119j. A single drive NAS with no special RAID-like data protection.
*IBM’s 3390, with a starting capacity of around 1GB for a mere $90,000, emerged onto the enterprise market in 1989.
But, as we argued in that Synology piece, the reliance here isn’t on multiple drives keeping their eyes on each other. It’s on the manufacturing quality of the individual drive. Which, today in 2021, is going to be orders of magnitude greater than those old megabyte commodity drives of the late ’80s. And almost certainly approaching, if not exceeding, the reliability of a full-blown IBM enterprise drive of that time*.
As UnRAID allows this flexibility, we’ll be able to follow the trajectory originally designed for that QNAP NAS. Once the single drive is up and running and we’ve put it through its paces (some of them, at least: UnRAID has many, many routes to explore), the plan is to add a second drive.
This new drive will add nothing to the capacity and will probably slow the system down a bit. As it will also need to be at least as large as the existing drive, it will double the storage cost. And if what the drive manufacturers say about the reliability of their latest products is accurate, it may never even need to be called on to perform its duty, even as we expand to three, four and even more drives.
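A drive that adds no capacity like this is, in UnRAID terms, a parity drive, and single parity boils down to plain XOR. UnRAID computes it at the sector level across whole physical drives; the toy sketch below, with invented byte-string “drives” and hypothetical `parity_of`/`rebuild` helpers, shows the underlying idea, and why the parity drive must be at least as large as the largest data drive:

```python
# Toy sketch of single-drive XOR parity. The tiny byte-string "drives"
# are invented for illustration; real parity runs over whole disks.
from functools import reduce
from itertools import zip_longest

# Three data "drives" of different sizes; shorter drives count as
# zero-padded, which is why mixed sizes are fine.
data_drives = [bytes([1, 2, 3, 4]), bytes([5, 6]), bytes([7, 8, 9])]

def parity_of(drives):
    # XOR every drive's byte at each position (missing bytes are 0).
    return bytes(
        reduce(lambda x, y: x ^ y, column)
        for column in zip_longest(*drives, fillvalue=0)
    )

def rebuild(survivors, parity):
    # XOR of the parity with the surviving drives recovers the lost one.
    return parity_of(survivors + [parity])

parity = parity_of(data_drives)  # as long as the largest data drive

# Suppose the second (smallest) drive dies:
lost = rebuild([data_drives[0], data_drives[2]], parity)
print(lost)  # b'\x05\x06\x00\x00' — the lost data, zero-padded
```

The parity drive stores no files of its own, which is why adding it doubles the storage cost without adding a byte of capacity; its whole job is to make any one lost drive reconstructable from the others.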
You’ll be curious, then, about why we’re bothering with this drive. I’m hoping to make this clear when we come to it in chapter 2. Meanwhile, I’ll press on and implement phase one of the plan, a single 18TB drive on the old Cosmos machine. Full report to follow in chapter 1.