Your NAS is full to the brim with data. What are your options?
To recap: the home-built NAS holds eight 3TB Seagate drives and contains backups and user data for the household, as well as my media library. And it’s mirrored onto a backup server which contains the same data.
We elected to use ZFS as our filesystem, for reasons discussed in part 1. Available space on the main server is currently less than 4TB, with over 17TB already used. That's 81% of the total, and for ZFS that's about as full as it should get: above roughly 80% the filesystem starts to slow down, because its copy-on-write design relies on a reserve of free space to keep allocating new blocks efficiently.
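The fill level can be read straight from ZFS, or worked out from the figures above. A minimal sketch (the pool name "tank" is hypothetical, and the live command is shown as a comment since it needs a real pool):

```shell
# On a live system:
#
#   zpool list -o name,size,alloc,free,capacity tank
#
# Or compute it from the figures quoted above: roughly 17TB
# allocated against roughly 4TB free.
alloc_tb=17
free_tb=4
capacity=$(( alloc_tb * 100 / (alloc_tb + free_tb) ))
echo "pool is roughly ${capacity}% full"
```

Integer arithmetic lands this at 80%, right on the threshold where ZFS performance starts to suffer.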
At this point there are quite a few concepts that need expanding, so it’s time to back up and give you the full picture.
Diving into the Detail
Part 1 introduced you to the LSI Logic SAS 9211-8i disk controller, based around the company's SAS2008 chip (the "8i" denotes its eight internal SAS/SATA ports). It's widely used in the enterprise and very popular among fans of home-built devices because of the number of drives it allows you to add.
The big snag from our point of view is that this disk controller has its own ideas of how RAID should be implemented. ZFS takes care of its own RAID implementation in software and those RAID features built into the SAS2008 chip are going to mess this up. So the first thing to do is to get rid of them.
The procedure, reflashing the card's firmware from IR (Integrated RAID) mode to IT (Initiator Target) mode, can be carried out from the BIOS level of some computers or from inside the old DOS operating system. It's easy to do but also rather easy to get wrong. I followed the steps listed here.
Once converted from a sophisticated RAID controller to a plain and simple Host Bus Adapter (HBA), an enterprise-grade board like the LSI 9211-8i is near-perfect for the job of managing the disks of a home/small business server. It has the performance you need and it’s cheap—you can find one on a well-known auction site for under £50.
In part 1 we saw that OpenZFS arrays formatted as RAID-Z2 devote the equivalent of two disks to parity data. This means that any two disks in the array can fail before data is lost; fortunately, I've never had to put this to the test.
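The capacity cost of that parity is easy to reckon. A quick sketch for this particular array:

```shell
# RAID-Z2 reserves two drives' worth of space for parity, so usable
# capacity in a vdev is (number of drives - 2) x drive size.
# For the eight-drive, 3TB-per-drive array described above:
drives=8
drive_tb=3
usable_tb=$(( (drives - 2) * drive_tb ))
echo "${usable_tb}TB usable"   # raw figure, before filesystem overhead
```

That 18TB raw figure squares with the roughly 21TB (17TB used plus nearly 4TB free) that ZFS reports once its own accounting is factored in.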
Not just yet, anyway. But the nub of the problem, apart from the fact that the media library and the backups of the various home devices keep getting bigger while the disks don't, is that the disks are visibly getting older and showing their age.
“Visibly” thanks to SMART.
SMART (or, more strictly, S.M.A.R.T.) stands for Self-Monitoring, Analysis and Reporting Technology. It can anticipate and report physical failure, even while the drive is still faithfully delivering and storing your data. Drives have built-in error correction methods, and SMART keeps a count of how often they need to be used. Temperature and vibration also offer clues. SMART stores all these data and draws conclusions about the state of your drive, passing them on to you when you ask.
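On Linux and FreeBSD the usual way to ask is smartctl, from the smartmontools package. A sketch, with the live commands shown as comments (they need real hardware) and the attribute table below being a hypothetical fragment, not output from the drives in this article:

```shell
# On a live system:
#
#   smartctl -H /dev/ada0    # overall health verdict
#   smartctl -A /dev/ada0    # the full vendor attribute table
#
# Parsing a hypothetical fragment of that attribute table for the
# error-correction counters mentioned above:
attrs='  5 Reallocated_Sector_Ct   0x0033   092   092   036    Pre-fail  Always       -       344
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       16'

echo "$attrs" | awk '$2 ~ /Reallocated_Sector|Current_Pending/ { print $2 " raw value: " $NF }'
```

Rising raw values on attributes like these are exactly the sort of early warning that prompted the drive replacement plan below.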
SMART gets outsmarted by some drives: it's thought to give useful advance warning in only around two-thirds of failures. But here it was handing me the bad news that these old drives were nearing the end of their lives.
The Origin of a Remarkable FileSystem
Sun Microsystems, generally referred to simply as SUN, dawned in the early 1980s, rose to a glorious noon towards the end of the Millennium, and sank beneath the horizon as the 21st century set in.
Its name memorialises its origin: SUN stands for Stanford University Network, and its founders in 1982 (Scott McNealy, Andy Bechtolsheim, and Vinod Khosla) were all graduates of that university. Bill Joy, who arrived from UC Berkeley, where he had led the development of BSD Unix, joined them that same year.
All four are revered in the 20th-Century Computing Hall of Fame and the work they did together was extraordinary, paving the way for much of the development underpinning our current technology.
The company began by selling graphics workstations running SUN's own version of Unix (initially SunOS, later succeeded by Solaris) and immediately became profitable. Its share price soared, encouraging the company to invest unprecedented sums in personnel, hardware and software development. It's no exaggeration to say that during this period SUN's innovations quietly but inexorably revolutionised the IT industry.
Sun saw a way of playing the trade-off between cheap commodity hardware and enterprise-grade reliability right down the middle. Adopt the new, cheap hardware. Don't trust it, because by definition it's untrustworthy. Simply ensure it behaves to enterprise standards by writing software that makes no assumptions about its trustworthiness.
Simply? Well, perhaps not. But the SUN founders understood the intricacies of the Unix operating system inside out. They knew that however complex this "building on sand" software would have to be, anyone using, managing or maintaining the system would need to be presented with an interface that was simple, logical and easy to grasp.
Out of these ideas, ZFS was born.
The ZFS conundrum
However, one of the disadvantages of ZFS is that you can't expand an existing pool of drives (the zpool*) simply by adding another disk, because the width and redundancy layout of a RAID-Z array are fixed when it's created; there's more detail about why not here.
The advantage, on the other hand, is flexibility: the zpool can then be divided up (or not) to suit.
The inability to expand a zpool easily is something the OpenZFS team is currently working on. However, that particular part of the OpenZFS project has been under way for some years and doesn't look like reaching production-readiness any time soon. In the meantime, there are two ways to grow capacity:
- replace every disk in the system one by one, allowing the RAID to rebuild after each addition, or
- create a new zpool with new disks and migrate the data across.
The first option, which in RAID circles you'll often hear referred to as resilvering (a fanciful derivation from the jargon of "mirroring" drives), is very much in the spirit of how RAID is meant to work. Although you can't (currently) expand the zpool by installing additional drives, ZFS does allow you to replace the existing drives with new, larger ones, a feature known as autoexpand. You'd have to do this piecemeal: resilver to reinstate the data on each new, larger drive, one drive at a time, before moving on to the next.
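In outline the procedure looks like this. The commands are echoed rather than executed, since they assume a live pool named "tank" with old drives da0 to da7 and replacements da8 to da15 (all names hypothetical):

```shell
# Sketch of the one-at-a-time replacement route.
plan=$(
  echo "zpool set autoexpand=on tank"             # let the pool grow once every member is bigger
  for i in 0 1 2 3 4 5 6 7; do
    echo "zpool replace tank da$i da$((i + 8))"   # swap one old drive for a new one...
    echo "zpool status tank"                      # ...and wait for its resilver to finish
  done
)
echo "$plan"
```

The pool's extra capacity only appears after the final drive has been replaced and resilvered, which is part of why this route is so slow.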
But when RAID was first defined in the late 1980s, the (relatively) low-cost drives it was designed to work with measured their capacities in megabytes. With much less data to handle, resilvering a single drive could finish during a longish coffee break. Rebuilding the data on a multi-terabyte drive, as Tested Technology can testify, is a glacial process measured in days.
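A back-of-envelope figure shows why. Even at a steady 150MB/s (an assumed, optimistic sequential rate, not a measured one), simply rewriting 3TB takes hours, and a real resilver, which walks pool metadata and competes with normal use, is far slower still:

```shell
# Best case: time to rewrite one 3TB drive at a sustained 150MB/s.
# Both figures are illustrative assumptions.
bytes=$(( 3 * 1000 * 1000 * 1000 * 1000 ))   # 3TB
rate=$(( 150 * 1000 * 1000 ))                # 150MB/s
seconds=$(( bytes / rate ))
hours=$(( seconds / 3600 ))
echo "at least ${hours} hours per drive, before real-world slowdowns"
```

Multiply that floor by eight drives and the days-per-drive reality of ZFS resilvering, and the scale of the one-at-a-time route becomes clear.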
Replacing the old disks one by one, allowing the zpool to resilver after each swap, would certainly work. The time constraint (likely well over a week to carry out the whole eight-drive replacement with today's multi-terabyte drives) might not matter, as the whole idea of RAID is that the NAS remains usable (if noticeably slower) while resilvering.
So why not do that?
There’s one other constraint. This is a home server. It lives in the cellar. While situated away from most living spaces, it still makes its presence audibly felt, despite being configured to be as quiet as possible. The same is true of the backup server. They each also consume electricity.
Reducing the total number of disks from eight to three in each server would ameliorate both of these issues. Not only would that lower noise emissions and power consumption, it would also make future capacity expansion much easier.
So the plan I came up with was to forget about resilvering and instead start afresh. Create a new zpool with a smaller array of larger drives. Then I’d have to find some way of copying all the existing data to it.
To this end, I wondered if the storage industry might be interested in joining my adventure. They’ve been very helpful to Tested Technology in the past. So I dropped an email on the publication’s behalf to Seagate.
How this worked out is a story for the next instalment, but let’s just say it involves a 3D printer and a set of new, high-capacity disks.