UnRAID Revisited with the LincStation N1 (part 2)

While we wait for the major Unraid upgrade that will allow us to dispense with our pseudo Unraid array, we’ve been making a few cautious changes to the LincStation N1 installation we described in part 1. And although it remains without data redundancy, we’ve increasingly come to rely on it for media storage and deployment across the LAN.

This isn’t as stupid as it might sound. One of our key “cautious changes” is the implementation of Syncthing on the N1, which ensures that any data transferred to the Unraid machine gets copied across to the relative security of BTRFS on our Terramaster 4-bay F4-223. We could, of course, simply use the F4-223 directly for media deployment, as indeed we were doing before the N1 arrived. But there’s an excellent reason for bringing the N1 into the loop.

We’ve been looking at our electricity bill!


Tested Technology Towers recently became conscious that the cost of electricity has virtually doubled since the beginning of this decade. As a consequence, we’ve changed our minds about keeping our several NAS devices running 24/7. They’re now on a schedule: one of them comes up once a week, another just once a day in the late evening. One of them remains powered at all times, ready to deliver data whenever it’s needed. That last machine is the LincStation N1.

Once we’d hooked up our Meross smart plugs to all our NASes to check their power consumption, picking the N1 as the always-on choice was a no-brainer. Here are the relative standby power consumption figures, comparing the N1 to the F4-223:

          Approx Average   Daily Power   Daily Cost   Extrapolated Annual Cost
          (W)              (kWh)         (£)          (£)
N1        9                0.19          0.05         18.25
F4-223    30               0.69          0.20         73.00
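
For anyone who wants to check our sums, the arithmetic is simple enough. Here's a rough sketch; the per-kWh price is our assumption, roughly what the table implies, and the table's measured daily figure of 0.19kWh suggests the N1 actually averages a shade under 9W:

    # Rough arithmetic behind the table above (a sketch, not our metering script).
    WATTS=9        # approximate average standby draw of the N1
    PRICE=0.29     # assumed electricity price in £ per kWh
    DAILY_KWH=$(echo "scale=3; $WATTS * 24 / 1000" | bc)      # about 0.216 kWh
    DAILY_COST=$(echo "scale=3; $DAILY_KWH * $PRICE" | bc)    # about £0.06
    ANNUAL_COST=$(echo "scale=2; $DAILY_COST * 365" | bc)     # about £23
    echo "${DAILY_KWH} kWh/day, £${DAILY_COST}/day, £${ANNUAL_COST}/year"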

What We Will Miss about Unraid

A dedicated drive (or pair of drives) for parity checking: that was the single most attractive feature that originally drew Tested Technology to Unraid in 2019. A key point is that the drives in the array remain truly independent of one another. In ordinary operation a drive spins up only when it’s needed to read or write a file. That drive and the parity drive are the only drives that need to be spinning. There are just two occasions when all the drives are spinning: when the system is running a parity check* and when Unraid is reconstructing absent data for a removed or corrupt drive.

*Parity checking is an optional user-settable operation typically run out of hours to ensure that the parity is being maintained accurately.
Even the two parity drives, if you’re using two, are independent of one another. One isn’t a mirrored backup of the other; each is checking parity in a different way.

By avoiding the RAID strategy of striping data across multiple drives, Unraid offers a couple of important benefits. We’ve already mentioned that the uncomplicated way data are written to each drive makes it very simple to remove a drive and read the data off it in another system.

That’s a very valuable feature if the NAS enclosure hardware fails for some reason. In the course of ordinary operations, Unraid drives are like independent agents in a properly organised spy cell, each drive in its own domain, only active on demand, knowing nothing about any of the other drives in the array.

You can think of this classic Unraid array as a miniature version of the once-popular MAID storage design. The acronym stands for Massive Array of Idle Drives.

*A rather ungainly acronym for Write Once Read Occasionally.
This was designed for WORO* use cases where the data will be written once and subsequently read from time to time. This scenario, dominant in many enterprise situations, is also relevant to many consumer and small business NASes. A collection of photos or movies, for example, is built from individual files, each written once to the storage and then read back whenever it needs to be consulted.

The M for Massive in MAID storage can mean hundreds—or even thousands—of hard drives. The energy cost of keeping an array like that running would be nonsensical. That’s where the Idle part comes in. Most of the time they’re not running.

Unraid is comfortable with a dozen or so drives, which hardly qualifies it to be called MAID. But with the rising cost of electricity, even for Unraid-size arrays, running up the drives only on demand can make a lot of sense. The obvious downside is that data delivery is delayed as the drive comes up to speed. But Unraid is a flexible enough proposition to offer ways of speeding up delivery when that’s what you need.

Alas, no more…

We feel nostalgic about those previous paragraphs, knowing that our new LincStation N1 will probably never be using this original, economical and ingenious parity-checking technology. For reasons we explained in part one, an all-solid-state NAS like the N1 storing its data in NAND cells isn’t the proper place for bit-by-bit parity checking.

That’s OK, because Unraid—with the help of its underlying Linux operating system—has evolved to give us other ways to ensure the integrity of the data it stores. Yes, we’re back with the old-school RAID concepts we thought we were escaping. But the rich variety of apps and tools that has burgeoned around Unraid during its nearly quarter-century of existence carries over into the new era of BTRFS and ZFS array mechanisms. Unraid on SSD loses its unique selling proposition, but all those goodies remain.

Something for LincStation N1 Fans

At the end of April we received a rather odd email from the LincStation manufacturer, LincPlus. Advising us that a BIOS upgrade was now available for the N1, and including a link to the download with full instructions on how to install it, was industry good practice. According to the email, “this update focuses primarily on the fan speed control system, providing smarter, low-noise fan speed adjustments to adapt to varying temperature conditions for better heat dissipation.” The oddness was the offer of twenty dollars for sending back a screenshot of the successful update.

Our small Tested Technology team has a total of around one hundred years’ experience of the IT industry. During this time we’ve undertaken countless similar firmware and software upgrades. But never before has a manufacturer offered to pay for the time we put in doing the work.

The upgrade was simple enough. First we had to download the BIOS zip file from the link provided and copy its unzipped contents across to a USB stick formatted as FAT32 and named UTOBOOT. With this USB stick plugged into one of the three USB sockets on the N1, and a keyboard attached to a second USB socket, hitting the F11 key on boot while holding down the Shift key opened up a screen that automatically booted the system off UTOBOOT. The BIOS update then runs for a minute or two before restarting the system. If you have adeptly yanked out the UTOBOOT stick ahead of the restart, the hardware should now be loading Unraid. Job done.
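
If you’re preparing the stick from a Linux machine, that first step amounts to something like the following sketch. The device name /dev/sdX1 and the zip filename are placeholders, not LincPlus’s actual names; check the device carefully before formatting:

    # Prepare the UTOBOOT stick (a sketch; substitute your own device and filename).
    mkfs.vfat -F 32 -n UTOBOOT /dev/sdX1      # FAT32 with the volume label UTOBOOT
    mkdir -p /mnt/usb && mount /dev/sdX1 /mnt/usb
    unzip n1-bios-update.zip -d /mnt/usb      # unpack the downloaded BIOS zip onto the stick
    umount /mnt/usb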

An improved BIOS and twenty dollars in the kitty. Kerching!

Well, not quite. To claim the prize, LincPlus was asking for “a screenshot of the successful BIOS update, demonstrating the updated fan control settings”. We knew we’d managed the successful update because it showed up in the Unraid Dashboard. But the fan settings would be buried down in the BIOS and we had no instructions on how to get to the BIOS settings. And we could find no clue how to do that on the LincPlus website either.

There seems to be no standard for accessing the BIOS. At boot time you machine-gun one of the function keys, or perhaps Esc or Del, and hope for the best. We started with F12 and hit the jackpot first time.

Installing Syncthing

Can’t be done. Not on this LincStation N1 with the (dummy) main Unraid array powered down.

Syncthing is available for Unraid as a Docker—actually as a choice of two different Dockers, from the binhex repository and the linuxserver repository.

Dockers are installed by default on the main Unraid array. This is something that Lime Technology will need to change in the near future when the Unraid array is no longer “main”, gets relegated to the category of “User-defined Pools” and can optionally be left out altogether. Meanwhile, if we want to install a Docker like Syncthing we’re going to have to power up our dummy array. The version of Unraid we’re currently running (version 6.12.10) insists “No main array, no Dockers”.

But even so, we can’t install Syncthing on it. Our dummy array isn’t intended for use and is only 2GB in size. By default, the system creates four directories on this primary Unraid array: appdata, domains, isos and system. These are set up to house the Dockers that many of the apps, like Syncthing, deploy.

From the Unraid Manual
  • appdata – This is the default location for storing working files associated with docker containers. Typically there will be a sub-folder for each docker container.
  • system – This is the default location for storing the docker application binaries, and VM XML templates
  • domains – This is the default location for storing virtual disk images (vdisks) that are used by VMs.
  • isos – This is the default location for storing CD ISO images for use with VMs.

Docker is a technique that allows an application to be deployed inside an isolating container rather than being run directly using the resources of the operating system. This approach bears some similarity to virtual machines, with the difference that rather than duplicating all the operating system resources inside the container, a Docker container can remain lightweight by only including resources needed uniquely by the app it is running. Common resources, notably the operating system kernel, that can be relied on to be present, are accessed from outside the container.

This reliance on common resources means that, unlike virtual machines, which can be used to run a guest operating system and its associated applications on an entirely different host operating system (MacOS and Apple apps on a Windows machine, say), a Docker needs to be closely tailored to the operating system it will run on. But if you’re adapting apps to run on a particular operating system, like Unraid, Docker containers are an efficient and safe way to do this.
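
You can see that kernel-sharing for yourself from the Unraid command line, assuming Docker is enabled and the small Alpine image can be pulled:

    # A container has no kernel of its own: it reports the host's.
    uname -r                          # kernel version of the Unraid host
    docker run --rm alpine uname -r   # the same version, seen from inside a container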

Apps that run in a Docker container access files in a virtual environment unique to the container. These files can be made available to the host operating system by way of a map maintained in the Docker’s configuration. This map can be set up prior to installation by editing the configuration file presented by the Unraid encapsulation of the Docker. If you leave this configuration file alone, by default mappings will be set up to point to the main Unraid array.

This is what made the installation of Dockers on our original 2019 Unraid system such a simple procedure. Unraid’s original array scheme lets you install Docker-based apps from the Community Applications store without needing to know anything about Dockers. But as our new LincStation N1 all-SSD NAS has effectively dispensed with the original array scheme, we now needed to mug up on Dockers. 

The two relevant sections of the Binhex Syncthing Docker config look like this:

The upper mapping means that Syncthing’s media directory is linked by default to Unraid’s /mnt/user directory. The lower mapping is set to create the Syncthing config file where it can be located by the Unraid administrator (perhaps using the Unraid WebUI console) in /mnt/user/appdata/binhex-syncthing.
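
Expressed as Docker volume flags rather than the template’s Container Path / Host Path fields, those two mappings amount to roughly this (a sketch built from the paths just described):

    # Host path on the left, container path on the right.
    -v /mnt/user:/media                             # Syncthing's media directory
    -v /mnt/user/appdata/binhex-syncthing:/config   # Syncthing's config directory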

A couple of things to note about these maps. Firstly, /mnt/user (not to be confused with the /usr standard Unix system directory) is an artifact of the Unraid Linux implementation. In our case it’s actually the same directory as /mnt/disk1, which is Unraid’s main array (our 2GB dummy).

Our investigations into this left us baffled for an embarrassing length of time. They were definitely the same directory because a new subdirectory we created in /mnt/user instantly appeared in /mnt/disk1 but when we looked for symlinks or hard links we couldn’t find any. That was when we learnt about shfs.
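
That test, run from the Unraid console, went roughly like this (a sketch):

    # Create something in the aggregated view and watch it appear on the underlying disk.
    mkdir /mnt/user/testdir
    ls /mnt/disk1                   # testdir shows up here too
    ls -la /mnt/user /mnt/disk1     # and yet no symlinks in sight
    rmdir /mnt/user/testdir         # tidy up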

Preliminary research uncovered a little-known Linux utility called shfs that’s useful for mounting remote filesystems. It dates back to 2004 and is still in beta, so it’s an unlikely candidate.
Tested Technology doesn’t pretend to have an expert handle on Unix, one of the 20th century’s greatest inventions. But we’ve been writing about Unix-like operating systems since the mid-1990s and were surprised to discover this apparently large hole in our knowledge.

shfs is the power behind the unique way Unraid handles user shares. It turns out to be a command line utility called shfs in /usr/local/bin. This is a proprietary application, exclusive to Unraid and indeed one of its core pieces of technology. shfs creates and manages an Unraid-exclusive virtual file system called fuse.shfs.

ChatGPT gave us the low-down:

Primary Functions of shfs in Unraid:

  1. Unified Directory View: Aggregates directories and files from all disks in the array to create a single logical view.
  2. User Shares Management: Manages user-defined shares that can span multiple physical disks, providing flexibility in how data is stored and accessed.
  3. Dynamic Allocation: Handles file placement dynamically based on user share settings, including allocation methods, split levels, and disk inclusion/exclusion.

There’s a lot more to be said about shfs. But for present purposes all we need to know is that /mnt/user is a virtual directory that by default aggregates directories and files physically resident elsewhere. In this case, the “elsewhere” is our dummy Unraid array.
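
You can see the virtual file system from the command line (a sketch; mount details will vary with configuration):

    # /mnt/user isn't a real disk: it's a FUSE mount managed by shfs.
    df -T /mnt/user      # the Type column reports fuse.shfs
    mount -t fuse.shfs   # lists the shfs mounts on the system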

If we were to just go ahead and install binhex-syncthing using these default settings we’d risk overflowing our small USB stick, and in fact this is what happened on our first attempt. For our second try we switched to the linuxserver implementation of Syncthing. There seems to be little difference between the two, but the fact that the linuxserver version had been updated more recently was our decider.

The configuration page for the linuxserver version of Syncthing doesn’t have options filled in for the two mappings it offers by default. These are for a pair of Docker internal directories simply called data1 and data2. Strangely, there’s no suggestion as to what these directories might be for, but to be on the safe side we mapped them to /mnt/disks/OWC/data1 and /mnt/disks/OWC/data2.

It wasn’t until after we’d run the install, successfully and without incident, that we discovered we’d missed an important optional mapping, hidden in a dropdown labelled “Show more settings …”. This is the mapping for the Docker’s config directory and it’s prefigured to point to /mnt/user/appdata/syncthing. Thanks to shfs this lands the config directory on our dummy USB Unraid array.

For now, this poses no problem for the Syncthing installation: its config.xml is only a small text file. Of course, when the version of Unraid arrives that allows us to do away with this dummy USB, we’ll have to move the config directory elsewhere. But by then, hopefully, we’ll understand shfs a lot better.
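
In the meantime, it’s easy enough to keep an eye on it (a sketch; the exact contents will vary):

    # The config directory lands on the dummy array via the /mnt/user mapping.
    ls /mnt/user/appdata/syncthing/        # config.xml lives here, among other things
    du -sh /mnt/user/appdata/syncthing/    # small for now, so no pressure on the 2GB dummy array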

It’s already clear to us that rather than fiddling with these individual mappings for Syncthing’s Docker directories, shfs very likely offers a simpler way of changing the basic default, so that /mnt/user points elsewhere than the main Unraid array.

 
