You want a NAS device that is repairable, non-proprietary, reliable, energy efficient, cost efficient, and standalone.
I won't tell you why you want this. If you're reading this post, you've probably already decided that you want one, but don't know what you should build. This guide is written around building a device for home use. It reflects decades of experience, but is written to reflect reality at the time the document was written. If you are reading this post ten years into the future, the specific information about hardware and software may no longer apply, but the reasoning that chose the hardware and software in the first place is unlikely to go out of date any time soon (and has remained mostly consistent for the previous twenty years at least!).
It's important that the device be repairable. There are lots of different aspects that affect repairability, but I think the most important are:
It's critical that the device be non-proprietary. This ties into all of the other requirements here.
We can avoid proprietary software easily. We cannot avoid proprietary hardware so easily: We have, unfortunately, regressed as a society to the point that electrical schematics are no longer provided with computer hardware. The best we can do is use hardware that conforms to various published open standards and can therefore be freely interchanged with other conformant hardware.
Needless to say, a device that's going to be running 24/7 that is intended for long-term storage of data must be reliable. It must continue to work day after day, and require minimal maintenance. The system must guarantee the integrity of the data we store: Every single byte of data we store must be verifiably free of corruption for the entire lifetime of the device.
We'll aim for a lifespan of around twenty years. We must accept that hardware failures do happen, and build in a level of redundancy that matches our tolerance for failures (and build cost).
As the device is running 24/7, we want energy usage to be as low as we can reasonably get it. However, if lowering energy usage requires using strange or non-standard/non-commodity parts, we'll instead use the commodity parts and live with a slightly higher energy usage.
As this is a home NAS and won't be used as some kind of mission-critical business server, we don't want to spend obscene amounts of money on it.
You may not be the kind of questionable individual who has a 19" server rack in their home, and nor should you need to be. If you are that kind of individual, you've probably already built something more advanced than the system we'll build here.
As this is a home NAS, we'll try to build something that doesn't require much of anything in terms of other hardware running on the network.
There are some goals that we deliberately won't try to achieve.
It's possible to build a NAS where you can keep adding more storage over time. However, in my experience, it is better to:
Why do things this way? Firstly, storage technology improves quickly. The amount of storage per unit of currency you can buy today increases year by year. This means storage is essentially deflationary; tomorrow is generally always a better time to buy storage than today. Keeping this in mind, you might as well take the simpler option: Buy the minimum that you need right now and that you know will last you for a good long while, and plan to build a successor system ten or twenty years from now that will massively increase your storage capacity (and energy efficiency, if trends continue). When you build the successor, you can copy your old data to the new system, switch everything over to the new system, and then turn off the old system, without service disruption. Because we're taking care to ensure that everything is very generic, commodity, and non-proprietary, migrating the entirety of your data to the new system can be done in one single command.
Secondly, we'll be using software filesystems that are drastically easier to manage if you are working with arrays of drives where all of the individual drives are of the same size. At some point in the past, it was economical to buy 1TB hard drives. It is now no longer economical to buy drives smaller than 4TB. This means that drive sizes we consider commonplace now may not even be purchasable at some point in the future. Pick a size that makes sense right now and buy an array of those. Management simplicity is not to be undervalued!
We'll build a system that can run a few services (such as, perhaps, immich for a photo gallery), but we'll largely treat the system as a dumb container for data, so it won't need much in the way of CPU power or performance.
If you want to run lots of extra services, it's better to set up a separate machine specifically for those services. It's outside of the scope of this document, but you can make your storage accessible from the machine we'll build to other machines on your network.
Here's the general approach we'll take for the hardware:
For software:
x86_64 is the current de-facto standard architecture for servers and desktops.
Everything from the instruction set to the boot process is standardized and documented in
publicly-available documents. Machines built twenty years ago still largely
boot and operate the same way as machines built today. This is, sadly, at the
time of writing not the case for any other hardware architecture. In ten years
time, this may no longer be true, but it is true as of late 2025.
Additionally, x86_64 hardware almost inevitably involves
ATX
form factors, and so we will never have to struggle to find a computer case
into which our components will fit, and will never have to locate a power
supply with a non-standard connector.
AMD CPUs appear to be more cost effective than the competition and, importantly, AMD rarely releases new incompatible CPU socket types. This means that it is possible to buy compatible motherboards for many years beyond the release of each CPU.
In my experience, ASUS motherboards seem to use the most robust components and appear to have the best quality assurance. I have seen one ASUS motherboard fail in around twenty five years of purchases and deployments.
I have never seen a Seasonic power supply fail, and have never met anyone who has seen one fail. I have seen plenty of supplies fail from just about every other PSU vendor.
Update (2025-11-22): I'm defining "fail" here as "the power supply ran for some time and then stopped working". I have never seen a Seasonic power supply do this. I have had one Seasonic PSU show up with a broken fan, but I think this may have been damaged in shipping.
Silverstone have a range of non-rackmount cases that have hot-swappable drive bays for home NAS applications. Once you have used hot-swappable bays, you never want to go back to having to open a case to remove drives.
I remain unconvinced of the utility of ECC specifically for home use. Cosmic rays absolutely do flip bits in memory, and this is a real problem, but statistically this is incredibly unlikely to cause actual practical problems for a single device in the home. In a data center with hundreds of thousands of machines with terabytes of memory per node, ECC is justified. Supporting ECC memory would require buying much more expensive hardware, so we just won't.
Most cases that have hot-swappable drive bays will use
SAS connectors
(specifically SFF-8087). This is not a connector that most consumer
hardware users will ever see as it tends to only appear on server-class
hardware.
In order to connect these drive bays to a consumer motherboard, we need a host bus adapter card. These tend to look like this:

Note the two SFF-8087 connectors on the right hand side of the card. We
can connect two SAS cables from those connectors to the drive bays and
give ourselves (typically) eight drive connections. Additionally, this
can lead to big performance increases because drive I/O is performed over the
PCIe bus instead of through the motherboard's SATA hardware.
There is a thriving market in refurbished cast-off enterprise hardware. These cards can be found on eBay and Amazon for £20-£80. For our purposes, it almost doesn't matter which specific card we choose because they will all be sufficient.
At the time of writing, purely in terms of price-per-terabyte, traditional "spinning rust" hard drives are still far ahead of solid state storage. For our purposes, we don't care about performance, we care about our storage being large and cheap.
Additionally, whilst solid state storage is still considered to be more reliable than traditional hard drives, the failure modes tend to be less desirable. In my experience, when solid state drives fail, they simply go from working perfectly to completely dead with no prior warning. Whilst traditional hard drives may fail more often, they tend to fail gracefully and let you know ahead of time that they're in the process of dying.
We'll use a single solid state drive to boot the operating system, but we'll use magnetic drives for our actual storage. We'll mitigate reliability concerns with redundant drives.
Computers generally don't like being subjected to voltage spikes or voltage drops. Magnetic drives, despite being vastly more robust nowadays, generally don't like having their electrical supply abruptly removed.
A common desktop uninterruptible power supply should be used to clean the supply of power to the computer, and to give you a good ten minutes to manually shut the computer down if your power goes out completely.
We'll use Linux because it's ubiquitous and vendor-neutral, and we'll stick to very boring and stability-focused distributions of Linux. Because we are using very generic and commodity hardware, and because we don't need the latest sparkly features, we don't need to use a Linux distribution that represents the bleeding edge of Linux development. We want reliable, stable, and boring.
We could use a NAS-focused distribution such as TrueNAS, but that would be tying our system to the fate of one company, and would make the software we're running less common and less "standard". Typically, anyone familiar with Linux can sit down at any Debian or RHEL system and be competent enough to diagnose and fix issues. For the kind of long-term support we need, RHEL, in particular, guarantees ten years of software support for each release.
My current choice for these kinds of systems is AlmaLinux (a 100%-compatible RHEL derivative), but Debian Stable is also a good choice.
There is simply no other filesystem, at the time of writing, that even comes close to competing with ZFS in terms of reliability, data integrity, and administrative ease. ZFS is widely supported on multiple operating systems, and is used in everything from small deployments to truly huge supercomputer deployments at places such as Lawrence Livermore National Laboratory.
Critically, ZFS contains integrity features that transparently detect and automatically correct data corruption. It is the only filesystem that I trust for the long-term storage of data.
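As a concrete illustration (the pool name here is hypothetical), exercising those integrity features is a single command: a scrub walks every block in the pool, verifies its checksum, and repairs any corrupted copies from redundant data.

```shell
# Ask ZFS to verify every checksum in the pool named "storage".
# Corrupted blocks are repaired automatically from redundancy.
zpool scrub storage

# Review the results; a healthy pool reports a scrub that
# repaired nothing and found no errors.
zpool status storage
```

Many administrators run a scrub from cron weekly or monthly, so corruption is caught long before it has a chance to accumulate.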
One small issue can arise with ZFS depending on which Linux distribution you're using. ZFS is completely open-source, non-proprietary, and is considered "free software". The source code is, however, distributed under a license that is considered incompatible with the license used for the Linux kernel. Therefore, ZFS is maintained outside of the Linux kernel project. Why is this an issue? Because ZFS is maintained externally, it inevitably slightly lags behind the current Linux kernel. Therefore, if you're using a very bleeding-edge Linux kernel, there's always a chance that ZFS won't (yet) be compatible with it. You can reboot the system after an upgrade and find that you can't load your ZFS filesystems. This makes unattended upgrades something that you just can't have. This problem goes away entirely if you are using a stability-focused distribution.
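If you do find yourself on a faster-moving distribution, one hedge (an illustration of one approach, not the only one) is to hold kernel updates back until you've confirmed the ZFS module builds against the new kernel:

```shell
# On a RHEL-family system, exclude kernel packages from routine
# "dnf upgrade" runs until ZFS compatibility is confirmed.
# (The "dnf versionlock" plugin is an alternative approach.)
grep -q '^exclude=kernel\*' /etc/dnf/dnf.conf ||
    echo 'exclude=kernel*' | sudo tee -a /etc/dnf/dnf.conf
```

On a stability-focused distribution none of this is necessary, which is part of the argument for using one.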
This is the part of the build that I can't be too specific about because the actual numbers involved will be based on your needs. As mentioned, you'll need to work out what your ten year storage requirements are and buy accordingly. I have a post from last year that examines drive pricing. You also need to decide how tolerant the system must be to drive failures.
Ultimately, systems like ZFS (and the hardware-based RAID systems that preceded it) aim to do one main thing: Take a set of drives, and present the illusion that they're really one large drive. Additional concerns include "spread all the drive I/O over all drives so that we can read and write data much more quickly than we could if we had a single drive as the bottleneck" and "make it so that if one or more drives fail, we can replace them and carry on without losing data".
Disks inevitably fail. A filesystem such as ZFS, depending on how it is configured, is capable of continuing to run without interruption even when drives fail. The main choice you must make when planning an array of drives is "how many drives must fail simultaneously for data loss to occur?"
ZFS takes drives and combines them into a storage pool. We have the freedom to decide exactly how the drives are combined, and the choices we make change how the storage pool performs, and how much failure it can tolerate before breaking down. Depending on how we choose to configure things, we are essentially betting against the probability of a specific set of failure conditions occurring. We can increase the safety of the system, and this will generally require us to trade against performance, or trade against the total usable storage size of the pool.
Let's assume, for the sake of example, that we have six drives. The most commonly seen configurations for storage pools of six drives include...
The simplest and most dangerous configuration is known as striping in traditional RAID terminology:

We have a storage pool called Storage, and inside that pool we have
six drives. Any data written to the pool is distributed amongst the six
drives. This gives us very high write performance; the underlying system
can write data to any drive that's ready to receive it, essentially multiplying
our effective bandwidth by 6x. Additionally, we get to use every single byte of
every drive for our storage. This means that if we bought 36TB of storage, we
get to use all 36TB for our data.
Unfortunately, this comes at a cost. If one single drive fails, we lose all of the data in the pool.

Obviously, for long-term data storage, this kind of configuration isn't suitable. This kind of setup might be used in situations where we care about performance above all else, or where the data is purely temporary and where we can completely destroy and create the storage pool anew.
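For reference, a striped pool like this is what you get when you hand zpool a plain list of disks. The device names below are placeholders; real deployments should use stable /dev/disk/by-id/ paths.

```shell
# Create a six-drive striped pool named "storage".
# No redundancy: losing any one drive loses the whole pool.
zpool create storage sda sdb sdc sdd sde sdf
```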
Another kind of pool configuration we could use involves mirroring:

We divide the six drives up into pairs of drives, and we join together each pair such that data written to one is automatically mirrored onto its partner. If one partner of a pair fails, there's no problem! The system can continue functioning. ZFS will tell you that the drive has failed, and will tell you that you need to replace it, but everything will continue working anyway. In fact, the system would keep working even if a partner from each pair failed:

In this particular configuration, you can see that we effectively spend half of our storage on redundancy. If our six drives add up to 36TB of space, we only get to use 18TB of that space for our data, as the other 18TB is spent on mirrored copies of data.
However, if a single pair fails, we are back to the same situation as pure striping: We lose all of the data in the pool.

This doesn't sound particularly good, but consider that hard drive failure rates are extremely low. Backblaze regularly publishes drive failure rates from their datacenters. Annualized failure rates for most drives are in the low single-digit percentages. Generally, if you have a hundred drives, you might see one or two fail per year. Consider also that we're not betting that no two drives will fail at the same time; we're actually betting that two specific drives in the same pair won't fail at the same time, which is far less likely.
It's also possible, if you have unlimited funds and an extreme aversion to risk, to create triplets or quadruplets (and beyond) of drives so that three or four drives in the same group have to fail at the same time for there to be any data loss. I've never actually seen anyone do this, as most people aren't willing to buy three or four times the storage that they need and spend it on redundancy.
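In zpool terms, the three-pair layout described above is written as a list of mirror groups (placeholder device names again):

```shell
# Three two-way mirrors in one pool: each pair can lose one drive.
zpool create storage \
    mirror sda sdb \
    mirror sdc sdd \
    mirror sde sdf

# A three-way mirror group, for the extremely risk-averse:
# zpool create storage mirror sda sdb sdc
```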
The last configuration supported by ZFS falls somewhere between the above
two configurations in terms of safety, performance, and cost. We can configure
a storage pool in so-called raidz1 or raidz2 mode. In raidz1, one drive's
worth of storage space is used as "parity" data that can be used to recover
from the failure of one drive. This gives us a configuration that looks
somewhat like striping (although it doesn't have the same performance
characteristics), but the pool can tolerate the failure of any one drive
in the pool. A failed drive will simply degrade the pool, but the pool will
keep working. Replacing the failed drive will restore the pool to full
performance.

In a six drive array, this means we get to use (5 / 6) * 100 = ~83% of our
storage pool for our data, with one drive's worth of storage being used for
recovery. If any two drives fail simultaneously, the pool fails:

Accordingly, raidz2 simply uses two drives' worth of storage for
recovery, and so the pool will only fail if any three drives fail
simultaneously:


This obviously allows us to use (4 / 6) * 100 = ~66% of our storage pool
for data, with two drives' worth of storage used for recovery.
Of course, raidz3 uses three disks' worth of storage for recovery, and can
tolerate any three disks failing...
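The capacity arithmetic generalizes: with N drives and P drives' worth of parity, you keep (N - P) / N of the raw space. A quick sketch:

```shell
# Usable space for a raidz pool, ignoring filesystem overhead.
drives=6       # drives in the pool
parity=2       # raidz2
size_tb=6      # capacity per drive, in TB

echo "raw:    $(( drives * size_tb )) TB"
echo "usable: $(( (drives - parity) * size_tb )) TB"
```

With six 6TB drives in raidz2, that works out to 24TB usable out of 36TB raw.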
For long-term data storage, I have tended to use raidz2. I feel it balances
safety with allowing me to use more than half of the storage that I paid for.
I might get unlucky and have a second drive fail before I replace the first
one, but it seems very unlikely that I'd allow a third to fail. If I had
unlimited funds to throw at storage, I would use mirroring everywhere.
For small arrays of three drives, I use raidz1.
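Creating a raidz2 pool over six drives is a one-liner (placeholder device names; prefer stable /dev/disk/by-id/ paths in practice):

```shell
# Six drives, any two of which may fail without data loss.
zpool create storage raidz2 sda sdb sdc sdd sde sdf

# Confirm the layout and health of the new pool.
zpool status storage
```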
There are more complex configurations possible if you are using much larger numbers of drives, but that's not going to happen in a home NAS setup.
Repeating some common wisdom here: When you buy drives, try to buy drives from a mix of manufacturers. This reduces the chances that you manage to buy two drives from a batch that had a manufacturing defect. If you do end up having to buy multiple drives from a single manufacturer (which you will if you're buying six or more drives), and you're using mirroring for pairs, try to ensure that each drive ends up in a different pair.
I'd assume the following: raidz2, unless you have money to burn.

We need a low-end AMD-compatible ASUS motherboard with at least one PCIe slot. We need an M.2 slot for an NVMe drive to boot the operating system. We need support for 64GB of RAM. We need a low-end AMD CPU that has an embedded GPU so that we don't have to use an external graphics card.
There are many components that would fulfill these requirements, but here's an example list pulled from Scan at the time of writing:
| Item | Price |
|---|---|
| ASUS ROG Strix B550-F | £149.99 |
| AMD Ryzen 3 3200G | £52.49 |
| 64GB Corsair RAM | £150.00 |
| 650W Seasonic PSU | £99.98 |
| 500GB WD NVMe | £47.99 |
| **Total** | £500.45 |
Note: We are late in the lifecycle of DDR4 memory. If you are concerned about this, simply pick a low-end ASUS motherboard with an AM5 socket instead, and pair it with DDR5 memory and an appropriate low-end AMD Ryzen CPU with an embedded GPU.
We also need 6-8 drives that will vary depending on our storage needs. In 2024, 6x8TB drives cost around £1000. That price drops month-by-month, and the density of drives increases.
We also need a SAS host bus adapter card. eBay has piles of these. Most of the cards are refurbished cards from LSI. They are typically labelled along the lines of:
- 9300-8i SAS LSI HBA - IT Mode NAS Unraid ZFS TrueNAS Proxmox JBOD - NO BIOS
- LSI 9300-16i SAS HBA Card IT Mode NO BIOS Unraid ZFS TrueNAS Proxmox JBOD Server
- LSI 9300-8i SAS HBA Card - IT Mode NAS Unraid ZFS TrueNAS Proxmox JBOD - NO BIOS
Here's an example:
It is difficult to go wrong when buying these cards, but use your best judgement: The above examples make it clear that the cards are intended for use in some kind of home NAS (hence all the keywords such as TrueNAS, Proxmox, JBOD, ZFS, etc.). The cards will have been flashed with basic "IT mode" firmware that is essentially designed to offload all of the work to a software system such as ZFS.
Finally, a Silverstone case. Something like the CS383 is ideal for this kind of application. This particular case has SATA connectors in the drive bays, and so it will require a breakout cable to get from the SAS connectors on the PCI card to the SATA connectors on the drive bays. Other cases usually expose a backplane that has SAS connectors on it, so that will just require a standard SAS cable to get from the card to the backplane.
We should also use an uninterruptible power supply. I like Eaton units, but they are not cheap. APC units are tolerable, but APC have introduced the anti-consumer practice of making it impossible to change the battery without voiding the warranty. It is entirely possible to change the battery, but you have to open the casing to do it. Eaton units consistently give you a battery slot that makes changing the battery trivial. The batteries are simple lead-acid batteries and so will typically last over a decade. The computer we've described here will never draw more than 200W and will frequently be nowhere near that value. Any common uninterruptible power supply for desktop computers will "work".
Get AlmaLinux or Debian Stable and install it onto the internal NVMe.
Then, follow the OpenZFS installation guide. It's typically a couple of commands to install the ZFS kernel modules.
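On AlmaLinux, the steps look roughly like the following; treat this as a sketch and follow the OpenZFS guide for the current repository package and kernel-module flavour (DKMS versus kABI-tracking):

```shell
# Sketch only: see the OpenZFS documentation for the current
# zfs-release repository package for your distribution version.
sudo dnf install -y epel-release
sudo dnf install -y zfs        # after enabling the OpenZFS repository

# Load the module and verify that it is present.
sudo modprobe zfs
lsmod | grep zfs
```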
A full guide to ZFS is outside of the scope of this documentation, but the existing manual is excellent, and there's a ton of information online on setting up storage pools. Once your pool is up and running, that's essentially the work completed. It's up to you how you decide to organize files in filesystems created in the pool.
You can copy files directly onto the machine using standard tools such as
ssh or sftp, or set up Samba to allow for backing
up over Windows file sharing.
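For example, pushing a directory to the machine over SSH looks like this (the hostname and paths are hypothetical):

```shell
# Copy a local directory into a filesystem in the pool over SSH.
# -a preserves permissions and timestamps; -v lists files as they copy.
rsync -av ~/photos/ nas.local:/storage/photos/
```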
Set up ZFS snapshots to
protect daily backups and allow you to efficiently go back in time to any
previous version of your data...
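Snapshots are instant and initially occupy no space, so a nightly cron job along these lines (the dataset name is hypothetical) is usually all you need:

```shell
# Take a dated snapshot of the "storage/photos" dataset.
zfs snapshot "storage/photos@$(date +%Y-%m-%d)"

# List snapshots; roll a dataset back to an earlier state if needed.
zfs list -t snapshot
# zfs rollback storage/photos@2025-11-01
```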
Print QR codes onto sticky labels that you can place onto the drawers of the drive bays. The QR code should contain the manufacturer, model number, and serial number of the drive that's in the bay. Then, when ZFS tells you that a drive has failed and you need to replace it, you know exactly which drive bay you need to open.
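The labels can be generated mechanically. Here's a sketch using smartmontools and qrencode (both widely packaged); the device list and output filenames are illustrative:

```shell
# For each drive, encode the model and serial number into a PNG
# QR code suitable for printing onto a sticky label.
for dev in /dev/sd{a..f}; do
    info=$(sudo smartctl -i "$dev" |
           awk -F': +' '/Model Family|Device Model|Serial Number/ {print $2}')
    qrencode -o "label-$(basename "$dev").png" "$info"
done
```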

When it breaks, you can't fix it. It will come with a proprietary operating system. When it comes time to get your data off of it, you'll find you can't. The manufacturers will require the use of "certified" drives that just so happen to cost about four times as much as standard drives whilst being functionally identical. You are a helpless captive consumer and your data means nothing.
You'll pay a minimum of £4 per terabyte per month in perpetuity for the privilege of storing your data on someone else's computer. When it comes time for you to download or move that data, you'll pay for the privilege of doing that too. When their system is compromised, your data goes with it (which can be mitigated somewhat by end-to-end encryption). If the cloud provider goes out of business, or just decides that it can't be bothered to offer cloud storage anymore, your data is gone.
If you want to store around 36TB of data, that's £1728 per year. The hardware described here may well cost less than £1728 (depending on where you are in the world) and is a one-time fee (or perhaps once a decade), and you can access it as quickly as your LAN will allow for no cost.
I do recommend backing up some of your data to Hetzner Object Storage using rclone with end-to-end encryption as an extra off-site backup for peace of mind.
Drives that are not powered up regularly tend to rot. A drive kept powered off in storage is not being checked for data corruption, and the corruption cannot be detected at a later date unless you've religiously stored checksums with all of your data. Corruption that is detected cannot be repaired. Flash cells degrade without regular supplies of power. Additionally, mechanical drives have parts that seize up over time when not used regularly. Your data may be in pristine condition on the drive, but that's poor consolation if the drive won't physically rotate anymore. Anything not actively managed succumbs to entropy.
Additionally: You should be taking daily backups. There is no excuse for not taking daily backups. If performing a backup takes anything more than typing the word "backup", you will simply not do daily backups, and having to take out and plug in a drive counts as more than that.
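In that spirit, "typing the word backup" can be made literal with a tiny script on your workstation (the paths, hostname, and dataset below are hypothetical):

```shell
#!/bin/sh
# Save as "backup" somewhere on your PATH and mark it executable.
# Pushes the important directories to the NAS, then snapshots them.
set -eu

rsync -a ~/documents ~/photos nas.local:/storage/backup/
ssh nas.local "zfs snapshot storage/backup@\$(date +%Y-%m-%d)"
echo "backup complete"
```

Once the friction is down to one word, daily backups actually happen.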