Where my data lives: storage choices for Project Quiet Ground

I decided to write this because it’s one of the fundamental blocks that led me to shape Project Quiet Ground the way it is. Some of this has appeared in bits and pieces across previous posts, but it has never been consolidated into a single one.

It’s no secret that I always planned for disaster recovery. The reasons why I am so obsessed with it are quite personal, and I’m not ready to share them fully yet, although in a previous post I mentioned that in my life I learnt not to rely on systems and on people.

I was always very wary of losing data, but it was never as important as when I started being a consultant on the move. I always use the phrase “the backpack is my office, the suitcase my home”. It happened that my laptop broke a couple of times during my trips. What I was planning for was damn real.

In the early days I was an employee with limited choice on the hardware: ThinkPads at IBM and Red Hat, Toshiba when I was at Sun Microsystems. So Linux was the obvious choice (better than Windows after all!).

But after many years of Linux on my laptop, in 2010, when I started my endeavour as a freelancer, I chose Apple, macOS on a MacBook Pro in particular, for two reasons at the time.

The first one was the worldwide hardware support: you could fix your laptop anywhere in the world by just walking into any Apple Store. I have to say, that is quite unique, and something no other laptop vendor provides even today. And yes, it happened twice, in Zurich and in London.

The other one was Time Machine. A real drama-free backup and restore: just plug in one or more disks and back your system up. And in the worst-case scenario, you boot from the disk and restore the entire system.

I kept two USB disks in two different locations for the Time Machine backups, as longer-term machine snapshots, and later added Dropbox for a daily backup of the actual data files while on the move.

That’s also where I learned to design my software for resilience, but I’ll save how I designed my personal accounting software for another post.

That lasted until 2023, when I was living between London and Milan and decided to give Linux on the desktop another try. I was quite upset to have spent 5000 Euros on a machine with an endemic butterfly keyboard issue that had already been fixed twice under AppleCare (and now needs further repair!).

That was when my connection introduced me back to FreeBSD, and when I started dreaming about building a workstation with FreeBSD, similar to the way she was using it as a daily driver. Long story short, I wasn’t, and still am not, ready to use FreeBSD full time as a daily driver.

But as I got reacquainted with FreeBSD and with ZFS (which I had actually used back in my days at Sun Microsystems), especially when I rebuilt my NAS on FreeBSD and ZFS, I really came to value ZFS snapshots and, most of all, ZFS send/receive to copy data between systems and locations.

Gosh, moving my data archive to ZFS was eye opening. Online and offline backup was as easy as that: a self-contained “capsule” with my data.
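To make that pattern concrete, here is a minimal sketch (the pool, dataset and host names are hypothetical placeholders, not my actual layout, and the commands obviously assume a live ZFS system):

```shell
# Take a point-in-time snapshot of the archive dataset.
zfs snapshot tank/archive@2026-04-08

# Replicate it to another machine in a single pipeline
# (-u keeps the received dataset unmounted on the target).
zfs send tank/archive@2026-04-08 | ssh backup-host zfs receive -u backup/archive

# Later rounds only ship the delta against the previous snapshot:
zfs send -i tank/archive@previous tank/archive@2026-04-08 | ssh backup-host zfs receive backup/archive
```

That one pipeline is the whole “capsule” story: the dataset, its properties and its history travel together.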

But the true importance of that didn’t occur to me until I dug back into IBM i in recent months. One of the things I appreciated the most about that system (but also AIX, even if not to the same extent) is SAVLIB and SAVSYS. You insert a tape, run SAVSYS, and you have a full, drama-free bootable tape from which you can restore the entire system.

Does it ring a bell? Look at the pattern here. Different systems, same idea: keep data and state together, so they can be restored as a whole. Except for the medium and the operating system, the approach is very similar to Apple’s Time Machine.

And what about SAVLIB? You just save one or more libraries to tape, and you can restore both the applications and data at the same time. A sort of self-contained, isolated capsule that holds your core data.

At the same time, and it has been in the back of my mind since my connection and I experimented with NanoPIs, I wanted to build an “emergency datacenter in a box”: a Pelican case holding minimal equipment, like a beefy mini PC (e.g. a Minisforum) and a switch with Wi-Fi, from which a small business could rebuild their applications and data if the main building went bananas, even if with degraded performance.

This will probably trigger a lot of questions around modern applications, and you might recall my previous posts about my data-first approach. I might have scattered thoughts, but they all come together like a jigsaw puzzle and make sense in my weird, geeky brain.

But back on track: how could that be possible? Yes, at this point you’ve probably spotted it: ZFS. If you design an application that is self-contained, along with its data, you can use the same approach as IBM i with SAVLIB. You just export the ZFS dataset periodically, and rebuilding becomes as easy as restoring the dataset and restarting the applications.
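The SAVLIB analogy translates almost literally into ZFS terms. Assuming an application and its data live together in one dataset (the name below is hypothetical), the periodic export is a single stream file you can park anywhere, tape included:

```shell
# Snapshot the self-contained "library": the app and its data together.
zfs snapshot tank/apps/ledger@weekly

# Export the whole capsule as one stream file -- the ZFS cousin of
# SAVLIB to tape: a single self-contained saved object.
zfs send tank/apps/ledger@weekly > /backup/ledger-weekly.zfs

# Rebuilding elsewhere is the mirror operation:
# zfs receive tank/apps/ledger < /backup/ledger-weekly.zfs
```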

For the record, the “datacenter in a box” is still on my to-do list.
Did I mention I am slow on everything? 😉

Bringing this back to my own setup. After this chain of thoughts, and despite not being able to live on a FreeBSD workstation as I wished, I understood that if I could create a capsule of self-sufficient data and related applications, then in a disaster-recovery scenario all I would have to do is buy a new workstation, mini PC or server, install FreeBSD, restore the home directory dataset, and run an Ansible playbook (or a script) to install the standard applications that come with ports. Full stop.
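Sketched as commands, the recovery sequence is two steps (the host, dataset and playbook names are placeholders for whatever the real setup uses):

```shell
# On the freshly installed FreeBSD machine:

# 1. Pull the home directory dataset back from the backup location
#    (-u: receive without mounting, so nothing clashes with /home yet).
ssh backup-host zfs send backup/home@latest | zfs receive -u zroot/home

# 2. Reinstall the standard applications from ports.
ansible-playbook -i inventory.ini workstation.yml
```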

Mail, chat client, sc-im with its standard spreadsheet data, custom scripts and COBOL applications, to name a few: they all just work out of the box.

Damn simple but ingenious, right?

For the record, this is how my home directory on my main NAS is saved.

So, when I started thinking about Project Quiet Ground, that was perhaps the main requirement: a drama-free backup and restore for my personal midrange environment.

I immediately saw two utilisation patterns.

One is personal use, where everything stays confined to my home directory. In that case, the ZFS snapshots and send/receive I am already implementing are sufficient to back up my entire world. This is the current case.

But I also saw potential use for small workgroups, where Project Quiet Ground can sit in a multi-user environment. Even if that’s not the case yet, backup and restore would follow the same pattern as the local installation: just zfs send/receive the “/var/midrange” dataset.

Look at this from the perspective of my “datacenter in a box” idea: plug the USB disk, import, send/receive the ZFS dataset, all done. And in a more rack-mounted server scenario, like my colocation, that would be the same boring procedure.
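In that scenario the whole procedure really is a handful of commands. A sketch, with hypothetical pool and dataset names:

```shell
# Plug in the USB disk and import the pool it carries.
zpool import rescue

# Pull the midrange dataset off the disk onto the local pool.
zfs send rescue/midrange@latest | zfs receive zroot/var/midrange

# Cleanly release the disk when done.
zpool export rescue
```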

And, believe it or not, ZFS can behave quite similarly to IBM i’s ASP (Auxiliary Storage Pool), so that (in theory) I can move selected libraries to more efficient storage. There’s a lot more to say about ZFS, including snapshots and data retention, but I don’t believe this post is the right place to go further into detail.
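The ASP analogy could look something like this: with one dataset per library, relocating a library to a faster pool is a send/receive plus a mountpoint change. This is a sketch under assumed names (the pools and the PAYROLL library are invented for illustration), not my actual procedure:

```shell
# Snapshot the library's dataset on the slow pool...
zfs snapshot tank/midrange/PAYROLL@move

# ...copy it to a pool backed by faster disks...
zfs send tank/midrange/PAYROLL@move | zfs receive -u fast/midrange/PAYROLL

# ...point the library's path at the new copy, and drop the old one.
zfs set mountpoint=/var/midrange/PAYROLL fast/midrange/PAYROLL
zfs destroy -r tank/midrange/PAYROLL
```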

This is also what led me to structure the system around libraries mapped to datasets, separating personal and shared environments.

It’s not theory. It’s already happening in the first scenario.

This is the reason why my initial thoughts around Project Quiet Ground were based on FreeBSD and ZFS. But my life has changed, and I am adapting, and so are the requirements of the project. Linux, along with my PursePC, also came into the mix.

As I stated in the prequel, this is a project meant to fit my life and not meant for others. Perhaps this makes sense for very few, or even for no one at all, and that’s ok.

Project Quiet Ground is an ongoing project. Nothing is fixed, and it is shaped by my daily usage. But I wanted to give you some background on the choices I am making for the project.

And yes, I am still dreaming of a cabin in the woods and a FreeBSD workstation. Still imagining the scent of coffee and cinnamon buns in the morning that will never be.

2026-04-08