I have a home server that I’m using to host files. I’m worried about it breaking and losing access to the files. So what method do you use to back up everything?
On hope
The “small to medium business” route I see!
And using the fact that raid is a backup!
This guy is rawdogging his RPi, just like me
Me too! Actual servers are docker-compose which is on git but the data…yeah that’s on hope hahaha
This is the way.
Backblaze, on a B2 account. $0.005 per GB: you pay for the storage you use, and you pay when you need to download your backup.
On my TrueNAS server, it’s easy as pie to set up and easy as 🥧 to restore a backup when needed.
I’ll add to this that restic works amazingly with Backblaze. Plus a dozen or so other backup options.
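For anyone curious what that looks like in practice, here’s a minimal sketch of restic pointed at a B2 bucket; the bucket name, paths and retention are made-up placeholders, not anyone’s actual setup:

```bash
#!/usr/bin/env bash
# Nightly restic backup to a Backblaze B2 bucket (placeholder names throughout).
set -euo pipefail

export B2_ACCOUNT_ID="your-key-id"            # B2 application key ID
export B2_ACCOUNT_KEY="your-application-key"  # B2 application key
export RESTIC_REPOSITORY="b2:my-backup-bucket:homeserver"
export RESTIC_PASSWORD_FILE="/root/.restic-pass"

# One-time setup: restic init

restic backup /srv/data /etc             # deduplicated, encrypted upload
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```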
B2 is awesome. I have Duplicati set up on OpenMediaVault to back up my OS nightly to B2 (as well as a local copy to the HDD).
I also recommend B2, it’s an S3 compatible service so any backup software/scripts/plugins that work with S3 should work with Backblaze.
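For example, something along these lines with rclone, treating B2 as a generic S3 target; the bucket, keys and region here are placeholders and your endpoint will differ:

```bash
# Add an S3-style remote that points at B2's S3-compatible endpoint.
cat >> ~/.config/rclone/rclone.conf <<'EOF'
[b2s3]
type = s3
provider = Other
access_key_id = YOUR_KEY_ID
secret_access_key = YOUR_APPLICATION_KEY
endpoint = s3.us-west-004.backblazeb2.com
EOF

# Then any rclone workflow works against it like plain S3.
rclone sync /srv/data b2s3:my-backup-bucket/homeserver --progress
```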
Maybe I’m stupid, but what is B2? A Backblaze product?
Yes it’s their cloud storage.
I didn’t realize they did anything other than that!
You guys back up your server?
In the 20 years that I’ve been running a home server I’ve never had anything more than a failed disk in the array, which didn’t cause any data loss.
I do have backups, since it’s good practice, and also because it keeps me familiar with the software and processes as they change and update, so my skill set is always fresh for work purposes.
If your data is replaceable, there’s not much point unless it’s a long wait or high cost to get it back. It’s why I don’t have many backups.
I am lucky enough to have a second physical location to store a second computer, with effectively free internet access (as long as the data volume is low, under about 1 TB/month).
I use the ZFS file system for my storage pool, so backups are as easy as a few commands in a script, triggered every few hours, that takes a ZFS snapshot and tosses it to my second computer via SSH.
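Roughly this kind of script, assuming made-up pool/dataset names and the remote receive target, with an incremental send whenever a previous snapshot exists:

```bash
#!/usr/bin/env bash
# Snapshot a dataset and ship it to a second box over SSH (placeholder names).
set -euo pipefail

DATASET="tank/data"
REMOTE="backup-host"

# Most recent existing snapshot of this dataset, if any.
PREV=$(zfs list -H -t snapshot -d 1 -o name -s creation "$DATASET" | tail -n 1)
NOW="${DATASET}@auto-$(date +%Y%m%d-%H%M)"

zfs snapshot "$NOW"

if [ -n "$PREV" ]; then
  # Incremental send: only the blocks changed since the previous snapshot.
  zfs send -i "$PREV" "$NOW" | ssh "$REMOTE" zfs receive -F tank/backup/data
else
  # First run: full send.
  zfs send "$NOW" | ssh "$REMOTE" zfs receive tank/backup/data
fi
```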
So what method do you use to back up everything?
Depends on what OS that server is running. Windows, Unraid, Linux, NAS (like Synology or QNAP), etc.
There are a bazillion different ways to back up your data but it almost always starts with “how is your data being hosted/served?”
I run everything in Docker. I have an Ansible playbook that backs up all the Docker volumes to a MinIO server I’m running on a separate machine. I periodically upload backups to IDrive e2 with the same playbook.
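The rough shell equivalent of what a playbook like that might do for a single volume (the volume, bucket and MinIO alias names are invented for illustration):

```bash
#!/usr/bin/env bash
# Tar a named Docker volume and push it to a MinIO server (placeholder names).
set -euo pipefail

VOLUME="nextcloud_data"
STAMP=$(date +%Y%m%d)

# Archive the volume from a throwaway container, so it works no matter
# which image normally owns the volume.
docker run --rm -v "${VOLUME}:/data:ro" -v /srv/backups:/backup alpine \
  tar czf "/backup/${VOLUME}-${STAMP}.tar.gz" -C /data .

# Upload to the MinIO box (alias configured once with `mc alias set`).
mc cp "/srv/backups/${VOLUME}-${STAMP}.tar.gz" "minio/backups/${VOLUME}/"
```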
I use Duplicati and back up my server to both another PC and the cloud. Unlike a lot of data hoarders, I take a pretty minimalist approach, only backing up core (mostly Docker) configs and the OS installation.
I have media lists, but to me all that content is ephemeral and easily re-acquired, so I don’t include it.
Duplicati is great in many ways, but it’s still considered beta by its developers. I would not trust it if the data you back up is extremely important to you.
Borgbackup
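For anyone who hasn’t tried it, the typical workflow is only a few commands; the repo path and retention below are placeholders:

```bash
# One-time: create an encrypted repository.
borg init --encryption=repokey /mnt/backup/borg-repo

# Deduplicated, compressed archive named after the host and timestamp.
borg create --stats /mnt/backup/borg-repo::'{hostname}-{now}' /srv/data /etc

# Thin out old archives.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg-repo
```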
ITT: lots of the usual paranoid overkill. If you do rsync with the --backup switch to a remote box or a VPS, that will cover all bases in the real world. The probability of losing anything is close to 0.
The more serious risk is discovering that something broke 3 weeks ago and the backups were not happening. So you need to make sure you are getting some kind of notification when the script completes successfully.
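A minimal sketch of that setup, assuming a healthchecks.io-style ping URL for the success notification; the remote host, paths and check UUID are placeholders, so swap in whatever alerting you already use:

```bash
#!/usr/bin/env bash
# rsync with --backup to a remote box, plus a success ping so you notice
# when backups silently stop running.
set -euo pipefail

rsync -a --delete \
      --backup --backup-dir="changed-$(date +%Y%m%d)" \
      /srv/data/ backupbox:/backups/homeserver/

# Only reached if rsync exited 0 (thanks to `set -e`).
curl -fsS -m 10 https://hc-ping.com/your-check-uuid > /dev/null
```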
Proxmox Backup Server. It’s life-changing. I back up every night and I can’t tell you the number of times I’ve completely messed something up only to revert it in a matter of minutes to the nightly backup. You need a separate machine running it (something that kept me from doing it for the longest time), but it is 100% worth it.
I back that up to Backblaze B2 (using Duplicati currently, but I’m going to switch to Kopia), but thankfully I haven’t had to use that, yet.
PBS backs up the host as well, right? Shame Veeam won’t add Proxmox support. I really only back up my VMs and some basic configs.
Veeam has been pretty good for my Hyper-V VMs, but I do wish I could find something a bit better. I’ve been hearing a lot about Proxmox lately. I wonder if it’s worth switching to. I’m an MS guy myself, so I just used what I know.
Veeam can’t back up Proxmox at the hypervisor level, only Hyper-V and VMware.
PBS only backs up the VMs and containers, not the host. That being said, the Proxmox host is super-easy to install and the VMs and containers all carry over, even if you, for example, botch an upgrade (ask me how I know…)
Then what’s the advantage over just setting up the built-in snapshot backup tool, which, unlike PBS, can natively back up onto an SMB network share?
I’m not super familiar with how snapshots work, but that seems like a good solution. As I remember, what pushed me to PBS was the ability to make incremental backups to keep them from eating up storage space, which I’m not sure is possible with just the snapshots in Proxmox. I could be wrong, though.
You are right about the snapshots, yeah. The built-in backup doesn’t seem to do incremental backups.
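For reference, the built-in route boils down to something like this; the storage name, server IP and VM ID are placeholders:

```bash
# One-time: add the SMB share as a Proxmox storage that accepts backups.
pvesm add cifs nas-backup --server 192.168.1.50 --share backups \
    --username backupuser --content backup

# Full (not incremental) backup of VM 101, taken from a live snapshot.
vzdump 101 --storage nas-backup --mode snapshot --compress zstd
```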
I’m backing up my stuff over to Storj DCS (basically S3 but distributed over several regions) and it’s been working like a charm for the better part of a year. Quite cheap as well, similar to Backblaze.
For me the upside was I could prepay with crypto and not use any credit card.
Veeam Agent going to a NAS on-site and the NAS is backed up nightly to IDrive because it’s the cheapest cloud backup service I could find with Linux support. It’s a bit slow, very CPU-bound, but it’s robust and their support is pretty responsive.
I run Linux for everything. The nice thing is that everything is a file, so I use rsync to back up all my configs for the physical servers. I can do a clean install, run my setup script, then rsync the config files back over, reboot, and everyone’s happy.
For the actual data, I also rsync from my main server to the others. Each server has a schedule for when it gets rsynced to, so I have a history of about 3 weeks.
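One way to get that kind of rolling history is dated rsync snapshots that hard-link unchanged files against the previous run; this sketch assumes it runs on the receiving box and pulls from the main server, with made-up paths and retention:

```bash
#!/usr/bin/env bash
# Dated rsync snapshots with hard links, pruned after roughly three weeks.
set -e

DEST="/mnt/backup/homeserver"
TODAY="$DEST/$(date +%Y-%m-%d)"
LATEST="$DEST/latest"
mkdir -p "$DEST"

# Hard-link unchanged files against the previous snapshot if one exists.
if [ -d "$LATEST" ]; then
  rsync -a --delete --link-dest="$LATEST" mainserver:/srv/data/ "$TODAY"
else
  rsync -a --delete mainserver:/srv/data/ "$TODAY"
fi
ln -sfn "$TODAY" "$LATEST"

# Drop snapshots older than ~3 weeks.
find "$DEST" -maxdepth 1 -type d -name '20*' -mtime +21 -exec rm -rf {} +
```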
For virtual servers I just use the Proxmox built-in backup system, which works great.
Very important files get encrypted and sent to the cloud as well, but out of dozens of TB this only accounts for a few gigs.
I’ve also never thrown out a disk or USB stick in my life, and I use them for archiving. Even if a drive is half dead, as long as it’ll accept data I shove a copy of something on it, then label and document it. There are so many copies of everything that it can all be rebuilt if needed, even if half of these drives end up not working. I keep most of these off-site. At some point I’ll have to physically destroy the oldest ones, like the few 13 GB IDE disks that just make no sense to bother with.
Almost all the services I host run in Docker containers (or as userland systemd services). What I back up are the SQLite databases containing the config or plain data. Every day, my NAS rsyncs the DBs from my server onto its local storage, and I have Hyper Backup back up those backups into an encrypted S3 bucket. HB keeps the last n versions and manages their lifecycle. It’s all pretty handy!
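A sketch of the server-side half of that, with hypothetical paths and using sqlite3’s online .backup so a database is never copied mid-write; the NAS then rsyncs the staging directory on its own schedule and Hyper Backup takes it from there:

```bash
#!/usr/bin/env bash
# Stage consistent copies of each service's SQLite database for the NAS to pull.
set -euo pipefail

OUT="/srv/backup-staging"
mkdir -p "$OUT"

# Placeholder layout: one app.db per service under /srv/apps/<service>/.
for db in /srv/apps/*/app.db; do
  name=$(basename "$(dirname "$db")")
  # .backup takes a consistent snapshot even if the service is writing.
  sqlite3 "$db" ".backup '${OUT}/${name}.db'"
done
```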