Hello selfhosted! Sometimes I have to transfer big files or large numbers of small files in my homelab. I’ve used rsync, but specifying the IP address, the folders, and everything else is a bit fiddly. I thought about writing a bash script, but before I do that I wanted to ask you about your favourite way to achieve this. Maybe I’m missing out on an awesome tool I wasn’t even thinking about.
What’s wrong with rsync? If you don’t like IP addresses, use a domain name. If you use key-based authentication, you can tab-complete the remote folders. It’s a really nice UX IMO.
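For example, with a host called nas (hypothetical name and paths, adjust to your setup):

    # push a directory over SSH; with keys set up, most shells
    # will even tab-complete the remote path for you
    rsync -avh --progress ~/isos/ nas:/srv/media/isos/

-a preserves permissions and timestamps, -h makes the sizes human-readable.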
If you’ll do this a lot, just mount the target directory with sshfs or NFS. Then use rsync or a GUI file manager.
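Something like this for sshfs, assuming a host called nas and made-up paths:

    mkdir -p ~/mnt/nas
    sshfs nas:/srv/media ~/mnt/nas   # mount the remote dir over SSH
    # ... browse/copy with any file manager or rsync ...
    fusermount -u ~/mnt/nas          # unmount when done (Linux)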
Just don’t run rsync as a daemon; that’s a security nightmare.
Why would you do that? That sounds awful…
It is. The rsync daemon sends all data over the network in plain text. There is an optional password mechanism, but the transfer itself is never encrypted.
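For anyone unclear on the difference, the two transports look like this (hostname and module name made up):

    # daemon mode: rsync's own protocol on TCP 873, nothing is encrypted
    rsync -av rsync://nas/media/ ~/media/

    # SSH mode: the same tool, but tunneled through ssh,
    # so the whole session is encrypted
    rsync -av nas:/srv/media/ ~/media/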
The daemon tracks file state, so the transfers start quicker because rsync doesn’t have to scan the filesystem.
Right, but if you’re transferring things that frequently, there are better solutions.
Not necessarily. Rsync deltas are very efficient, and not everything supports deltas.
It may very well be the correct tool for the job.
Anyway, problem fit wasn’t part of the question.
Yeah, there are probably a few perfect fits for it. I don’t rsync between machines very often, so the only use case I might have is backups, which is already well covered by a number of tools. Otherwise I just want to sync a few directories.
I never even set up DNS for things that aren’t public facing. I just keep /etc/hosts updated everywhere and ssh/scp/rsync things around using their non-FQDN hostnames.
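Roughly like this (addresses and names made up):

    # /etc/hosts, copied to every machine by hand
    192.168.1.10  nas
    192.168.1.11  backup

    # then the short names work everywhere
    scp notes.txt backup:~/
    rsync -avh ~/photos/ nas:/srv/photos/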
You could also use mDNS to the same effect.
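If both machines run Avahi (or Bonjour on macOS), the .local names resolve with zero configuration, e.g.:

    # hostname.local resolves via mDNS, no hosts file or DNS server needed
    rsync -avh ~/photos/ nas.local:/srv/photos/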