How come, after all these decades of innovations and fancy solutions, the most accessible and most reliable way to do #backups is still freaking zip/tar with low compression onto an external network drive? #linux

Currently dabbling with an overlay #encrypted FS + #rsync, which is similar to what is described here:

i.e. files are encrypted in real time through a virtual filesystem, and the encrypted content is stored externally via rsync. This makes it super fast, incremental, and reasonably secure. When restoring, you sync back into the encrypted FS and everything magically reappears in your real FS.
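A sketch of that workflow using gocryptfs as the overlay FS (the paths and `backuphost` are placeholders, not from the article):

```shell
# One-time setup: create the encrypted store and mount a decrypted view.
# Anything written under ~/plain is stored encrypted in ~/cipher.
mkdir -p ~/cipher ~/plain
gocryptfs -init ~/cipher     # prompts for a password
gocryptfs ~/cipher ~/plain   # mount the decrypted view at ~/plain

# Work in ~/plain as usual; back up only the encrypted side.
rsync -a --delete ~/cipher/ backuphost:/backups/cipher/

# Restore: sync the cipher dir back, then mount it again.
rsync -a backuphost:/backups/cipher/ ~/cipher/
gocryptfs ~/cipher ~/plain
```

Since rsync only ever sees the encrypted files, the backup host never holds plaintext, and unchanged encrypted files transfer incrementally as usual.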

The only downside (but also an advantage...) is that the remotely stored content cannot be browsed by itself (file and directory names are gibberish), i.e. you restore everything or nothing. #backups

Edit: Another downside: since directory and file names are not identifiable, rsync also cannot exclude certain patterns like cache files, the trash bin, .git dirs, etc.

Some new development here: I took the advice from the article above and now use #gocryptfs for this with the "plaintextnames" option, so that file names stay un-obfuscated and #rsync's exclude option works as intended.
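With plaintextnames the cipher directory keeps readable file and directory names (contents are still encrypted), so rsync excludes match again. A sketch, with placeholder paths and exclude patterns:

```shell
# Initialize with un-obfuscated file/directory names.
mkdir -p ~/cipher ~/plain
gocryptfs -init -plaintextnames ~/cipher
gocryptfs ~/cipher ~/plain

# Names in ~/cipher now mirror ~/plain, so excludes work as intended.
rsync -a --delete \
  --exclude='.cache/' \
  --exclude='.local/share/Trash/' \
  --exclude='.git/' \
  ~/cipher/ backuphost:/backups/cipher/
```

The trade-off: anyone with access to the backup can see what files you have, just not what's in them.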

Looks like gocryptfs is a lot faster, too. Seems like the ideal option for my #backup needs.