What's considered best practice nowadays (in terms of security) for running self-hosted workloads in containers? Daemonless, unprivileged Podman containers?
And maybe updating container images with a mechanism similar to Renovate, with "minimumReleaseTime=7days" or something similar!?
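If you go the Renovate route, I believe the option for that is called minimumReleaseAge these days. A minimal sketch of a renovate.json5 (json5 so it can carry comments) that only picks up Docker tags at least a week old might look like:

    {
      // extend the defaults, then delay Docker image updates
      extends: ["config:recommended"],
      packageRules: [
        {
          matchDatasources: ["docker"],
          // don't propose a new tag until it has been out for 7 days
          minimumReleaseAge: "7 days",
        },
      ],
    }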
You'll set yourself up for success if you check the dependencies of anything you run, regardless of whether it's containerised. Use something like Snyk to scan containers and repositories for known exploits and see if anything stands out.
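Assuming the Snyk CLI is installed and authenticated, that scan is roughly this (the image name is just an example):

    # scan an image and its base layers for known vulnerabilities
    snyk container test nginx:1.27 --severity-threshold=high

    # scan the project's dependency manifests as well
    snyk test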
Then you need to run things with as little privilege as possible. Sadly, Docker and containers in general are an anti-pattern here because they're about convenience first, security second. So the OP should have run the containers as read-only with tight resource limits, and ideally IP restrictions on access if it's not a public service.
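A rough sketch of what that looks like at run time (image name, limits, and port are placeholders):

    # read-only root filesystem, tight resource limits, only reachable from localhost
    podman run -d --name app \
      --read-only --tmpfs /tmp \
      --memory 256m --cpus 0.5 --pids-limit 100 \
      -p 127.0.0.1:8080:8080 \
      docker.io/library/someapp:1.2.3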
Another thing you can do is use Tailscale, or something like it, to keep access within a zero-trust, encrypted model. Not suitable for public services, of course.
As always: never run containers as root. Never expose ports to the internet unless needed. Never give containers outbound internet access. Run containers that you trust and understand, and not random garbage you find on the internet that ships with ancient vulnerabilities and a full suite of tools. Audit your containers, scan them for vulnerabilities, and nuke them from orbit on the regular.
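For the "no root, no outbound network" part, a sketch (image, name, and UID are placeholders):

    # unprivileged UID, all capabilities dropped, no network at all
    podman run -d --name batchjob \
      --user 1000:1000 \
      --cap-drop ALL \
      --security-opt no-new-privileges \
      --network none \
      docker.io/library/somejob:latest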
Easier said than done, I know.
Podman makes it easier to be more secure by default than Docker. OpenShift does too, but that's probably taking things too far for a simple self-hosted app.
I only know of https://perses.dev/ but haven't had a look at it for ~half a year. It was very barebones back then but I'm hopeful it can replace Grafana for at least basic dashboarding soon.
Atuin is a monster. I was looking to prove out an idea I had around my workflow and the problems I experience. I want something small and minimal, with source code I trust.
Is there any concept of private key rotation or something similar, in case a client with a nostr key on it gets compromised? With traditional password-based logins I would just set a new password from another machine. Generating a new nostr key would mean it's a new account, wouldn't it?
Restic is the winner. It talks directly to many backends, is a static binary (so you can drop the executable onto operating systems that don't allow package installation, like a NAS OS), and has a clean CLI.
Kopia is a bit newer and less tested.
All three have a lot of commands for working with repositories. Each one of them is much better than the closed-source, proprietary backup software I have dealt with, like Synology's Hyper Backup nonsense.
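If you haven't used restic before, the basic loop is roughly this (repository location and paths are just examples):

    # one-time: create the repository
    restic -r sftp:user@nas:/backups/laptop init

    # back up, list snapshots, restore
    restic -r sftp:user@nas:/backups/laptop backup ~/documents
    restic -r sftp:user@nas:/backups/laptop snapshots
    restic -r sftp:user@nas:/backups/laptop restore latest --target /tmp/restore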
If you want a better solution, the next level is ZFS.
You can consider something like syncthing to get the important files onto your NAS, and then use ZFS snapshots and replication via syncoid/sanoid to do the actual backing up.
Or install ZFS on the end devices too and do ZFS replication to the NAS, which is what I do. I have ZFS on my laptop, snapshot the data every 30 minutes, and replicate the snapshots. Those snapshots are very useful, as sometimes I accidentally delete data.
With ZFS, the whole filesystem is replicated. The backup will be consistent, which is not the case with file-level backup; with the latter, you also have to worry about lock files, permissions, etc. Restores are also quicker and more natural with ZFS.
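A minimal sketch of that workflow, assuming a dataset called tank/home on the laptop and backup/laptop/home on the NAS (sanoid can automate the snapshot schedule):

    # take a snapshot on the laptop
    zfs snapshot tank/home@manual-$(date +%Y%m%d-%H%M)

    # replicate the dataset and its snapshots to the NAS over SSH
    syncoid tank/home root@nas:backup/laptop/home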
I can't speak to ZFS, but I don't find btrfs snapshots to be a viable replacement for borgbackup. To your filesystem-consistency point: I snapshot, back the snapshot up with borg, and then delete the snapshot. I never run borg against a writable subvolume.
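Roughly what that looks like (paths and repo location are placeholders):

    # read-only snapshot so borg sees a frozen view of the data
    btrfs subvolume snapshot -r /home /home/.backup-snap

    # back the snapshot up, then drop it
    borg create --stats ssh://user@nas/./borg-repo::home-{now} /home/.backup-snap
    btrfs subvolume delete /home/.backup-snap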
Same here: my selection boiled down to Borg vs. Restic. I started with Restic because my friends used it and, while it was perfectly satisfactory functionally, found it unbearably slow with large backups. Changed to Borg and I've been happy ever after!
What is a "large" backup? Slow to back up locally, or slow to back up over a network? (Obviously you are not saying it's slow without understanding that the network is inherently slow; more along the lines of maybe its network protocol is slow.)
Those backups were only about 10 TB, home scale, over SSH with 2 to 10 ms of latency. I was coming from rdiff-backup, which satisfyingly saturated disk writes, whereas I couldn't even tell what bottleneck restic was hitting.
Kopia is awesome, with the exception of its retention policies, which work like no other backup software I've experienced to date. I don't know if it's just my stupidity, being stuck in 20-year-old thinking, or just the fact that it's different. But for me, it feels like a footgun.
The fact that Kopia has a UI is awesome for non-technical users.
I migrated off restic due to memory usage, to Kopia. I am currently debating switching back to restic purely because of how retention works.
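For comparison, this is roughly how retention is expressed in each (the numbers are arbitrary):

    # restic: decide what to keep when you run forget/prune
    restic -r /backups/repo forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

    # kopia: set a policy that snapshots are trimmed against
    kopia policy set --global --keep-daily 7 --keep-weekly 4 --keep-monthly 6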
I don't know about the other two, but restic seems to have a very good author/maintainer. That is to say, he is very active in fixing problems, etc.
I picked Kopia when I needed something that worked on Windows and came with a GUI.
I was setting up PCs for unsophisticated users who needed to be able to do their own restores. Most OSS choices are only appropriate for technical users, and some like Borg are *nix-only.
Instead of keeping the LUKS key in the TPM, you can use a FIDO2-compatible hardware security USB token. It has to be plugged in for booting/unlocking, then you can remove it. In my opinion this is pretty convenient and secure against many threats, like someone stealing the NAS.
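On a systemd-based distro (systemd 248+ with a LUKS2 volume) this is roughly:

    # enroll the FIDO2 token as an additional LUKS key slot (device path is a placeholder)
    systemd-cryptenroll --fido2-device=auto /dev/nvme0n1p2

    # then let crypttab try the token at unlock time, e.g.:
    # luks-root  UUID=...  none  fido2-device=auto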