> 5) If you want network redundancy, you can create a 1G vSwitch (VLAN) on the 1G ports for internal use. Give each server a loopback IP, then use BGP to distribute routes (e.g. with BIRD).
Are you willing to share example config for that part?
Should note that if you don't have enough networking knowledge, this is an excellent way to build a gun to shoot yourself in the foot with. If you misconfigure BGP or skip basic precautions such as sanity filters on inbound and outbound routes, you can easily do something silly like overwriting each server's default route, taking down all your services.
It's not rocket science, but it is complex, and building something complex you don't fully understand for production services can be a very bad idea.
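To make the sanity-filter point concrete, here is a minimal BIRD 2.x sketch of the setup described above. The ASNs, addresses, and names are all invented, and this illustrates the filtering idea only, it is not a production config:

```
# bird.conf on one server -- peer ASN/addresses are examples only
router id 10.0.0.1;

filter sane_routes {
    # never accept or announce a default route...
    if net = 0.0.0.0/0 then reject;
    # ...and only exchange the /32 loopback IPs we expect
    if net.len != 32 then reject;
    accept;
}

protocol direct {
    ipv4;
    interface "lo";          # originate this server's loopback IP
}

protocol bgp peer1 {
    local as 65001;
    neighbor 10.0.0.2 as 65002;
    ipv4 {
        import filter sane_routes;
        export filter sane_routes;
    };
}
```

With filters like these in place on both directions, the worst a misconfigured peer can inject is a bogus /32, not a new default route.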
You can put HAProxy in front with a write frontend and a read frontend, each with its own backend containing all servers. To determine which server is the write instance and which are standbys, you can configure an `external-check command` on the backends. That command can be a bash script that connects to the server and executes `SELECT pg_is_in_recovery();`.
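As a rough sketch (the names, ports, and check user are made up, adjust to taste), the HAProxy side could look like this:

```
global
    external-check                       # required on recent HAProxy versions
                                         # to allow external check scripts

defaults
    mode tcp
    timeout connect 5s
    timeout client  30m
    timeout server  30m
    external-check path "/usr/bin:/bin"  # the script gets an empty PATH by default

frontend pg_write
    bind *:5432
    default_backend be_primary

frontend pg_read
    bind *:5433
    default_backend be_standby

backend be_primary
    option external-check
    external-check command /etc/haproxy/is_primary.sh
    server db1 10.0.0.1:5432 check
    server db2 10.0.0.2:5432 check

backend be_standby
    option external-check
    external-check command /etc/haproxy/is_standby.sh
    server db1 10.0.0.1:5432 check
    server db2 10.0.0.2:5432 check
```

HAProxy invokes the script with `<proxy_addr> <proxy_port> <server_addr> <server_port>` as arguments, so the check itself can be as simple as:

```
#!/usr/bin/env bash
# is_primary.sh -- exit 0 (healthy) only if the server is the primary.
# Assumes a role that can log in non-interactively, e.g. via ~/.pgpass.
state=$(PGCONNECT_TIMEOUT=3 psql -h "$3" -p "$4" -U haproxy_check \
        -d postgres -Atc "SELECT pg_is_in_recovery();")
[ "$state" = "f" ]   # "f" = not in recovery = writable primary
```

The standby script is the same with the final test inverted (`[ "$state" = "t" ]`).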
Interesting, so why do that? Is it just to simplify your client code? Instead of using the S3 API, you just save files to a standard (virtual) file system? Any other benefits, or reliability/performance drawbacks?
I use it, for example, for Gitea storage. My host doesn't have a lot of storage, but with rclone it all goes directly onto Google Drive (unlimited paid storage).
Gitea also has a private Docker container registry built in, which quickly grows to several (hundred) gigabytes. It all works perfectly well with rclone.
This makes my host stateless: just run the Gitea Docker image with Google Drive-backed storage. It works great because both Git repositories and Docker images are backed by files that are essentially immutable.
An example that would not work well would be a container whose storage is a SQLite file that updates often. Trying to sync that SQLite file to Google Drive with rclone would be a bad idea.
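For concreteness, the moving part is just an rclone mount. A hypothetical setup with a Google Drive remote named `gdrive` might be:

```
# Mount the remote so it looks like a local directory, then point
# Gitea's data path at the mount.
# --vfs-cache-mode writes : buffer writes locally, upload in the background
# --allow-other           : let the gitea user access the mount (needs
#                           user_allow_other in /etc/fuse.conf)
rclone mount gdrive:gitea /var/lib/gitea \
    --vfs-cache-mode writes \
    --allow-other \
    --daemon
```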
The cloud-drive approach considerably simplifies the code. You can use shell scripts, Unix tools, and third-party apps, and combine them freely. This approach gives you a lot of freedom and power. It is just Unix, but with a cloud.
This approach also protects you from vendor lock-in: you can use whichever cloud storage you like.
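For example, once the remote is mounted, ordinary tools just work (paths here are illustrative):

```
du -sh /mnt/gdrive/gitea                           # disk usage of Gitea data
find /mnt/gdrive -name '*.bak' -mtime +30 -delete  # prune old backups
tar czf backup.tar.gz -C /mnt/gdrive/gitea .       # archive with plain tar
```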
The performance drawbacks are evident: if a file is not in the local cache, it takes some time to fetch it. But that does not really matter for most apps, because the initial lag is relatively short.
My biggest issue with using IDEA for DB work is that DB connections are saved to the project or workspace (not sure about the IDEA lingo). We have many projects and so I have to add the DB connections again and again. Is there a nice solution for that?
In JetBrains IDEs, when you define a data source, there is a button to make the source global. If you do that, it will be accessible across all projects.
When I run into questions like this with IDEA, there is usually an answer. If not, file an issue in their issue tracker. Seriously. They fix stuff nonstop. Sometimes it takes years, but they eventually get to it if it's a good one. They are very responsive in their tracker.
I've been using Colemak on an Ergodox since September of last year and am very happy with it. I too switched to Colemak when I started using my Ergodox. For the first few weeks my head hurt, but now I'm fine.
If you have a manager who decides this, or the PO decides this, then IMO that's a key problem with your Scrum implementation.