It may simply be that you just don't need it - the same way not everyone needs to use vim/neovim.
Without tmux/screen, though, working over ssh is much harder and less reliable, so the need for this sort of tool arises naturally.
Say, I've used screen and later tmux since, I believe, ~2010, but I don't touch the "advanced" features like panes and screen splitting even every month; most of the time it's just switching between windows in a session, and between sessions (not that often), and that's all.
As a helper, for some projects I do use predefined layouts (say, the first 4 windows open in the inventory dir, the other 2 in the root folder of the ansible repo, and so on - roughly like the sketch below), but I don't need this very often either, mostly when the laptop reboots (which is every ~3 weeks on Win11 nowadays).
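A minimal sketch of what such a predefined layout can look like as a shell script; the session name, paths and window counts here are made-up placeholders, not the actual setup:

    #!/bin/sh
    # Hypothetical layout: 4 windows in the inventory dir, 2 at the repo root.
    # SESSION and REPO are placeholders - adjust for the real project.
    SESSION=ansible
    REPO="$HOME/work/ansible-repo"

    tmux new-session -d -s "$SESSION" -c "$REPO/inventory"   # window 0
    tmux new-window  -t "$SESSION"    -c "$REPO/inventory"   # window 1
    tmux new-window  -t "$SESSION"    -c "$REPO/inventory"   # window 2
    tmux new-window  -t "$SESSION"    -c "$REPO/inventory"   # window 3
    tmux new-window  -t "$SESSION"    -c "$REPO"              # window 4
    tmux new-window  -t "$SESSION"    -c "$REPO"              # window 5

    tmux attach -t "$SESSION"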
> [1] to be honest, I'm not sure I understand the intent of open_file_cache... Opening files is usually not that expensive
I may have a hint here - remember that Nginx was created back when dialup was still a thing and a single Pentium 3 server was the norm (I believe I saw those wwwXXX machines in the Rambler DCs myself around that time).
So my somewhat educated guess is that saving every syscall was sort of the ultimate goal, and back then it paid off, at least in latency terms. Take a look at how Nginx parses HTTP methods (GET/POST) to save operations, for example.
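Roughly the idea, as a simplified illustration (not Nginx's actual code): compare the first bytes of the request line as one machine word instead of character by character:

    /* Simplified illustration of the trick - not nginx's actual code.      */
    /* Compare the first four request bytes as one 32-bit value instead of  */
    /* looping over them character by character.                            */
    #include <stdint.h>
    #include <string.h>

    static int is_get(const char *req) {
        uint32_t word, get;
        memcpy(&word, req, 4);     /* memcpy sidesteps alignment/aliasing issues */
        memcpy(&get, "GET ", 4);   /* "GET " including the trailing space        */
        return word == get;
    }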
I myself don't remember seeing large benefits from open_file_cache, but I likely never ran a proper perf test here. Making sure sendfile/buffers/TLS termination were used properly had much more influence for me on modern (10-15 year old) HW.
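For reference, a minimal sketch of the directives in question; the directive names are real nginx ones, but the numbers are only illustrative and would need benchmarking against the actual workload:

    # Illustrative values only - tune and benchmark on the real workload.
    http {
        sendfile   on;    # kernel copies file -> socket, no userspace round-trip
        tcp_nopush on;    # send headers and first file chunk in one packet

        open_file_cache          max=10000 inactive=60s;  # cache fds, sizes, mtimes
        open_file_cache_valid    120s;  # revalidate cached entries every 2 minutes
        open_file_cache_min_uses 2;     # cache only files requested at least twice
        open_file_cache_errors   on;    # also cache "not found" lookups
    }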
You are probably talking about VMs - those do have traffic limits. Dedicated servers with the default 1Gbit NICs, on the other hand, don't (at least until you consume 80%+ of the bandwidth for months).
Quoting:
> Traffic
> All root servers have a dedicated 1 GBit uplink by default and with it unlimited traffic. Inclusive monthly traffic for servers with 10G uplink is 20TB. There is no bandwidth limitation. We will charge € 1 ($1.20)/TB for overusage.
Huh, this must have changed after I concluded my contract with them (several years ago).
Huh, archive.org tells me it's been unlimited since at least 5 years ago, so I guess I must've just seen someone mention 20TB and felt it was a reasonable limit :)
> With swap enabled, it is very, very, VERY common for the system to become completely unresponsive - no magic-sysrq, no ctrl-alt-f2 to login as root, no ssh'ing in ...
It's usually enough to have a couple of incidents where you need to get to a distant DC / wait a couple of hours for someone to hook up IPMI, to learn "let it fail fast and gimme ssh back" in practice, versus the theory of "you should have swap on".
Conversely, having critical processes get OOM-killed in critical sections can teach you that it's virtually impossible to write robust software under the assumption that any process can die at any instruction because the kernel decided it wasn't that important. OOM errors can be handled; SIGKILL can't.
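A minimal sketch of that distinction in C (illustrative only, not from any particular project): a failed allocation is an error the program can observe and react to, while SIGKILL - which is what the OOM killer delivers - cannot even be caught or ignored:

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* A failed allocation surfaces as a recoverable error. */
        void *buf = malloc((size_t)1 << 42);   /* deliberately absurd request */
        if (buf == NULL) {
            /* The program gets a chance to react: shed load, flush state,
               return an error to the caller, retry with a smaller buffer... */
            fprintf(stderr, "allocation failed: %s\n", strerror(errno));
        }
        free(buf);   /* free(NULL) is a no-op */

        /* SIGKILL can be neither caught nor ignored - this call must fail. */
        if (signal(SIGKILL, SIG_IGN) == SIG_ERR)
            fprintf(stderr, "cannot ignore SIGKILL: %s\n", strerror(errno));

        return 0;
    }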