I feel like all three major OSs are crap and have trade-offs. It's all about finding which one's trade-offs are ones you can live with. I can use any of the three, but each annoys me in different ways. I use macOS because I find it less annoying than Windows or Linux once I've set it up. The truly annoying bit is that each OS has a bit of exclusive software and a few exclusive features that are amazing… but I don't want 3 OSs.
and the fact that among the 3, macOS is the hardest to virtualize on any other platform (technically against their ToS to do so on non-Apple hardware too, last I checked half a dozen years ago). This effectively puts a stop to most practical use within bigger companies; it's a safer bet to just buy a few Mac minis and toss them on a rack shelf somewhere, and a lot less of a headache when that one person who figured out how to make it work leaves.
MacBook Pros definitely "do the trick" for hundreds of thousands of us software developers in Silicon Valley every day. Docker works fine on M1 MBPs, and it's not much of a chore to target amd64 with a Dockerfile, e.g.
FROM --platform=linux/amd64 python:3.7-alpine
Plenty of the software that runs the internet is written, compiled, and tested on MacBooks, even if it may get rebuilt for production on some x86_64 server or AWS instance (possibly even an ARM-based Graviton instance).
I agree that MacBooks have gotten less "pro" and now seem to cater more to college kids who probably don't need them than to folks like Digital Imaging Technicians working on movie sets. They're still quite popular for the latter, though, even if Windows and Linux machines have started taking over some of those roles, especially for things like VFX (see e.g. https://vfxplatform.com/ ) and sound and music recording / production / editing. (Plus Windows has had Cubase for a very long time, and before that plenty of studios ran Cubase on the Atari 520ST and 1040ST back in the late '80s.)
I'd say some of it is still just plain bad - and I say that as someone who is considering a MacBook this year:
For example: the keys that Macs have instead of page up, page down, home, and end still feel like four different "surprise me" buttons in every app I've tried them in.
If you are OK with using a different window manager, Enlightenment has independent virtual desktops per monitor too.
Enlightenment versions since 17 have been a bit hit-and-miss between stable and frustratingly buggy, though. Enlightenment 25 (the version packaged in Bookworm, the current Debian stable) has been alright.
At my previous job we used Docker Swarm for a cluster of a dozen machines. It doesn't have all the fancy features of k8s, but it gets the job done for automating deployments.
The major argument is that "if it's wrong, nothing bad will happen, because it will break at runtime". Isn't it better to break before things are in production, like at compile time?
Things "break" all the time in production, regardless of type system. The amount of instrumentation you have to have anyway makes this whole conversation almost moot.
Even {insert your favorite language here} programmers write bugs.
Segfaults can happen regardless of language/type system.
I haven't seen a single SRE say, "oh, you use X language, with Y type system? OK cool, you don't need us then."
The best apples-to-apples experiment for the value of type systems is JS vs TS.
When I write TypeScript, I have so few runtime errors that I'm often in disbelief. Sometimes I have zero runtime errors on my first test of an application.
Contrast this with my own JavaScript (same coder, same runtime, etc.) and the difference is very clear: JS needs many, many unit tests that TS just doesn't.
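To make that concrete, here's a rough sketch (interface and field names invented for illustration) of the kind of mistake TS turns from a runtime surprise into a compile-time error:

    // Hypothetical response shape, for illustration only
    interface User {
      id: number;
      name: string;
      email?: string; // optional, so TS makes callers handle its absence
    }

    function greet(user: User): string {
      // return `Hi ${user.nmae}`;        // compile error: 'nmae' does not exist on type 'User'
      // return user.email.toLowerCase(); // compile error: 'user.email' is possibly 'undefined'
      return `Hi ${user.name}`;           // fine
    }

In plain JS both of those mistakes only surface when that code path actually runs, which is exactly what all the extra unit tests end up covering.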
Yeah, I program in Python with a very strict type-checking configuration, and I have never gotten a runtime error in production due to type-related issues (like attribute errors or key errors), only weird edge-case stuff or algorithmic mistakes.
The goal of a type system is not to prevent all bugs, or to prevent all bad things happening at runtime.
The goal is to catch as many bugs as possible BEFORE the code runs in production. And even if you just catch a single bug, you have already won, particularly if the type system is balanced in such a way that there's not much additional effort involved (e.g. by using type inference).
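As a small, hypothetical TypeScript illustration of the "not much additional effort" point: with inference you mostly just write the code and the checker fills in the types.

    const prices = [9.99, 14.5, 3];                  // inferred as number[]
    const total = prices.reduce((a, b) => a + b, 0); // inferred as number
    // total.toUpperCase();  // compile error: 'toUpperCase' does not exist on type 'number'

No annotations were needed, yet the nonsense call at the end is rejected before anything runs.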
This sounds like the Law of Averages. Not all things break at the same frequency, and noticing that doesn't mean you're saying something else is perfect; other things could just break less often, for different reasons.
Of course. But fixing a bug in production is 100x more expensive than fixing the same bug during development. Ideally, we minimize the number of things we rely on SREs to manage.
Nope. Not true. I haven't had a bug in production in the last 5+ years. And I am maintaining large-scale C++ software with massive refactorings and features added. The key is having solid tests.
Really? Typed languages are way safer and break less, which is what you want, and you want breakage to happen as early as possible in the dev cycle for velocity.
Yes, but things can break less with a robust type system. This is categorically true.
For example, Elm is a programming language that cannot crash thanks to its type system. I mean it can crash from exhausting system resources or through the FFI, but it cannot crash any other way.
The argument the OP is making is that the difference achieved by a type system is negligible. In some cases that may be true, but Elm is a case where it isn't.
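Elm manages that largely by having no null, no exceptions in ordinary code, and exhaustive pattern matching everywhere. A rough TypeScript approximation of the exhaustiveness part (a sketch with invented names, assuming strict compiler settings):

    type Shape =
      | { kind: "circle"; radius: number }
      | { kind: "rect"; width: number; height: number };

    function area(s: Shape): number {
      switch (s.kind) {
        case "circle": return Math.PI * s.radius ** 2;
        case "rect":   return s.width * s.height;
        // Remove a case and, with strict settings, this stops compiling
        // instead of failing at runtime. Elm enforces that kind of
        // discipline across the whole program.
      }
    }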
If automated tests/linters are part of your build system, then failures there effectively are breaking at compile time. (Having to write tests that mimic the functionality of a compiler seems wasteful to me, though.)
Type checking is a form of proof. It proves your system is type safe. You would need billions or trillions of tests to achieve type safety equivalent to a type checker.
A successful test only proves something for a single test case; see the sketch below. In fact it is impossible to do the equivalent of type checking in the runtime code of your program unless the runtime code has the ability to "reflect" on its own source code.
There are also complex type systems that can prove correctness and completely eliminate the need for tests altogether, but this style of programming is really challenging and time-consuming.
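To make the "a test only proves a single case" point concrete, a small sketch (hypothetical function, TypeScript with strict settings assumed):

    // The signature is a claim the checker verifies at every call site:
    // every caller must handle the null case, however many callers exist.
    function parseAge(input: string): number | null {
      const n = Number(input);
      return Number.isInteger(n) && n >= 0 ? n : null;
    }

    // A test only proves the cases you happened to write down:
    console.assert(parseAge("42") === 42);
    console.assert(parseAge("nope") === null);

    // The checker, by contrast, rejects something like `parseAge(x) + 1`
    // anywhere in the program, because `number | null` can't be used as a
    // number without a null check.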