The only bit that I didn't agree with was the downsides. You can scale individual modules.
You can set your load balancer to make all calls to specific endpoints go to specific servers. These can scale independently.
If you have background workers (ActiveJob/Oban, for example), these can be on different queues that you can scale independently.
It's actually really easy to build out a mono-repo system that allows for scale.
If you organise your workers into folders based on their purpose (reporting, exporting, ...), and you're careful about feature flags, you can drastically reduce git conflicts and CI/CD issues.
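To make the routing idea concrete, here's a tiny sketch (in Rust, with made-up pool names and path prefixes, purely for illustration) of the rule a load-balancer or queue config expresses: work gets bucketed by purpose, and each bucket scales on its own.

```rust
// Hypothetical sketch of "send specific endpoints to specific pools".
// Pool names and prefixes are invented; in practice this logic lives in
// load-balancer or job-queue configuration rather than application code.

#[derive(Debug, PartialEq)]
enum Pool {
    Reporting, // scaled independently for heavy reporting work
    Exporting, // scaled independently for export work
    Default,   // everything else
}

// Mirrors a rule like "route /reports/* to the reporting servers", or an
// Oban/ActiveJob queue assignment chosen by the job's purpose.
fn pool_for(path: &str) -> Pool {
    if path.starts_with("/reports") {
        Pool::Reporting
    } else if path.starts_with("/exports") {
        Pool::Exporting
    } else {
        Pool::Default
    }
}

fn main() {
    assert_eq!(pool_for("/reports/monthly"), Pool::Reporting);
    assert_eq!(pool_for("/exports/csv"), Pool::Exporting);
    assert_eq!(pool_for("/users/42"), Pool::Default);
    println!("routing behaves as expected");
}
```

The point isn't the code, it's that the mapping is a handful of declarative rules, so scaling a pool is an ops decision, not a rewrite.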
Yeah... I always remind myself of the Netscape browser. A lesson in "if it's working, don't mess with it".
My question is always the reverse: why try it in new language Y? Is there some feature that Y provides that was missing in X? How often do those features come up?
A company I worked for decided to build out a new microservice in language Y. The whole company was writing in W and X, but they decided to write the new service in Y. When something goes wrong, or a bug needs fixing, 3 people in a company of over 100 devs know Y. Guess what management is doing... rewriting it in X.
Agreed. I rather dislike the idea of "safe" coding languages. I've been fighting a memory leak in an Elixir app for the past week. I never viewed C or C++ as unsafe. Writing code is hard; always has been, always will be. It is never safe.
Safe code is just code that cannot have undefined behavior. C and C++ have the concept of "soundness" just like Rust; they simply have no way to statically guard against it.
There is more to it than undefined behavior: if I were to define reading an uninitialized variable as yielding whatever data was previously at that location, that would be well defined but still unsafe.
That could be well defined for POD types like arrays of bytes (I think in that case they call it "freezing"), but you'd still have undefined behavior if the type in question contained pointers. Also, at least in Rust, it's UB to create illegal values of certain types, like a bool that's anything other than 0 or 1.
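For anyone who hasn't run into this, a minimal sketch (nothing beyond std; values are arbitrary) of both points: Rust makes you go through MaybeUninit before it will treat memory as initialized, and even then only certain bit patterns are legal for a type like bool.

```rust
use std::mem::MaybeUninit;

fn main() {
    // The "frozen garbage" model described above would make reading
    // uninitialized memory well defined; Rust instead calls it UB, so you
    // must opt in via MaybeUninit and promise initialization explicitly.
    let mut slot = MaybeUninit::<u8>::uninit();
    slot.write(7); // skipping this write and calling assume_init() would be UB
    let byte = unsafe { slot.assume_init() };

    // A u8 may hold any of 0..=255, but a bool has exactly two valid bit
    // patterns. Conjuring one another way, e.g.
    // `std::mem::transmute::<u8, bool>(2)`, is immediate UB even if the
    // value is never branched on.
    let flag: bool = byte > 0;
    println!("byte = {byte}, flag = {flag}");
}
```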
Which is all kind of to say, all of Rust's different safety rules end up being surprisingly densely interconnected. It's really hard to guarantee one of them (say "no use-after-free") without ultimately requiring all of them ("no uninitialized variables", "no mutable aliasing", etc).
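A concrete case of that interconnection, using nothing but Vec: the "no dangling references" guarantee only holds because the compiler also forbids mutation while a shared borrow is alive.

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // shared borrow into v's heap buffer

    // v.push(4); // <- uncommenting this is a compile error: push needs
    //               `&mut v` while `first` is still alive, and allowing it
    //               could reallocate the buffer and leave `first` dangling.
    //               "No use-after-free" is enforced here *through* "no
    //               mutable aliasing" -- the rules lean on each other.

    println!("first = {first}");
    v.push(4); // fine here: the shared borrow `first` is no longer used
}
```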
As the other person pointed out, these are two different things. Sanitizers add runtime checks (with zero consideration for performance; don't use these in production). Static analysis runs at compile time, and while both GCC and Clang are doing an amazing job of it, it's still very easy to run into trouble. They mostly catch the low-hanging fruit.
The technical reason is that Rust-the-language gives the compiler much more information to work with, and it doesn’t look like it is possible to add this information to the C or C++ languages.
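One small illustration of the kind of information meant here (a made-up function, shown only for its signature): a C prototype like `const char *pick(const char *a, const char *b)` says nothing about how long the returned pointer stays valid, whereas the Rust signature encodes it, so callers get checked.

```rust
// Hypothetical example: the lifetime 'a ties the returned reference to the
// first argument -- exactly the information a C prototype cannot carry.
fn pick<'a>(a: &'a str, _b: &str) -> &'a str {
    a
}

fn main() {
    let long_lived = String::from("kept around");
    let result;
    {
        let short_lived = String::from("temporary");
        // Fine: the signature tells the compiler `result` borrows only from
        // `long_lived`, so `short_lived` may drop at the end of this scope.
        result = pick(&long_lived, &short_lived);
    }
    println!("{result}");

    // If the signature instead tied the return value to the second argument,
    // this exact caller would be a compile error rather than a dangling pointer.
}
```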
Yes... that. If you are against oil and plastics, walk your talk. If you are against rare earths, walk your talk. If you have a degree in chem-eng, you're building low-plastic solutions, and you're critical, then you're being honest.
Saying "no no no" but doing it on a new cell phone you know was built on rare earths is like a vegan giving a talk while sitting on their new leather couch.
1. US government contracting preferentially selecting, over the last 75 years, for people who are used to working slowly and dotting every i / crossing every t -- i.e. business as usual
2. Regulations during business as usual slowing down the maximum throughput of processes
That isn't to say that faster-moving people don't still exist in the US, just that they're not currently at government contractors, because the government hasn't prioritized their core competency (speed).
It's entirely possible that, similar to what was done with military command staff at the outbreak of WWII, the US rewrites its regs, fires people who are incapable of working at a faster pace, and recognizes and elevates talent.
Unfortunately, the current executive branch, while tearing down regulations, has shown more interest in profiteering and nepotism than in truly pushing exceptional engineers.
Sounds like a right-to-repair argument. It will lose. Try putting Linux on a Windows 10 laptop. That BIOS is nailed down hard. It can be done, but it's a right PITA.
Can you elaborate? I bought a ThinkPad early this year. It probably had Windows 11 (didn't check, wiped it immediately) and it works fine?
I don't doubt that there are laptops where it is hard, but I would love to hear some examples and in what way the UEFI firmware is nailed down such that you cannot install Linux?
They might not... but you'd very likely have their number saved on your phone. You might even have them set as a contact that bypasses mute. My wife/kids and their school are all on the "never mute" list.
> But you'd very likely have their number saved on your phone.
I certainly don't. Every call I get from the school seems to come from a different number. The same was true of the camp she was at when she hurt her leg and had to be taken for immediate medical attention.
I get it, in your world, in your experience, it all works out. But in mine, it just doesn't. From experience, I _know_ this is true.
I mean, I have a little book on my desk with password hints: "2nd grade best friend's phone number", "birthday of first dog". It also has a grid of random numbers/letters on the front page, so I can write "first_crush_b4*5". You'd have to have physical access to the book and know what the hint leads to. It's un-hackable. I mean, aside from social engineering or physically breaking into my house.