50 extra packages are cursed
There is a user in the JavaScript community who goes around adding "backwards compatibility" to projects. They do this by adding 50 extra package dependencies to your project, which are maintained by them.
which brings us to this user: Jordan Harband https://github.com/sponsors/ljharb
Does anyone know what they actually mean by that cursed knowledge point? And what's the "backwards compatibility" that Jordan also boasts about in his GH profile?
To not just link to another thread: the specialty of ljharb's issues sits somewhere between "JavaScript is a very dynamic programming language that grew a lot, and quite fast" and "we cannot trust developers to do the right thing".
His libraries tend to build on older runtime implementations and freeze every piece of functionality they use at runtime, so they provide "second-run safety" and "backwards compatibility". Developers disagree with some of their effects, such as a bloated dependency tree and performance impacts of multiple orders of magnitude (as measured in micro-benchmarks). ljharb seems to follow a rather strong ideology, but he is a member of the TC39 group and a highly trusted person.
It definitely feels a bit strange and potentially alarming, but after reading through that whole thread he ultimately seems like a sincere person doing work that he thinks matters, now getting dogpiled for it.
At least in the thread linked here, it seems like his maintainership over the project is legitimate, which makes it wrong to characterize him as "forcing" his ways on anyone.
Even ignoring that examples of his behavior are easily found elsewhere, the link itself shows him completely disregarding feedback from other contributors to force his own way.
Honestly, I can't understand the intent behind such a defensive rebuttal to the criticism of his actions.
My point wasn't about JavaScript. He got pushback because he ignored everyone and just did his own thing.
It has nothing to do with JavaScript, and you can see that in the link. That's a weird excuse.
I still think the conclusion on "setTimeout is cursed"[0] is faulty:
> The setTimeout method in JavaScript is cursed when used with small values because the implementation may or may not actually wait the specified time.
The issue to me seems to be that performance.now()[1] returns the timestamp in milliseconds and will therefore round up/down. So 1ms errors are just within its tolerance.
setTimeout() does not actually guarantee to run after the elapsed time. It merely gets queued for the next async execution window after that timer elapsed. Hence it can also be off by infinity and never get called - because JS is single threaded (unless you use a worker - which comes with its own challenges) and async windows only open if the main thread is "idle".
Usually, this is very close to the time you set via setTimeout, but it's very frequently slightly off, too.
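To illustrate the queuing behaviour described above, here is a minimal sketch (runs in a browser console or recent Node, where performance.now() is a global); the exact delay you observe will vary by engine and machine:

```js
// Ask for a 10ms timer, then keep the main thread busy for ~100ms.
// The callback cannot run until the event loop is free, so it fires late.
const start = performance.now();

setTimeout(() => {
  console.log(`fired after ${(performance.now() - start).toFixed(1)}ms (asked for 10ms)`);
}, 10);

// Busy-wait: the event loop is blocked, so no timers can fire meanwhile.
while (performance.now() - start < 100) {
  // spin
}
// Typical output: something like "fired after 100.3ms (asked for 10ms)"
```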
setTimeout guarantees that at least the provided time has elapsed before the callback runs, if it runs at all - I think that is known to every JavaScript engineer out there.
Then there are also gotchas like these[0][1]:
> As specified in the HTML standard, browsers will enforce a minimum timeout of 4 milliseconds once a nested call to setTimeout has been scheduled 5 times.
Still, the issue is rather how to measure the elapsed time reliably, for unit-tests among other things.
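For the unit-test case, one common workaround is to not measure real time at all and use fake timers instead. A minimal sketch with Jest (assuming a Jest environment; Sinon and Vitest expose near-identical APIs):

```js
test('callback fires after 50ms', () => {
  jest.useFakeTimers();

  const cb = jest.fn();
  setTimeout(cb, 50);

  // Advance the mocked clock deterministically instead of sleeping.
  jest.advanceTimersByTime(49);
  expect(cb).not.toHaveBeenCalled();

  jest.advanceTimersByTime(1);
  expect(cb).toHaveBeenCalledTimes(1);

  jest.useRealTimers();
});
```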
Indeed - I was a bit surprised by them mentioning this to be honest, since, as I understand it, this is kind of a widely accepted limitation of setTimeout - it's purely a 'best effort' timer. It's not intended to be something where "yes, after exactly Xms it'll execute".
This isn't quite the whole picture. If called in a nested context, `setTimeout` callbacks get executed in the next execution window, or at least 4ms after the scheduling call, whichever is later. Similarly, I believe `setInterval` has a minimum interval that it can't run faster than.
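A rough way to observe that clamp is to re-schedule a zero-delay timeout repeatedly and log the gaps. Note the clamp is HTML-spec behaviour, so this is best run in a browser console (Node's timers don't implement it); exact numbers will vary:

```js
// Re-schedule setTimeout(..., 0) repeatedly and log the gap between calls.
// In browsers, once the nesting depth exceeds 5, the gap jumps to roughly 4ms.
let last = performance.now();
let depth = 0;

function tick() {
  const now = performance.now();
  console.log(`depth ${depth}: ${(now - last).toFixed(2)}ms since previous call`);
  last = now;

  if (++depth < 10) {
    setTimeout(tick, 0); // nested call: subject to the 4ms minimum after depth 5
  }
}

setTimeout(tick, 0);
```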
Is there even such a thing? You're at the mercy of the platform you're running on. And Windows, Linux, Mac, Android, and iOS are not realtime to begin with.
I guess if you're running on a realtime platform but inside a VM, as JS does, the VM can take that property away, downgrading the "language" from being realtime. I still wouldn't call that a language property though; maybe my VM implementation doesn't make that downgrade after all.
They think Postgres is cursed with a 2^16 limit on query parameters; SQL Server has a parameter limit of ~2,000. I guess at least it's low enough that you're going to fail early.
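The usual workaround for either limit is to chunk the rows so each statement stays under the parameter cap. A minimal sketch against Postgres with node-postgres style `pool.query` (the `insertUsers` helper, table, and columns are hypothetical):

```js
// Postgres caps a single statement at 65535 bind parameters (2^16 - 1),
// so with 3 columns per row we can insert at most ~21845 rows per query.
const COLUMNS = 3;
const MAX_PARAMS = 65535;
const ROWS_PER_CHUNK = Math.floor(MAX_PARAMS / COLUMNS);

async function insertUsers(pool, users) {
  for (let i = 0; i < users.length; i += ROWS_PER_CHUNK) {
    const chunk = users.slice(i, i + ROWS_PER_CHUNK);
    const values = [];
    const placeholders = chunk.map((u, row) => {
      values.push(u.id, u.name, u.email);
      const base = row * COLUMNS;
      return `($${base + 1}, $${base + 2}, $${base + 3})`;
    });
    await pool.query(
      `INSERT INTO users (id, name, email) VALUES ${placeholders.join(', ')}`,
      values,
    );
  }
}
```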
Sure, but the SQL Server protocol (TDS) has a dedicated bulk insert operation specifically for that functionality. TDS isn't perfect, but it is much better than the PostgreSQL wire protocol v3.
Someday I want to build a DB front-end where you send up some kind of Iceberg/Parquet file or similar, and it returns a similar file format over a QUIC-based protocol. With QUIC, persistent connections could be virtualized, and bulk insert could be sane and normalized: e.g. insert these rows into a table or temp table, then execute this script referencing it. While I'm at it, I'll normalize PL/SQL so even brain-dead back-ends (SQLite) could use procedural statements and in-database logic.
> Cursed knowledge we have learned as a result of building Immich that we wish we never knew