Yeah, but version ranges are fiction. Someone says: we require libpupa 0.2.0+. Sure, you can find a version in that range. But what if it doesn’t work? How can you know in advance that your library will work with all future libpupa releases?
Under semver, any dependency version X.Y.* is supposed to be compatible with any software that was built with version X.Z.* when Y > Z. If not, the author of the dependency has broken semver.
"Supposed to" being the operative phrase. This is of little comfort when you need version X.Y for a security fix but your build breaks.
Note that Maven is more complex than others here have mentioned. In some cases, Maven compares versions lexically (e.g. version 1.2 is considered newer than version 1.10).
It reminds me of the whole mess of Angular 2+ upgrades. That was, I believe, before npm had lockfiles? Literally every new person joining the team had to get node_modules copied from someone else's machine for the project to work, since `npm install` could never install a set of packages that worked together.
I worked for 20 years in an ecosystem that didn’t have lockfiles and had reproducible builds before the term was invented, and now you come and tell me that it couldn’t be done?
"they are not lockfiles!" is a debatable separate topic, but for a wider disconnected ecosystem of sources, you can't really rely on versions being useful for reproducibility
It's also not about fully reproducible builds; it's about a tradeoff that gets you the modern package manager (npm, cargo, ...) experience and also somewhat reproducible builds.
> modern package manager (npm, cargo, ...) experience
Lol, the word "modern" has truly lost all meaning. Your list of "modern package managers" seems to coincide with a list of legacy tooling I wrote four years ago! https://news.ycombinator.com/item?id=29459209
> If that version doesn’t work, you can try another one.
And what will that look like if your app doesn't have library C mentioned in its dependencies, only libraries A and B? You are prohibited from answering "well, just specify all the transitive dependencies manually," because that is precisely what a lockfile is/does.
Maven's version resolution mechanism determines which version of a dependency to use when multiple versions are specified in a project's dependency tree. Here's how it works:
- Nearest Definition Wins: When multiple versions of the same dependency appear in the dependency tree, the version closest to your project in the tree will be used.
- First Declaration Wins: If two versions of the same dependency are at the same depth in the tree, the first one declared in the POM will be used.
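A minimal sketch of the "nearest wins" rule, with made-up coordinates: suppose libA pulls in libUtil 2.0 transitively (depth 2); the direct declaration below sits at depth 1, so Maven would resolve libUtil to 3.0.

```xml
<!-- Hypothetical pom.xml fragment; artifact names are invented. -->
<dependencies>
  <dependency>
    <groupId>org.example</groupId>
    <artifactId>libA</artifactId>
    <version>1.0.0</version> <!-- depends on libUtil 2.0 internally (depth 2) -->
  </dependency>
  <dependency>
    <groupId>org.example</groupId>
    <artifactId>libUtil</artifactId>
    <version>3.0.0</version> <!-- declared directly (depth 1), so this wins -->
  </dependency>
</dependencies>
```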
Well, I guess this works if one appends newly added dependencies at the end of the section in the pom.xml instead of generating it alphabetically sorted just in time for the build.
It's not "all the transitive dependencies". It's only the transitive dependencies you need to explicitly specify a version for because the one that was specified by your direct dependency is not appropriate for X reason.
Alternative answer: both versions will be picked up.
It's not always the correct solution, but sometimes it is. If I have a dependency that uses libUtil 2.0 and another that uses libUtil 3.0 but neither exposes types from libUtil externally, or I don't use functions that expose libUtil types, I shouldn't have to care about the conflict.
This points to a software best-practice: "Don't leak types from your dependencies." If your package depends on A, never emit one of A's structs.
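A minimal Java sketch of that practice (every name here is hypothetical): the dependency's type is used internally but converted into the package's own public type before it crosses the API boundary.

```java
// Hypothetical names throughout; Money stands in for a type that comes
// from a dependency whose version your callers don't control.
class Money {
    final long cents;
    Money(long cents) { this.cents = cents; }
}

// The package's own public type: the only thing callers ever see.
final class Price {
    private final long cents;
    Price(long cents) { this.cents = cents; }
    long cents() { return cents; }
}

final class Catalog {
    // The dependency's type is used internally...
    private Money lookup(String sku) { return new Money(4_99); }

    // ...but the public API converts at the boundary instead of leaking Money.
    Price priceOf(String sku) {
        return new Price(lookup(sku).cents);
    }
}
```

If the dependency's Money type changes shape in a new release, only the conversion inside Catalog has to change; callers compiled against Price are unaffected.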
Good luck finding a project of any complexity that manages to adhere to that kind of design sensibility religiously.
(I think the only language I've ever used that provided top-level support for recognizing that complexity was SML/NJ, and it's been so long that I don't remember exactly how it was done... Modules could take parameters so at the top level you could pass to each module what submodule it would be using, and only then could the module emit types originating from the submodule because the passing-in "app code" had visibility on the submodule to comprehend those types. It was... Exactly as un-ergonomic as you think. A real nightmare. "Turn your brain around backwards" kind of software architecting.)
I can think of plenty of situations where you really want to use the dependency's types, though. For instance, the dependency provides some sort of data structure, and you have one library that produces said data structure and a separate library that consumes it.
What you're describing with SML functors is essentially dependency injection I think; it's a good thing to have in the toolbox but not a universal solution either. (I do like functors for dependency injection, much more than the inscrutable goo it tends to be in OOP languages anyways)
I can think of those situations too, and in practice this is done all the time (by everyone I know, including me).
In theory... none of us should be doing it. Emitting raw underlying structures from a dependency, coupled with ranged versioning, means part of your API is under-specified: "This function returns a value, the type of which is whatever this third party that we don't directly communicate with says the type is." That's hard to code against in the general case (but it works out often enough in the specific case that I think it's safe to do 95-ish percent of the time).
It works just fine in C land because modifying a struct in any way is an ABI breaking change, so in practice any struct type that is exported has to be automatically deemed frozen (except for major version upgrades where compat is explicitly not a goal).
Alternatively, it's a pointer to an opaque data structure. But then that fact (that it's a pointer) is frozen.
Either way, you can rely on dependencies not just pulling the rug from under you.
I like this answer. "It works just fine in C land because this is a completely impossible story in C land."
(I remember, ages ago, trying to wrap my head around the Component Object Model. It took me a while to grasp it in the abstract because, I finally realized, it was trying to solve a problem I'd never needed to solve before: ABI compatibility across closed-source binaries with different compilation architectures.)
So you need to test whether the version works yourself (e.g. via automated tests)? It seems better to have the library author do this for you and define a range.
It’s totally fine in Maven, no need to rebuild or repackage anything. You just override the version of libinsecure in your pom.xml and it uses the version you told it to.
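For example (coordinates are made up), a dependencyManagement entry in your own pom.xml pins the transitive version without turning libinsecure into a direct dependency:

```xml
<!-- Hypothetical pom.xml fragment: force the patched 1.2.4 wherever
     libinsecure shows up in the dependency tree. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.example</groupId>
      <artifactId>libinsecure</artifactId>
      <version>1.2.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```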
Don't forget the part where Maven silently picks one version for you when there are transitive dependency conflicts (and no, it's not always the newest one).
Clojure Electric is different. It’s not really sync, it’s more of a thin client. It relies on having a fast connection to the server at all times, and re-fetches everything all the time. Their innovation is that they found a really, really ergonomic way to do it.
Electric’s network state distribution is fully incremental; I’m not sure what you mean by “re-fetches everything all the time”, but that is not how I would describe it.
If you are referring to virtual scroll over large collections - yes, we use the persistent connection to stream the window of visible records from the server in realtime as the user scrolls, affording approximately realtime virtual scroll over arbitrarily large views (we target collections of 500-50,000 records and test at 100ms artificial RT latency; my actual prod latency to the Fly edge network is 6ms RT ping). The Electric client retains in memory precisely the state needed to materialize the current DOM state, no more, no less. Which means client process performance is decoupled from the size of the dataset - which is NOT the case for sync engines, which put high memory and compute pressure on the end user device for enterprise-scale datasets.

It also inherits the traditional backend-for-frontend security model, which all enterprise apps require - including consumer apps like Notion that make the bulk of their revenue from enterprise citizen devs and are therefore exposed to enterprise data security compliance. And this is in an AI-focused world where companies want to defend against AI scrapers so they can sell their data assets to foundation model providers for use in training!
Which IMO is the real problem with sync engines: they are not a good match for enterprise applications, nor are they a good match for hyperscale consumer SaaS companies that aspire to sell into enterprise. So what market are they for, exactly?
If you think of an existing database, like Postgres, sure. It’s not very convenient.
What I am saying is, in a perfect world, the database and the server would be one and the same, running code _and_ holding data at the same time. There’s really no good reason why they are separated, and it causes a lot of inconvenience right now.
Sure, in an ideal world we don't need to worry about resources and everything is easy. There are very good reasons why they are separated now. There have been systems for decades, like 4th Dimension and K, that combine them. They're great for systems of a certain size, but they do struggle once their workload is heavy enough, and they seem to struggle to scale out. Being able to update my application without updating the storage engine reduces risk. Having standardized backup solutions for my RDBMS means there's a whole level of effort I don't have to worry about. Data storage can even be optimized without my application having to be updated.
Why? Can’t you specify which version to use?