
I was hoping part of this delay was due to people arguing lock files are poor engineering to begin with, but alas, no mention of that. I guess we've just given up on any kind of package version flexibility.


That would be because package version flexibility is an entirely orthogonal concept to lock files, and to conflate them shows a lack of understanding.

pyproject.toml describes the supported dependency versions. Those dependencies are then resolved to some specific versions, and the output of that resolution is the lock file. This allows someone else to install the same dependencies in a reproducible way. It doesn't prevent someone resolving pyproject.toml to a different set of dependency versions.
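To make the distinction concrete, here is a toy sketch of what "resolution" means (the version range and available versions are made up, and real resolvers handle far more than this):

```python
def parse(version):
    """Turn '2.28.1' into a comparable tuple: (2, 28, 1)."""
    return tuple(int(part) for part in version.split("."))

# What pyproject.toml declares: a *range*, here ">=2.0,<3" (made up).
def in_range(version):
    return (2, 0) <= parse(version) < (3,)

# What the index happens to offer at resolution time.
available = ["1.9", "2.0", "2.28.1", "3.0"]

# The resolver picks one concrete version from the range; that exact
# pin is what lands in the lock file, not the range itself.
pinned = max((v for v in available if in_range(v)), key=parse)
print(pinned)  # 2.28.1
```

The range stays flexible in pyproject.toml; only the resolved pin is frozen.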

If you are building a library, downstream users of your library won't use your lockfile. Lockfiles can still be useful for a library: one can use multiple lockfiles to try to validate its dependency specifications. For example you might generate a lockfile using minimum-supported-versions of all dependencies and then run your test suite against that, in addition to running the test suite against the default set of resolved dependencies.
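A minimal sketch of that minimum-supported-versions idea: pin every `>=` floor to `==` and run the tests against those pins. The requirement strings are hypothetical, and real resolvers can do this properly (including transitive dependencies), but the core transformation is simple:

```python
import re

# Hypothetical dependency specifiers from a pyproject.toml.
requirements = ["requests>=2.20,<3", "attrs>=21.3"]

# Pin every floor: 'pkg>=X,...' becomes 'pkg==X', so the test suite
# can run against the oldest versions the project claims to support.
floors = [re.sub(r">=([^,]+).*", r"==\1", req) for req in requirements]
print(floors)  # ['requests==2.20', 'attrs==21.3']
```

If the test suite fails against these pins, the declared lower bounds were never actually supported.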


> I guess we've just given up on any kind of package version flexibility.

Presumably because decades of experience have demonstrated that humans are extremely bad at maintaining compatibility between releases, and that dealing with fallout from badly specified package versions is probably second only to NULL in terms of engineering time wasted?

Or possibly it's just because a lot of the Python ecosystem doesn't even try to follow semver, and you have no guarantee that any two versions are compatible with each other without checking the changelog and sacrificing a small chicken...


> Or possibly it's just because a lot of the Python ecosystem doesn't even try and follow semver

Even if they try, semver can only ever be a suggestion of the possibility of compatibility at best because people are not oracles and they misjudge the effects of changes all the time.


One opinion that gets me flamed all the things is this: I hate semver. Just use linear version numbers. Incompatibility? That's a new package with a new name.


In ecosystems with hundreds of dependencies, that requires you to review the changelog and source code of all of them all the time, especially if you want to avoid security issues fixed in more recent versions. I’d rather have a clear indication of something I need to take care of (a new major version), something new I might benefit from (a new minor version), or something that improved the library in some way (a new patch version). That alone slices required effort on my part down considerably.
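That triage is mechanical under semver, which is exactly the point. A toy sketch (the versions are made up, and this ignores pre-releases and other edge cases that a real version-handling library would cover):

```python
def parts(version):
    """Split '1.4.2' into (major, minor, patch)."""
    major, minor, patch = (int(p) for p in version.split("."))
    return major, minor, patch

def change_level(old, new):
    """Classify an upgrade the way semver asks you to read it."""
    if parts(new)[0] != parts(old)[0]:
        return "major: breaking changes, review before upgrading"
    if parts(new)[1] != parts(old)[1]:
        return "minor: new features, should be backwards compatible"
    return "patch: bug fixes only"

print(change_level("1.4.2", "2.0.0"))
print(change_level("1.4.2", "1.5.0"))
```

With linear version numbers, no such classification is possible without reading the changelog.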


That would be ideal, but in practice, SemVer is frequently broken with technicalities or outright disregard for standards. Look at K8s: still on v1, but there have been countless breaking changes. Their argument is that the core application hasn’t changed its API, it’s all of the parts that go into it to make it useful (Ingress, Service, etc.) that have had breaking changes. This is an absurd argument IMO.

Then there’s Python - which I dearly love - that uses something that looks exactly like SemVer, but isn’t. This is often confusing for newcomers, especially because most libraries use SemVer.


If a cure was effective on 99.9999% of all patients, would you say that it’s frequently failing? Millions of projects use SemVer just fine.

Singling out two behemoths designed by committee to demonstrate SemVer's shortcomings seems misleading.


> Millions of projects use SemVer just fine.

This is one of those things that becomes clearly untrue if you try to apply any amount of rigor to the definition of "just fine".

I have yet to see a single major Python project claiming to use SemVer that didn't occasionally unintentionally violate the promise made by the versioning because you cannot always easily predict whether a change will be incompatible, and, even if you could, people still regularly just make mistakes because people are not perfect. Versioning is hard. Keeping promises about not breaking things is even harder.

That may be good enough for things that don't matter, but it's not good enough for things that matter a lot.


> Look at K8s

Maybe I'm being too pedantic here, but semver for applications is always going to be broken and driven by marketing. SemVer, for my money, is only applicable realistically to libraries.


What do you mean, "have to take care of something"? You don't have to upgrade to a new major version. The problem with major versions is that they make it too easy to break other people and cause work for them.


Sometimes you do have to upgrade. We were using a package that was two years old, and the Google APIs it called were renamed one day. I’m sure there was an announcement or something to give us warning, but for whatever reason, we didn’t see it. So that day, everything came crashing to a halt. We spent the afternoon upgrading and then we were done.

To say that you don’t have to upgrade is true, but it always comes at a price.


Software is churn. If you stick to outdated versions for too long, the rest of the world evolves without you, until other things start breaking. For example, a new dependency A you need depends on another package B you already have, but A needs a newer version of B than you use. At that point, you have a huge undertaking ahead of you that blocks productivity and comes with a lot of risk of inadvertently breaking something.

Whether someone else or I am the problem doesn’t matter to my customers at the end of the day, if I’m unable to ship a feature I’m at fault.


I have to upgrade if I want security fixes. Even if they patch old majors for a time, that’s not perpetual.


> That's a new package with a new name.

Well, yeah, it's reasonable people flame you there. What is the difference between,

- zlib-1 v 23

- zlib 1.2.3

except that automatic updates and correlation are harder for the first approach. It also will likely make typosquatting so much more fun and require namespacing at the very least (to avoid someone publishing, e.g., zlib-3 when official projects only cover zlib-{1,2}).


I can already typosquat a "zlib2", so what's the difference?


Sure, but when bumping an existing zlib from 1 -> 2, you would increase the version number (in a package manager) instead of removing & adding separate dependencies.


At least that'll allow you to install both in parallel. Which is an absolutely essential requirement IMHO, and there not being a solution for this for semver'd Python packages is a root cause of all this I'd say.


The PHP ecosystem is almost universally semver and has been going strong for years now, without any major accidental breaking change related outages.

A little discipline and commitment to backwards compatibility and it isn’t too hard, really?


PHP’s commitment (across the ecosystem) to backwards compatibility is actually pretty cool. If there is one thing PHP does right, it’s this.


That's why other spaces have machine tooling for this. There seems to be an overall drift in Python to use more type annotations anyway; making a tool that compares two versions of a package isn't rocket science (maybe it already exists?).
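A crude sketch of what such a checker could look like, comparing the public callables of two versions of a module (the module and function names are invented, and this only catches signature-level breaks, nothing behavioral):

```python
import inspect
import types

def public_api(module):
    """Snapshot the public callables of a module: name -> signature."""
    return {
        name: str(inspect.signature(obj))
        for name, obj in vars(module).items()
        if callable(obj) and not name.startswith("_")
    }

def breaking_changes(old_api, new_api):
    """Public names that were removed or whose signature changed."""
    return {
        name for name, sig in old_api.items()
        if new_api.get(name) != sig
    }

# Two toy "versions" of the same package, built in memory.
v1 = types.ModuleType("pkg")
v1.connect = lambda host: None
v2 = types.ModuleType("pkg")
v2.connect = lambda host, timeout: None  # signature changed: breaking

print(breaking_changes(public_api(v1), public_api(v2)))  # {'connect'}
```

This is roughly what ELF soname tooling does at the symbol level; it catches accidental breaks, not deliberate semantic ones.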

Literally everyone posting here is using a system built on compatible interfaces. Stable DLLs on Windows, dylib framework versions on OSX, ELF SOversioning on Linux.

It's clearly not impossible, just a question of priorities and effort, and that makes it a policy decision to do it or not. And I'll lean towards we've been shifting that policy too far in the wrong direction.


The only reason DLLs are stable on Windows is that every application now ships all the DLLs they need to avoid the DLL Hell caused by this exact thing not working.

I look forward to you demonstrating your tool that can check if two python packages are compatible with each other, maybe you can solve the halting problem when you're done with that?


I'm not a Windows developer, you'll have to excuse my ignorance on that. Pretty sure you're not shipping user32.dll with your applications though.

Also, I didn't claim any tool would give a perfect answer on compatibility. They don't for ELF libraries either, they just catch most problems, especially accidental ones. The goal is 99.9%, not 100%. Just being unable to solve a problem perfectly doesn't mean you should give up without trying.


Windows kept system libraries stable and modern software does a lot to avoid them with abstractions on top because they are unpleasant to use.

You could call that success but I think it’s just an extra layer of cruft.


> I'm not a Windows developer, you'll have to excuse my ignorance on that. Pretty sure you're not shipping user32.dll with your applications though.

Microsoft's famously terrifyingly large amount of engineering work that goes into maintaining backwards compatibility to allow you to run Windows 3.1 software on Windows 11 is certainly impressive, but maybe also is the exception that proves the rule.

> Just being unable to solve a problem perfectly doesn't mean you should give up without trying.

Currently no one can solve that problem at all, let alone imperfectly. If you can, I'd gladly sponsor your project, since it would make my life a lot easier.


I think it'd be near impossible to guarantee API compatibility, regardless of type hinting. E.g. if a function returns a list, a new version can add or remove items from that list such that it's a breaking change for users, without any visible API incompatibility.
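A contrived example of exactly that kind of break, which no signature or type checker can see (both function names are invented, standing in for two versions of the same function):

```python
# "v1" of a hypothetical library function.
def supported_formats_v1():
    return ["json", "yaml"]

# "v2": identical name, arguments, and return type as far as any
# checker can tell, but dropping "yaml" breaks every caller that
# relied on it being in the list.
def supported_formats_v2():
    return ["json"]

assert "yaml" in supported_formats_v1()
assert "yaml" not in supported_formats_v2()  # the silent break
```

Both versions type-check as `() -> list[str]`; only a behavioral test catches the difference.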


Lock files are about what version the application needs installed, not what a library depends on. They don’t prevent package version flexibility.



