> That makes no sense: shipping all dependencies (e.g. shipping a container image) gives perfect binary compatibility on Linux, which is what flatpak/snap/appimage do.
True, but sad. The way to achieve compatibility on Linux is to distribute applications in the form of what are essentially tarballs of entire Linux systems. This is the "fuck it" solution.
Of course I suppose it's not unusual for Windows stuff to be statically linked or to ship every DLL with the installer "just in case." This is also a "fuck it" solution.
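For the curious, the crudest form of that "fuck it" bundling is just: walk the binary's ldd output, copy every library next to it, and point LD_LIBRARY_PATH at the pile. A minimal sketch of the idea only (my own illustration, not what AppImage or flatpak actually do internally; real tools also handle the dynamic loader, rpaths, data files, etc.):

```python
#!/usr/bin/env python3
"""Crude "ship everything" bundler: copy a binary's shared-library
dependencies next to it so the result can be tarred up and run elsewhere
with LD_LIBRARY_PATH pointing at the bundle. Sketch only."""
import shutil
import subprocess
import sys
from pathlib import Path

def bundle(binary: str, dest: str = "bundle") -> None:
    out = Path(dest)
    out.mkdir(exist_ok=True)
    shutil.copy2(binary, out)

    # ldd prints lines like: "libz.so.1 => /usr/lib/libz.so.1 (0x...)"
    ldd = subprocess.run(["ldd", binary], capture_output=True, text=True, check=True)
    for line in ldd.stdout.splitlines():
        if "=>" not in line:
            continue  # skip vdso / dynamic loader entries
        path = line.split("=>", 1)[1].strip().split(" ")[0]
        if path and Path(path).exists():
            shutil.copy2(path, out)

    print(f"Run with: LD_LIBRARY_PATH={out.resolve()} {out / Path(binary).name}")

if __name__ == "__main__":
    bundle(sys.argv[1])
```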
> to distribute applications in the form of what are essentially tarballs of entire Linux systems.
Not so bad when Linux ran from a floppy with 2 MB of RAM. Sadly, every library just got bigger and bigger without any practical way to generate a lighter, application-specific version.
If Linux userspace had libraries with stable ABIs, you could just tar or zip binaries and they would work. You wouldn't need to bundle the system layer. This is how you deploy server apps on Windows Server systems: you just unpack them and they work.
It is not a packaging problem, it is a system design problem. The Linux ecosystem simply isn't nice for binary distribution, with the kernel being about the only exception.
Linux feels a bit different since the complete system is not controlled by a single vendor. You have multiple distributions with their own kernel versions, libc versions, library dependencies, etc.
Mac OS has solved this but that is obviously a single vendor. FreeBSD has decent backwards compatibility (through the -compat packages), but that is also a single vendor.
> Linux feels a bit different since the complete system is not controlled by a single vendor. You have multiple distributions with their own kernel versions, libc versions, library dependencies, etc.
No, AFAICS that can't be it. The problem is that all those libraries (libc and others?) change all the time, and aren't backwards-compatible with earlier versions of themselves. If they were backwards-compatible, you could just make sure to have the newest one any of your applications needs, and everything would work.
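Concretely, the wall you usually hit is glibc symbol versioning: a binary built against a newer glibc references symbol versions an older glibc doesn't export, so it won't even load there. A rough way to see what a given binary demands (a sketch; assumes binutils' objdump is installed, and the parsing is deliberately simple):

```python
#!/usr/bin/env python3
"""Print the glibc symbol versions a binary references, i.e. the thing
that decides whether it will run on an older distro. Sketch only."""
import re
import subprocess
import sys

def glibc_versions_needed(binary: str) -> list[tuple[int, ...]]:
    # objdump -T lists dynamic symbols with version tags like "GLIBC_2.34"
    out = subprocess.run(["objdump", "-T", binary],
                         capture_output=True, text=True, check=True).stdout
    versions = set(re.findall(r"GLIBC_(\d+\.\d+(?:\.\d+)?)", out))
    return sorted(tuple(int(p) for p in v.split(".")) for v in versions)

if __name__ == "__main__":
    needed = glibc_versions_needed(sys.argv[1])
    print("GLIBC versions referenced:", [".".join(map(str, v)) for v in needed])
    if needed:
        print("Won't load on systems with glibc older than",
              ".".join(map(str, needed[-1])))
```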
-compat packages exist on Fedora-like systems too, usually allowing older versions to run. I can't say how far back, but RHEL usually has current version - 1 for -compat packages.
Packaging is “hard” but mobile and app stores do it.
They do it by having standards in the OS, partial containerization, and above all: applications are not installed “on” the OS. They are self contained. They are also jailed and interact via APIs that grant them permissions or allow them to do things by proxy. This doesn’t just help with security but also with modularity. There is no such thing as an “installer” really.
The idea of an app being installed at a bunch of locations across a system is something that really must die. It’s a legacy holdover from old PC and/or special snowflake Unix server days when there were just not many machines in the world and every one had its loving admin. Things were also less complex back then. It was easy for an admin or PC owner to stroll around the filesystem and see everything. Now even my Mac laptop has thousands of processes and a gigantic filesystem larger than a huge UNIX server in the 90s.
I can't think of a single thing that would kill the last bit of joy I take in computing more. If I woke up in such a world, I'd immediately look to reimplement Linux in an app and proceed to totally ignore the host OS.