
Likely because Plan9's 'everything-is-a-filesystem' failed.


The standard answer is, "because inventing and implementing them was easier than fixing Python packaging."


I think "fixing distro packaging" is more apropos.

In a past life, I remember having to juggle third-party repositories in order to get very specific versions of various services, which resulted in more than a few instances of hair-pull-inducing untangling of dependency weirdness.

This might be controversial, but I personally think that distro repos being the assumed first resort of software distribution on Linux has done untold amounts of damage to the software ecosystem on Linux. Containers, alongside Flatpak and Steam, are thankfully undoing the damage.


> This might be controversial, but I personally think that distro repos being the assumed first resort of software distribution on Linux has done untold amounts of damage to the software ecosystem on Linux.

Hard agree. After getting used to "system updates are... system updates; user software that's not part of the base system is managed by a separate package manager from system updates, doesn't need root, and approximately never breaks the base system (to include the graphical environment); development/project dependencies are not and should not be managed by either of those but through project-specific means" on macOS, the standard Linux "one package manager does everything" approach feels simply wrong.


> development/project dependencies are not and should not be managed by either of those but through project-specific means" on macOS, the standard Linux "one package manager does everything" approach feels simply wrong.

This predates macOS. The mainframe folks did this separation eons ago (see IBM VM/CMS).

On Unix, it's mostly the result of getting rid of your sysadmins who actually had a clue. Even in Unix-land in the Bad Old Days(tm), we used to have "/usr/local" for a reason. You didn't want the system updating your Perl version and bringing everything to a screeching halt; you used the version of Perl in /usr/local that was under your control.
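The /usr/local pattern described above comes down to PATH ordering: a locally built interpreter shadows the distro's copy without touching the base system. A minimal sketch (the temp-dir prefix and the fake "perl" build stand in for a real /usr/local install):

```shell
#!/bin/sh
# Illustrative only: the prefix is a stand-in for /usr/local, and the
# "perl" here is a stub standing in for a locally built, pinned version.
prefix=$(mktemp -d)
mkdir -p "$prefix/bin"

# Pretend this is our own build of perl, under our control
printf '#!/bin/sh\necho "local perl 5.36.0"\n' > "$prefix/bin/perl"
chmod +x "$prefix/bin/perl"

# Local prefix first, system paths last: "our" perl wins over /usr/bin/perl
PATH="$prefix/bin:$PATH"
command -v perl   # resolves inside $prefix, not /usr/bin
perl              # prints: local perl 5.36.0
```

The distro can now update its own Perl all it likes; anything resolving through PATH keeps hitting the version under your control.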


I wonder if it can be traced back to something RedHat did somewhere, because it may have all begun once you COULDN'T be absolutely certain that anything even remotely "enterprise" was running on a RedHat.


I think it's a natural outgrowth of what Linux is.

Linux is just a kernel - you need to ship your own userland with it. Therefore, early distros had to assemble an entire OS around this newfangled kernel from bits and pieces, and those bits and pieces needed a way to be installed and removed at will. Eventually this installation mechanism got scope creep, and suddenly things like FreeCiv and XBill are distributed using the same underlying system that bash and cron use.

This system of distro packaging might be good as a selling point for a distro - so people can brag that their distro comes with 10,000 packages or whatever. That said, I can think of no other operating system out there where the happiest path of releasing software is to simply release a tarball of the source, hope a distro maintainer packages it for you, hope they do it properly, and hope that nobody runs into a bug due to a newer or older version of a dependency you didn't test against.


Yours is a philosophy I encounter more and more: there should be one unified platform, ideally fast-moving, where software is only tested against $latest. Stability is a thing of the past; the important thing is more features.

Instead of designing a solution and perfecting it over time, it's endless tweaking, with a new redesign every year. And you're supposed to use the exact same computer as the dev to get their code to work.


Red Hat was actually doing something more directly based on a variety of existing Linux projects than Docker, but switched to OCI/Docker when that came about--rather than jumping on the Cloud Foundry bandwagon. (Which many argued was obviously the future for container orchestration.)

Kubernetes was also not the obvious winner in its time, with Mesos in particular seeming like a possible alternative back when it wasn't clear whether orchestration and resource management were even the same product category.

I was at Red Hat at the time, and my impression was they did a pretty good job of jumping to where the community momentum was--while doubtless influencing that momentum themselves.


Ngl this is why I started using them


Never grew popular, perhaps. But I'm not sure how it failed, and not sure how much Venn-diagram overlap of concerns Plan9 really has with containers.

Yes, Plan9 had the idea of creating bespoke filesystems for apps and custom mount structures, and containers do something semi-parallel. But container images as read-only overlays (with a final rw top overlay) feel like a very narrow craft. Plan9 had a lot more to it (everything as a file), and containers have a lot more to them (process, user, and net namespaces; container images as pre-assembled layers).
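For the curious, the "read-only overlays plus a rw top layer" mechanic is overlayfs doing the stacking. A rough sketch of what a container runtime does under the hood (directory names are illustrative, and the mount requires root, so treat this as a non-runnable fragment):

```shell
# Two read-only image layers plus one writable layer, merged into one view.
mkdir -p lower1 lower2 upper work merged

sudo mount -t overlay overlay \
    -o lowerdir=lower2:lower1,upperdir=upper,workdir=work \
    merged

# Lookups fall through merged -> upper -> lower2 -> lower1;
# all writes land only in upper (the container's rw top layer).

sudo umount merged
```

Deleting or replacing the container just means throwing away `upper`; the image layers underneath are never modified, which is why they can be shared between containers.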

I can see some shared territory, but these concerns feel mostly orthogonal. I could easily imagine a Plan9-like entity arising amid the containerized world: they aren't really in tension with each other. There's also a gap of a decade and a half-plus between Plan9's heyday and the rise of containers.



