LXC is really quite simple and makes a lot of sense when you need something lightweight that behaves like a VM. Docker takes a very specific view of containers, with layers, a modified container OS template, and an init that limits the container to running a single app. That may be great for deployment or PaaS-specific use cases, but it adds needless complexity for more general ones.
There is more to containers than running a single app. Their potential as lightweight, efficient and portable alternatives to VMs is immense. We have tons of resources on LXC at Flockport, including ready-to-use containers [1].
VM workloads can transition seamlessly to LXC containers, and the large ecosystem of Linux tools and apps that works on VMs and full systems works perfectly in LXC. You do not need to find a container-specific way to do things.
Take networking: overlay networks and clustering that work in VMs work just as well in containers. We have a large number of tutorials on multi-host container networking over both layer 2 and layer 3 with GRE, L2TP, VXLAN, IPsec, tinc and more [2]. A lot of Docker-specific tools use these under the hood, but they are easy enough to use on their own.
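As a taste of the layer 2 approach, here is a minimal sketch of bridging containers on two hosts with a GRE tap tunnel; the IPs and the bridge name br0 are assumptions, and the full tutorials cover the details:

    # On host A (10.0.0.1): point a layer 2 GRE tunnel at host B and
    # attach it to the bridge the containers are already plugged into.
    ip link add gre1 type gretap local 10.0.0.1 remote 10.0.0.2
    ip link set gre1 up
    ip link set gre1 master br0

    # On host B (10.0.0.2): mirror the tunnel in the other direction.
    ip link add gre1 type gretap local 10.0.0.2 remote 10.0.0.1
    ip link set gre1 up
    ip link set gre1 master br0

With that in place, containers on br0 on both hosts share one layer 2 segment, exactly as they would between VMs.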
The LXC team are trying to move the needle with LXD. It uses unprivileged containers by default (non-root users can run containers; this is not just a capabilities drop) and adds multi-host management with a REST API, so you can query the LXD daemon for container orchestration across hosts, plus live migration. These are big steps forward, and they really do need the support of the community. They have been doing this for 7 years, and it is thanks to them that containers exist.
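To make the REST API point concrete, a rough sketch of what querying and cross-host management look like; the socket path, endpoint and remote IP here are assumptions for illustration:

    # Ask the local LXD daemon what it knows over its unix socket.
    curl --unix-socket /var/lib/lxd/unix.socket http://lxd/1.0/containers

    # Expose the API over HTTPS, then from another machine register this
    # host as a remote and list its containers.
    lxc config set core.https_address "[::]:8443"
    lxc remote add host2 192.0.2.10   # run on the other machine; needs trust setup
    lxc list host2: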
> The LXC team are trying to move the needle with LXD. It uses unprivileged containers by default (non-root users can run containers; this is not just a capabilities drop) and adds multi-host management with a REST API, so you can query the LXD daemon for container orchestration across hosts, plus live migration. These are big steps forward, and they really do need the support of the community. They have been doing this for 7 years, and it is thanks to them that containers exist.
I'm always surprised when people omit to point out that "the LXC team" is basically Canonical, the corporate sponsor behind Ubuntu (including the kernel bits).
People love to bash on the company, it'd be nice to acknowledge the achievements as well :)
Full disclaimer: I'm a Canonical employee. I hope that doesn't come across as corporate shilling; I genuinely love LXD.
> I'm always surprised when people omit to point out that "the LXC team" is basically Canonical, the corporate sponsor behind Ubuntu (including the kernel bits).
Perhaps now, but historically that is very much incorrect. IBM funded it originally, and it was conceived and implemented (i.e. brought to the "actually working and in-kernel" state) with the goal of segregating concurrently executing workloads on fat IBM-supplied mainframes.
> They have been doing this for 7 years, and it is thanks to them that containers exist.
Well, no, Linux containers have existed for longer, in the form of OpenVZ. I find it pretty funny that they are now considered "new" :)
Besides unprivileged containers, which I suppose can be useful for some use cases, what's the great advantage of LXD vis-à-vis OpenVZ, which is a more mature and well-understood technology?
> Linux containers have existed for longer, in the form of OpenVZ.
Also Linux-VServer, yadda-yadda. Not only OpenVZ.
> What's the great advantage of LXD vis-à-vis OpenVZ, which is a more mature and well-understood technology?
OpenVZ took approaches that meant it would never be accepted into the Linux kernel. This means, frankly, that it is dying as people move to LXC, whose components are in the kernel. Calling something 'mature and well-understood' when it is essentially destined to become a footnote in history may be correct, but it's also rather short-sighted and increasingly irrelevant.
The OpenVZ userspace already works with the same kernel components used by LXC; you can create, start and stop containers on both an OpenVZ and a regular kernel with the same commands.
The commands that don't work are those which don't have the underlying kernel feature to support them yet; as the upstream kernel gains such features, so will the OpenVZ userspace.
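For what it's worth, the workflow really is the same shape on either kernel; a quick sketch with the classic vzctl tooling (the template name is just illustrative):

    # Create, start, poke at, and stop a container with the OpenVZ userspace.
    vzctl create 101 --ostemplate debian-7.0-x86_64
    vzctl start 101
    vzctl exec 101 ps aux
    vzctl stop 101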
So there's no "short-sightedness" here. OpenVZ and its tooling will live beyond its kernel.
Meanwhile, LXC has already been abandoned by Docker (it's now using libcontainer by default), and others use systemd-nspawn, so what makes you think LXC isn't the short-sighted and soon-to-be irrelevant choice?
Last time I looked, and I did really look, the big picture was that OpenVZ and Linux-VServer had made it clear they were moving to work through cgroups and namespaces, same as LXC, and would essentially offer little differentiation except edge-case features (freeze/thaw, migration, extra security stuff that breaks random programs or requires a PhD in syscall analysis, whatever). These features are questionable in many cases, and if they require custom kernels they'll never take off as mainstream tooling, even if the projects survive.
By LXC I was referring to IBM's work in general (mostly kernel work), not just the LXC userspace, which has its issues but works most of the time and is the de facto standard, having been first out of the gate. Docker, for all its marketing budget, can't change history, even with scores of people on its team.
I think Flockport is doing some amazing things. When I read the PagerDuty bug-hunt post and saw that they were using IPsec between "hosts", I wondered if perhaps they used LXD with some of your services.
Someone here on HN encouraged others to check out LXD in a previous thread a week ago. I did it and was really blown away. It really did feel like I was running virtual machines from an extremely fast hypervisor.
What mattered to me most was getting away from the "1 process per container" mentality. Phusion realized right away that this was an unnecessary constraint and built their base image [0] to help others.
With LXD you really get the best of both worlds. You can containerize your applications without having to rewrite your entire stack just because it also needs cron or a few other processes. Ain't nobody got time for that.
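If you want to see what that feels like, a quick sketch (the image alias and container name are just examples):

    # Launch a full-system container and get a shell in it. init, cron,
    # syslog and friends are all running, so an existing stack can move
    # in without being re-architected around one process.
    lxc launch ubuntu:trusty app1
    lxc exec app1 -- bash
    lxc exec app1 -- ps aux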
Without a doubt, use Docker. It has the ecosystem, which is extremely important. It also has the support and the community.
For devops engineers looking to build an entire enterprise around containers, LXD will be a great option once it and its ecosystem mature. It's not there yet, and probably won't be for a few years.
Another great company that's doing extremely cool things with LXC minus Docker is Flockport[0]
So Flockport is a competitor to Docker? It seems to do what Docker does. So many choices; CoreOS too. Docker has the momentum and the VC money. It's hard to really know what the "right" choice is.
Also LXC has been in the kernel longer and seems more mature (since 2.6.20-something).
LXC in the kernel? It's a userspace toolkit that makes use of kernel technologies (namespaces, pivot_root, cgroups...), but I don't recall it being bundled as a part of it.
Sorry, I meant LXC support. Yeah, it consists of a set of features supported by regular Linux kernels plus userspace tools. The feature set has been there for quite a while.
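You can see how thin the userspace layer is by driving the same kernel features by hand; a minimal sketch with util-linux's unshare:

    # Give a shell its own PID and mount namespaces, no LXC involved.
    sudo unshare --fork --pid --mount-proc /bin/bash
    # Inside, ps shows this bash as PID 1: same kernel, isolated view.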
> but I don't recall it being bundled as a part of it.
But the support is in the kernel, in contrast to, say, OpenVZ, which required patching the kernel to work.
I like LXD a lot, but it's simply too new to rally behind it fully. You have to backport it to Ubuntu 14.04 if you even want to run it on an LTS operating system.
That being said, I think Canonical has good vision and will execute. LXD combines the best of the old school and the new.
You could also write an Ansible role for your app that automates the setup and installation. Then installing your app becomes as simple as a single Ansible command, without additional dependencies.
Is this open source, or do I have to pay? It's not clear from the website.
If I have to coordinate multiple apps (three apps in this system that have to speak with each other over ports) and I want to run each app in its own container, on the same server or distributed, can Ansible help me? Thanks.
Ansible is FOSS - so yes, open-source and free for you to use.
Ansible is an automation tool which lets you create one or more playbooks for automatically configuring and deploying applications.
You certainly could use it to automate the configuration of these apps in containers, and in the case of a distributed, multi-machine arrangement it will improve your life.
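For instance, a minimal playbook sketch; the group and role names are made up, and each group could map to containers on one box or spread across several:

    # site.yml: one play per app, each applying its own role.
    cat > site.yml <<'EOF'
    - hosts: app_a
      roles: [app_a]
    - hosts: app_b
      roles: [app_b]
    - hosts: app_c
      roles: [app_c]
    EOF

    # One command configures all three apps wherever the inventory puts them.
    ansible-playbook -i inventory site.yml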
> What mattered to me most was getting away from the "1 process per container" mentality. Phusion realized right away that this was an unnecessary constraint
It's a necessary constraint, because otherwise developers just carry on with a monolithic application development process. The goal should be isolated, interchangeable processes, not an entire virtual environment; anything else is delaying the inevitable. It also makes scaling a lot more straightforward.
If you want a single process that you care about, there is this thing called a process that fits the bill admirably. You can start one up very quickly and kill it when you don't need it.
I think Docker is a great tool, but I also think the perceived benefits it brings get discussed a lot more often than the complexities it introduces.
Pushing your complexity to different parts of your stack does not reduce it - it just moves it. In many cases, that's perfectly fine and works better - but it's no panacea.
But since they can have only one process, they will cram everything into that process, possibly spawning hundreds of threads with mutexes.
Multiple processes, on the other hand, encourage fault tolerance and modularization.
To put it another way, having just one process doesn't mean it is simpler. I have seen single-process applications built from millions of lines of C++ code.
Whether you have a single process or not, the "parent" application will be responsible for maintaining and executing subprocesses/subroutines.
Applications still have the ability to spawn processes with single-process containers; you're just doing it through a different OS/process management system.
By keeping the multiple processes in the same container, you're likely decreasing modularity by introducing interdependencies via the file system.
The thing is, we already have good tools for process management on UNIX-like operating systems: cron, process supervisors, systems for log handling. If you need a separate container for every single process you run, you have to throw out all the standard UNIX tools and start from scratch. The alternatives for the Docker-style world do not exist yet, and where they do exist they are far from stable. So you will end up reinventing them, and most likely doing it wrong.
I agree with most of your statement, except the part about "good tools" and the assumption that it will be done wrong forever. Mistakes will be made, but the current *nix tools aren't really ready for the distributed paradigm either.
What I meant is that the current Docker-based solutions are often built without much experience. I'm certainly not against experimenting, but I think there are many benefits to running a containerized environment, and LXD takes a pragmatic approach that can be used right now with the tools everybody knows.
> you're likely decreasing modularity by introducing interdependencies via the file system.
Unless the dependencies are already there. Then needlessly separating things into containers just gives the system higher overhead and higher fragility.
Say, should the monitoring software be in a different container? It looks at logs, so now it needs to mount a shared folder or coordinate sending logs via sockets. Or maybe it is a helper process, like an indexer. Putting that in a separate container with its own isolated OS dependencies might not make sense, since it needs to read from the file structures of the main process.
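To illustrate the coordination that gets forced on you, the shared-folder variant looks roughly like this with Docker's own tooling (the image names are made up):

    # The app writes its logs to a host directory...
    docker run -d --name app -v /srv/app-logs:/var/log/app example/app
    # ...and the indexer has to mount the same directory to see them.
    docker run -d --name indexer -v /srv/app-logs:/logs:ro example/indexer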
> you're just doing it through a different OS/process management system.
There is already software that does this: init, systemd and others. You can specify whether a service restarts on failure, declare dependencies between services, and so on.
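For example, a sketch of the indexer-next-to-the-app arrangement as a systemd unit; the unit and binary names are made up:

    cat > /etc/systemd/system/indexer.service <<'EOF'
    [Unit]
    Description=Log indexer helper
    Requires=app.service
    After=app.service

    [Service]
    ExecStart=/usr/local/bin/indexer
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl start indexer.service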
> Unless the dependencies are already there. Then needlessly separating things into containers just gives the system higher overhead and higher fragility.
Advantages of the Docker approach are that the dependencies are explicitly exposed and you can homogenize your file systems. Ideally you're getting rid of using files in the first place; unless you're using local storage on AWS, you're already going over the network.
> There is already software that does this: init, systemd and others.
It's hard to disagree that the old systems are more mature. They aren't as ready for distributing processes, though, and that's where most everything is headed anyway.
> Or maybe it is a helper process, like an indexer.
This is the motivation for the concept of pods that appears in rkt and Kubernetes: things like indexers, monitoring agents, service-discovery ambassadors, syncers for static-content file servers, etc. There are a number of use cases for multiple processes that live side by side and share a lifecycle. These containers don't need to be coupled, but it is nice if they can share a localhost interface and paths on a filesystem.
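A rough sketch of what that looks like as a Kubernetes pod; the names and images are made up, and emptyDir gives both containers a shared path for the pod's lifetime:

    cat > pod.yml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-indexer
    spec:
      containers:
      - name: app
        image: example/app
        volumeMounts:
        - {name: logs, mountPath: /var/log/app}
      - name: indexer
        image: example/indexer
        volumeMounts:
        - {name: logs, mountPath: /logs, readOnly: true}
      volumes:
      - name: logs
        emptyDir: {}
    EOF
    kubectl create -f pod.yml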
> What mattered to me most was getting away from the "1 process per container" mentality.
As I understand it, back when Garden was still called Warden, the team looked at very early versions of Docker. They talked to the Docker team and decided not to use Docker as the substrate of Cloud Foundry because of the single-process model; it makes it hard to sideload agents.
That said, the next-gen Cloud Foundry guts (Diego) can take Docker images as native currency, so YMMV.
LXD just takes it a little further, so at the UI level you are dealing with images through the whole lifespan of the container, not just when you are creating it. That makes it possible to wrap it all in a nice API and automate the environment.
Isn't that the same approach that Docker originally took with the Docker Hub Registry, where you can pull down pre-packaged monolithic images (albeit small ones if needed) via "docker pull ..."?[1]
I think this was an important quote from the article:
"LXD is more like an “OS container” while Docker and rkt are more like “application containers.”"
[1] http://www.flockport.com/containers
[2] http://www.flockport.com/news