Sure, it's a nice benefit, but is it actually used? In 2014, how many of us are doing backend/service development in Java and not deploying to Linux machines? There are certainly still backend/service Java shops out there that aren't deploying to Linux machines, but I'd wager that most are.
Regardless of those numbers, though, why do the Linux shops actually care about being able to deploy to non-Linux machines if they never do in practice? Python can run just about anywhere these days, but Python shops that deploy exclusively on Linux don't care about maintaining deployment compatibility with systems they don't actually use.
> In 2014, how many of us are doing backend/service development in Java and not deploying to Linux machines?
Among Silicon Valley web startups? Not many. But there are still a lot of Windows-based installs in large megacorps and small offices all over. For a lot of them, switching away is not an option, so you might want to support them.
Among Silicon Valley web startups, I would be surprised to hear much about Java service/backend work at all. My experience is primarily with the megacorps.
If you are developing backend/service shit for other companies to use, then that is one thing, but I am thinking about in-house development. Most megacorps aren't software vendors; if they are developing software it is for themselves.
Regardless though, some megacorps deploying to windows doesn't explain the practices of java shops that don't deploy to windows. It seems like portability just for the sake of portability, without any real purpose but with real added hassle.
I've written telephony apps that deployed to Linux, Unix and Windows. I've written an OpenGL / Swing app on Windows and deployed it for Mac. This kind of stuff (a) does happen and (b) is valuable.
I don't understand how you believe that I was implying that nobody develops cross platform software.
I am talking about one specific kind of development: Service/backend development done in-house in a corporation that is not a software vendor. If you're doing Swing/OpenGL work, you're not doing the sort of work that I am talking about.
In my experience, while this sort of development may deploy to different *nixes at different companies, within any particular company it tends to deploy on only one type of system. Most megacorps are not software vendors; when they develop software they are developing it for themselves. This means they control the entire stack, making portability fairly pointless.
When development in these organizations is done in other languages without such a "portability culture", not a second of thought is given to portability. The second a service is written in Java though, everyone starts jumping through hoops for something that they will never use.
Portability of Java helps you avoid the situation where your sysadmin tells you that he just upgraded a production server because the previous version of your distro reached its EOL, and now your product does not start because the libfoobar.so you depend on is no longer included in the distro, and he can't build it from source because the distro's default compiler also switched from gcc to clang.
Portability is important, even when you supposedly know your deployment platform.
Docker is an option if you're deploying on Linux, which is what the majority of server deployments use these days. Chef works on the rest of the Unixes, including OS X. If you're deploying Java on a Windows server, you're doing it wrong.
And the days of needing to use different development and production OSs are over...if you develop in Windows and deploy on some Unix variant, you should be using a VM.
I'm still not buying ClassLoader contortions as being in any way justified to support the single deployable artifact requirement. It's just not necessary and can lead to many subtle and hard-to-track-down issues as well as introducing unnecessary performance overhead.
> Docker is an option if you're deploying on Linux, which is what the majority of server deployments use these days. Chef works on the rest of the Unixes, including OS X. If you're deploying Java on a Windows server, you're doing it wrong.
What about OS/400, OS/390, MVS, etc?
Believe it or not, the world is a lot bigger than *NIX and Windows. Go track down a mid-sized manufacturing company in the midwest or the southeast, for example, and I'd almost bet you money they have apps running on an AS/400 (iSeries), or an S/38 or S/36 or something, if not a mainframe.
ClassLoader contortions are one of the cleaner ways to handle the problem, but they're not the only way.
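For context, those "contortions" usually mean a parent-last (child-first) loader, so that jars embedded in the deployable artifact win over whatever happens to be on the application class path. A minimal sketch (a production implementation would also always delegate `java.*` classes straight to the parent rather than relying on the fallback):

```java
import java.net.URL;
import java.net.URLClassLoader;

// Sketch of a "child-first" class loader: it tries its own URLs before
// delegating to the parent, so an embedded dependency can shadow one on
// the application class path. This inversion of the normal parent-first
// delegation is where the subtle, hard-to-track-down issues come from.
public class ChildFirstClassLoader extends URLClassLoader {
    public ChildFirstClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    c = findClass(name);              // look in our own URLs first
                } catch (ClassNotFoundException e) {
                    c = super.loadClass(name, resolve); // fall back to parent-first
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
```

Note that two loaders in the same JVM can now each define their own copy of the "same" class, which is exactly how you end up with baffling `ClassCastException`s between identically named types.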
You could:
- Create a fat jar, which necessitates bytecode transformations to prevent namespace collisions if you don't want to be sloppy. JarJar did this kind of thing many years ago, but it seems hard to trust a process like that.
- Create what is essentially a self-extracting archive. This looks like the path Capsule took. This introduces unnecessary state and increases startup time. This is what servlet containers do, so reinventing that wheel seems like a particularly foolish decision. Not to mention that it appears that Capsule adds the additional step of pulling in all dependencies from an external server...slow deployment and an additional point of failure!
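That second option amounts to a tiny launcher along these lines (a sketch only; the cache directory, embedded jar names, and entry-point class are all illustrative, not what Capsule actually does):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of a "self-extracting archive" launcher: jars embedded as
// classpath resources are copied into an on-disk cache, then the real
// application is loaded from there. The cache is exactly the extra
// state, and the copying the extra startup time, objected to above.
public class SelfExtractingLauncher {

    // Extract an embedded resource into the cache unless it is already there.
    static Path extractIfMissing(Path cacheDir, String resource) throws IOException {
        Path target = cacheDir.resolve(Path.of(resource).getFileName().toString());
        if (!Files.exists(target)) {
            try (InputStream in =
                     SelfExtractingLauncher.class.getResourceAsStream("/" + resource)) {
                if (in == null) throw new IOException("missing resource: " + resource);
                Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
            }
        }
        return target;
    }

    public static void main(String[] args) throws Exception {
        Path cache = Files.createDirectories(
                Path.of(System.getProperty("user.home"), ".app-cache"));
        String[] jars = { "lib/app.jar", "lib/dep.jar" };   // illustrative names
        URL[] urls = new URL[jars.length];
        for (int i = 0; i < jars.length; i++) {
            urls[i] = extractIfMissing(cache, jars[i]).toUri().toURL();
        }
        try (URLClassLoader cl = new URLClassLoader(urls)) {
            cl.loadClass("com.example.Main")                 // illustrative entry point
              .getMethod("main", String[].class)
              .invoke(null, (Object) args);
        }
    }
}
```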
I still believe that some sort of VM image or container makes the most sense. Short of that, any system that supports sshd can be configured with Chef. The "solutions" I see coming out of the Java camp all seem like hacks by comparison.
The next release will include the option to resolve (potentially download) all dependencies without launching the app, if someone finds that useful. If the dependencies are already resolved, or are embedded, startup time increases by only 100ms or so.
It still feels like a pre-automation mindset...this is a step that should happen during CI, not application startup or even deployment.
The delay is going to be a lot more than 100ms if you use something like this when scaling elastically. In that situation, you need to go from automatically provisioned to running and accepting load in as short a timeframe as possible. Fully-baked machine images or Docker containers work for this use case, your solution doesn't.
Also, making your Maven repository a dependency of your production environment just seems like a bad idea. It creates an additional attack vector from a security standpoint and an additional component that can fail from a reliability standpoint.
Copying a file takes the same time regardless of whether it's Capsule or Chef doing the copying. Once Capsule has cached the file, you can cut a VM image, and no copying will occur later.
> making your Maven repository a dependency of your production environment
It becomes a dependency of your deployment process, which probably already depends on a Maven repo. And a Maven repo can be just a NAS drive, which is often reliable enough for use in production.
Docker - "The Linux Container Engine"
Think you've got your answer there.