Hacker News

It can point to Maven dependencies that are downloaded on the first launch

You wouldn't do this for a production deployment, right? Application startup that may or may not require access to the artifact repository to complete successfully? When that idea bounces around my developer neocortex, my sysadmin hindbrain starts reaching forward to strangle it.

And if you're not going to do it in production, doing it in development means having a gratuitous difference between development and production, which, again, is something i have learned to fear.

A zip with startup scripts is OK, but it requires installation.

'gradle installApp' works out of the box, and 'installs' the jars and scripts in your build directory, which is all you need to run locally. It's the work of minutes to write an fpm [1] invocation that packages the output of installApp as an operating system package, which you can then send down the pipe towards production. This is simple, easy, standard, and fully integrated with a world of existing package management tools. Why would i use Capsule instead of doing this?

[1] https://github.com/jordansissel/fpm
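
For illustration, the fpm invocation could look roughly like this (the package name, version, and install prefix here are placeholders, not from any real project; adjust the paths to match your build layout):

```
# Hypothetical sketch: package the output of 'gradle installApp' as a .deb.
# --name, --version, and the directories are placeholders.
fpm -s dir -t deb \
    --name myapp \
    --version 1.0.0 \
    --prefix /opt/myapp \
    build/install/myapp/=.
```

The same invocation with '-t rpm' produces an RPM instead, which is part of the appeal: one build artifact layout, any package format your fleet uses.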



Well, you can choose to embed the dependencies in the capsule, but I think the best approach for production deployment is to have an organizational Maven repository (Artifactory/Nexus). This way you upload the libs, and the jars containing the actual apps, to your repo, and all you need to do is restart the capsule (it can be configured to load and run the latest version).
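
As a sketch, a capsule configured this way is driven by attributes in its JAR manifest, roughly like the following (attribute names as I recall them from Capsule's docs; the coordinates and repo URL are made up, so check the Capsule README before relying on this):

```
Main-Class: Capsule
Application: com.acme:myapp:[1.0,2.0)
Repositories: https://repo.example.com/releases
```

The version range is what lets a restart pick up the newest matching release from the repo, rather than being pinned to a specific version.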


So you're downloading jars from the repository to the production server when the app starts? I would feel very uneasy about that kind of coupling.

And i still don't see what advantage this has over just pushing out normal packages.


I feel uneasy about deploying the wrong version. With Capsule, at launch it checks for a new version in the repo (if you configure it to use the newest version rather than a specific one). The packages are only downloaded once, not on every restart.

Alternatively, you can embed the dependencies, in which case it's just like a "normal package", only it doesn't require installation, and it's just as easy to create (your build tool can create it). So it's a platform-independent "normal package", with added features.


Interesting. I feel much more confident about deploying the right thing using an operating system package than some other mechanism. Almost everything else in the datacentre is deployed using operating system packages, so we get a lot of practice at deploying the right versions of things. The few legacy applications we have that are deployed via custom mechanisms are a headache - they require completely different tooling and troubleshooting knowledge to everything else.

But then, i have spent a fair amount of time managing machines, shuffling packages around apt repositories, writing Puppet code and so on. Perhaps for a developer who has not served a sentence in operations, operating system packages are a less comforting proposition.

You seem to be very keen on avoiding "installation". Could you tell us about why that is?

And could you remind me what the added features of a capsule are? Putting aside the differences in delivery mechanism, which as i've said, i'm afraid i see as misfeatures, the only one i see is that it automatically finds the right JRE.

(Sorry you're being downvoted, by the way. I think this is an interesting discussion, and your gracious responses deserve upvotes, not downvotes.)


The advantage Capsule gives you is statelessness. The user does not need to put her system in a certain state. You get one file and you run it. It should work regardless of anything else that's installed -- or not -- on your machine, with the exception of an OS and a JRE.

There are other ways to achieve stateless deployment, and Capsule is not always the right fit. For example, it's not the right fit if your app requires an Oracle DB installed locally (it could be done, because Capsule supports an installation script embedded in the JAR, but that probably wouldn't be a good idea in this case). But when it is the right fit (e.g. microservices, grid workers etc.), it's much more lightweight than any other solution.


So your production application can potentially run in an untested configuration because someone has pushed a new version of XyzLib.

Again, for what advantage?

I mean, fine if you want to do that. But I wouldn't call it Modern Java or recommend anyone else do it.


How is that any more dangerous than deploying and installing an OS package? It's very hard to accidentally deploy to a release Maven repo. Maven Central makes you jump through hoops, and organizational repos have their own safeguards. They already deploy everything to their Maven repo anyway; why deploy again?

And if you don't want to enable automatic upgrades, you still get stateless, installation-free deployment.


That's the way we do it in production and it works great. Why create fat jar files and copy them to dozens of servers when you can just have each server pull down its dependencies from your repository?


So now the availability of your production service relies on the availability of your (development) repository. Another (pointless) point of failure.

And you can't see why this is a bad idea?


It should be obvious that if the repository were unavailable, we would not try to push a new version of our code. This is a much better approach, since library dependencies are usually downloaded only once, when we start a service for the first time, versus a fat jar that pushes the dependencies on every single deploy, leading to long deployment times. This is a big deal when you've got a hundred servers and dozens of services on each.



