Stuff like that drove me away from Ruby entirely (because of Rails), which is a shame. I see videos of people chugging along with Ruby and loving it, and it looks like a fun language, but when the only way I can get a dev environment set up for Rails is by using DigitalOcean droplets, I've lost all interest. Something would always fail to compile during the Rails install. I would have loved to join the Rails hype back in 2012, but over the years the install / setup process was always a nightmare.
I went with Python because I never had this issue. Now, with any AI / CUDA stuff, it's a bit of a nightmare, to the point where you use someone's setup shell script instead of trying to use pip at all.
Let's be honest here - whilst some experiences are better or worse than others, there doesn't seem to be a dependency management system that isn't (at least half) broken.
I use Go a lot, and the journey has been:
- No dependency management
- Glide
- Dep
- vgo (the precursor to modules - took me a moment to remember the name)
- Modules
We still have proxying, vendoring, and versioning problems - a rough sketch of where modules landed is below.
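For anyone who skipped the intermediate tools, this is roughly what a go.mod looks like today (the module path is hypothetical and the two dependencies are just illustrative picks):

    // go.mod - declares the module path and pins dependency versions;
    // go.sum (not shown) records checksums so separate builds of the
    // same commit fetch identical bits.
    module example.com/myservice // hypothetical module path

    go 1.22

    require (
        github.com/pkg/errors v0.9.1 // fetched via the module proxy (GOPROXY) by default
        golang.org/x/sync v0.7.0
    )

go mod tidy keeps the require list in sync with your imports, and go mod vendor snapshots everything into vendor/ - which helps reproducibility but is exactly where the cache-staleness questions creep in.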
Python: VirtualEnv
Rust: Cargo
Java: Maven and Gradle
Ruby: Gems
Even OS dependency management is painful - yum, apt (which was a major positive when I switched to Debian-based systems), pkg (for the BSD folks), Homebrew (semi-official?)
Dependency management in the wild is a major headache. Go (which I only mention because it's the one I'm most familiar with) did away with some compilation dependency issues by shipping binaries with no runtime dependencies - it doesn't matter which version of Linux you built your binary on, it will run on any Linux of the same arch, none of that "wrong libc" 'fun'. But you still have the problem of two different people building the same binary needing extra dependency management, and vendoring brings caching problems of its own: is the version in the cache up to date? Will updating one version of one dependency break everything? What fun.
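To make the "no wrong libc" point concrete, a minimal sketch (the file is hypothetical, nothing project-specific): build any pure-Go program with cgo disabled and you get a statically linked binary that runs on any Linux of the same architecture:

    // main.go - the program is trivial; the build command is the point.
    // With cgo disabled, the binary links no libc at all:
    //
    //     CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o hello .
    //
    package main

    import "fmt"

    func main() {
        fmt.Println("hello from a statically linked binary")
    }

Run ldd on the result and you'll get "not a dynamic executable" - the same file works on Debian, Arch, Alpine, whatever.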
NuGet for C# has always been fantastic, and I like Cargo, though sometimes waiting forever for things to build kills me on the inside a little bit. I do wish Go had had a better package manager trajectory; I can only hope they continue to work on it. There were a few years when I refused to work on any Go projects because setup was a nightmare.
NuGet itself was problematic for a long time, e.g. it provided no control over transitive dependencies (like pip at the time). You had to use https://fsprojects.github.io/Paket/ if you wanted safe and consistent resolution. NuGet has since got its act together and it's not as flawed now.
I agree. Here are some things that I (a science researcher and professor) like about R and CRAN:
1. There are a lot of build checks for problems involving mismatches between documentation and code, failed test suites, etc. These tests are run on the present R release, the last release, and the development version. And the tests are run on a routine basis. So, you can visit the CRAN site and tell at a glance whether the package has problems.
2. There is a convention in the community that code ought to be well-documented and well-tested. These tend not to be afterthoughts.
3. If the author of package x makes changes, then all CRAN packages that depend on x are re-tested (via their test suites) for new problems. This (again, because of the convention of having good tests) prevents a lot of ripple-effect problems.
4. Many CRAN packages come with so-called vignettes, which are essays that tend to supply a lot of useful information that does not quite fit into manpages for the functions in the package.
5. Many CRAN packages are paired with journal/textbook publications, which explain the methodologies, applications, limitations, etc in great detail.
6. CRAN has no problem rejecting packages, or removing packages that have problems that have gone unaddressed.
7. R resolves dependencies for the user and, since packages are pre-built for various machine/os types, installing packages is usually a quick operation.
PS. Julia is also very good on package management and testing. However, it lacks a central repository like CRAN and does not seem to have as strong a culture of pairing code with user-level documentation.
Do I get it right that this issue is specific to Windows? I've never heard of the issues you describe while working on Linux. I've seen people struggle a bit on macOS due to Homebrew having different versions of some library or other, mostly when self-compiling Ruby.
The mmdetection library (https://github.com/open-mmlab/mmdetection/issues) also has hundreds of version-related issues. Admittedly, that library has not seen any updates for over a year now, but it is sad that things just break and become basically unusable on modern Linux operating systems because NVIDIA can't stop breaking backwards and forwards compatibility for what is essentially just fancy matrix multiplication.
I had issues on Mac, Windows, and Linux... It was obnoxious. It led me to adopt a very simple rule: if I cannot get your framework / programming language up and running in under 10 minutes (barring compilation time / download speeds), I am not going to use your tools / language. I shouldn't be struggling with the most basic of hello worlds in your language / framework. I don't struggle in basically 100% of the other languages I already use, so why should I struggle with a new one?
On Linux, good luck if you're using anything besides the officially NVIDIA-supported Ubuntu version. Just 24.04 instead of 22.04 means regular random breakages and issues, and running on Arch Linux is just endless pain.
Have you tried conda? Since it integrated the mamba solver it's fast, and the breadth of packages is impressive. Also, if you have to support Windows and Python with native extensions, conda is a godsend.
It is not fast. Mamba and micromamba are still much faster than conda, and yet they lack basic features that conda provides. Everyone has been dropping conda like a hot potato since the licensing changes in 2024.
I would recommend learning a little bit of C compilation and build systems. Ruby/Rails is about as polished as you could get for a very popular project. Maybe libyaml will be a problem once in a while if you're compiling Ruby from scratch, but otherwise this normally works without a hassle. And those skills will apply everywhere else. As long as we have C libraries, this is about as good as it gets, regardless of the language/runtime.
Have you tried JRuby? It might be a bit too large for your droplet, but it has the java versions of most gems and you can produce cross-platform jars using warbler.
And yet operating distributed systems built on it is a world of pain. Elasticsearch, I am looking at you. Even with modern hardware resources, the limitations of running/scaling on top of the JVM make it an expensive, frustrating endeavor.
In addition to Elasticsearch's own metrics, there are like 4 JVM metrics I have to watch constantly on all my clusters to make sure the JVM and its GC are happy.
In-house app that uses JDBC, is easy to develop, and needs to be cross-platform (Windows, Linux, AIX, AS/400). The speed picks up as it runs, usually handling 3000-5000 eps over UDP on decade-old hardware.
I'm surprised to hear that. Ruby was the first language in my life/career where I felt good about the dependency management and packaging solution. Even when I was a novice, I don't remember running into any problems that weren't obviously my fault (for example, installing the Ruby library for PostgreSQL before I had installed the Postgres libraries on the OS).
Meanwhile, I didn't feel like Python had reached the bare minimum for package management until Pipenv came on the scene. It wasn't until Poetry (in 2019? 2020?) that I felt the ecosystem had reached where Ruby was back in 2010 or 2011, when Bundler had become mostly stable.
Bundler has always been the best package manager of any language that I've used, but dealing with gem extensions can still be a pain. I've had lots of fun bugs where an extension worked in dev but not prod because of differences in library versions. I ended up creating a docker image for development that matched our production environment and that pretty much solved those problems.
That's one of the reasons I prefer a dev environment (either a physical install or a VM) that matches prod. Barring that, I would go with a build system (container-based?) that can run locally. Otherwise it's painful.