There are several open source tools for Java (Eclipse, Visual Studio Code plugins, NetBeans and others).
The reason I don't use them is not because they are bad, but because IntelliJ is so much better.
I even use IntelliJ Ultimate for non-Java code like React, even though Visual Studio Code seems to be the de facto standard for React developers and guides.
You could just as easily have used `String` instead of `var` and there would be no ambiguity. This gets worse for numbers, where it is important to know if you are dealing with an int, a long, or something even bigger.
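A contrived sketch of what I mean (getCount() is a made-up method); with `var` you have to chase down the signature to know how wide the number is:

    class VarExample {
        // Made-up accessor; imagine it lives in some other file.
        static long getCount() { return 42L; }

        public static void main(String[] args) {
            var count = getCount();  // compiles fine, but is this an int, a long, or something bigger?
            long total = getCount(); // the explicit type answers that question at the use site
            System.out.println(count + " " + total);
        }
    }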
I agree with you on switches and do like the """ feature, though. It was a real pain in the rear to include all the + "\n" concatenations back in the old days. This is a very clean and intuitive improvement.
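For anyone who hasn't seen it, a quick before/after (the JSON payload is just an example I made up):

    class TextBlockDemo {
        public static void main(String[] args) {
            // The old way: escaped quotes and an explicit "\n" on every line
            String oldStyle = "{\n" +
                    "  \"name\": \"widget\",\n" +
                    "  \"price\": 10\n" +
                    "}";

            // Java 15+ text block: what you see is what you get
            String textBlock = """
                    {
                      "name": "widget",
                      "price": 10
                    }""";

            System.out.println(oldStyle.equals(textBlock)); // true
        }
    }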
I think it has a lot to do with work culture. Many tend to mimic what others are doing in order to not stick out.
At my previous job some were able to change that by consistently using "modern" features of Java. It inspired others to change and eventually we ended up with a good code base.
Be the one to start the change by implementing new features using good code. This will give others "permission" to do the same. Also try to give soft suggestions in code reviews or pair programming about simpler ways to do things (don't push too hard).
At my current job all of us were eager to try the latest features from the start, so we never had to convince new hires.
I know I came off as a bit negative, but in fairness to them, they did more or less continue working on what I had been doing with the newer Java 21 features, and after I got a few pretty interesting changes merged in, some of the more junior engineers started using them too. In particular, I was able to successfully evangelize against the use of `synchronized` in most cases [2] and got at least some people using queues to synchronize between threads.
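For the queue approach, a minimal sketch (the worker and message names are made up): one thread owns the state, everyone else talks to it through a BlockingQueue, and nothing needs `synchronized`.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    class QueueHandoff {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> inbox = new ArrayBlockingQueue<>(128);

            // A single consumer owns all mutable state, so it needs no locks.
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String msg = inbox.take(); // blocks until a message arrives
                        if (msg.equals("stop")) return;
                        System.out.println("processing " + msg);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.start();

            inbox.put("job-1"); // producers just enqueue; the queue is the synchronization point
            inbox.put("stop");
            worker.join();
        }
    }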
It honestly has gotten a fair bit easier for me since I've been doing this for a while; at my last job I was the most experienced person on my direct team (including my manager) and one of the more experienced software people at the company, so I was able to throw my weight around a bit more and do things how I wanted. I tried not to be a complete jerk about it; there were plenty of times people would push back on what I was doing and I would think about it and agree that they were probably right. But I outwardly rejected arguments that seemed to be based on "I didn't learn this in university, so it's wrong".
I have had other jobs (at much bigger companies) where they were not amenable to this. I would try to use new features and my PRs would be rejected as a result, usually with some vague wording like "this way is faster", which I later found out was (as far as I can tell) always a lie.
[1] It is not hard to find my job history but I politely ask you do not post it here.
[2] I'm sure someone here can give me a contrived example of where `synchronized` makes sense but if you need mutexes I think you're almost always better off with a ReadWriteLock or ReentrantLock.
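To make [2] concrete, here's a sketch (the cache class is made up) of the kind of thing ReentrantLock gives you that `synchronized` can't, like bounding how long you're willing to wait:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    class Cache {
        private final ReentrantLock lock = new ReentrantLock();
        private final Map<String, String> data = new HashMap<>();

        String get(String key) throws InterruptedException {
            // Unlike synchronized, we can give up instead of blocking forever.
            if (!lock.tryLock(100, TimeUnit.MILLISECONDS)) {
                return null;
            }
            try {
                return data.get(key);
            } finally {
                lock.unlock();
            }
        }
    }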
The JVM tends to use as much memory as it can for performance reasons. It is not a reliable indicator of how much memory it actually needs. Why spend resources on clearing memory if there's still unused memory left?
If memory is an issue, you can set a limit and the JVM will probably still work fine.
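For example (the heap size and jar name are placeholders):

    # Cap the heap at 512 MB
    java -Xmx512m -jar myapp.jar

    # Or, in containers (JDK 10+), as a percentage of available RAM
    java -XX:MaxRAMPercentage=75.0 -jar myapp.jar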
Your comment is definitely true. However, if you study the academic papers on GC performance from the past three or four decades, they pretty much conclude that amortized GC overhead can be on par with manual alloc/free [1], but the usually unwritten assumption is that there is unbounded memory for that to hold. If you look at how much memory you need in practice to avoid an amortized performance loss, you arrive at 2-3x, so I'd claim it is fair to assume Java needs 2-3x as much memory as Swift/C++/Rust to run comfortably.
You can actually witness this to some degree on Android vs iPhone. An iPhone comfortably runs with 4GB RAM, while an Android would be slow as a dog.
[1]: I don't dispute the results, but I'd also note that as a researcher in Computer Science in that domain, you were probably looking to prove how great GC is, not the opposite.
> An iPhone comfortably runs with 4GB RAM, while an Android would be slow as a dog.
This has nothing to do with RAM. Without load, Android wouldn't even push past 2GB, and it would still be slower than an iPhone because of the different trade-offs they make in their architectures.
The point was GC cost in general, not which Java/JVM implementation you choose. Try comparing two Androids with the same chipset at 4GB vs 8GB RAM.
Anyhow, that was just an anecdotal, unscientific observation to give you some idea; obviously they are two different codebases. The literature is there to quantify the matter, as I noted.
Android for a very long time lacked quality JIT, AOT and GC implementations, and then each device is a snowflake of whatever changes each OEM has made to it.
Unless one knows exactly what ART version is installed on the device, what build options from AOSP were used for the firmware image, and what mainline version was deployed via the Play Store (if on Android 12 or later), there are zero conclusions one can draw from it.
Also, iOS applications tend to just die when there is no more memory to make use of, due to the lack of paging and memory fragmentation.
Frankly, it just takes some motivated senior devs and the tantalizing ability to put out the OP blog post, and you've got something management will sign off on. Bonus points: you get to talk about how amazing it was to use Apple tech to get the job done.
I don't think they seriously approached this, because the article only mentions tuning G1GC. The fact is, they should have been talking about ZGC, AppCDS, and probably Graal if pause times and startup times were really that big a problem for them. Heck, even CRaC should have been mentioned.
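For reference, the kind of switches I mean (the flag names are real, the jar is a placeholder):

    # ZGC instead of G1: sub-millisecond pauses, production-ready since JDK 15
    java -XX:+UseZGC -jar app.jar

    # AppCDS: record a class-data archive on one run, reuse it to cut startup time
    java -XX:ArchiveClassesAtExit=app.jsa -jar app.jar
    java -XX:SharedArchiveFile=app.jsa -jar app.jar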
It is not hard to get a JVM to start up in sub-second time. Here's one framework where that's literally the glossy print on the front page. [1]
Yep, resume-driven development. I remember at a previous company a small group of people pushed a Go rewrite to speed everything up. The serious speed improvements came from re-architecting (elimination of a heavy custom framework, using a message queue instead of handling requests synchronously, etc.). They would have been better off fixing the original system so that everything could benefit from the improvements, not just the tiny bits they carved off.
Then the next annual report talked about improved scalability because of this amazing technology from Google.
Resume-driven development would be using some random-ass Java framework to pad a resume. Apple using Apple technologies seems more like a corporate mandate.
If Apple does not dogfood their own technology for production systems, what chance do they have of telling 3rd party users that Swift is ready for prime time?
Delving into Java arcana instead of getting first-hand experience developing in Swift would have been a great opportunity wasted to improve Swift.
However, they chose to replace an existing system with Swift. The "arcana" I mentioned is a handful of startup options that are easily found and safe to apply. It's about as magical as "-O2" is to C++.
Sure, this may have been the right choice if the reason was to exercise Swift. However, they shouldn't pretend there was nothing to be done to make Java better. The steps I described are one or two days' worth of dev work. How much time do you think a rewrite took?
Apple has explicitly stated that they want to try to move as much of their stuff to Swift as possible.
I'm sure you're right; there must have been ways to improve the Java deployment. But if they wanted to reduce resource usage, and doing it in Swift aligned with some other company goal, it makes sense that they might just go straight to this.
Amazon's CEO was once asked about new competitors trying to build cloud infrastructure fast. His reply: "You cannot compress experience."
Saving a few weeks or months by learning a 3rd party technology instead of applying and improving first-party technology would be amateurish.
> However, they shouldn't pretend there was nothing to be done to make Java better.
This seems like a constant refrain: that Apple, or anyone choosing their own tech over someone else's, owes an absolutely fair shot to the stuff they didn't choose. This is simply not the way the world works.
Yes, there are endless stories of companies spending enormous resources to optimize the Java stack, even up to working with the core Java team at Oracle to improve the JVM's innards. But those companies are just (albeit heavy) users of the core technology rather than developers of a competing one. Apple is not one of those users; they are a developer.
> Yes, there are endless stories of companies spending enormous resources to optimize the Java stack
And that's not what I'm advocating for. Sometimes rewrites are necessary.
What I'm advocating is exercising a few well-documented and fairly well-known JVM flags that aren't particularly fiddly.
The JVM does have endless knobs, most of which you shouldn't touch; instead you should let the heuristics do their work. The flags I'm mentioning are not those.
Swapping G1GC for ZGC, for example, would have resolved one of their major complaints about GC impact under load. If the live set isn't near the max heap size, then pause times are sub-millisecond.
> This seems like a constant refrain: that Apple, or anyone choosing their own tech over someone else's, owes an absolutely fair shot to the stuff they didn't choose. This is simply not the way the world works.
The reason for this refrain is that Java is a very well known technology and easy to hire for (the Amazon you cite uses it heavily). And Apple had already adopted Java and written a product with it (I suspect they have several).
I would not be saying any of this if the article were a generic benchmark and comparison of Java with Swift. I would not fault Apple for saying "we are rewriting in Swift to minimize the number of languages used internally and improve the Swift ecosystem".
I'm taking umbrage at them trying to sell this as an absolute necessity driven by performance constraints while making questionable statements about the cause.
And, heck, the need to tweak some flags would be a valid thing to call out in the article: "we got the performance we wanted with the default compiler options of Swift; to achieve the same thing with Java requires multiple changes to the default settings". I personally don't find it compelling, but it's honest and would sway someone who wants something that "just works" without fiddling.
I remember the days when Apple developed their own JVM, ported WebObjects from Objective-C to Java, and even had it as the main application language for a little while, uncertain if the Object Pascal/C++ educated developers on their ecosystem would ever bother to learn Objective-C when transitioning to OS X.
Decades ago, I was working with three IBM employees on a client project. During a discussion about a backup solution, one of them suggested that we migrate all customer data into DB2 on a daily basis and then back up the DB2 database.
I asked why we couldn't just back up the client's existing database directly, skipping the migration step. The response? "Because we commercially want to sell DB2."
You tune for what you have/can get. Machines with less memory tend to have slower CPUs. That may make it impossible to tune for (close to) 100% CPU and memory usage.
And yes, Apple is huge and rich, so they can get fast machines with less memory, but they likely have other tasks with different requirements they want to run on the same hardware.
> It sounds good but in reality people end up spending time messing around with config files and annotations.
I use Spring Boot at my day job and write mostly web services. I don't spend time messing around with config files and annotations. When I create a service class, I annotate it with @Service, and that is mostly what I need.
Example:
    import org.springframework.stereotype.Service;

    // @Service is all the wiring this class needs; the repository
    // is injected through the record's canonical constructor.
    @Service
    public record ItemsService(ItemsRepository repo) {

        public void doStuff(String country) {
            var items = repo.findByCountry(country);
            // do stuff with items
        }
    }
Later versions of Spring Boot have reduced the number of annotations necessary; for example, @Inject/@Autowired can be omitted when you use constructor injection. There are of course other annotations and configurations, but 90% of what I do is similar to the example I gave above. Things may have changed since you last used it, but the amount of "magic" and annotations is often much less than what is posted in these types of discussions.
> which is something that I would have said would never happen if you had asked me five years ago.
I think a lot of people are noticing the changes Java has made in recent years. The language has seen a lot of improvements, and I feel that the mindset of the community has changed. The old enterprise way of factories and unnecessary abstractions has lost a lot of popularity and is mostly still alive in legacy software/teams and universities that have not yet caught up.
Even Spring Boot is now a valid approach for getting sh*t done at startups. There are of course frameworks that are more lightweight, or you can start from scratch and choose your own libraries to keep the size down. But SB is simply good enough for most use cases, and it even supports native compilation now.
I still am not a huge fan of the Spring stuff, I have to use Spring Streams for work and I think it’s unpleasant to work with. It seems like the rest of the world has much more fun configuring YAML files than I do. I had to use Spring Boot at a previous job and it wasn’t for me, but honestly I really just hate working on web stuff.
But that’s obviously not the language’s fault. There are frameworks in Java that I think are great, like Vert.x; hell even going super low-level with NIO is straightforward enough if I really need control of HTTP stuff.
The stuff I really have the most fun working with is concurrent and distributed programs, and I think Java (or at least the JVM) is pretty hard to beat with that. Vert.x, Disruptor, and even the built-in JVM concurrency libraries (other than synchronized) are excellent; they have a Just Works quality to them.
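As one example of that Just Works quality, a minimal Vert.x HTTP server (the port and response are arbitrary):

    import io.vertx.core.Vertx;

    public class HelloServer {
        public static void main(String[] args) {
            // Non-blocking by default, no container or XML to configure.
            Vertx.vertx().createHttpServer()
                    .requestHandler(req -> req.response().end("hello"))
                    .listen(8080);
        }
    }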
And nowadays, GraalVM is good enough with its native compilation that you can avoid the long startup times and keep memory under control, so it's even reasonably OK for custom command-line tools.
That's why moduliths are becoming more popular. These are basically monoliths that enforce structure. The other advantage is that each "module" can be extracted as a microservice later without much work.
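Spring Modulith is one take on this; a sketch of the boundary check (MyApplication stands in for your @SpringBootApplication class):

    import org.junit.jupiter.api.Test;
    import org.springframework.modulith.core.ApplicationModules;

    class ModularityTests {

        // Fails the build if one module reaches into another's internals,
        // which is what keeps the later extraction into services cheap.
        @Test
        void verifyModuleBoundaries() {
            ApplicationModules.of(MyApplication.class).verify();
        }
    }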