I disagree with this. The tooling around the JVM is great, or at least good enough.
Maven is mostly smooth sailing compared to Python's env solutions or the JS ecosystem. Maven is 21 years old. A quick search says Python has/had: pip, venv, pip-tools, Pipenv, Poetry, PDM, pyenv, pipx, uv, Conda, Mamba, Pixi.
Debugging is just fine. Modern debugging tools are there: remote debugging, (although limited) update & continue, evaluating custom expressions, etc. I don't know what they're complaining about. If you're using Clojure, it is also possible to change the running application completely.
Monitoring tools are also great. It is easy to gather runtime metrics and events for profiling or monitoring, and there are tools like Mission Control to analyse them.
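For the curious, both remote debugging and flight recording are a single JVM flag away (the port, duration and file names below are placeholders I picked):

    # remote debugging: attach your IDE's debugger to port 5005
    java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -jar app.jar

    # flight recording: open the resulting .jfr file in Mission Control
    java -XX:StartFlightRecording=duration=120s,filename=rec.jfr -jar app.jar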
That is interesting. I wonder if L1 is denser because it has to have more bandwidth. But doesn't that point to a space constraint rather than money? A combination of L1 & L2 would have a bigger capacity, so wouldn't it be faster than a pure L1 cache in the same space (for some/most workloads)?
I always thought the cache layers existed because of locality, but that is my imagination :) The article talks about the different access patterns of the cache layers, which makes sense in my mind.
It also mentions density briefly:
> Only what misses L1 needs to be passed on to higher cache levels, which don’t need to be nearly as fast, nor have as much bandwidth. They can worry more about power efficiency and density instead.
> doesn't that point to a space constraint rather than money?
The space constraints are also caused by money. The reason we don't just add more L1 cache is that it would take up a lot of space, necessitating larger dies, which lowers yields and significantly increases the cost per chip.
That isn't true at all. The limited speed at which a signal can propagate across a chip and the added levels of muxing necessarily mean there's a lower bound on a cache's latency, roughly proportional to the square root of its capacity.
It actually is true. You're also right that physics would eventually constrain die size, but it isn't the bottleneck that's keeping CPUs at their typical current size. This should be pretty obvious from the existence of (rare and expensive) specialty CPU dies that are much larger than typical ones. They're clearly not physically impossible to produce, just very expensive. The current bottleneck holding back die sizes is in fact cost, since larger dies let each of the inevitable blemishes ruin a larger chunk of your silicon wafer, cratering yields.
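A rough back-of-the-envelope with the standard Poisson yield model (the defect density here is made up, purely for illustration):

    yield ≈ exp(-die_area × defect_density)
    1 cm² die at 0.1 defects/cm²:  exp(-0.1) ≈ 90%
    4 cm² die at 0.1 defects/cm²:  exp(-0.4) ≈ 67%

And that's before counting that fewer large dies fit on a wafer in the first place.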
> added levels of muxing necessarily mean that there's a limit to how low the latency of a cache can be
L1 cache avoids muxing as much as possible, which is why it takes up so much die space in the first place.
The path of loading data from L1 is one of the tightest, most timing-critical parts of a modern CPU. Every cycle of latency here has a very clear, measurable impact on performance, and modern designs typically have 4-5 cycle L1 load-to-use. Current AMD cores do really well against Intel ones despite clocking lower and being weaker on most types of resources simply because they have a 1 cycle advantage. If you had literally infinite cheap transistors available, it would not be a good idea to spend them on the L1 cache, because this would make the CPU slower.
> L1 cache avoids muxing as much as possible, which is why it takes up so much die space in the first place.
Every time you double the size of a cache, you need to add a single extra mux on the access path, simply to be able to select which half of the cache the result comes from. You also increase the distance that a signal needs to propagate, but I believe for L1 the muxes dominate.
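As a back-of-the-envelope illustration (my numbers, not anything from the article): growing a 32 KB L1 to 1 MB is five doublings, i.e.

    extra mux levels ≈ log2(new capacity / old capacity)
    log2(1 MiB / 32 KiB) = log2(32) = 5

extra 2:1 mux stages on the load path, before you even count the longer wires.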
I don't know much about C#. It certainly looks more popular in gamedev circles.
When I played with this new Java API, I wasn't worried about the FFI cost. It seemed fast enough to me. My toy application performed at about 0.77x of the pure C equivalent. I think Java's memory model and heavy heap use might hurt more. Hopefully Java will catch up when it gets value objects with Project Valhalla. Next decade or so :)
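For anyone curious what the new API looks like, here's a minimal sketch calling libc's strlen through java.lang.foreign (JDK 22+ spelling; the earlier preview releases named a few things slightly differently):

    import java.lang.foreign.*;
    import java.lang.invoke.MethodHandle;

    // Minimal FFM sketch: look up strlen in the default (libc) lookup and call it.
    public class StrlenDemo {
        public static void main(String[] args) throws Throwable {
            Linker linker = Linker.nativeLinker();
            MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

            try (Arena arena = Arena.ofConfined()) {
                MemorySegment cString = arena.allocateFrom("hello"); // NUL-terminated UTF-8
                System.out.println((long) strlen.invokeExact(cString)); // prints 5
            }
        }
    }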
Genuine curiosity - what would be your motivation to use Java over C# here, aside from familiarity (which is perfectly understandable)? The latter puts heavy focus on providing features like structs and pointers with little to no friction; you can even AOT-compile it and statically link SDL2 into a single executable.
In the improbable case you want to try it out, all it needs is
- the SDK from https://dot.net/download (or the package manager of your choice if you are on Linux, e.g. `sudo apt-get install dotnet-sdk-8.0`; do NOT use Homebrew if you are on macOS, however - use the .pkg installer)
Not the OP, but at some point I did choose between the two paths/jobs, assuming I would only get more proficient in one of them each year (which turned out to be true - I stayed a junior in C#).
Why I chose Java boils down to two reasons:
- runs on Linux (I know there is some version of C# that eventually opened up, but I kind of expect it to have a lot of conditions for being cross-platform; I assume standard C# code is not cross-platform for some reason (e.g. COM usage might be the standard way of doing things), which would make finding cross-platform answers tedious)
- the whole ecosystem is more open source, with more parties involved (which I interpreted as being a bit less controlled by the corporate overlord, so if the corporate overlord went rogue, there would be a greater chance the language would survive somehow)
Despite what some fanatics may claim, operating systems other than Windows are still second class citizens (saying this after five years of doing .NET development almost exclusively on Linux), especially for dev, and operating systems other than the big three are not supported at all. So no BSDs (even FreeBSD) or Solaris if you ever need it.
Since the open .NET is pretty young, and they still have trouble with community perception due to their past actions, finding high quality FOSS libraries may pose a problem depending on what you're doing. Pretty much everything from MS is open and high quality, but they don't provide everything under the sun.
And with Java you always have alternative runtimes in case this Oracle deal goes sideways for any reason.
GObject (GTK4 and similar): https://github.com/gircore/gir.core (significantly better and faster than Java alternatives, this is just one example among many)
Young: first OSS version was released 8 years ago
Solaris: might as well say "it runs COBOL but not .NET"
It's funny that everyone missed the initial context of the question and jumped onto parroting the same arguments as years ago, without actually addressing the matter at hand or saying anything of substance. Unsurprising show of ignorance by Java community. Please stay this way - will help the industry move on faster.
The premise is always the same - if something is missing in {technology I don't like}, it's a deal-breaker, and when it's not or was always there - it never mattered, or is harmful actually, that is, until {technology I like} gets it as well.
Neither point was ever true in the last ~10 years when it comes to gamedev (or where you want to use SDL) where Java was and continues to be a much weaker choice.
Java’s ecosystem is just vastly bigger. In many categories, Java has multiple open-source offerings vs .NET’s single, proprietary one that is often just a bad copy of one of the former libraries.
It was a learning exercise. Just playing around with Clojure, raylib and this new API. I know all of this can also be done with C#, with some pros & cons.
I wasn't advocating Java for gamedev. Just pointing out that this new API is a nice addition, and I am glad the JVM ecosystem is improving.
To be fair, if I were starting a game project I wouldn't stay at the Java/C# level. Depending on the project, something like C, C++, or Zig might be more practical. Ironically, I believe they would be easier for iterating on ideas and deploying to different platforms (mobile, wasm, etc.).
I have played with raylib bindings for Clojure using the new foreign function API. It was a lot of fun. SDL might be a better fit because it prefers pass-by-reference arguments [0].
Absolutely not. jlink is used to distribute applications (it includes your code, the Java libs you use, i.e. their jars, and the trimmed-down JVM with the modules you're using so that your distribution is not so big - typically around 30MB).
Java libraries are still obtained from Maven repositories via Maven/Gradle/Ant/Bazel/etc.
If you distribute libraries as jmod files, which few libraries do (in that case, jlink would automatically extract the native libraries and place them in the appropriate location).
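For reference, a typical jlink invocation looks something like this (module names and paths here are made up):

    jlink --module-path target/modules \
          --add-modules com.example.app \
          --strip-debug --no-header-files --no-man-pages \
          --output build/myapp-runtime

and the resulting image is launched via build/myapp-runtime/bin/java -m com.example.app/com.example.Main (or a launcher script added with --launcher).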
I'd like to learn how they do it, because the last time I looked at this, the suggested solution was to copy the binaries from the classpath (e.g. the jar) into a temporary folder and then load them from there. It feels icky :)
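Roughly the pattern I mean (a sketch with made-up names):

    import java.io.InputStream;
    import java.nio.file.*;

    // "Unpack from the classpath, then load" - the commonly suggested workaround.
    public class NativeUnpack {
        public static void loadBundledLibrary(String resourcePath) throws Exception {
            Path tmp = Files.createTempFile("native-", ".so"); // ".dll"/".dylib" per platform
            try (InputStream in = NativeUnpack.class.getResourceAsStream(resourcePath)) {
                Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            }
            System.load(tmp.toAbsolutePath().toString());
            tmp.toFile().deleteOnExit(); // the clean-up is exactly the icky part, see below
        }
    }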
Do NOT force the class loader to unload the native library, since
that introduces issues with cleaning up any extant JNA bits
(e.g. Memory) which may still need use of the library before shutdown.
Remove any automatically unpacked native library. Forcing the class
loader to unload it first is only required on Windows, since the
temporary native library is still "in use" and can't be deleted until
the native library is removed from its class loader. Any deferred
execution we might install at this point would prevent the Native
class and its class loader from being GC'd, so we instead force
the native library unload just a little bit prematurely.
Users reported occasional access violation errors during shutdown.
Ah, looking through the docs [1]: you have to use your own ClassLoader (so it can be garbage-collected) and statically link with a JNI library, which is unloaded when the ClassLoader is garbage-collected.
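If I'm reading that right, the shape of it would be something like this (completely hypothetical names, just to illustrate the class loader trick):

    import java.lang.reflect.Method;
    import java.net.URL;
    import java.net.URLClassLoader;

    // Load the native library from a throwaway class loader so it can be unloaded later.
    public class UnloadDemo {
        public static void main(String[] args) throws Exception {
            URLClassLoader loader =
                new URLClassLoader(new URL[]{ new URL("file:helper.jar") }, null);
            // NativeHelper's static initializer calls System.loadLibrary(...)
            Class<?> helper = Class.forName("com.example.NativeHelper", true, loader);
            Method doWork = helper.getMethod("doWork");
            doWork.invoke(null);

            // Drop every reference; the JNI library is only unloaded once the
            // class loader (and NativeHelper) become unreachable and are collected.
            doWork = null;
            helper = null;
            loader.close();
            loader = null;
            System.gc();
        }
    }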
It is free to try, but may not be free for commercial use. Some random blog post says to check with Oracle if you want to sell your products/services with it.
"The GFTC is the license for GraalVM for JDK 21, GraalVM for JDK 17, GraalVM for JDK 20 and later releases. Subject to the conditions of the license, it permits free use for all users – even commercial and production use."
"What is the new “GraalVM Free Terms and Conditions (GFTC) including License for Early Adopter Versions” License?"
Diffing the licenses shows that the only change was to add clarity around graalvm as opposed to a generic program:
"For clarity, those portions of a Program included in otherwise unmodified software that is produced as output resulting from running the unmodified Program (such as may be produced by use of GraalVM Native Image) shall be deemed to be an unmodified Program for the purposes of this license."
Other than that, it appears the same. The thing I'm wondering about is how this matches with everyone saying it's "free to use" when the following is in both versions of the license (underscore _emphasis_ mine):
"(b) redistribute the unmodified Program and Program Documentation, under the terms of this License, provided that __You do not charge Your licensees any fees associated with such distribution or use of the Program__, including, without limitation, fees for products that include or are bundled with a copy of the Program or for services that involve the use of the distributed Program."
This to me reads as if:
- You cannot charge licensing fees for GraalVM-built distributed pieces of software (licensing for a desktop application, CLI program, game, etc.)
- You cannot charge licensing fees for services that involve the use of GraalVM-built software (SaaS software that uses the GraalVM-built binary in a k8s container, lambda, etc. as part of the service topology)
My gut reaction (since this is Oracle) is that this was an ambiguity oversight in the language that is now being corrected to lock down this "free" usage of the premium version, but I may be overly pessimistic.
Is anyone using this in prod based on it being "free" and has any info from their legal team or otherwise?
Mostly looking to be corrected in my likely misinterpretation.
I am eyeing River WM, because it has a pluggable layout manager and controller via custom Wayland protocols, which means I can implement just those parts in my favourite lang to scratch the itch. Kudos to you for going for the whole WM :)
An X11 WM is child's play compared to a Wayland compositor. I have been eyeing River and various other Wayland options too, in case I find myself compelled to switch to Wayland at some point, but the sheer complexity of the Wayland side has held me back.
Given that X11 is going away, the server (except for Xwayland) is unmaintained, and in a few years modern software just won't support it at all... the sooner you switch, the better.
There are plenty of Wayland compositors that support X backends, so app support for X is moot because 1) most of the apps I run are my own - the only thing I really depend on is browsers, and 2) once Chrome and Firefox abandon X entirely, and no other browsers based on their engines support X, there are options to run them with a Wayland compositor with an X backend in "kiosk mode" that basically gives them a single window just like before.
So that leaves how long I will have a working X server, and frankly I suspect I'll have ended up writing a display server years before that becomes an issue (whether that'll be X or Wayland or a mix, who knows, but I doubt I'll be able to resist very long).
It's just not a concern that's anywhere near the top of my list (or the top 100 of my list)
The funny thing is in theory it might work reasonably well, as Weston at least uses the DRI3 extension to get the underlying X server to just pass the DRI file descriptors up, so if the rest also supports it, you'd in theory just get a game of passing the fds through multiple layers and then the client doing direct rendering via shared memory. Though in practice there's of course plenty of things that might go wrong.
I go back and forth on this kind of thing; if the better option really is EOL, is it better to keep using it until it actually breaks, or is it better to take the pain up front? I don't have an actual answer in the general case, but on the bright side Xorg is still not dead even though the pro-Wayland crowd says so at every single opportunity; it has a nonzero number of maintainers, gets commits on a slow but regular cadence, and appears likely to continue working for the foreseeable future. If we assume for a moment that the X server will really actually die when Red Hat stops maintaining it (questionable; the BSDs appear more attached to it than RH), that merely gives us until the end of RHEL 9 around 2032, which is far enough away that I feel comfortable not worrying about it.
Also, as sibling comment points out, a simple solution exists in the form of rootful xwayland; we can keep a working X11 environment and just swap out the actual rendering layer.
Or put differently: "Given that X11 is going away, the server (except for Xwayland) is unmaintained," just isn't true, and even if it was it wouldn't necessarily justify the claims you make on that basis.
A modern app needs to be able to run in datacenters and render on my GPU (whether desktop, laptop, tablet, or phone). That’s how and where our capacity is evolving.
If X dies, I assume we’re going to end up with Javascript or Wasm frontends to everything.
Maybe you'll end up with those frontends. I won't. Part of the point for me with my rewrites of essential parts of my computing experience is that I control an increasing proportion of my stack, and that proportion will keep increasing.