I once borrowed a book, only to find a previous borrower's receipt in it, placed as a bookmark. Upon inspection, it turned out that the previous borrower was myself(!) (I recognized the library card number), about ten years earlier.
So probably, no one had borrowed it in the time between. I was very happy the book had not been thrown out.
You can find entertaining stuff there. My interests can be really niche. I remember once finding an amazing book in our college library from the sixties or seventies about the use of LSD in treating psychiatric disorders. While I didn't agree with all the suggestions in there, it was a fascinating time capsule (with colour illustrations, many of them by patients). With the microdosing debate, it's probably relevant again.
Yet when I took the book off the shelf it looked like no one had touched it in many years.
What you are saying is especially true for fiction, less so for nonfiction. Many nonfiction topics are important and require a large volume of materials to remain as reference. For example, you never know when it might be important to know how something was manufactured 50 years ago, or what happened in Congress 20 years ago, or what a newspaper reported a hundred years ago. This makes it really hard to judge which items could be culled. I'm inclined to agree that borrow rates are relevant but they are not the only thing that matters. The possibilities of digitization and interlibrary loan make culling less risky, but someone still has to decide to keep unpopular reference materials for them to remain available.
Almost every library regularly throws out books, and all the librarians I know are happy with this. New books arrive regularly, and unless you plan on your library growing without limit, you need, in general, a one-in, one-out policy.
You have to rely on the implementation for anything to do with what happens to memory after it is freed, or really almost anything to do with the actual bytes in RAM.
For a start, it tells you the engine can actually be used to make a full finished game — which with hobby game engines isn’t a guarantee. If you want me to use an engine, I’d like at least one finished game, preferably even released on Steam.
I can't be sure, but this sounds entirely possible to me.
There are many, many people, and websites, dedicated to roleplaying, and those people will often have conversations lasting thousands of messages with different characters. I know people whose personal 'roleplay AI' budget is $1,000/month, as they want the best-quality AIs.
The world doesn't consider it reasonable for businesses to sell beer to kids and then expect us all to constantly follow our kids around to make sure they don't get beer. Bars don't get to say 'whoops, we got thousands of 9-year-olds drunk, their parents should keep an eye on them'.
And at this point, most kids, most people, spend more time online than outside walking around.
> Bars don't get to say 'whoops, we got thousands of 9-year-olds drunk, their parents should keep an eye on them'.
Because there's no downside whatsoever to requiring bars not to serve children (assuming the requirement is simply not to give alcohol to children); online age checks, on the other hand, have very big negative consequences for the whole populace.
I really like slides.com, which is a web front end to reveal.js. I've used it for a few things, and it lets you export the reveal.js HTML and JavaScript, so you know you won't lose your work.
It's not perfect, but as you say, whenever I've made a slide deck outside a GUI I've regretted it. Quarto is better for documents, but it still has rough edges.
I think the complaint is that the C version isn't multithreaded, which ignores that Rust makes it much easier to have a correct multithreaded implementation. OP is also conveniently ignoring that the Rust ports Russinovich talks about (the ones I referenced) are MS-internal code bases where it's a 1:1 port, not a rearchitecture or an attempt to improve performance. The defaults being better, the lack of aliasing that the compiler can take advantage of, and automatic struct layout optimization largely explain why the code ends up 5-20% faster having done nothing other than rewrite it.
But critics often seem to never engage with the actual data and just blindly get knee-jerk defensive.
Honestly, if these languages are only winning by 25% in microbenchmarks, where I'd expect the difference to be biggest, that's a strong argument for Java for me. I didn't realise it was so close, and I hate async programming, so I'm definitely not doing it for an, at most, 25% boost.
It's not about the languages only, but also about runtimes and libraries. The Vert.x verticles are reactive. Java devrel folks are pushing everyone from reactive to virtual threads now; you won't see those perform in that ballpark. If you look at the bottom of the benchmark results table, you'll find Spring Boot (servlets, and a bit higher up, Reactor), together with Django (Python). So "Java" in practice is different from niche Java. And if you look inside the codebase, you'll see the JVM options. In addition, they don't directly publish CPU and memory utilization. You can extract it from the raw results, but it's inconclusive.
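For reference, this is roughly what a bare Vert.x verticle looks like (a minimal sketch, not the actual benchmark code): the handler runs on the event loop and must never block, which is a big part of why this style isn't what most Java shops write.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

// Minimal illustrative sketch of a reactive Vert.x verticle.
// The request handler runs on the event loop, so any blocking call here
// would stall every connection served by this verticle.
public class HelloVerticle extends AbstractVerticle {
  @Override
  public void start() {
    vertx.createHttpServer()
         .requestHandler(req -> req.response()
                                   .putHeader("content-type", "text/plain")
                                   .end("Hello"))
         .listen(8080);
  }

  public static void main(String[] args) {
    Vertx.vertx().deployVerticle(new HelloVerticle());
  }
}
```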
This stops short of actually validating the benchmark payloads and hardware against your specific scenario.
> So “Java” in practice is different from niche Java.
This is an odd take, especially in a discussion of Rust. In practice, projects using Rust as an HTTP server backend are nearly non-existent in comparison. Does that mean we just get to write off the Rust benchmarks?
I don't understand what you're saying. Typical Java is Spring Boot. Typical Rust is Axum and Actix. I don't see why it would make sense to push the argument ad absurdum. Vert.x is not typical Java; it's not easy to get it right. But Java the ecosystem profits from Netty in terms of performance, which does the best it can to avoid the JVM, the runtime system. And it's not always about "HTTP servers", though that's what the TechEmpower benchmark is about - frameworks, not just languages.
Your last sentence reads like an expression of faith. I’ll only remark that performance is relative to one’s project specs.
In some of those benchmarks, Quarkus (which is very much "typical Java") beats Axum, and there's far more software being written in "niche Java" than in "typical Rust". As for Netty, it's "avoiding the JVM" (the standard library, really) less now, and to the extent that it still does, that might not be working in its favour. E.g., we've been able to get better results with plain blocking code and virtual threads than with Netty, except in situations where Netty's codecs have optimisations done over many years, which could have been equally applied to ordinary Java blocking code (as I'm sure they will be in due time).
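To make that concrete, here's a minimal sketch of the "plain blocking code and virtual threads" style I mean (illustrative only, not our benchmark code), assuming Java 21+:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: one virtual thread per connection, ordinary blocking I/O.
public class BlockingServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080);
             ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            while (true) {
                Socket socket = server.accept();   // blocking accept
                pool.submit(() -> handle(socket)); // each task gets its own cheap virtual thread
            }
        }
    }

    static void handle(Socket socket) {
        try (socket; OutputStream out = socket.getOutputStream()) {
            byte[] body = "Hello".getBytes(StandardCharsets.US_ASCII);
            out.write(("HTTP/1.1 200 OK\r\nContent-Length: " + body.length + "\r\n\r\n")
                      .getBytes(StandardCharsets.US_ASCII));
            out.write(body);
        } catch (IOException e) {
            // connection dropped; nothing more to do in a sketch
        }
    }
}
```

The blocking calls park the virtual thread rather than an OS thread, so you keep the simple thread-per-request model without the usual thread-count ceiling.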
Hey Ron, I’ve got deep respect for what you do and appreciate what you’re sharing, that’s definitely good to know. And I understand that many people take any benchmark as a validation for their beliefs. There are so many parameters that are glossed over at best. More interesting to me is the total cost of bringing that performance to production. If it’s some gibberish that takes a team of five a month to formulate and then costs extra CPU and RAM to execute, and then becomes another Perlesque incantation that no one can maintain, it’s not really a “typical” thing worth consideration, except where it’s necessary, scoped to a dedicated library, and the budget permits.
I don’t touch Quarkus anymore for a variety of issues. Yes, sometimes it’s Quarkus ahead, sometimes it’s Vert.x, from what I remember it’s usually bare Vert.x. It boils down to the benchmark iteration and runtime environment. In a gRPC benchmark, Akka took the crown in a multicore scenario - at a cost of two orders of magnitude more RAM and more CPU. Those are plausible baselines for a trivial payload.
By Netty avoiding the JVM, I was referring mostly to its off-heap memory management, not only the JDK APIs you guys deprecated.
I'm deeply ingrained in the Java world, but your internal benchmarks rarely translate well to my day-to-day observations, so I'm quite often a bit perplexed when I read your comments here and elsewhere or watch your talks. Without pretending to comprehend the JVM at a level comparable to yours: in my typical scenarios, I do quite often manage to get close to the throughput of my Rust and C++ implementations, albeit at a much higher CPU and memory cost. Latency & throughput at once is a different story, though. I genuinely hope that one day Java will become a platform for more performance-oriented workloads, with less nondeterminism. I really appreciate your efforts toward introducing more consistency into the JDK.
I didn't make the claim that it's worth it. But when it is absolutely needed, Java has no solution.
And remember, we're talking about a very niche and specific I/O microbenchmark. Start looking at things like SIMD (which I know Java is currently working on), or more compute-bound workloads in general, and the gap will widen. Java still doesn't have the tools to write really high-performance code.
But it does. Java already gives you direct access to SIMD, and the last major hurdle to 100% of hardware performance with idiomatic code, flattened structs, will be closed very soon. The gap has been closing steadily, and there's no sign of change in the trend. Actually, it's getting harder and harder to find cases where a gap exists at all.
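For anyone who hasn't seen it, this is roughly what that SIMD access (the Vector API) looks like; a sketch of an element-wise a*b + c over float arrays. The API is still incubating, so it needs --add-modules jdk.incubator.vector.

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

// Illustrative sketch of the JDK Vector API: out[i] = a[i] * b[i] + c[i].
public class SimdExample {
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static void mulAdd(float[] a, float[] b, float[] c, float[] out) {
        int i = 0;
        int upper = SPECIES.loopBound(a.length);
        for (; i < upper; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            FloatVector vc = FloatVector.fromArray(SPECIES, c, i);
            va.mul(vb).add(vc).intoArray(out, i);   // compiles to SIMD instructions where supported
        }
        for (; i < a.length; i++) {                 // scalar tail for the leftover elements
            out[i] = a[i] * b[i] + c[i];
        }
    }
}
```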