First of all, memory architecture is not thrown out the window with a GC; there are plenty of memory-layout decisions you can still make in managed languages. Also, many managed languages have value types, allowing for basically every memory pattern you would use in a low-level language.
Second of all, a GC is the superior way of handling many random allocations with no pattern. An allocation in a modern GC is literally an integer increment (not even an atomic one!), and every later step happens on another thread in parallel, not slowing down the thread doing the work. No malloc implementation can beat that (unless it doesn't care about freeing). For the rare case where an arena allocator is the better approach, the aforementioned managed languages with value types are there.
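To make that fast path concrete, here's a minimal sketch (in Rust, with made-up names, not any particular runtime's code) of the bump-pointer allocation a moving GC's nursery - or an arena - boils down to; alignment is ignored to keep it short:

```rust
// Toy bump allocator: the allocation fast path is one add and one
// bounds check, which is what a nursery allocation compiles down to.
struct Bump {
    buf: Vec<u8>,
    next: usize,
}

impl Bump {
    fn new(capacity: usize) -> Self {
        Bump { buf: vec![0; capacity], next: 0 }
    }

    fn alloc(&mut self, size: usize) -> Option<&mut [u8]> {
        let start = self.next;
        let end = start.checked_add(size)?;
        if end > self.buf.len() {
            return None; // a real GC would trigger a minor collection here
        }
        self.next = end; // the "integer increment": no free list, no atomics
        Some(&mut self.buf[start..end])
    }
}

fn main() {
    let mut arena = Bump::new(1024);
    println!("allocated: {}", arena.alloc(16).is_some());
}
```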
Of course not; you can have an excellent memory architecture with a GC.
My point is that if you do that, well, then you've done the hard parts of living without a GC - so you might as well ditch it altogether.
But of course the vast majority of programs don't have excellent memory architectures, so they suffer greatly from not being "burdened" with having to think about memory.
It is unfortunate that the GC debate is all about performance. The programmer ergonomics alone make the GC the worse choice. And then of course we still have GC tuning on top of that.
Most programs are not video decoders running a tiny amount of code over gigabytes of data; they contain quite a bit of code, and many parts of it run irregularly. By not "paying attention" to memory allocations in this vast majority of cases, you get similar, or sometimes even better, performance, and safer and more correct software, faster. If something turns out to be a critical part, it is very easy to pay a bit more attention to the allocation story there.
So in like 90+% of cases, a GC is a huge boost to productivity, and this is proved by their extensive usage in the industry.
And in what world is a GC safer and more correct? And no, please don't argue that it is faster in any meaningful way.
> So in like 90+% of cases, a GC is a huge boost to productivity, and this is proved by their extensive usage in the industry.
It is not a boost in productivity. And the reason it has extensive use in the industry is that the sad state of programming languages has been this: languages with a GC are safe and high-level, whereas languages without a GC are old and unsafe. That really has nothing to do with the GC, though. Which is why I say that the GC has been the biggest mistake in software engineering.
Rust and Swift are beginning to change that in a very small way.
Rust actually solves the problem, although at a huge cost in the form of the borrow checker. It is a very good idea, but just look at this thread: it is not applicable everywhere - which is okay. Rust is a low-level language made for low-level programs where absolute control over execution is needed.
Swift chose a different approach, but its tradeoff was lower memory overhead at the cost of performance. That is likely a worthwhile goal for mobile devices, but it is a niche. And by the way, for all practical purposes RC is a garbage collection algorithm; it just tracks dead links instead of live ones.
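To make the "dead links" point concrete, a minimal Rust sketch, with Rc standing in for Swift's ARC (minus the atomics):

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    println!("count = {}", Rc::strong_count(&a)); // 1
    let b = Rc::clone(&a);                        // new live link: count -> 2
    println!("count = {}", Rc::strong_count(&a)); // 2
    drop(b);                                      // dead link: count -> 1
    drop(a);                                      // count hits 0: the Vec is
                                                  // freed right here, no tracer
}
```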
So there is one solution for correctness and safety without a GC, and it comes with plenty of warts. How exactly is a GC not a boost in productivity?
The borrow checker does a whole lot more than replace the GC, though, so it is very weird to point at it and say that the lack of a GC is what leads to it.
Modern C++ works very well too, but it is of course a huge language with a lot of legacy that makes it unsuitable or undesirable for a lot of things. But again, completely orthogonal to the GC.
You claim that in every other sentence but present nothing to stand on. The interview linked above offers some perspective, if you care. A GC does not relieve you from thinking about memory.
GC is a sensible response to C-style memory management, but not to every form of manual memory management.
The problem is that the non-GC options are so poor that the most reasonable language choice gives you a GC whether you want it or not. Hence my main point.
So it's not like you pick Go because it has a GC. But you might pick Go despite it having a GC.
> The borrow checker does a whole lot more than replace the GC, though, so it is very weird to point at it and say that the lack of a GC is what leads to it.
Well, yes and no. Sure, it helps with data races (though not with race conditions in general), but foremost it is a tool that allows for correct compile-time memory management. Compile-time memory management is only possible in a subset of programs, so Rust, too, has to use (A)RC at times. This is okay when used sparingly, but atomic increments are very expensive on today's hardware.
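For a feel of where the cost sits, a small Rust sketch: Rc bumps its count with a plain increment, while Arc does an atomic read-modify-write that contends across cores:

```rust
use std::rc::Rc;
use std::sync::Arc;

fn main() {
    let local = Rc::new(0u64);
    let shared = Arc::new(0u64);

    // Plain non-atomic increment of the refcount: cheap.
    let _l = Rc::clone(&local);

    // Atomic increment of the refcount: the expensive one,
    // especially when many cores hammer the same count.
    let _s = Arc::clone(&shared);
}
```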
I am familiar with RAII, but that is exactly what Rust enforces at compile time, with the exact same shortcomings, so I don't see how it is an argument against a GC.
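To spell out the equivalence, the RAII pattern in Rust terms (Guard is a made-up type for illustration):

```rust
struct Guard(&'static str);

impl Drop for Guard {
    fn drop(&mut self) {
        // Runs deterministically at scope exit, like a C++ destructor.
        println!("releasing {}", self.0);
    }
}

fn main() {
    let _file = Guard("file handle");
    let _lock = Guard("lock");
    // Dropped in reverse declaration order: "lock", then "file handle".
}
```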
Reference counting can cause long pauses as well - as I said, it is the same problem, just looked at from the other direction. If an object holding references to plenty of other objects dies, it can take quite a long time to deallocate; there is no free lunch. And then we haven't even talked about cycles, which need a background process similar to a tracing GC's, without which RC will leak.
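The cycle leak in a few lines of Rust (Node is illustrative):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: Option<Rc<RefCell<Node>>>,
}

fn main() {
    let a = Rc::new(RefCell::new(Node { next: None }));
    let b = Rc::new(RefCell::new(Node { next: Some(Rc::clone(&a)) }));
    a.borrow_mut().next = Some(Rc::clone(&b)); // a -> b -> a

    println!("a count = {}", Rc::strong_count(&a)); // 2
    // When a and b go out of scope, both counts only fall to 1, never 0,
    // so plain RC leaks both nodes unless a cycle collector (or a Weak
    // link breaking the cycle) steps in.
}
```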
While I am all for research into better ways to do RC, please look at the discussion - it is not at all clear that RC would be better, and even in theory a tracing GC wins.
Yes and yes. Why on earth would "it doesn't solve X" be an argument against something that does Z + Y? A GC only does Z.
It was an alternative that does less and doesn't put as many restrictions on the programmer as the borrow checker, since you complained about Rust and equated that with not having a GC.
And it is deterministic.
> While I am all for research into better ways to do RC, please look at the discussion - it is not at all clear that RC would be better, and even in theory a tracing GC wins.
Win what and in what sense?
That every GC language has problems with GC tuning is, in my opinion, a clear indicator that the GC lost. No real benefits in the short term, and huge downsides down the line - even if you never hit any limits.
The borrow checker doesn't give a complete answer to memory management, as dynamic allocation patterns by definition can't be resolved at compile time unless you can solve the halting problem.
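A concrete case: how many parents share a node is only known at run time, so ownership can't be settled at compile time and Rust falls back to Rc (Expr is a made-up type):

```rust
use std::rc::Rc;

// The sharing pattern of subexpressions is decided at run time,
// so no single compile-time owner can be assigned here.
struct Expr {
    op: String,
    args: Vec<Rc<Expr>>,
}

fn main() {
    let x = Rc::new(Expr { op: "x".into(), args: vec![] });
    // "x + x": the same node owned by two parents.
    let sum = Expr { op: "+".into(), args: vec![Rc::clone(&x), Rc::clone(&x)] };
    println!("{} has {} shared args", sum.op, sum.args.len());
}
```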
> That every GC language has problems with GC tuning is, in my opinion, a clear indicator that the GC lost.
Citation required. It Just Works in like 99% of cases, and it's not like there is any solution that covers every edge case. Just look at the thread I linked: there is an example of C++'s RC lingering for a long time after the effective work is done, freeing many things.
Did I say it did? The operations are deterministic, but dynamic memory is not. Still, that alone is quite important. Luckily for the non-GC crowd, though, those languages often give you stronger control over what is allocated on the stack and what is not.
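E.g. in Rust the placement is spelled out explicitly:

```rust
fn main() {
    let on_stack = [0u8; 64];          // value type: no allocation at all
    let on_heap = Box::new([0u8; 64]); // heap allocation, written out
    println!("{} {}", on_stack.len(), on_heap.len());
}
```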
It is an axiom at this point. GC battles are not exactly unheard of on HN. Don't worry, it gets fixed in the upcoming release - as has been said decade after decade.
No, there is no solution that covers every case. But you pretty much always have to think about memory, so the appeal of using a GC, when the only argument in its favor is that you don't have to think about memory, is quite unclear.
Depends on how much attention one was paying during their data structures and algorithms lectures, and on whether the language also has support for value types, manual memory management, and trailing lambdas.