This is a real misunderstanding. Go's GC is not “pretty much the best GC out there” by most GC standards (though it's already quite good and keeps getting better). And it wasn't “designed to try to compete with systems programming languages”.
Go as a language was designed that way: special attention was paid to allowing as many value types as possible and to letting you allocate as many things as you can on the stack, thus reducing GC pressure. For the first five years of Go or so, Go's GC was actually pretty bad (probably the most basic GC you could find in any somewhat popular language), but that wasn't too big a deal, because you can avoid it most of the time when it gets in your way (much more easily than in Java, for instance).
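To make that concrete, here's a minimal sketch (hypothetical `Point` type; exact escape-analysis decisions can vary by compiler version) of how returning a value keeps data on the stack while returning a pointer typically pushes it onto the heap:

```go
package main

import "fmt"

// Point is a plain value type: small structs like this can be
// passed and returned by value, living in stack frames.
type Point struct{ X, Y int }

// byValue returns a Point by value; the result can stay in the
// caller's stack frame, so no GC-managed allocation is needed.
func byValue(x, y int) Point {
	return Point{X: x, Y: y}
}

// byPointer returns a *Point; the value escapes to the heap and
// becomes work for the GC.
func byPointer(x, y int) *Point {
	return &Point{X: x, Y: y}
}

func main() {
	v := byValue(1, 2)
	p := byPointer(3, 4)
	fmt.Println(v.X+v.Y, p.X+p.Y)
}
```

Building with `go build -gcflags=-m` prints the compiler's escape-analysis decisions, so you can check which allocations actually reach the heap.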
After some time, they decided to improve it, but they were on a budget (not lots of money spent on it, actually). So, because in Go you can avoid allocating memory on the heap, they decided to focus on GC latency instead of throughput (if the GC's throughput isn't good enough for you, you'd better reduce your allocations).
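“Reduce your allocations” usually means reusing memory across iterations instead of allocating fresh objects in hot loops. A small sketch of that idiom, using the standard-library `strconv.AppendInt` (function names here are illustrative, not from the thread):

```go
package main

import (
	"fmt"
	"strconv"
)

// formatEach allocates a fresh string for every number (each
// strconv.Itoa call allocates), putting pressure on the GC
// when called in a hot loop.
func formatEach(nums []int) []string {
	out := make([]string, 0, len(nums))
	for _, n := range nums {
		out = append(out, strconv.Itoa(n))
	}
	return out
}

// formatInto appends into one caller-provided byte buffer, so the
// loop itself performs no per-iteration heap allocation once the
// buffer has grown to its working size.
func formatInto(buf []byte, nums []int) []byte {
	for _, n := range nums {
		buf = strconv.AppendInt(buf, int64(n), 10)
		buf = append(buf, ' ')
	}
	return buf
}

func main() {
	nums := []int{1, 2, 3}
	fmt.Println(formatEach(nums))
	fmt.Printf("%q\n", formatInto(nil, nums))
}
```

The caller can keep the returned buffer and pass it back in on the next call, amortizing the allocation to (near) zero.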
Overall, Go is a pretty fast language and an engineering success, but that's in spite of its GC and thanks to other parts of the language's design, not because Go's GC is exceptionally good (it's not, and if you read my link you'll understand why).
Thanks for the answer. I guess I've been misled, since Go proponents (including on HN) always argue that the latest iterations of their GC (which have been discussed a few times here) have one of the lowest latencies of any production language, and that this is what makes it suitable for many tasks.
Since I don’t have a sense of the timescales, I'll take your word for it that it was the reverse.