
> After digging through the Go source code, we learned that Go will force a garbage collection run every 2 minutes at minimum. In other words, if garbage collection has not run for 2 minutes, regardless of heap growth, go will still force a garbage collection.

> We figured we could tune the garbage collector to happen more often in order to prevent large spikes, so we implemented an endpoint on the service to change the garbage collector GC Percent on the fly. Unfortunately, no matter how we configured the GC percent nothing changed. How could that be? It turns out, it was because we were not allocating memory quickly enough for it to force garbage collection to happen more often.

As someone not too familiar with GC design, this seems like an absurd hack. That this 2-minute hardcoded limit is not even configurable comes across as amateurish. I have no experience with Go -- do people simply live with this and not talk about it?



Funnily enough, something similar happened at Twitch regarding their API front end written in Go: https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i...


Interesting, they went a totally different route.

> The ballast in our application is a large allocation of memory that provides stability to the heap.

> As noted earlier, the GC will trigger every time the heap size doubles. The heap size is the total size of allocations on the heap. Therefore, if a ballast of 10 GiB is allocated, the next GC will only trigger when the heap size grows to 20 GiB. At that point, there will be roughly 10 GiB of ballast + 10 GiB of other allocations.
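
For reference, the ballast itself is only a couple of lines; a minimal sketch of the idea (not Twitch's actual code), assuming a 64-bit machine and the default GOGC=100:

    package main

    import "runtime"

    func main() {
        // A large allocation that is never read or written: the runtime's heap
        // size accounting includes it, so the next GC only triggers after
        // roughly another 10 GiB of live data and garbage accumulates.
        // Because the pages are never touched, the OS does not have to commit
        // physical memory for them.
        ballast := make([]byte, 10<<30)

        // ... start the service here ...

        runtime.KeepAlive(ballast) // keep the ballast reachable for the process lifetime
    }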


Wow, that puts Discord's "absurd hack" into perspective! I feel like the moral here is a corollary to that law where people will depend on any observable behavior of the implementation: people will use any available means to tune important performance parameters; so you might as well expose an API directly, because doing so actually results in less dependence on your implementation details than if people resort to ceremonial magic.


I mean, if you read Twitch's hack, they intentionally did it in code so they didn't need to tune the GC parameter. They wanted to avoid all environment config.


I missed that part. I thought they would use a parameter if it were available, because they said this:

> For those interested, there is a proposal to add a target heap size flag to the GC which will hopefully make its way into the Go runtime soon.

What's wrong with the existing parameter?

I'm sure they aren't going this far to avoid all environment config without a good reason, but any good reason would be a flaw in some part of their stack.


summary: Go 1.5, memory usage (heap) of 500MB on a VM with 64GiB of physical memory, 30% of CPU cycles spent in function calls related to GC, and unacceptable problems during traffic spikes. The optimisation hack that somewhat fixed the problem was to allocate 10GiB without ever using the allocation, which caused a beneficial change in the GC behaviour!


With recent Go releases, GC pauses have become negligible for most applications. So this should not get in your way. However, it can easily be tweaked if needed. There is runtime.ForceGCPeriod, which is a pointer to the forcegcperiod variable. A Go program which really needs to change this can do it, but most programs shouldn't require this.

Also, it is almost trivial to edit the Go sources (they are included in the distribution) and rebuild it, which usually takes just a minute. So Go is really suited for your own experiments - especially, as Go is implemented in Go.


runtime.ForceGCPeriod is only exported in testing, so you wouldn't be able to use it in production. But as you said, the distribution could easily be modified to fit their needs.


Thanks, didn't catch that this is for testing only.


> especially, as Go is implemented in Go.

Well, parts of it. You can't implement "make" or "new" in Go yourself, for example.


You have to distinguish between the features available to a Go program as the user writes it and the implementation of the language. The implementation is completely written in Go (plus a bit of low-level assembly). Even if the internals of e.g. the GC are not visible to a Go program, the GC itself is implemented in Go and thus easily readable and hackable for experienced Go programmers. And you can quickly rebuild the whole Go stack.


> You have to distinguish between the features available to a Go program as the user writes it and the implementation of the language.

I do, I'm just objecting to "Go is implemented in Go".


This reminds me of the ongoing saga of RUSTC_BOOTSTRAP[0][1]

The stable compiler is permitted to use unstable features in stable builds, but only for compiling the compiler. In essence, there are some Rust features that are supported by the compiler but only permitted to be used by the compiler. Unsurprisingly, various non-compiler users of Rust have decided that they want those features and begun setting the RUSTC_BOOTSTRAP envvar to build things other than the compiler, prompting consternation from the compiler team.

[0] https://github.com/rust-lang/cargo/issues/6627 [1] https://github.com/rust-lang/cargo/issues/7088


This is not entirely correct. These things that "can only be used by the compiler" are nightly features that haven't been stabilized yet. Some of them might never be stabilized, but you could always use them in a nightly compiler; stability assurances just fly out the window then. This is also why using that environment variable is highly discouraged: it breaks the stability guarantees of the language and you're effectively using a pinned nightly. This is reasonable only in a very small handful of cases.


Yep. Beyond that, there is at least one place[0] where the standard library uses undefined behavior "based on its privileged knowledge of rustc internals".

[0]: https://doc.rust-lang.org/src/std/io/mod.rs.html#379


I don't see what is incorrect? Perhaps I was insufficiently clear that when I said "the compiler" I meant "the stable compiler" as opposed to more generally all possible versions of rustc. The stable compiler is permitted to use unstable features for its own bootstrap, example being the limited use of const generics to compile parts of the standard library.


But on what basis? What part of Go isn't implemented in Go?


I gave an example a bit further up, here [1].

[1] https://news.ycombinator.com/item?id=22240223


But this isn't a contradiction to the statement, that Go is implemented in Go. If you look at the sources of the Go implementation, the source code is 99% Go, with a few assembly functions (most for optimizations not performed by the compiler) and no other programming language used.


That's not correct. The implementation of "make", for example, looks like Go but isn't - it relies on internal details of the gc compiler that aren't part of the spec [1]. That's why a Go user can't implement "make" in Go.

[1] https://golang.org/ref/spec


In which language do you think "make" is implemented?


If I may interject: I believe you are both trying to make orthogonal points. calcifer is trying to say that some features of Go are compiler intrinsics and cannot be implemented as a library. You are making a different point, which is that those intrinsics are implemented in Go, the host language. Both statements can be true at the same time, but I agree that the terms were not used entirely accurately, causing confusion.


And yet, maps and slices are implemented in Go.

https://golang.org/src/runtime/map.go

https://golang.org/src/runtime/slice.go

I don't see why you couldn't do something similar in your own Go code. It just won't be as convenient to use, since the compiler wouldn't fill in the type information (element size, suitable hash function, etc.) for you. You'd have to pass that yourself or provide type-specific wrappers invoking the unsafe base implementation. More or less like you would do in C, with some extra care to abide by the rules required for unsafe Go code.
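
As a rough illustration of what that looks like without the compiler's help (the names are made up, and a real version would need bounds checks and care around alignment):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // rawSlice is a hand-rolled stand-in for a slice: the element size is
    // carried explicitly because no compiler fills it in for us.
    type rawSlice struct {
        data     []byte
        elemSize uintptr
        len      int
    }

    // newRawSlice plays the role of "make"; the caller supplies the element
    // size that the compiler would normally pass to the runtime.
    func newRawSlice(elemSize uintptr, n int) rawSlice {
        return rawSlice{data: make([]byte, elemSize*uintptr(n)), elemSize: elemSize, len: n}
    }

    // at returns an unsafe pointer to element i; a type-specific wrapper
    // would hide the cast back to the concrete type.
    func (s rawSlice) at(i int) unsafe.Pointer {
        return unsafe.Pointer(&s.data[uintptr(i)*s.elemSize])
    }

    type point struct{ x, y int32 }

    func main() {
        s := newRawSlice(unsafe.Sizeof(point{}), 4)
        p := (*point)(s.at(2))
        p.x, p.y = 10, 20
        fmt.Println(*(*point)(s.at(2))) // {10 20}
    }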


Nothing you wrote contradicts what I said. You can't implement "make" in Go. The fact that you can implement some approximation of it with a worse signature and worse runtime behaviour (since it won't be compiler assisted) doesn't make it "make".


You may still have significant CPU overhead from the GC, e.g. the Twitch article (mentioned elsewhere in the comments) measured 30% of CPU used for GC for one program (Go 1.5, I think).

Obviously they consider spending 50% more on hardware a worthwhile compromise for the gains they get (e.g. reduction of developer hours and reduced risk of security flaws, or avoiding other effects of invalid pointers).


In this case, as they were running into the automatic GC interval, their program did not create much, if any, garbage. So the CPU overhead of the GC would have been quite small.

If you do a lot of allocations, the GC overhead rises of course, but so would the effort of doing allocations/deallocations with a manual management scheme. In the end it is a bit of a trade-off as to what fits the problem at hand best. The nice thing about Rust is that "manual" memory management doesn't come at the price of program correctness.


Languages that have GC frequently rely on heap allocation by default and make plenty of allocations. Languages with good manual memory management frequently rely on stack allocation and give plenty of tools to work with data on the stack. Automatic allocation on the stack is almost always faster than the best GC.


GC languages often do and also often do not. Most modern GC languages have escape analysis, so if the compiler can deduce that an object does not escape the current scope, it is stack allocated instead of heap allocated. Modern JVMs do this and Go does this also. Furthermore, Go is much more allocation friendly than e.g. Java. In Go, an array of structs is a single item on the heap (or stack). In Java, you would have an array of pointers to separately allocated objects on the heap (Java is just now trying to rectify this with the "record" types). Also, structs are passed by value instead of by reference.

As a consequence, the heap pressure of a Go program is not necessarily significantly larger than that of an equivalent C or Rust program.
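
To make the difference concrete, a small Go sketch (the type here is made up for illustration):

    package main

    import "fmt"

    type user struct {
        id  int64
        age int32
    }

    func main() {
        // The structs live inline in the slice's backing array: a single
        // allocation for the GC to consider.
        inline := make([]user, 1000)

        // Java-style boxing: a slice of pointers, where each element is a
        // separate heap object the GC has to trace individually.
        boxed := make([]*user, 1000)
        for i := range boxed {
            boxed[i] = &user{id: int64(i)}
        }

        fmt.Println(len(inline), len(boxed))
    }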


Escape analysis is very limited, and what I found in practice is that it often doesn't work in real code, where not all the things are inlined. If a method allocates an object and returns it 10 layers up, EA can't do anything.

In contrast, in e.g. C I can wrap two 32-bit fields in a struct and freely pass them anywhere with zero heap allocations.

Also, record types are not going to fix the pointer chasing problem with arrays. This is promised by Valhalla, but I've been hearing about it for 3 years or more now.


> Also, it is almost trivial to edit the Go sources (they are included in the distribution) and rebuild it, which usually takes just a minute. So Go is really suited for your own experiments - especially, as Go is implemented in Go.

Ruby 1.8.x wants to say "Hello"


It does sound like Discord's case was fairly extraordinary in terms of the degree of the spike:

> We kept digging and learned the spikes were huge not because of a massive amount of ready-to-free memory, but because the garbage collector needed to scan the entire LRU cache in order to determine if the memory was truly free from references.

So maybe this is one of those things that just doesn't come up in most cases? Maybe most services also generate enough garbage that that 2-minute maximum doesn't really come into play?


Games written in the Unity engine are (predominantly) written in C#, a garbage collected language. Keeping large amounts of data around isn't that unusual since reading from disk is often prohibitively slow, and it's normal to minimize memory allocation/garbage generation (using object pools, caches etc) and to manually trigger the GC in loading screens and other opportune places (as easy as calling System.GC.Collect()). At 60 fps each frame is about 16ms. You do a lot in those 16ms, and adding a 4ms garbage collection easily leads to dropping a frame. Of course whether that matters depends on the game, but Unity and C# seem to handle it well for the games that need tiny or no GC pauses.

But (virtually) nobody is writing games in Go, so it's entirely possible that it's an unusual case in the Go ecosystem. Being an unsupported usecase is a great reason to switch language.


If there's an example of getting great game performance with a GC language, Unity isn't it. Lots of Unity games get stuttery, and even when they don't, they seem to use a lot of RAM relative to game complexity. Kerbal Space Program even mentioned in their release notes at one point something about a new garbage collector helping with frame rate stuttering.

I started up KSP just now, and it was at 5.57GB before I even got to the main menu. To be fair, I hadn't launched it recently, so it was installing its updates or whatever. Ok, I launched it again, and at the main menu it's sitting on 5.46GB. (This is on a Mac.) At Mission Control, I'm not even playing the game yet, and the process is using 6.3GB.

I think a better takeaway is that you can get away with GC even in games now, because it sucks and is inefficient but it's ... good enough. We're all conditioned to put up with inefficient software everywhere, so it doesn't even hurt that much anymore when it totally sucks.


Right; Go is purpose-built for writing web services, and web services tend to be pretty tolerant of (small) latency spikes because virtually anyone who's calling one is already expecting at least some latency


> Go is purpose-built for writing web services

Is this true? Go was built specifically for C++ developers, which, even when Go was first released, was a pretty unpopular language for writing web services (though maybe not at Google?). That a non-trivial number of Ruby/Python/Node developers switched was unexpected. (1)

https://commandcenter.blogspot.com/2012/06/less-is-exponenti...


Your quote is just corroborating what the reply is saying: Go was written for web services, which were written in C++ at Google.


The linked article doesn't say anything about web services. Just C++. I believe Rob Pike was working on GFS and log management, and Go was always initially pitched at system programming (which is not web services).

> Our target community was ourselves, of course, and the broader systems-programming community inside Google. (1)

(1) http://www.informit.com/articles/article.aspx?p=1623555


C# uses a generational GC IIRC, so it may be better suited for a system where you have a relatively stable collection that does not need to be fully garbage collected all the time, plus a smaller and more volatile set of objects that will be GC'ed more often. I don't think the current garbage collector in Go does anything similar to that.


Yeah, that's the ideal pattern in C#. You have to be smart-ish about it, but writing low GC pressure code can be easier than you think. Keep your call stacks shallow, avoid certain language constructs (e.g. LINQ), or at least know when they really make sense for the cost (async).

IDK if this is true for earlier versions, but as of today C# has pretty clear rules: 16MB in desktop or 64MB in server (which type is used can be set via config) will trigger a full GC [1]. Note that less than that may trigger a lower level GC, but those are usually not the ones that are noticed. I'm guessing at least some of that is because of memory locality as well as the small sizes.

On the other hand, in a lot of the Unity related C# posts I see on forums/etc, passing structs around is considered the 'performant' way to do things to minimize GC pressure.

[1] https://docs.microsoft.com/en-us/dotnet/standard/garbage-col... [2] https://blog.golang.org/ismmkeynote


This might have changed with more recent updates, but I was under the impression that the Mono garbage collector in Unity was a bit dated and not as up-to-date as a C# one today.


Unity has recently added the "incremental GC" [1] which spreads the work of the GC over multiple frames. As I understand it this has a lower overall throughput, but _much_ better worst case latency.

[1] https://blogs.unity3d.com/2018/11/26/feature-preview-increme...


Heap caches that keep things longer than a GC cycle are terrible under GC unless you have a collector in the new style like ZGC, Azul or Shenandoah.


Systems with poor GC and the need to keep data for lifetimes greater than a request should have an easy to use off heap mechanism to prevent these problems.

Often something like Redis is used as a shared cache that is invisible to the garbage collector; there is a natural key with a weak reference (by name) into a KV store. One could embed a KV store into an application that the GC can't scan into.


100%. In Java, you would often use OpenHFT's ChronicleMap for now and hopefully inline classes/records in Java 16 or so.


Ehcache has an efficient off-heap store: https://github.com/Terracotta-OSS/offheap-store/

Doesn't Go have something like this available? It's an obvious thing for garbage-collected languages to have.


You can usually resort to `import "C"` and use `C.malloc` to get an unsafe.Pointer.
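
A minimal cgo sketch of that approach; note that you must free the memory yourself and must not store Go pointers in it:

    package main

    // #include <stdlib.h>
    import "C"

    import (
        "fmt"
        "unsafe"
    )

    // record contains no Go pointers, so it can safely live in memory that
    // the Go GC never scans.
    type record struct {
        id    int64
        score float64
    }

    func main() {
        // Allocate off-heap; this is invisible to the garbage collector.
        p := C.malloc(C.size_t(unsafe.Sizeof(record{})))
        defer C.free(p)

        r := (*record)(p)
        r.id, r.score = 42, 0.5
        fmt.Println(*r)
    }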


What feature of "the new style" makes them more suitable in this case?


They have very short pause times even for very large heaps with lots of objects in them as they don't have to crawl the entire live tree when collecting.


A GC scan of a large LRU (or any large object graph) is expensive in CPU terms because many of the pointers traversed will not be in any CPU cache. Memory access latency is extremely high relative to how fast CPUs can process cached data.

You could maybe hack around the GC performance without destroying the aims of LRU eviction by batching additions to your LRU data structure to reduce the number of pointers by a factor of N. It's also possible that a Go BTree indexed by timestamp, with embedded data, would provide acceptable LRU performance and would be much friendlier on the cache. But it might also not have acceptable performance. And Go's lack of generic datastructures makes this trickier to implement vs Rust's BtreeMap provided out of the box.


Yes, this is a maximally pessimal case for most forms of garbage collection. They don't say, but I would imagine these are very RAM-heavy systems. You can get up to 768GB right now on EC2. Fill that entire thing up with little tiny objects the size of usernames or IDs for users, or even merely 128GB systems or something, and the phase where you crawl the RAM to check references by necessity is going to be slow.

This is something important to know before choosing a GC-based language for a task like this. I don't think "generating more garbage" would help, the problem is the scan is slow.

If Discord were forced to do this in pure Go, there is a solution, which is basically to allocate a []byte or a set of []bytes and then treat it as an expanse of memory yourself, managing hashing, etc. - basically doing manual arena allocation yourself. GC would drop to basically zero in that case because the GC would only see the []byte slices, not all the contents as individual objects. You'll see this technique used in GC'd languages, including Java.
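
A minimal sketch of that kind of manual arena, assuming fixed-size records addressed by integer offsets instead of pointers (the hashing and eviction a real LRU cache needs are left out):

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    const recordSize = 16 // e.g. an 8-byte user ID plus an 8-byte timestamp

    // arena hands out offsets into one big []byte; the GC only ever sees a
    // single allocation, no matter how many records are stored.
    type arena struct {
        buf  []byte
        next int
    }

    func newArena(capacity int) *arena {
        return &arena{buf: make([]byte, capacity*recordSize)}
    }

    // alloc reserves one record slot and returns its offset ("handle").
    func (a *arena) alloc() (int, bool) {
        if a.next+recordSize > len(a.buf) {
            return 0, false
        }
        off := a.next
        a.next += recordSize
        return off, true
    }

    func (a *arena) put(off int, id, ts uint64) {
        binary.LittleEndian.PutUint64(a.buf[off:], id)
        binary.LittleEndian.PutUint64(a.buf[off+8:], ts)
    }

    func (a *arena) get(off int) (uint64, uint64) {
        return binary.LittleEndian.Uint64(a.buf[off:]), binary.LittleEndian.Uint64(a.buf[off+8:])
    }

    func main() {
        a := newArena(1 << 20)
        off, _ := a.alloc()
        a.put(off, 12345, 1580000000)
        fmt.Println(a.get(off))
    }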

But it's tricky code. At that point you've shucked off all the conveniences and features of modern languages and in terms of memory safety within the context of the byte expanses, you're writing in assembler. (You can't escape those arrays, which is still nice, but hardly the only possible issue.)

Which is, of course, where Rust comes in. The tricky code you'd be writing in Go/Java/other GC'd language with tons of tricky bugs, you end up writing with compiler support and built-in static checking in Rust.

I would imagine the Discord team evaluated the option of just grabbing some byte arrays and going to town, but it's fairly scary code to write. There are just too many ways to even describe for such code to end up having a 0.00001% bug that will result in something like the entire data structure getting intermittently trashed every six days on average or something, virtually impossible to pick up in testing and possibly even escaping canary deploys.

Probably some other languages have libraries that could support this use case. I know Go doesn't ship with one and at first guess, I wouldn't expect to find one for Go, or one I would expect to stand up at this scale. Besides, honestly, at feature-set maturity limit for such a library, you just end up with "a non-GC'd inner platform" for your GC'd language, and may well be better off getting a real non-GC'd platform that isn't an inner platform [1]. I've learned to really hate inner platforms.

By contrast... I'd bet this is fairly "boring" Rust code, and way, way less scary to deploy.

[1]: https://en.wikipedia.org/wiki/Inner-platform_effect


> I don't think "generating more garbage" would help

To be clear: I wasn't suggesting that generating garbage would help anyone. Only that in a more typical case, where more garbage is being generated, the two minute interval itself might never surface as the problem because other things are getting in front of it.


It comes from a desire to run in the exact opposite direction from the JVM, which has options for every conceivable parameter. Go has gone to a lot of effort to keep the number of configurable GC parameters to 1.


Anyone who pushes the limits of a machine needs tuning options. If you can't turn knobs you have to keep rewriting code until you happen to get the same effect.


There's definitely a happy medium. One setting may indeed be too few, but the JVM's many options end in mass cargo-cult copypasta, often leading to really bad configurations.


I haven't really seen anyone trying to use JVM options to get performance benefits without benchmarks for their specific use case in the last 10 years or so.


Tuning options don't work well with diverse libraries, though. If you use two libraries and they are designed to run with radically different tuning options, what do you do? Some bad compromise? Make one the winner and one the loser? The best you can do is run an extensive monitoring and tuning experiment, but that's quite involved as well, and still won't get you the maximum performance of each library either.

At least with code hacking around the GC's behavior that code ends up being portable across the ecosystem.

There doesn't seem to really be a good option here either way. This amount of tuning-by-brute-force (either by knobs or by code re-writes) seems to just be the cost of using a GC.


That might be true, but from a language design PoV it isn't convincing to have dozens of GC-related runtime flags a la Java/JVM. If you need those anyway, this might point to pretty fundamental language expressivity issues.


[flagged]


This was the first time I've seen that annoying cAsE meme on HN and I pray it's the last. It is a lazy way to make your point, hoping your meme-case does all the work for you so that you don't have to say anything substantial.

Or do you think it adds to the discussion?


It indicates a mocking, over-the-top tone to indicate the high level of contempt I have for my originally-stated paraphrase (and the people who have caused software dev decisionmaking to be that way). So yes, I think it does add to the discussion.


It's annoying to read, so while it does get across the mocking tone, the reaction of annoyance at the author is far stronger.


Surely tuning some GC parameters is less effort that having to do a rewrite in another language.


If you want to force it you can call "runtime.GC()" but that's almost always a step in the wrong direction.

It is worth it to read and understand: https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i...


I think for most applications (especially the common use-case of migrating a scripting web monolith to a go service), people just aren't hitting performance issues with GC. Discord being a notable exception.

If these issues were more common, there would be more configuration available.

[EDIT] to downvoters: I'm not saying it's not an issue worth addressing (and it may have already been since they were on 1.9), I was just answering the question of "why this might happen"


Or, in the case of latency, just wait a few months because the Go team obsesses about latency (no surprise from a Google supported language). Discord's comparison is using Go1.9. Their problem may well have been addressed in Go1.12. See https://golang.org/doc/go1.12#runtime.


You are able to disable GC with:

  GOGC=off
As someone mentions below.

More details here: https://golang.org/pkg/runtime/


Keeping GC off for a long running service might become problematic. Also, the steady state might have few allocations, but startup may produce a lot of garbage that you might want to evict. I've never done this, but you can also turn GC off at runtime with SetGCPercent(-1).

I think with that, you could turn off GC after startup, then turn it back on at desired intervals (e.g. once an hour or after X cache misses).
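
A sketch of that idea using the real knobs (runtime.GC and runtime/debug.SetGCPercent); the hourly interval is purely illustrative, not a recommendation:

    package main

    import (
        "runtime"
        "runtime/debug"
        "time"
    )

    func main() {
        // warmCaches() // hypothetical startup work that creates one-off garbage
        runtime.GC()           // collect the startup garbage once
        debug.SetGCPercent(-1) // then disable automatic collections

        // Collect on our own schedule instead of the runtime's.
        go func() {
            for range time.Tick(time.Hour) {
                runtime.GC()
            }
        }()

        select {} // ... serve traffic ...
    }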

It's definitely risky though. E.g. if there is a hiccup with the database backend, the client library might suddenly produce more garbage than normal, and all instances might OOM near the same time. When they all restart with cold caches, they might hammer the database again and cause the issue to repeat.


> ...all instances might OOM near the same time.

CloudFront, for this reason, allocates heterogeneous fleets in its PoPs which have diff RAM sizes and CPUs [0], and even different software versions [1].

> When they all restart with cold caches, they might hammer the database again and cause the issue to repeat.

Reminds me of the DynamoDB outage of 2015 that essentially took out us-east-1 [2]. Also, ELB had a similar outage due to unending backlog of work [3].

Someone must write a book on design patterns for distributed system outages or something?

[0] https://youtube.com/watch?v=pq6_Bd24Jsw&t=50m40s

[1] https://youtube.com/watch?v=n8qQGLJeUYAt=39m0s

[2] https://aws.amazon.com/message/5467D2/

[3] https://aws.amazon.com/message/67457/


Google's SRE book covers some of this (if you aren't cheekily referring to that). E.g. chapters 21 and 22 are "Handling Overload" and "Addressing Cascading Failures". The SRE book also covers mitigation by operators (e.g. manually setting traffic to 0 at load balancer and ramping back up, manually increasing capacity), but it also talks about engineering the service in the first place.

This is definitely a familiar problem if you rely on caches for throughput (I think caches are most often introduced for latency, but eventually the service is rescaled to traffic and unintentionally needs the cache for throughput). You can e.g. pre-warm caches before accepting requests or load-shed. Load-shedding is really good and more general than pre-warming, so it's probably a great idea to deploy throughout the service anyway. You can also load-shed on the client, so servers don't even have to accept, shed, then close a bunch of connections.

The more general pattern to load-shedding is to make sure you handle a subset of the requests well instead of degrading all requests equally. E.g. processing incoming requests FIFO means that as queue sizes grow, all requests become slower. Using LIFO will allow some requests to be just as fast and the rest will timeout.
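
A toy sketch of that LIFO idea in Go (the request type and queue are invented for illustration, not taken from the SRE book):

    package main

    import (
        "fmt"
        "sync"
    )

    type request struct{ id int }

    // lifoQueue serves the newest request first, so under overload fresh
    // requests stay fast and the oldest ones time out, instead of every
    // request degrading equally as the backlog grows.
    type lifoQueue struct {
        mu    sync.Mutex
        items []request
    }

    func (q *lifoQueue) push(r request) {
        q.mu.Lock()
        defer q.mu.Unlock()
        q.items = append(q.items, r)
    }

    func (q *lifoQueue) pop() (request, bool) {
        q.mu.Lock()
        defer q.mu.Unlock()
        if len(q.items) == 0 {
            return request{}, false
        }
        r := q.items[len(q.items)-1]
        q.items = q.items[:len(q.items)-1]
        return r, true
    }

    func main() {
        q := &lifoQueue{}
        for i := 0; i < 3; i++ {
            q.push(request{id: i})
        }
        r, _ := q.pop()
        fmt.Println("serving newest request:", r.id) // 2
    }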


Your comment reminds me of this excellent ACM article by Facebook on the topic: https://queue.acm.org/detail.cfm?id=2839461

I've read the first SRE book but having worked on large-scale systems it is impossible to relate to the book or internalise the advice/process outlined in it unless you've been burned by scale.

I must note that there are two Google SRE books in-circulation, now: https://landing.google.com/sre/books/


How does Go allow you to manage memory manually? Malloc/free or something more sophisticated?


It doesn't. If you disable the GC… you only have an allocator; the only "free" is to run the entire GC by hand (by calling runtime.GC()).


So other comments didn't mention this, per se, but Go gives you tools to see what memory escapes the stack and ends up being heap allocated. If you work to ensure things stay stack allocated, it gets freed when the stack frees, and the GC never touches it.

But, per other comments, there isn't any direct malloc/free behavior. It just provides tools to help you enable the compiler to determine that GC is not needed for some allocations.
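
For example, building with `go build -gcflags=-m` prints the compiler's escape analysis decisions; a small sketch to run it against:

    package main

    type point struct{ x, y int }

    // sum's p never escapes, so it stays on the stack (or is optimized away).
    func sum(a, b point) int {
        p := point{a.x + b.x, a.y + b.y}
        return p.x + p.y
    }

    // leak returns p's address, so escape analysis moves it to the heap.
    func leak(a, b point) *point {
        p := point{a.x + b.x, a.y + b.y}
        return &p
    }

    func main() {
        _ = sum(point{1, 2}, point{3, 4})
        _ = leak(point{1, 2}, point{3, 4})
    }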


It doesn't. You could start and stop the GC occasionally, maybe?


Typically a GC runtime will do a collection when you allocate memory, probably when the heap size is 2x the size after the last collection. But this doesn't free memory when the process doesn't allocate memory. The goal is to return unused memory back to the operating system so it's available for other purposes. (You allocate a ton of memory, calculate some result, write the result to a file, and drop references to the memory. When will it be freed?)
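
If a program wants to hand memory back at a moment of its own choosing rather than wait, runtime/debug.FreeOSMemory forces a collection and returns as much memory to the OS as possible; a minimal sketch:

    package main

    import "runtime/debug"

    // computeAndWrite stands in for the scenario above: allocate a lot,
    // write the result somewhere, then drop all references. (Hypothetical.)
    func computeAndWrite() {
        big := make([]byte, 1<<30)
        _ = big // ... compute and write the result to a file ...
    }

    func main() {
        computeAndWrite()
        // Force a GC and return freed memory to the operating system now,
        // instead of waiting for the runtime to get around to it.
        debug.FreeOSMemory()
    }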


Seems like Go is more suitable for the “spin up, spin down, never let the GC run” kind of scenario that is being pushed by products like AWS Lambda and other function as a service frameworks.


Why do you think it is? Go has a really great GC which mostly runs in parallel to your program, with GC stops only in the domain of less than a millisecond. Discord ran into a corner case where they did not create enough garbage to trigger GC cycles, but had a performance impact due to scheduled GC cycles for returning memory to the OS (which they wouldn't have needed to do either).


Because many services eventually become performance bottlenecked either via accumulation of users or accumulation of features. In either case eventually performance becomes very critical.


Sure, but that doesn't make Go unsuitable for those tasks on a fundamental basis. Go is very high performance. Whether Go or another language is the best match very much depends on the problem at hand and the specific requirements. Even in the described case they might have tweaked the GC to fit their bill.


GC pauses aside can Go match the performance of Rust when coded properly? Would sorting an array of structs in Go be in the same ballpark as sorting the same sized array of structures in Rust? I don't know a whole lot about how Go manages the heap under the covers.


Sure. That kind of code is very similar in both languages.

Rust will probably be faster because it benefits from optimizations in LLVM that Go likely doesn't have.

Go arrays of structs are contiguous, not indirect pointers.


Go always feels like an amateur language to me; I've given up on it. This feels right in line - similar to the hardcoded GitHub magic.


I could be wrong, but I don't believe there is "hardcoded GitHub magic".

IIRC I have used GitLab and Bitbucket and self-hosted Gitea instances the same exact way, and I'm fairly sure there was an hg repo in one of those. Don't recall doing anything out of the ordinary compared to how I would use a github URL.


There are a couple of hosting services hardcoded in Go. I believe it was about splitting the URL into the actual URL and the branch name.


https://github.com/golang/go/blob/e6ebbe0d20fe877b111cf4ccf8...

Ouch, Go never ceases to amaze. The Bitbucket case[0] is even more crazy, calling out to the Bitbucket API to figure out which VCS to use. It has a special case for private repositories, but seems to hard-code cloning over HTTPS.

If only we had some kind of universal way to identify resources, that told you how to access it...

[0]: https://github.com/golang/go/blob/e6ebbe0d20fe877b111cf4ccf8...


Thanks for the reference to prove me wrong.

Wow, that's sad. I'm glad it works seamlessly, don't get me wrong, but I was assuming I could chalk it up to de facto standards between the various vendors here.


This is in line with Go's philosophy, they try to keep the language as simple as possible.

Sometimes it means an easy thing in most other languages is difficult or tiresome to do in Go. Sometimes it means hard-coded values/decisions you can't change (only tabs anyone?).

But overall this makes for a language that's very easy to learn, where code from project to project and team to team is very similar and quick to understand.

Like anything, it all depends on your needs. We've found it suits ours quite well, and migrating from a Ruby code base has been a breath of fresh air for the team. But we don't have the same performance requirements as Discord.


"Simple" when used in programming, doesn't mean anything. So let's be clear here: what we mean is that compilation occurs in a single pass and the artifact of compilation is a single binary.

These are two things that make a lot of sense at Google if you read why they were done.

But unless you're working at Google, I struggle to guess why you would care about either of these things. The first requires sacrificing anything resembling a reasonable type system, and even with that sacrifice Go doesn't really deliver: are we really supposed to buy that "go generate" isn't a compilation step? The second is sort of nice, but not nice enough to be a factor in choosing a language.

The core language is currently small, but every language grows with time: even C with its slow-moving, change-averse standards body has grown over the years. Currently people are refreshed by the lack of horrible dependency trees in Go, but that's mostly because there aren't many libraries available for Go: that will also change with time (and you can just not import all of CPAN/PyPI/npm/etc. in any language, so Go isn't special anyway).

If you like Go for some aesthetic of "simplicity", then sure, I guess I can see how it has that. But if we're discussing pros and cons, aesthetics are pretty subjective and not really worth talking about.


I don't agree with your definition of simplicity.

I like Go and I consider it a simple language because:

1. I can keep most of the language in my head and I don't hit productivity pauses where I have to look something up.

2. There is usually only one way to do things and I don't have to spend time deciding on the right way.

For me, these qualities make programming very enjoyable.


> I don't agree with your definition of simplicity.

You mean where I explicitly said that "simple" didn't mean anything, so we should talk about what we mean more concretely?

> 1. I can keep most of the language in my head and I don't hit productivity pauses where I have to look something up.

The core language is currently small, but every language grows with time: even C with its slow-moving, change-averse standards body has grown over the years.

> 2. There is usually only one way to do things and I don't have to spend time deciding on the right way.

Go supports functional programming and object-oriented programming, so pretty much anything you want to do has at least two ways to do it--it sounds like you just aren't familiar with the various ways.

The problem with having more than one way to do things isn't usually choosing which to use, by the way: the problem is when people use one of the many ways differently within the same codebase and it doesn't play nicely with the way things are done in the codebase.

This isn't really a criticism of Go, however: I can't think of a language that actually delivers on there being one right way to do things (most don't even make that promise--Python makes the promise but certainly doesn't deliver on it).


Does Go support functional programming? There's no support for map, filter, etc. It barely supports OOP too, with no real inheritance or generics.

I've been happy working with it for a year now, though I've had the chance to work with Kotlin and I have to say, it's very nice too, even if the parallelism isn't quite as easy/convenient to use.


It supports first-class functions, and it supports classes/objects. Sure, it doesn't include good tooling for either, but:

1. map/filter are 2 lines of code each (see the sketch below).

2. Inheritance is part of mainstream OOP, but there are some less common languages that don't support inheritance in the way you're probably thinking (e.g. older versions of JavaScript before they caved and introduced two forms of inheritance).

3. Generics are more of a strong-typing thing than an OOP thing.
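
For instance (a sketch; this thread predates Go generics, so the helpers are type-specific):

    package main

    import "fmt"

    func mapInts(xs []int, f func(int) int) []int {
        out := make([]int, len(xs))
        for i, x := range xs {
            out[i] = f(x)
        }
        return out
    }

    func filterInts(xs []int, keep func(int) bool) []int {
        var out []int
        for _, x := range xs {
            if keep(x) {
                out = append(out, x)
            }
        }
        return out
    }

    func main() {
        xs := []int{1, 2, 3, 4, 5}
        fmt.Println(mapInts(xs, func(x int) int { return x * 2 }))        // [2 4 6 8 10]
        fmt.Println(filterInts(xs, func(x int) bool { return x%2 == 0 })) // [2 4]
    }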


Offtopic but what are you missing when you have to use tabs instead of spaces? I can understand different indentation preferences but I can change the indentation width per tab in my editor. And then everyone can read the code with the indentation they prefer, while the file stays the same.


> everyone can read the code with the indentation they prefer, while the file stays the same.

Have you ever worked in a code base with many contributors that changed over the course of years? In my experience it always ends up a jumble where indentation is screwed up and no particular tab setting makes things right. I've worked on files where different lines in the same file might assume tab spacing of 2, 3, 4, or 8.

For example, say there is a function with a lot of parameters, so the argument list gets split across lines. The first line has, say, two tabs before the start of the function call. The continuation line ideally should be two tabs then a bunch of spaces to make the arguments line up with the arguments from the first line. But in practice people end up putting three or four tabs to make the 2nd line line up with the arguments of the first line. It looks great with whatever tab setting the person used at that moment, but then change tab spacing and it no longer is aligned.


On the good side, the problem of mixing tabs and spaces does not normally appear in Go sources, as gofmt always converts spaces to tabs, so there is no inconsistent indentation. Normally I prefer spaces to tabs because I dislike the mixing, but gofmt solves this nicely for me.


Please explain to me how this works for the case I outlined, e.g.:

        some_function(arg1, arg2, arg3, arg4,
                      arg5, arg6);
For the sake of argument, say tabstop=4. If the first line starts with two tabs, will the second line also have two tabs and then a bunch of spaces, or will it start with five tabs and a couple spaces?


You wouldn't use an alignment-based style, but a block-based one instead:

  some_function(
      arg1,
      arg2,
      arg3,
      arg4,
      arg5,
      arg6,
  );
(I don't know what Go idiom says here, this is just a more general solution.)


Checking the original code on the playground, Go just reindents everything using one tab per level. So if the funcall is indented by 2 (tabs), the line-broken arguments are indented by 3 (not aligned with the open paren).

rustfmt looks to try and be "smarter", as it will move the argument list and add line breaks to it so it doesn't go beyond whatever limit is configured on the playground; gofmt apparently doesn't insert breaks in argument lists.


You should NOT do such alignment anyway, because if you rename "some_function" to "another_function", then you will lose your formatting.

Instead, format arguments in a separate block:

    some_function(
        arg1, arg2, arg3, arg4,
        arg5, arg6);
When arguments are aligned in a separate block, both spaces and tabs work fine.

My own preference is tabs, because of less visual noise in code diff [review].


In an ideal world, I'd think you would put a "tab stop" character before arg1, then a single tab on the following line, with the bonus benefit that the formatting would survive automatic name changes and not create an indent-change-only line in the diff. Trouble being that all IDEs would have to understand that character, and compilers would have to ignore it (hey, ASCII has form feed and vertical tab that could be repurposed...).


Or you could use regular tab stop characters to align parts of adjacent lines. That's the idea behind elastic tabstops: http://nickgravgaard.com/elastic-tabstops/

Not all editors, however, support this style of alignment, even if they support plugins (looks at vim and its forks).


> In my experience it always ends up a jumble where indentation is screwed up and no particular tab setting makes things right.

Consider linting tools in your build.


It's just an example of something that the Go team took a decision on, and won't allow you to change. I mean, even Python lets you choose. I don't really have a problem with it however, even if I do prefer spaces.


There's a difference between making decisions that are really open to bikeshedding, and making sweeping decisions in contexts that legitimately need per app tuning like immature GCs.

The Azul guys get to claim that you don't need to tune their GC; golang doesn't.


Hmm... this is why Azul's install and configuration guide runs to hundreds of pages. All the advanced tuning, profiling and OS configuration commands, and setting up contingency memory pools, are perhaps for GCs which Azul does not sell.


I mean, they'll let you, because the kinds of customers who want to be able to are the kinds of customers that Azul targets. But everything I've heard from their engineers is that they've solved a lot of customer problems by resetting things to defaults and just letting it have a giant heap to play with.

Not sure how that makes the golang position any better.


Python chose spaces a la PEP8 by the way.


PEP8 isn't a language requirement, but a style guide. There are tools to enforce style on Python, but the language itself does not.


Same thing with Go... Tabs aren't enforced, but the out of the box formatter will use tabs. PyCharm will default to trying to follow PEP8, and GoLand will do the same, it will try to follow the gofmt standards.

See:

https://stackoverflow.com/questions/19094704/indentation-in-...


You can use tabs to indent your python code. Ok, you might be lynched if you share your code but as long as you don't mix tabs and spaces, it is fine.


Same with Go, you can use spaces.



I don't know about anyone else, but I like aligning certain things at half-indents (labels/cases half an indent back, so you can skim the silhouette of both the surrounding block and jump targets within it; braceless if/for bodies to emphasize their single-statement nature (that convention alone would have made "goto fail" blatantly obvious to human readers, though not helped the compiler); virtual blocks created by API structure (between glBegin() and glEnd() in the OpenGL 1.x days)).

Thing is, few if any IDEs support the concept, so if I want to have half-indents, I must use spaces. Unfortunately, these days that means giving up and using a more common indent style most of the time, as the extra bit of readability generally isn't worth losing automatic formatting or multi-line indent changes.


So you are the person that ruins it for everyone (are you an emacs user by any chance?). Tabs are more versatile; you can even use proportional fonts with them. Projects end up banning tabs because many people end up mixing them together (unknowingly, or in your case knowingly, using configuration that is unavailable in many IDEs).

BTW, when you mix spaces with tabs you eliminate all the benefits that tabs give (for example, you can no longer dynamically change the tab size without ruining the formatting).


If I were an emacs user, I'd figure out how to write a plugin to display tab-indented code to my preferences.

No, I used to be a notepad user (on personal projects, not shared work) (you can kinda see it in the use of indentation to help convey information that IDEs would use font or colour to emphasize), and these days use tabs but longingly wish Eclipse, etc. had more options in their already-massive formatting configuration dialogues.


The reason I asked is that I believe this behavior is what Emacs does by default (actually don't know if by default, but saw this from code produced by Emacs users) e.g.

<tab>(inserts 4 spaces)<tab>(replaces 4 spaces into a tab that is 8 columns)<tab>(adds 4 spaces after the tab)<tab>(replaces with two tabs and so on)

Unless I misunderstood what formatting you were using.


You can use empty scope braces for this task in most languages. It's not a "half-indent" but it gives you the alignment and informs responsible variable usage.



