Three ways to think about Go channels (dolthub.com)
162 points by ingve on June 26, 2024 | 137 comments


I’ve worked with a few large code bases that use channels. In all those code bases, they were about as maintainable as GOTO-based control flow, except that GOTO makes you have unique label names. All the channel-based code I’ve seen just has 100’s of call sites like “chan->send()” and “chan->recv()” sprinkled around, and doesn’t even have the discipline to put related senders and receivers in the same source file.

At least old-school syntax like “GOTO foobar_step_23” and “LABEL foobar_step_23” is greppable.

I greatly prefer programs that “color” functions sync/async, and that use small state machines to coordinate shared state only when necessary.

Go technically supports this, but it doesn’t seem like it is idiomatic (unlike rust, the go compiler won’t help with data races, and unlike C++, people don’t assume that the language is a giant foot-gun).


> All the channel-based code I’ve seen just has 100’s of call sites like “chan->send()” and “chan->recv()” sprinkled around, and doesn’t even have the discipline to put related senders and receivers in the same source file.

Any language that supports true multithreaded parallelism provides foot-rail-guns, no matter if you’re locking, asyncing or channeling. For concurrent data structures, you can’t lazily rely on the compiler to catch your errors. Not even in eg Rust. You should never sprinkle concurrent primitives just because.

For every channel created, you need to determine the invariants of both the channel itself and the contents that pass through it. This is, for the most part, an easier exercise than with locks, because channels provide a way to think about ownership, and crucially, transfer of ownership between threads (goroutines). It also decouples the response to an event from the production of it, whereas with locks you have to dispatch the “next action” from the code that produced the event. I’ve never used a language where it’s so easy to do things like “spin up 10 concurrent requests at a time with individual timeouts, cancel outstanding requests if any of them succeeds or the user cancels, and ensure all tasks are torn down before returning”. It’s near-trivial to get right with channels, contexts and waitgroups.
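
For the curious, here is a minimal sketch of that last pattern, assuming the usual context/errors/sync/time imports; fetch is a stand-in parameter for whatever performs the request, and the 10-at-a-time cap is left out:

    // firstSuccess fires one request per URL, cancels the rest as soon as one
    // succeeds (or when the caller cancels parent), and only returns after
    // every goroutine has finished.
    func firstSuccess(parent context.Context, fetch func(context.Context, string) (string, error), urls []string) (string, error) {
        ctx, cancel := context.WithCancel(parent)
        defer cancel()

        results := make(chan string, len(urls)) // buffered so late finishers never block
        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            go func(u string) {
                defer wg.Done()
                reqCtx, reqCancel := context.WithTimeout(ctx, 2*time.Second) // individual timeout
                defer reqCancel()
                if body, err := fetch(reqCtx, u); err == nil {
                    results <- body
                    cancel() // first success cancels the outstanding requests
                }
            }(u)
        }
        wg.Wait() // all tasks are torn down before we return
        close(results)

        if body, ok := <-results; ok {
            return body, nil
        }
        return "", errors.New("no request succeeded")
    }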

In either case, since concurrency is so difficult, I would say that the Go docs and resources are quite lightweight on how to use their primitives correctly. Right now, it’s a bit “now finish the rest of the fucking owl”. You need discipline to use them, but there are no parental figures around to tell you how to watch out for the dangers.


Kind of sick of seeing this misconception so I want to clear it up: async/await (as most commonly implemented) != pre-emptive and truly parallel multithreading.

All parallel execution is asynchronous, but not all asynchronous execution is parallel.

async/await interleaves execution in a cooperative fashion: a "fiber" runs until it yields by doing something blocking (such as blocking I/O, the most often cited use case). There are no data races and no need to use synchronization primitives since there is never a case where 2 fibers modify a piece of data at the same time: because execution is interleaved, the operations are always ordered one after the other. Whenever a fiber blocks, execution is returned to the scheduler, which resumes execution in one of the other fibers which were blocked.

fork(pthread_create)/join(pthread_join) parallelizes execution: a "thread" runs until it is done or is killed externally (the scheduler can also pause execution). A thread X can wait for another thread Y to be done by calling join(Y). There is a need to use atomics or other synchronization primitives such as channels (or mutexes, or barriers, or semaphores) when 2 threads share a variable, in order to make sure only 1 at a time modifies it, because although threads can be run in an interleaving fashion (and sometimes they are interrupted (pre-empted)), they are usually run in parallel on different CPU cores/threads (if you have an SMT CPU, which most nowadays are: n CPU cores and each CPU core has 2 "CPU threads").

P.S. A CPU thread != OS thread
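
To make the fork/join description above concrete, here is a minimal Go rendering of the same shape (a WaitGroup as the join, a mutex guarding the shared counter); remove the Lock/Unlock and it becomes a textbook data race:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu      sync.Mutex
            counter int
            wg      sync.WaitGroup
        )
        for i := 0; i < 2; i++ { // "fork" two workers
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    mu.Lock() // without the mutex, both workers race on counter
                    counter++
                    mu.Unlock()
                }
            }()
        }
        wg.Wait()            // "join": block until both workers are done
        fmt.Println(counter) // always 2000, thanks to the mutex
    }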


Thank you, this is indeed true, though it's important to be aware of implementation differences: poll-based Futures that are "merged" into a single big Task (as in Rust), or continuation-scheduled, granular, hot-started Tasks (as in C#).

e.g. in C#, the following is true:

    using var http = new HttpClient() {
        BaseAddress = new("https://news.ycombinator.com/")
    };

    // Runs sequentially
    var page1 = await http.GetStringAsync("?p=1");
    var page2 = await http.GetStringAsync("?p=2");

    // Submitted sequentially, everything after the first yield completes in parallel, IO does not block the caller
    var page3 = http.GetStringAsync("?p=3");
    var page4 = http.GetStringAsync("?p=4");

    // Waits for page3, then page4 to complete, they are effectively parallel
    DoSomething(page1, page2, await page3, await page4);
But in Rust, this requires an explicit join handle, and if you would like to parallelize/fork the execution flow you have to schedule a separate task. It requires more effort, but in return you get lower async overhead, which is critical for the areas Rust is applied to (and hot-starting Tasks or Futures would not be easily compatible with the borrow checker, I presume).


async/await is just syntax. There's nothing saying that two async calls can't run fully in parallel. "Blocking" is subjective too; in a way even JS is doing some parallelism.


I covered my ass:

> async/await (as most commonly implemented)

Javascript is famously single threaded, which is probably the runtime on which most async/await code runs today.

But yes, Rust for instance has the Tokio async runtime which is parallel, although it also has cooperative async runtimes.

For those who are reading this and feeling unsure: Go has parallel execution, and it calls `fork()` `go`. Go doesn't feature a `join()`, for that you must use channels:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        done := make(chan bool)

        go func() {
            fmt.Println("goroutine running")
            time.Sleep(2 * time.Second) // simulate some work
            done <- true
        }()

        <-done
        fmt.Println("goroutine finished")
    }
Funny thing: if you have a goroutine which is waiting to receive some value from a channel, but the value never arrives and the other handles to the channel are gone, you get a goroutine leak! The goroutine is blocked, and unrecoverable/unkillable.
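
A minimal sketch of that failure mode and one common way out (giving the receive an escape hatch via a context); fmt/context imports assumed:

    // leaky: nothing is ever sent on ch and the only reference escapes,
    // so the goroutine blocks forever and can never be reclaimed.
    func leaky() {
        ch := make(chan int)
        go func() {
            v := <-ch // never unblocks
            fmt.Println(v)
        }()
    }

    // notLeaky: cancellation unblocks the goroutine even if no value arrives.
    func notLeaky(ctx context.Context) {
        ch := make(chan int)
        go func() {
            select {
            case v := <-ch:
                fmt.Println(v)
            case <-ctx.Done():
                return
            }
        }()
    }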


Isn’t a WaitGroup a more common way to do join?

https://pkg.go.dev/sync#WaitGroup


That's right. So if Golang wanted to do async/await, it would probably be in a fully parallel way.


I mean, the top comment wasn't talking about runtime characteristics, they were talking about the patterns resulting from using co-routines with messaging vs async/await.


You said "All parallel execution is asynchronous, but not all asynchronous execution is parallel."

But later you mentioned threads running in parallel on different CPU cores.

The way I always understood it, is that this is the only real form of parallelism able to happen on a CPU. But that does seem kind of incompatible with your statement at the beginning.


The way I think about it is, when Rob Pike says "Concurrency is not parallelism", I really rewrite it as "All concurrency is parallelism, but not all parallelism is concurrency." If you just think of concurrency as an instance of parallelism, you start to realize there are many ways to achieve concurrency, and thread-safe queues are one of the better ways to do it.

Individual CPU cores are parallel because they are executing multiple instructions simultaneously (even though they are from the same instruction stream).


> All concurrency is parallelism, but not all parallelism is concurrency

As sibling notes, you have it exactly backwards. It should read "All parallelism is concurrency, but not all concurrency is parallelism." I thought maybe this was a typo, but then you said:

> If you just think of concurrency as an instance of parallelism, you start to realize there are many ways to achieve concurrency, and thread-safe queues are one of the better ways to do it.

This is demonstrating parallelism as an instance of concurrency, not the other way around. A thread-safe queue can be processed by multiple tasks running interleaved in a single thread in which case it is executing concurrently, but not in parallel.

> Individual CPU cores are parallel because they are executing multiple instructions simultaneously

This is an implementation detail of the CPU. Not all CPUs execute instructions in parallel.


> "All concurrency is parallelism, but not all parallelism is concurrency."

The other way around. Alternating two executions on one CPU core is concurrency, but not parallelism.


> All the channel-based code I’ve seen just has 100’s of call sites like “chan->send()” and “chan->recv()” sprinkled around

This just sounds like badly written chaotic code. Unfortunately there are some people who seem to think "Go has channels, therefore, I need to use channels as much as possible", but that's almost always a mistake.

Channels shouldn't be used very frequently, but they can be very handy especially in combination with select and/or buffered channels. Usually there should be a clear and "obvious" API.
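
For instance, a buffered channel plus select gives you a non-blocking, best-effort send behind a small and obvious API (hypothetical names, just a sketch):

    // tryPublish queues e if there is room in the buffer and reports whether
    // it did; a full buffer drops the event instead of stalling the producer.
    func tryPublish(events chan<- string, e string) bool {
        select {
        case events <- e:
            return true
        default:
            return false
        }
    }

    // usage: events := make(chan string, 16); ok := tryPublish(events, "user-logged-in")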


Go does have the `-race` flag for test + build commands [1], which can help (not 100% iirc, but some) with data race detection.

[1] https://go.dev/doc/articles/race_detector


You may as well say variables are bad because people are bad at naming variables. I mean, yeah? I don't call my channels "chan", I call them "outputChannel" or "inputChannel" or "foobarProducer" or "twiddleListeners" or whatever.


it's idiomatic in go to use short names like chan though. I ignore that personally and use names like yours, but that hasn't been especially common in the go code or docs I've seen.


> it's idiomatic in go to use short names like chan though

There's a linter which I like[1] that checks variable name length based on the scope of its usage.

The idea is a variable name can be short if it is used in a small lexical scope (e.g. a function parameter in a one-line function) because it won't affect readability much. But names should be descriptive if used in a larger scope (e.g. a constant used across a file).

1. https://github.com/blizzy78/varnamelen


oh this sounds great! thanks for sharing


Copying docs like this as idiomatic code seems problematic in any language. The docs have no clue what your domain is and couldn't give a sensible name for a variable in it.

For example, let's look at "string". Most examples are gonna just call it a "str" or even "s". But if what you have is someone's name you should get called out for calling it a "str" rather than "name".

You shouldn't name all your mathy variables x, y, i, j.

You shouldn't name your channels "chan" or "c". Unless it's just generic channel code.

I don't mind short domain variable names. But "chan" isn't that. It's like calling all your ints "num" or all your strings "str".


> outputChannel … inputChannel

Channels are read/write. What’s the difference between an input and an output channel?


It's in how you use it. One piece of code's output channel might be another piece of code's input channel.


This is why I think each instance of a channel should be a global singleton as soon as they escape a local context. IMO you could solve this by having each instance of a channel be a unique type that must be annotated as that type everywhere it's used. Channels are generic, but you should not generally use them generically.

In Go you can use a type alias to make this work:

type ExitChan <-chan struct{}

Unfortunately it doesn't really enforce anything at the compiler level, but at the very least you can grep for it.


I'm confused about why we're comparing GOTO control flow with channels, since they're completely unrelated. I guess you can make a mess with each when you use them inappropriately? With GOTO, you should avoid it because there are better options (notably, conditional statements).

Unlike GOTO, there isn't really a better alternative for synchronizing parallel programs. Parallel programming (or even concurrent programming, for that matter) is just harder than sequential programming, and if you don't know what you're doing (regardless of whether you use channels or not) you're going to end up with a mess. Channels help to tame parallel programs, but they can't compensate for programmers who don't know how to write correct parallel programs.

And while Rust can protect against data races, data races are a tiny sliver of parallel programming bugs. Far more common are race conditions and deadlocks, against which Rust is powerless.

> I greatly prefer programs that “color” functions sync/async, and that use small state machines to coordinate shared state only when necessary.

Async/await similarly doesn't solve the problem of undisciplined programmers making a mess. If you give undisciplined programmers async/await and shared state, they'll make a mess as easily as they will with goroutines and channels. If you're hiring people who can't be trusted with shared memory parallelism, then you have to take away parallelism or mutability or you have to train them to write correct parallel programs.


I think GP was comparing them because they are both capable of producing spaghetti code. But you can create spaghetti code with functions and modules, too (and I've seen plenty of examples of that). It shouldn't disqualify the entire technique.


> I'm confused about why we're comparing GOTO control flow with channels, since they're completely unrelated. I guess you can make a mess with each when you use them inappropriately? With GOTO, you should avoid it because there are better options (notably, conditional statements).

Notes On Structured Concurrency goes into some depth on this comparison:

https://vorpus.org/blog/notes-on-structured-concurrency-or-g...

If channel sends are being used as invocations, then this description can apply just as well to channels as it does to the go statement.


> Notes On Structured Concurrency goes into some depth on this comparison:

Yeah, this is why we use `sync.WaitGroup`s and similar abstractions in Go. Using bare channels (like using `goto`) is fine for small local things, but if you're building abstractions you should use patterns like those discussed in the article (for example: https://go.dev/play/p/UoB8ECDbaTw --I'm sure this can be abstracted further).
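
For illustration, the smallest version of that kind of wrapper is just a WaitGroup hidden behind a tiny type, so callers never touch a channel or a bare go statement (my own sketch, not the playground code):

    // group is a tiny structured-concurrency helper: every function started
    // with Go is guaranteed to have returned by the time Wait returns.
    type group struct{ wg sync.WaitGroup }

    func (g *group) Go(fn func()) {
        g.wg.Add(1)
        go func() {
            defer g.wg.Done()
            fn()
        }()
    }

    func (g *group) Wait() { g.wg.Wait() }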


Function coloring is orthogonal to async or not. I also greatly prefer function coloring, but I'm agnostic when it comes to either explicit async/await or sync code in green threads. For example, Haskell has function coloring, but it also has Go-style lightweight threads and sync code. It's the nicest experience IMO.


I think the reason channels become harder to understand is that you jump to having a network topology of senders and receivers to reason about.

With async/await, everything is still a function. Your IDE and debugger help you understand when and where a function is called. When debug-breaking on a function you have the stack trace.

Async/await gets you fast context switching for IO ops because everything runs on a single OS thread (no kernel threads for the OS to constantly switch between).

But once you go beyond a single OS process, then you need something like messaging or shared memory on top of async/await to communicate between OS processes/threads.

I think Golang jumps straight to messaging which is better for multiple cores, but misses the first step of async/await which is easier to reason about when you have a single process on a single core.



though Go has generics now, so there is that.

Also the link to SliceTricks goes to stale content


Golang should have async/await and exceptions, both of which can be implemented as syntactic sugar on top of existing features.


Every go developer I know who likes the language disagrees with this sentiment (including myself). If you really want async/await, Go is probably not the language for you. That's ok! There are a lot of languages that other people like that I do not want to use.

We don't need to have one language to rule them all. It's fine for languages to make different choices and trade-offs.


And yet they keep adding features that golang devs scream in unison aren't needed, until they are added.

It's a weird cult-like thing.

And go is reaching the point where like Java it gets crammed down your throat. Which is probably why golang should start adding even more features


Lots of Golang users told me generics aren't needed until they were added. But let's leave "cult" accusations out of this.


Nowhere did I suggest one language should rule them all. Golang has advantages like efficiency and greenthreading, but that's not because they omitted async/await and exceptions. I just think that was a mistake. And I do use Golang on my team.


Having used both async/await and goroutines, I much prefer the latter. But yes, goroutines and channels are powerful tools that require understanding and discipline.


I used languages with exceptions for 16-17 or so years and I've been using Go for 7-8 years.

It took me quite a while (years) to go from thinking that returning error (conventionally as the last return value) was a bit clunky to thinking that this is actually a very good idea. I don't mind the if err != nil pattern. I can't think of a single experienced Go programmer that is sufficiently annoyed by this to complain. I sometimes see people new to the language complain about this, but it tends to fade as they adapt.

With the ability to join errors, do sensible comparisons etc, errors work well for me when needing to express high level errors and their low level root causes. (eg the high level error may be "failed to read configuration" and the root cause might be "permission denied when opening file X").
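
Concretely, with just the standard library, that looks something like this (the names and path are made up):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    type Config struct{ Raw []byte }

    // loadConfig wraps the low-level root cause in a high-level error via %w,
    // so callers can still inspect the cause with errors.Is / errors.As.
    func loadConfig(path string) (Config, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return Config{}, fmt.Errorf("failed to read configuration: %w", err)
        }
        return Config{Raw: data}, nil
    }

    func main() {
        if _, err := loadConfig("/etc/app.conf"); err != nil {
            fmt.Println(err) // e.g. "failed to read configuration: open /etc/app.conf: permission denied"
            if errors.Is(err, fs.ErrPermission) {
                fmt.Println("root cause: permission denied when opening the file")
            }
        }
    }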

I've come to realize that I like Go style errors better than exceptions for two reasons. First off, a lot of code in, for instance, Java uses unchecked exceptions. Which means that the whole idea that you have to handle exceptions goes out the window.

This, of course, varies from codebase to codebase, but it is common enough that having exceptions available in a language doesn't mean people will use them in a way that forces discipline. Which kind of makes exceptions pointless. They just become syntax for emitting and handling errors. And awkward syntax at that. Which is the second reason. Exceptions tend to introduce extra scopes you have to deal with. This is annoying and in some cases it can get somewhat complicated.

Exceptions, the way they are commonly used, are a weak enforcement mechanism made more dangerous by people pretending it isn't. If you want discipline, neither exceptions nor Go errors are the way to go. You probably want to look to something like Rust.

I have a hard time seeing how exceptions would make Go better. Yes, if someone could come up with some syntactic sugar for Go 2 to deal with errors, that would be nice, but I have seen no proposals for this that represent actual improvement.

`if err != nil` is actually extremely readable and to justify new syntax for expressing this it would have to be a _lot_ better. I'm open to listening if people can come up with something.

Exceptions didn't end up doing what people hoped they would do in Java. We should learn from that and try not to pointlessly repeat this mistake. Besides, Go has already made its choice. Retrofitting it is only going to do harm.


The entire difference is whether or not you have to type "if err != nil { return err }" repeatedly. Some languages force you to handle exceptions, and some force you to handle non-exception errors.


Colored functions are a huge hit to take, and the things you do to make colored functions tolerable are also things you can do to mask off the complexity of channels from library callers. Async/await for Go seems like it would not be a win.

I think exceptions are pretty much a dead letter in new language designs at this point? You could get people on board for real algebraic types and matching.


Since the boat has already sailed for Golang having async/await from the start, adding it now would require maintaining compatibility with uncolored functions. Waiting for a channel to have data from a go func is similar. JS has similar compatibility with uncolored code using Promises. Python's migration to async/await is a lot worse because it didn't have coherent parallelism features to begin with.

Exceptions are a smaller deal than async/await. Basically instead of manually writing "if err" everywhere, it's implicit if you aren't catching it. The most popular languages do this. Rust "?" syntax is also a decent compromise.


Go has exceptions. Always has.


Yes, but when writing idiomatic Go one would prefer error types over using panic (panic is Go's term for an exception).

The parent is really complaining about idiomatic Go. Which is fair, because idiomatic Go feels like writing Java 6 (that is to say, making lots of compromises due to limitations of the language).


> panic is Go's term for an exception

No. An exception is a datatype. Not completely unlike an error type in concept, but intended for erroneous conditions that could have theoretically been caught at compile time (i.e. the programmer screwed up), as opposed to conditions that are not decidable at compile time (i.e. something happened in the outside world).

In Go, panic creates an exception, which may be the source of your confusion. The exception contains any metadata you passed to panic along with other data, like a stack trace. This is not panic in and of itself, though.

> but when writing idiomatic Go one would prefer error types over using panic

Not really. Idiomatic Go says that errors are in no way special and that it is faulty to think of them as being special. They are just values like any other. Go exceptions allow attaching any value as metadata.

The official line is that panic stack traversal should not cross package boundaries, but internal use is perfectly acceptable. Even the Go standard library does it.


Thank you for taking the time to reply so thoughtfully!

So, if I'm understanding you correctly:

An exception is an unexpected failure at runtime that could have been caught by the programmer. For example, you might have a switch block with a default block saying "this can never happen". The programmer might miss a case in the switch block leading to an exception. This would be analogous to an unchecked exception (RuntimeException) in Java -- these exceptions are not required to be caught.

In Go, `panic` creates an instance of the type `exception`. When `panic` is executed it attaches metadata like the stack trace to the created exception. When I say "exception" this is the behavior that most people think about.

An error is for an expected failure, like a file potentially not existing when performing I/O operations. Like you said (and this matches my understanding), error types are just normal types -- really any type that implements the `error` interface. There are a lot of utility functions/libraries that act on that `error` interface, like assertions for unit tests, the multierr library, etc.

`error` is similar to a checked exception (Exception) in Java, though it misses the behavior you expect like requiring it to be caught, and the try-catch syntax. Additionally, you don't have the stack trace/metadata automatically added.

Go programmers seem to _heavily_ prefer errors over exceptions, even when they should be using `panic`.

My problem with errors in Go is that they are too easy to ignore accidentally and they don't contain stack traces, so they can be hard to track down. I inherited a codebase where we had _hundreds_ of unchecked errors being silently ignored.

I understand that these problems go away if you have linters or a very detail-oriented team, but I unfortunately have no power over the actions of my team before I join. It's also _very_ hard to convince a team that they've been doing things incorrectly by ignoring errors.


> My problem with errors in Go are that they are too easy to ignore accidentally

This is a problem for values of all types, not just errors. One I'm not sure we've figured out how to solve[1]. There are a few languages out there that force variable assignment to try and address the problem, but even then there is really nothing to say that you haven't accidentally ignored the variable assigned.

> It's also _very_ hard to convince a team that they've been doing things incorrectly by ignoring errors.

To be fair, if you are able to completely leave out entire blocks of logic without anyone noticing even under the most cursory of testing, perhaps it wasn't actually needed? Forgetting entire code branches isn't exactly a subtle bug.

[1] Short of going all the way to formal proofs.


It's a problem for values of all types, which is why exceptions are usually handled specially in other languages. I get that it seems impure, but everything you call has some way to fail <1% of the time that you probably want to handle very differently from any other outcome. It's sensible for a language to force you to either designate a different code path for that or let it bubble up (edit: which, to clarify, Golang does not enforce with error return types, which is what the other person dislikes about it).


I've read a lot of code in my life. I see that exception handlers are sometimes used to carry errors, virtually always exceptions, but almost never anything else. Who is it that you think is passing around email addresses and geocoordinates using exception handlers?

But even assuming it is done sometimes, is the developer going to actually handle it? I don't know how many times I've come across "catch" blocks that are empty or something equally inappropriate. Nothing was gained. It turns out that programmers will still forget no matter how hard you try to hold their hand.

I'm not sure there is any other solution than to test it, and once you get into testing, entire blocks of logic missing are going to stick out like a sore thumb. Forgetting an entire branch is not exactly a subtle bug. At that point it really doesn't matter what language constructs you do or don't have available.


> I don't know how many times I've come across "catch" blocks that are empty or something equally inappropriate. Nothing was gained. It turns out that programmers will still forget no matter how hard you try to hold their hand.

You are right, but the difference is that the programmer is making the choice to ignore that error versus accidentally ignoring the error.


There was no indication that it was an active choice. I'm speaking to where it was clearly incorrect behaviour, not where one honestly wanted to ignore the condition. I'm assuming these programmers would write out the boilerplate (perhaps even automatically by some IDE feature) and then forget to return to it to fill in the logic.

And fair enough. It would be just as easy to forget to do that as it would be to forget to handle errors in any other language. There is no silver bullet here.


Adding a try{...} with an empty catch{} takes a lot more effort than simply forgetting to do something with an error return value.


> This is a problem for values of all types, not just errors. One I'm not sure we've figured out how to solve[1]. There are a few languages out there that force variable assignment to try and address the problem, but even then there is really nothing to say that you haven't accidentally ignored the variable assigned.

The difference is that the happy case is well-tested. If you haven't handled the error-free route correctly you will probably notice that very quickly, unless you aren't testing your code at all, even manually.

e.g. it doesn't matter if you ignore normal variable assignments because the programmer will usually catch that themselves. They will not likely catch all of the possible error assignments without some help.

> To be fair, if you are able to completely leave out entire blocks of logic without anyone noticing even under the most cursory of testing, perhaps it wasn't actually needed? Forgetting entire code branches isn't exactly a subtle bug.

In this case we were silently ignoring errors which caused multiple types of issues that we had to then manually track down. This is an entire class of defects that can be avoided with better tooling, either at the language level with checked exceptions, or with a linter like errcheck.


errcheck seems very sensible.


This is confusing, so I'll put it this way, Golang doesn't have exceptions the way someone coming from Javascript, Java, Python, ObjC... would understand: thrown anywhere and automatically propagated up the call stack unless caught.


Like randomdata said, Go does have this with panic/recover; it's just that Go programmers (from what I have seen) greatly prefer error values, which have to be explicitly returned up the stack rather than propagating automatically.

Go _doesn't_ have any form of checked exceptions, though, which are required by the compiler to be caught by the caller.


A minor nitpick, but checked exceptions are a feature entirely unique to Java (maybe also one or two JVM languages?), so it really shouldn't be discussed so often when touching this topic.


That's fair! My point is that being forced to check exceptions can be beneficial. Go does not have any mechanism for this.

If you want to use a Result/Optional type I think that is way better than exceptions. As long as the compiler can enforce that you are acknowledging (or explicitly ignoring) both the happy and error cases. Ideally the error case for the Result/Optional type would also have an attached stack trace.
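
Go generics can at least express the shape of that, though without exhaustive matching the compiler still can't force the caller to check; a rough sketch:

    // Result bundles a value with its error so the pair always travels together.
    type Result[T any] struct {
        value T
        err   error
    }

    func Ok[T any](v T) Result[T]      { return Result[T]{value: v} }
    func Err[T any](e error) Result[T] { return Result[T]{err: e} }

    // Unwrap hands both back at once: you cannot get at the value without also
    // being handed the error (though Go cannot force you to look at it).
    func (r Result[T]) Unwrap() (T, error) { return r.value, r.err }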


Golang programmers indeed don't prefer panic/recover, but that's not comparable to exceptions in other languages.


Panic/recover is perfectly equivalent to try/catch in most languages. There is some small difference related to exactly how uncaught panics are handled (in Java, an uncaught exception kills the thread that raised it; in Go, it kills the whole Go process); and the amount of built-in type checking for recover vs catch (but you can do manual typechecking and re-panic from a recover if you want). But otherwise they are almost perfectly equivalent.

Of course, Go code uses panics much more sparingly than Java or C# or JS or Python use exceptions.


It's different syntax, less convenient, and less obvious what you're try-catching. Also you can't scope it to just part of the function.

  func outer() {
    defer func() {
        if r := recover(); r != nil {
            fmt.Println("Recovered:", r)
        }
    }()
    // ...
    doSomething() // Can panic
    doSomethingElse() // Can also panic
  }
vs JS:

  const outer = () => {
    try { doSomething() } catch(err) { console.log(err) }
    try { doSomethingElse() } catch(err) {
      console.error("dang it bobby", err)
      throw err
    }
  }
But in any real Golang situation, you'll be working with libraries or teammates who return (type, error) instead of using panics. You might say this isn't inherent to the language, but it kinda is when all the built-in standard libraries like net/http work this way and the official language style guide tells you to do this.

  func outer() {
    r, err := doSomething()
    if err != nil {
      fmt.Println("Recovered:", r)
    }
    r, err = doSomethingElse()
    if err != nil {
      fmt.Println("Recovered else:", r)
    }
  }


> Also you can't scope it to just part of the function.

Why not?

    func outer() {
        func() {
            defer func() {
                if r := recover(); r != nil {
                    fmt.Println("Recovered from doSomething:", r)
                }
            }()
            doSomething() // Can panic
        }()
        func() {
            defer func() {
                if r := recover(); r != nil {
                    fmt.Println("Recovered from doSomethingElse:", r)
                }
            }()
            doSomethingElse() // Can also panic
        }()
    }

I don't get why we keep getting these "expert" comments from people who have never used Go before.

Hell, if you long for try/catch for some reason, you can even get creative:

    func try(fn func(), catch func(any)) {
        defer func() {
            if r := recover(); r != nil {
                catch(r)
            }
        }()
        fn()
    }

    func outer() {
        try(func() {
            doSomething()
        }, func(r any) {
            fmt.Println(r)
        })
        try(func() {
            doSomethingElse()
        }, func(r any) {
            fmt.Println(r)
        })
    }
But then you soon start to realize how awful try/catch actually is, so...


No, I think you just showed that trying to use Go as if it was designed to use panic as an exception is ugly. Other languages use exceptions perfectly well and I would say consequently have a much better error handling story than go. I really like go - a lot. But its error handling is truly awful.


Try-catch is awful if you're forcing it in a language that doesn't really support it. I provided a JS try-catch example above; is there something wrong with it?

The recover is still function-wide in your example, which is what I said. You can nest funcs just to deal with this, but it's ugly, and code reviewers won't like it. I use Go on my team, and yeah I'm not as expert as the team who created Go here, but idk why you keep saying I've never used it. My complaint about "if err" is pretty common among Go users, who would understand the joke of calling it "Errlang."


> I provided a JS try-catch example above; is there something wrong with it?

Yes, it suffers all the same problems. And then doesn't even help you with the errors once you get them! You then have to resort to this kind of craziness to do something with the error:

    try {
        doSomething()
    } catch(error) {
        if (error instanceof FooError) {
            console.log('foo')
        } else if (error instanceof BarError) {
            console.log('bar')
        } else if (error instanceof BazError) {
            console.log('baz')
        } else {
            console.log('unknown')
        }
    }
Which leaves you wondering why you didn't just write:

    err := doSomething()
    switch {
    case errors.Is(err, ErrFoo):
        fmt.Println("foo")
    case errors.Is(err, ErrBar):
        fmt.Println("bar")
    case errors.Is(err, ErrBaz):
        fmt.Println("baz")
    default:
        fmt.Println("unknown")
    }
At least Java gives you multiple catch blocks to make things slightly more sane.

But I get the impression that those who find benefit in passing errors using exception handlers don't handle errors. If you don't have to worry about errors in the code you write, then I think there is a good case to be made that errors as exceptions is a better approach.

That is, after all, the difference between scripts and systems. Scripts can simply fail, leaving the user to try again. Errors as exceptions, or something in the same vein, make sense as the predominant mechanism in scripting languages. Systems, on the other hand, have to deal with failure. You don't get to just bubble it up and let the user deal with it. This is where errors as exceptions becomes a nightmare. Go is unabashedly a systems language. It is not meant to be a scripting language.

Different tools for different jobs.


The point of exceptions is to make it convenient to propagate errors upwards and unlikely that you accidentally ignore an error, as discussed in the other thread. This doesn't magically do your error handling too, but the Golang example isn't any nicer.

They aren't different jobs, though. The two most common uses of JS are backend systems (like you'd often use Golang for) and web frontends, not scripts. Backends will usually catch errors in one place, an HTTP or similar handler, sending back the appropriate status code with maybe a payload. Golang backends do something similar, which is why you see so much "if err != nil... return err" in practice.

Python is more for scripts aka CLIs, but Python backends are fairly common too. And Golang CLIs are also common.


This Golang guru talk of "handling errors" is almost entirely BS in my experience. I've almost never seen a Go library or code example that does anything other than bubble an error it got from its lower levels up to its callers, with some extra context if you're lucky. The rare exceptions are either some retries for network operations, or just ignoring the error and returning some default value.


I hear this in C++ too since we can't use exceptions here. "Unlike some other toy languages, we don't ignore errors, we handle them head-on." Uh no you don't, in fact it took a lot to make people stop ignoring errors or "handling" errors by crashing the entire service.

But at least we have a rule checking that you actually did something with the return statuses.


How are panic/recover not comparable to exceptions?


Well ok, you can draw the similarity in how they both automatically bubble up. But the catching behavior is different, and they look different. Overall they weren't designed for similar uses.


There is nothing to be confused about. The syntax may be slightly different from other languages, but there is no meaningful difference in the design. If you understand exceptions and exception handlers in those other languages, you understand them in Go. There is not exactly a lot you can do with exceptions that would make them differ in any big way.

Why is it that Go attracts so many "experts" who have clearly never used the language even just once?


I understand how to use them in Golang, it's just annoying to have to keep checking and returning errors everywhere.


The language doesn't impose this upon you. You can use the exception handling system to pass values around, just like you might in the aforementioned languages. Even Go's standard library does it. Read the encoding/json source sometime.

You might impose it upon yourself when you start to understand the pitfalls of using exception handlers for anything other than exceptions, but such is engineering. Everything comes with tradeoffs.


One technical challenge to using panic to represent normal exceptions is that most go code is then not exception safe. This was a design choice for the ecosystem if not the language itself (see eg: Effective go about panic not crossing package boundaries, or Google's style guide).

Within a package it can work, but then you'll find language support lacking. For example defer is scoped to a function so you need to write pretty unusual code to make the equivalent of a try block.


What's imposed on a Golang user is two choices, panic/recover or regular error handling, and almost always you take the latter. There's no option of using exceptions the same way you would in most other languages.


Same as all of those other languages mentioned. What gives you the impression that Go is somehow magically different? Or have I misinterpreted you?


Almost every single usage of channels in Go I've encountered was a mistake: buggy and hard to understand. Nowadays when I see channels in use in code I just see it as a code smell and think about how difficult it would be to remove them.

There are multiple ways to use channels wrong, the compiler will not help you and most of the docs and "tutorials" will not cover that. The best thing you can do in Go is stay as far away from channels as you can.


My thought is that channels aren't bad, but they are too low-level. Sometimes they are used as a queue. Sometimes they are used as a future. Indeed, channels combine features like synchronization, signaling, and data transfer. It is this versatility that makes code unreadable.

The worst is when people use channels simultaneously with mutex.
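
For example, the same primitive wears three different hats, which is exactly why readers have to guess the intent at each call site (a sketch; doWork, compute and handle are hypothetical):

    func threeHats(doWork func(), compute func() int, handle func(int), jobs <-chan int) {
        // 1. As a signal: closing the channel conveys "done", no data at all.
        done := make(chan struct{})
        go func() { defer close(done); doWork() }()
        <-done

        // 2. As a future: exactly one value will ever arrive.
        result := make(chan int, 1)
        go func() { result <- compute() }()
        _ = <-result

        // 3. As a queue: many values, consumed until the sender closes the channel.
        for j := range jobs {
            handle(j)
        }
    }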


I must admit, I dislike Go channels. They are hell to debug, as they are unnamed and anonymous, so you can easily get a goroutine dump filled with thousands of identical stack traces that you can't correlate with logs.

Golang needs to have a way to manage the channels better. Naming them and waiting on them would simplify a lot of crusty stuff. Naming is becoming possible, goroutines can already have pprof labels (that are even inherited between goroutines!), so just adding pprof labels to stacktraces will help a lot.

But unfortunately, Go creators are allergic to anything that brings thread-local variables closer.


> They are a hell to debug, as they are unnamed. And they're anonymous

That was one odd thing that stood out to me about Go. I am coming from Erlang so a process having process ID we can keep track of, terminate it, trace, etc, is fundamental to be able to reason about and operate a system. Advertising that they can handle millions of lightweight goroutines but then having no obvious way to identify them, and monitor their lifetimes is kind of strange.

On the lower level, I sort of understand why they did it: they focused on typed channels. So they can have multiple channels of different types potentially talking to the same goroutine. Now, having both named goroutines and channels would be more complicated, and they tried to keep things "simple". Erlang's processes on the other hand, have implicit mailboxes, there is no "mailbox1" and "mailbox2" there is only one, but it's also easier there because there is no static typing. There things are simple because only processes have identities but not mailboxes.


So Erlang unifies both the gorountine and the channel into a single construct with an identifier?


We could say that, yeah. It's not unlike an OS process with a pid. You can monitor the process using a pid, you can kill it, you can trace it, get its stack trace, etc. But unlike an OS process it also has a mailbox associated with it, so you can send it messages and it can receive and pattern match on them.


With the advent of generics, there’s no reason we can’t get libraries that wrap common objects.

As a thought experiment, I could see Optionals, Named Channels, Collections etc being implemented in some common but non-stdlib library. Of course now we’ve recreated all those languages people hate when they talk about go… but those features exist for a reason.


I don't think you're getting useful optionals without matching.


There are type switch statements, at least, and libraries like https://github.com/phelmkamp/valor can use them to provide more concise abstractions. The really frustrating thing is that interfaces can't have methods that introduce new type parameters, so we're stuck with standalone functions like `optional.Map(optionalFoo, barFunc)` instead of `optionalFoo.map(barFunc)`.


I’d take Java-style optionals but yea matching would be welcome to me.


Stacks printed by the receiver are generally incomplete/useless for systems that communicate. The error is often in the sender, but is reported by the receiver. This isn't unique to Go and channels.

Consider a Node app that calls a C++ web service. The C++ web service returns "503 Service Unavailable" because your request causes the server to segfault. The useful stack trace for fixing this problem is on the C++ side in a core dump, not whatever Node prints when the HTTP request fails.

What you want to do on the Go side is have your sender be able to relay an error to the receiver, and to wrap errors:

    func doWork(w work) (result, error) {
        return result{}, errors.New("too lazy to do work")
    }

    func doWorkQueue(q <-chan work, r chan<- result) {
        for w := range q {
           result, err := doWork(w)
           if err != nil {
              r <- errorResult(errors.Wrapf(err, "doWork(workID=%v)", w.ID))
              continue
           }
           r <- successResult(w, result)
        }
    }
    ...
    go doWorkQueue(...)
    submitWork(work)
    for r := range resultCh {
        if err := r.Err(); err != nil {
            log.Error("failed to do work: %v", errors.Wrap(err, "recv result"))
        }
    }
 
This now errors with something like:

    main.go:20: failed to do work: 
    main.go:20: recv result:
    main.go:9: doWork(workID=42): 
    main.go:2: too lazy to do work
        
Or if your `errors` library doesn't grab the caller func/line number, just "failed to do work: recv result: doWork(workID=42): too lazy to do work". This should be enough to debug the problem, regardless of what side the problem happens on.

(At work we use a hacked up copy of github.com/pkg/errors, which grabs the entire stack trace at each of the errors.Wrap/errors.New call sites. This results in an exceedingly verbose trace that takes up your entire screen, but is at the very least... thorough.)

The reason that error wrapping is essential is because stack traces don't capture critical information, like what iteration of the loop you're on, how many times your retry loop ran, which work id failed and returned "too lazy to do work", etc. The error wrapping is where you get to add this in. This is, again, the same as every language... the random exception you throw in your HTTP client when the server is down isn't aware of the work item id that caused this exception, so you have to catch and re-throw with that information or you have an undebuggable mess.

What's nice about Go is that it's really easy to add this contextual information, either with fmt.Errorf in the standard library, or with very small functions that capture some information automatically (errors.Wrap/errors.Wrapf). I will say that not doing this is my #1 complaint, and it's exceedingly common to just "return err". I have spent 4 years fixing the work codebase to wrap errors, and people still add new unwrapped errors (because our linter allows an escape hatch, errors.EnsureStack, which some people really like). It then results in some oncall engineer wasting a week debugging a simple problem. Sigh! But that's humans being humans; Go makes it very easy to do the right thing. You just have to tell your team to do the right thing and to make them want the right thing.


Of course stack traces aren't a silver bullet. However, it helps to be able to correlate stacks to requests from logs. It's especially helpful when you're debugging deadlocks.


> But unfortunately, Go creators are allergic to anything that brings thread-local variables closer.

This is the most frustrating thing about Go for me. They use thread-local storage within the stdlib, but refuse to let us plebs have access to it.


May I offer you fast thread-locals in C# in these trying times?


Just an egg please


This almost kind of feels like you're reinventing the Actor model (e.g. Erlang/Elixir).


go concurrency is built on a similar concept (communicating sequential processes), if I'm not mistaken. the languages aren't very far apart in this regard to start with!


Yeah, I'm actually pretty familiar with CSP (I'm getting a degree in a related topic using a timed variant of it), though Go breaks from the formal CSP definition in a few ways (if nothing else, vanilla CSP doesn't officially allow buffered channels).

I do find the Actor model a bit nicer at an engineering level, simply because I find being able to refer to things by name to be a bit more intuitive, and since Actors (at least in Erlang) can manage their own state it's a lot easier to kind of shape them into doing anything you want (e.g. buffered or non-buffered). I'm much less familiar with the formal definitions of Actors though.


I'll admit that I like Go channels because Hoare was the professor when I was doing my doctorate and so CSP was what I used in my thesis[1], but it's worth understanding what unbuffered channels give you: it's message passing with synchronization. They are very simple to reason about and make writing concurrent code a breeze.

[1] https://blog.jgc.org/2024/03/the-formal-development-of-secur...


I come from the complete opposite side: I have been writing some Concurrent ML code in Guile Scheme recently, and it has really made me understand why I always disliked Go's channels.

There are just so many things that are just slightly wrong to make it unpleasant. Better than many other things, but still kind of frustrating. I really think unbuffered channels should have been the default.


i like the pattern, but message passing where the message is actually just a pointer to a mutable struct (as allowed in Go) kind of defeats the purpose, doesn't it?


No, it’s an idiomatic way of expressing who’s in charge of a given piece of data.

There’s nothing in the type system preventing data races. But that’s orthogonal to channels. They just help you to structure the code.


It kinda does yeah, you probably shouldn’t do that


It’s fine if that’s the only (writable) pointer to the struct. Which is something go doesn’t enforce, but Rust does.

I often dream of Go with Rust’s borrow checker. Sigh…


Wouldn't that be Rust?


Rust doesn’t have a direct analogue to go’s concurrency primitives. The pieces are there, but you have to jump through hoops to use them effectively.


> One literal interpretation of Pike's quote is that message passing is different than sharing memory for pedantic reasons. Copying values between sender/receiver stacks is safer than sharing memory. But it is also possible to send values with pointers into channels, so that doesn't really prevent developers from abusing the model. I don't think avoiding shared memory is a top of mind consideration for developers deciding whether to use channels.

I think the point of Pike's quote is that, when a goroutine gets a pointer received from a channel, it gets the ownership of the values referenced by the pointer and other goroutines give up the ownership. This is a discipline Go programmers should hold but not a rule enforced by the language.
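
In code, that discipline looks something like the following; the comment carries the rule, not the compiler:

    type job struct{ data []byte }

    func producer(out chan<- *job) {
        j := &job{data: make([]byte, 1024)}
        out <- j
        // Ownership of j now belongs to the receiver. The producer must not
        // touch j.data after this send, but nothing in the language enforces it.
    }

    func consumer(in <-chan *job) {
        for j := range in {
            j.data[0] = 1 // safe only because the sender gave up ownership
        }
    }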


As I like to say, Rust may be one of the only languages that has built ownership right into its type system, but the problems that "creates" with programming in Rust are actually problems revealed by programming in Rust, not created. The ownership problems are 100% there in other languages too, it just isn't a compiler error. It can help everyone, in any threaded language, to be thinking like Rust, even if the compiler and type system do not directly help you.

A simple and useful degenerate case of this is that whenever any message is sent, complete ownership of the entire transitively-reachable set of values contained in that message is by default transferred to the receiver. This is worthwhile even if you must explicitly construct some safe value to pass in order to maintain this promise. This keeps things generally easy to reason about without the full complication and power that Rust offers. At the very least, whenever I violate this, I have lots of comments about what and why on both sides of that transaction in my code base.

(I do sort of wish there was a variant of a "go" statement that I could use that made me statically promise that all communication in and out of a given goroutine must be solely in the form of copied values, so I could guarantee that goroutine was an "actor". This is, admittedly, just me wishing very pie-in-the-sky. I doubt it could be turned into a practical proposal.)


> the problems that "creates" with programming in Rust are actually problems revealed by programming in Rust, not created.

I would dearly like to see Rust advocates be more careful when they talk about this aspect of their beloved language.

The ownership semantics of safe Rust programs prevent programs from being written, and in the process, eliminate entire classes of bug. But they also prevent a literally infinite number of correct programs from being written, without resorting to the unsafe escape hatch.

Sometimes that's a good tradeoff. Often you can design your program around those restrictions and enjoy the benefits they bring. But what you said is much too strong a claim! Ownership semantics prevent or inhibit a great deal of simple patterns which are in fact possible to implement correctly. The doctrine that every difficulty a programmer runs into trying to color inside the lines of safe Rust is a case of Rust preventing them from doing something wrong, is simply incorrect.

Rust's memory model is highly opinionated, and in fact, restrictive. The pitch, and it's a strong one, is that the benefits of working within that model are worth learning how to work within those restrictions. But it's also very clear that ownership semantics create problems as well as prevent them. Telling people that the tradeoffs are worth it is good advocacy. Pretending there are no tradeoffs is not.


Yeah, I feel like it's generally extraneous to my main point to observe that Rust isn't perfect at it, and it can be wrong, in consequential ways.

My main point is, the general concept of ownership and having to care about it is present everywhere, though. If you prefer to say that it reveals it but imperfectly, I'm down with that. But I think it's important to understand it isn't entirely 100% responsible for creating it. Every shared-state threaded language has the issues, it just doesn't have a type system and compiler (imperfectly) helping you with them. It's not a license to write Go or any other language while oblivious to ownership issues just because the compiler won't complain.


My point wasn't actually that Rust's ownership model could be improved. While that's also true, features like being able to take disjoint mutable borrows of two fields of a struct are generally agreed to be good things to have, and I expect the language team to solve them eventually.

It's simpler than that: Rust prevents you from doing a lot of correct things in the safe subset of the language. It does it for a good reason, because that lets the compiler prove a bunch of nice properties, but it's a fundamental tradeoff: in exchange for not having to deal with bugs in shared mutable access to data, it doesn't let you write programs that way. I actually think that Rust is a great choice for multithreaded programs which want to follow a one-writer-many-readers pattern, because that's very difficult to do correctly without the borrow checker.

But when you say this:

> The ownership problems are 100% there in other languages too, it just isn't a compiler error.

A straightforward read of that is that anything you can't do in safe Rust is a bug. That's very far from true, it's in fact backward: what you can do in safe Rust isn't an ownership bug, by construction. But I see this implication reversed in a lot of Rust advocacy, and it directly contributes to the Rust fatigue which you'll see on this very website, and many other places.

It's the difference between prohibiting potentially unsound ownership policies (what Rust does) and prohibiting only unsound ownership policies (which safe Rust does not).

> My main point is, the general concept of ownership and having to care about it is present everywhere

I completely agree with this, fwiw.


To be technical, Rust "creates" a lot of errors that aren't there because it doesn't, and can't, have full understanding of the control flow. Well, it's less that it creates them and more that it complains about non-issues. That's why, over time, code that used to be invalid Rust has become allowed as they've improved the borrow checker.


This remark isn't relevant to the subject under discussion.

You're talking about the fact that Rust's lifetime analysis was initially a simple lexical check, and has evolved toward properly understanding control flow beyond just lexical scope (e.g. it is aware that both branches of an if/else cannot be taken).

This has nothing to do with ownership, a completely separate part of the type system from lifetime analysis, and how it prevents data races by controlling the sharing of data between concurrent processes, which is what this discussion is about.


Y’all should really give Erlang/Elixir a try… this stuff is so much more trivial to deal with in that ecosystem and it pains me that it doesn’t get as much attention as Go does.


They're very different languages and have different deployment dynamics. Elixir is a higher-level language. Other languages are lower-level. People will always be frustrated to see a language "shoehorned" into a higher- or lower- level setting where their preferred language might fit. We're an Elixir/Go/Rust shop. You could not use Rust and Go interchangeably for what we use them for, and you couldn't use Elixir almost anywhere we use Go.


I am curious about Elixir, but I really can't bring myself to spend time mastering a language with dynamic types. You are removing one class of errors from your application, but at the same time inviting back another that was solved decades ago.

I'm aware that this is a very controversial take, because lots of people love duck-typed languages, but after working in large codebases, they are a hard pass for me. There's a reason why TypeScript was created for JavaScript, Sorbet was created for Ruby, and type hints are so popular with Python. I think I've seen something about gradual typing being introduced in Elixir, but gradual typing is still a long way from enforcing type safety. Until then, I'll stick with Go.

If I am mistaken and there is a "TypeScript for Elixir", then I would love to know about it.


I've worked with strongly typed languages and duck typed languages. I've also worked a lot with Go and Elixir. In my opinion, the tradeoff between the elegance of Elixir's actor-based functional style vs. Go's warty imperative style is more compelling than static typing vs. duck typing.

I love working with Elixir. Go is something I tolerate.


That's a fair trade if you don't like Go, but I love working with it, so for me it has to be something more compelling than trading safe types for actor-based functional style. Go's imperativeness never bothered me.


Elixir is in the process of adding a gradual type system right now FWIW.


Indeed, I mentioned it in my comment. Thanks.


I agree. Having worked a lot with both, I think Elixir is by far the better language, especially for web programming.


As an avid Go user, I think async/await is probably a nicer construct for most usecases. But Go channels work fine as long as you keep to the basics and documented patterns.

I can recommend making a utility function which accepts a set of anonymous functions and a concurrency factor. I've since extended this function with a version which accepts a rate limiter, jitter factor, and retry count. This handles most cases where I need concurrency (batches) in a simple and safe way.
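To make that concrete, here's a minimal sketch of that kind of utility, assuming the simplest version (no rate limiter, jitter, or retries); the name RunConcurrent and the exact signature are just illustrative, not the actual function described above:

    // RunConcurrent runs fns with at most `concurrency` of them in flight
    // at once (concurrency must be >= 1). Needs "sync" imported.
    func RunConcurrent(concurrency int, fns ...func() error) []error {
        sem := make(chan struct{}, concurrency) // counting semaphore bounds parallelism
        errs := make([]error, len(fns))
        var wg sync.WaitGroup
        for i, fn := range fns {
            wg.Add(1)
            go func(i int, fn func() error) {
                defer wg.Done()
                sem <- struct{}{}        // acquire a slot
                defer func() { <-sem }() // release it when done
                errs[i] = fn()           // each goroutine writes only its own index
            }(i, fn)
        }
        wg.Wait()
        return errs
    }

Usage is just RunConcurrent(4, tasks...) and then checking the returned errors; rate limiting and retries can be layered on inside the wrapper goroutine.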


I guess the grass is always greener on the other side. As a Rust user, I’m constantly thinking “why didn’t they just give us go channels instead of this crazy async/await nonsense?”


Rust has one form of channels in the standard library. More kinds available from crates.io. If you prefer channels, use them!


Because one crate uses one channel implementation, and another crate uses a different one… this is really something that should be solved at the language / stdlib level, as Go does, but Rust falls a little bit short. There's mpmc and the Send trait, but they alone are insufficient. So instead we get crate dependency hell that is only somewhat mitigated by cargo, and colored functions due to the async implementation.


Rust async/await is less nice than Go coroutines. There are things you can't do and weird rules around Rust async code. Every Go chan allows multiple readers and multiple writers, but Rust stdlib and tokio default to single-reader queues.


Channels and async/await aren’t really equivalent features. Beyond the fact that they both deal with concurrency.

You can do channels (message passing) on top of async await.


Well written. I'll recommend it to all newcomers from other languages.

As for uncovered topics or a part 2: long-running Go channels can be a nightmare.

You need to implement some kind of observability for them.

You must find a way to stop, restart, upgrade, and even version the payload to manage your channels/goroutines.

A common problem with channel overuse is so-called goroutine leaks. They happen more often than most devs think, especially if library writers start goroutines in init() to maintain a cache or do some background cleanup job. It's good to scan all the packages you use for such surprises.
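For anyone who hasn't hit one of these yet, a minimal sketch of the classic leak shape, where a goroutine blocks forever on a send nobody will receive:

    func leaky() <-chan int {
        c := make(chan int) // unbuffered
        go func() {
            c <- 42 // blocks until someone receives... which may be never
        }()
        return c
    }

    // If the caller invokes leaky() and then bails out early (timeout, error,
    // abandoned select case), that goroutine is stuck on the send for the
    // lifetime of the process. A buffered channel, or a ctx.Done() case in a
    // select around the send, avoids it.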

You might also find concepts like "durable execution" or "workflow" engines down the road.


One pattern for long-running channels is to have another polling / heartbeat channel on the goroutine. This is often redundant, since you can achieve the basic functionality with contexts, but it can be useful when you need to implement some reporting, or more advanced load balancing or back-off strategies.
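A rough sketch of that shape, purely illustrative (the interval and what you put on the heartbeat channel depend on your reporting needs; needs "context" and "time" imported):

    func worker(ctx context.Context, jobs <-chan int, heartbeat chan<- time.Time) {
        tick := time.NewTicker(5 * time.Second)
        defer tick.Stop()
        for {
            select {
            case <-ctx.Done():
                return
            case <-tick.C:
                // Non-blocking send so a slow or absent monitor never stalls the worker.
                select {
                case heartbeat <- time.Now():
                default:
                }
            case j, ok := <-jobs:
                if !ok {
                    return
                }
                _ = j // do the actual work here
            }
        }
    }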


I didn’t realize why channels are part of the language vs a standard library feature until I read this. Now it makes sense that it’s about compiler optimizations with goroutines.


Go channel behaviors are pretty annoying. For one, I always forget the panic scenarios (e.g. writing to a closed channel); I feel like the type system could've done more here.

I recently wrote a simple function that maps out tasks concurrently and can be canceled by a context.WithCancel, or if a task fails. The things that cancel the task mapper need to coordinate very carefully on both sides of the channel so that channels are closed and publishers stop sending in the right sequence. The number of switches/cancels/signals quickly explodes around the coordination if you get too cute about how to do it (e.g. reading from the error channel to stop the work).

Frankly I'm not sure I still got it right [1]. And this is probably the most unsettling part. Rereading the code I can't possibly remember the cancellation semantics and ordering of the short mess I created. Now I'm wondering if mutexes would've made for more understandable code.

[1] https://gist.github.com/pnegahdar/1783f0a4e03dc9a3da43478994...


Writing to a closed channel at all is generally a design smell. Generally this is consumers closing channels, which is not a good idea. Only producers should close channels, generally only a producer that is the sole owner of the channel, and then, being the sole owner, it should "know" that it closed the channel and by its structure never write to it again. You probably are overcomplicating matters and need just one top-level channel, which in modern Go is the one contained in a context.Context, to be the one and only stop signal in the system.

I think everyone goes through a bit of a complication phase with channels, I recognize your issues in code I've written myself, this is definitely not a "only a bad person would have this problem" post. But, yes, there probably is an organization that solves this problem. There's an art to using them properly. A good 50% of that art may well be that consumers should never close channels.

(Another one is the utility of sending a channel as part of the message in a different channel. It is intuitively easy to think that channels must be very expensive, but when they're not participating in a select, they're just a handful of machine words in RAM. It is perfectly sensible to send a message to a "server" process that contains the channel in it to send the reply to, because it only costs a few machine words in allocation and precisely the one sync operation it will ever participate in. Channels do not have to amortize their costs with lots of messages; a 1-message channel is practical. This also cleaned up some complicated code I had before, trying to prematurely optimize something that was already very cheap.)
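A sketch of that request/reply shape, with illustrative names:

    type request struct {
        key   string
        reply chan string // per-request, 1-message channel; just a few words of allocation
    }

    func server(reqs <-chan request) {
        for req := range reqs {
            req.reply <- "value for " + req.key // the only send this channel will ever see
        }
    }

    // Caller side:
    //   r := request{key: "foo", reply: make(chan string, 1)}
    //   reqs <- r
    //   v := <-r.reply

The buffer of 1 means the server's reply never blocks even if the caller has given up and gone away.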

The other thing is, if you haven't looked at https://pkg.go.dev/golang.org/x/sync/errgroup , you may want to. golang.org/x/ is the not-as-well-known-as-it-should-be extended standard library; things the Go team are not willing to put into the 1.0 backwards compatibility promise so they retain the ability to change things if necessary, but otherwise de facto as high quality as the standard library, modulo some reasonably well-labeled exceptions. Contra some claims that it is impossible to abstract in Go, many of these common concurrency patterns have been abstracted out and you can and should grab them off the shelf.
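To make the errgroup suggestion concrete, a sketch of the common "run a batch, cancel the rest on the first error" pattern; fetchOne here is a hypothetical stand-in for whatever your task actually is:

    func fetchAll(ctx context.Context, urls []string) error {
        g, ctx := errgroup.WithContext(ctx)
        for _, u := range urls {
            u := u // not needed on Go 1.22+, harmless on older versions
            g.Go(func() error {
                // The derived ctx is cancelled as soon as any task returns an
                // error, so each task should watch it.
                return fetchOne(ctx, u) // hypothetical task function
            })
        }
        return g.Wait() // first non-nil error, after all goroutines finish
    }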


People are missing a key insight about channels, which is that close is a write. Typically, you want one piece of code reading, and another piece of code writing, and that's where their code goes wrong. A panic is appropriate in this case, as it's always a bug.

The underlying feature that people are looking for is a TCP-like "either side can kill the other side" feature. This is what contexts are.

   func read(ctx context.Context, c <-chan any) {
       for {
           select {
           case <-ctx.Done():
               return
           case x, ok := <-c:
               if ok {
                   fmt.Println(x)
               } else {
                   return
               }
           }
       }
   }

   func write(ctx context.Context, c chan<- any) {
       for i := 0; i < 10; i++ {
           select {
           case <-ctx.Done():
               close(c)
               return
           case c <- i:
           }
       }
   }
In this example, the type system prevents the read function from writing to the channel (avoiding the panic in the writer if the reader were to close the channel), and the context causes both sides to exit cleanly. You can plumb in the cancel function returned from context.WithCancel/context.WithCancelCause to allow the reader to kill the writer. (Incidentally, the reason you have to do this is because context.Done() is type <-chan, not type chan, so you can't just close(ctx.Done()).)

The writer can still kill the reader by closing the channel, because closes are writes.


More importantly, close is a _broadcasting_ write, which is an incredibly useful feature when you want to build nested structures out of your channel paths.
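A tiny sketch of the broadcast property: one close releases every receiver at once, and any later receive returns immediately:

    done := make(chan struct{})
    for i := 0; i < 3; i++ {
        go func(i int) {
            <-done // all three goroutines block here...
            fmt.Println("worker", i, "shutting down")
        }(i)
    }
    close(done) // ...and this single close releases all of them

(In a real program you'd also wait for the goroutines to finish, e.g. with a WaitGroup.)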


Yup! This is exactly how context.WithCancel is implemented; <-ctx.Done() is the signal that everything that cares about cancellation listens on. The returned CancelFunc just does `close(thatChannel)`.

I prefer explicitly using contexts for cancellation, but in a pinch:

   c := make(chan struct{})
   foo(c)
   bar(c)
   close(c)
is a useful pattern. (I do this a lot in tests that involve concurrency; the test calls t.Cleanup(func() { close(c) }) and then if the test fails with t.Fatal or something, all background work is killed.)


Exactly right. These "done channels" were extremely widely used as a doneCh parameter before contexts were introduced in 1.7.


x/sync is great. I use things like singleflight frequently.

> Generally this is consumers closing channels, which is not a good idea.

I've heard this too. I updated my comment to include the code snippet. But any general tips on when the consumer needs to cancel the rest of the work because of an error?

I don't recall now but I think I ended up writing my own for fast-fail and generic support. Really hoping the x/sync packages get generic support soon!

> a 1-message channel is practical

Great tip, every time I add a little signal one I feel like I did something wrong, maybe that is the move.


"But any general tips on when the consumer needs to cancel the rest of the work because of an error?"

Yes, another channel which everyone watches for cancellation.

In modern Go, that should be a context and its .Done() channel.

This is another "I'm not saying you suck because I did this myself a while before I figured it out", but having a channel around just to cancel things is fine. It doesn't cost anything significantly extra. I have some still in my code bases where I've just never had any reason to upgrade to contexts.


Not speaking to whether the language should make this easier without an external library, but wouldn't https://github.com/sourcegraph/conc help in that scenario? It has context-aware and error-aware goroutine pools, which seems like the exact fit for what you are trying to do. Although admittedly I didn't dive too deep into your code.


Might have simplified this with errgroup [0].

[0] https://pkg.go.dev/golang.org/x/sync/errgroup


Channels can be a useful primitive, but without more structure they're tough to reason about. I really wish that the Go team would provide more implementations of common patterns and publicize them; things like x/sync/errgroup are fantastic, and I'd love to see more.


Timely article for me, since I'm thinking of using channels in a real-world application for the first time.



