Search had been a blocker, but that's coming soon; beyond that not sure that there's any reason other than inertia. Alacritty is totally fine, but excited for the Ghostty focus on performance (obviously), and the font rendering stuff looks excellent (though TBD how much of that we can "just use" vs needing to copy-pasta)
Author here. Seemed like the least bad of the options.
Being able to comment out sections of a config file easily is a prime use case, and that really implies using newlines as delimiters, and well, you fall into this trap...
Why is the ability to comment-out entire sections of a config file a primary use case? What are the motivating requirements for this feature?
That aside, you don't need semantically-meaningful indentation to support commenting-out whole sections, see e.g. any braces-based lexer/parser that supports `/* ... */` style comments.
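For a concrete (if contrived) illustration: in a JSON-with-comments style config, a whole section can be disabled by wrapping it in a block comment, with no indentation or delimiter rebalancing required (the keys here are made up):

```jsonc
{
  "server": { "port": 8080 },
  /* temporarily disabled:
  "metrics": {
    "endpoint": "/metrics",
    "interval": 30
  },
  */
  "logging": { "level": "info" }
}
```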
We use node.js to run a number of language servers and formatters (which are often written in node due to the VSCode ancestry...).
There've been a lot of requests to disable language servers by default, but I think that's not the right default for most users – things should work out of the box.
That said, better control over this is definitely something we will add.
We went back and forth on this a lot, but it boiled down to wanting only one dependency graph per module instead of two. This simplifies things like security scanners and other workflows that analyze your dependencies.
A `// tool` comment would be a nice addition; it's probably not impossible to add, but the code is quite fiddly.
Luckily for library authors, although it does impact version selection for projects that use your module, those projects do not get `// indirect` lines in their go.mod, because those packages are not required when building their module.
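As a rough illustration of what the merged graph looks like in your own go.mod (the module paths are real, the versions are illustrative): adding a tool records a `tool` directive, and the tool's own dependencies then appear as `// indirect` requirements in the same file:

```
module example.com/myproject

go 1.24.0

tool golang.org/x/tools/cmd/stringer

require golang.org/x/tools v0.29.0

require (
	golang.org/x/mod v0.22.0  // indirect
	golang.org/x/sync v0.10.0 // indirect
)
```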
Thank you for working on it. It is a nice feature and still better than alternatives.
I'm not a library author and I try to be careful about what dependencies I introduce to my projects (including indirect dependencies). On one project, switching to `go tool` makes my go.mod go from 93 lines to 247 (excluding the tools themselves) - this makes it infeasible to manually review.
If I'm only using a single feature of a multi-purpose tool for example, does it matter to me that some unrelated dependency of theirs has a security issue?
>If I'm only using a single feature of a multi-purpose tool for example, does it matter to me that some unrelated dependency of theirs has a security issue?
How is anyone supposed to know whether there's an issue or not? To simplify things, if you use the tool and the dependency belongs to the tool, then the issue can affect you. Anything more advanced than that requires analyzing the code.
What if I'm already using techniques, such as sandboxing, to prevent the tools from doing anything unexpected? Why bring this entire mess of indirect dependencies into my project if I'm just using a tool to occasionally analyze my binary's output size? Or a tool to lint my protobuf files?
If it's a build dependency, then you have to have it. If you don't like the size of the tool then take it up with the authors. I'm not a Go programmer by the way, this is all just obvious to me.
The functionality we're discussing can be used for tools that are not build dependencies. They may be important for your project and worth keeping contributors on the same version of, while not being part of the build.
It will still add the dependencies of those tools as indirect dependencies to your go.mod file, that is what's being discussed.
If you use the tool to develop your project then it is basically a build dependency. That is a sweeping generalization, but it's essentially correct in most cases.
That is a bit much to ask for IMO. In any case, the project may not be aware of how any given developer will use the tool. So who is to say that if you change the order of two parameters to the tool, the tool might not take a different path and proceed to hack your computer? You really don't want any of this problem. What you should ask for is for the tools' dependencies to be listed separately, and for each tool to follow the Unix philosophy of "do one thing well."
There is a lot of merit to this statement, as applied to `go tool` usage and to security scanning. I just went through a big security-vendor analysis and POCs. In the middle of it I saw Filippo Valsorda post [1] about false positives from the one-stop shops, while govulncheck (language-specific) did not have them. At the same time, there was one vendor that did not false-positive on the reachability checks for vulns. While not always as good, one-stop shops also add value by removing a lot of similar/duplicated work. Tradeoffs and such...
The similar/duplicated stuff can be rolled into libraries. Just don't make the libraries too big lol. I suspect there's less duplicated stuff than you think. Most of it would be stuff related to parsing files and command parameters, I guess.
Reachability analysis on a tool that could be called by something outside of the project? We're talking about tools here after all - anything that can run `go tool` in that directory can call it. The go.mod tool entry might be there just for versioning.
It probably doesn't, and good vulnerability scanners like govulncheck from the Go team won't complain about them, because they're unreachable from your source code.
Now, do you care whether some development tool you're running locally has a security issue? If yes, you needed to update anyway; if not, nothing changes.
> it boiled down to wanting only one dependency graph per module instead of two
Did you consider having `tool` be an alias for `indirect`? That would have kept a single dependency graph per module, while still letting one read one's go.mod by hand, rather than using `go mod`, to know where each dependency came from and why.
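To sketch what that suggestion might look like (purely hypothetical; this is not something the go command produces today), dependencies pulled in only via a tool directive would be annotated distinctly, so a human reading go.mod can see why they're there:

```
require (
	// hypothetical annotation, not real go.mod output:
	golang.org/x/mod v0.22.0  // tool
	golang.org/x/sync v0.10.0 // tool
)
```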
I know, a random drive-by forum post is not the same as a technical design …
Having not looked at it deeply yet: why require building every time it's invoked? Is the idea to get it working and then add build caching later? Seems like a pretty big drawback (bigger than the go.mod pollution, for me). GitHub runners are sllooooow so build times matter to me.
`go tool` doesn't require a rebuild, but it does check that the tool is up to date (which requires doing at least a bit of work).
This is one of the main advantages of using `go tool` over the "hope that contributors have the right version installed" approach. As the version of the tool required by the project evolves, it continues to work.
Interestingly, when I was first working on the proposal, `go run` deliberately did not cache the built binary. That meant that `go tool` was much faster because it only had to do the check instead of re-running the `link` step. In Go 1.24 that was changed (both to support `go tool` and for some other work they are planning), so this advantage of `go tool` no longer applies.
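For reference, the Go 1.24 workflow under discussion looks roughly like this (the specific tool here is just an example):

```sh
# record the tool in go.mod (adds a `tool` directive plus any requirements)
go get -tool golang.org/x/tools/cmd/stringer

# run it by name; the go command reuses the cached binary unless it's stale
go tool stringer -type=Color
```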
For me (maintainer of Zed's vim mode) it comes down to a few things:
1. LSPs differ per-language, and so I'm never sure whether I'll get lucky today or not. It's more reliable for small changes to talk about them in terms of the text.
2. LSPs are also quite slow. For example in Zed I can do a quick local rename with `ga` to multi-cursor onto matching words and then `s new_name` to change it. (For larger or cross-file renames I still use the LSP).
3. I, being human, err continually; for example, in Rust a string is `"a"` and a char is `'a'`. It's easy for my JavaScript-addled brain to use the wrong quotes. I don't know of any LSP operation that does "convert string literal into char literal" (or vice versa), but in vim, it's easy.
We are slowly pulling in support for various vim plugins; but the tail is long and I am not likely to build a vim-compatible Lua (or VimScript :D) API any time soon.
For example, most of vim-surround already works so you could get the most used parts of mini.surround to work with just a few keybindings `"s a":["vim::PushOperator", { "AddSurrounds": {} }]`, but rebuilding every plugin is a labor of love :D.
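In a Zed keymap.json, that binding might look roughly like this (the context predicate is my guess at a reasonable normal-mode condition; adjust to taste):

```json
[
  {
    "context": "vim_mode == normal",
    "bindings": {
      "s a": ["vim::PushOperator", { "AddSurrounds": {} }]
    }
  }
]
```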
I’ve been working on a configuration format [1] that looks surprisingly similar to this!
That said, the expectation in CONL is that the entire structure is one document. A separate syntax for multiline comments also enables nice things like syntax highlighting the nested portions.
I do like the purity of just `a = b`, but it seems harder to provide good error messages with so much flexibility.
Funnily enough, I've been working on solving the same problem concurrently. Though in my _very_ biased opinion, I think CONL is easier to read and write.
Agreed that this is much better than the OP. That said, my general opinion is that whitespace-only indentation should be avoided, especially in a serialization format, due to the inherent ambiguity of whitespace characters and the resulting human mistakes. When I designed CSON [1] I strove to make it as readable as possible without indentation for that reason.
Nice – I like your verbatim syntax for multiline strings!
I went with indentation because a very common use-case in a configuration file is commenting out lines. Even with CSON-like comma rules, you still need to balance your {} and []s. Indentation balances itself most* of the time.
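For example, in an indentation-based format you can comment out a nested section line by line and nothing else needs rebalancing (this is a generic sketch, not exact CONL syntax, and the comment character is illustrative):

```
server
  port = 8080
# metrics
#   endpoint = /metrics
#   interval = 30
logging
  level = info
```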
Indentation is indeed still desirable for most human tasks! But you can have indentation and grouping at once, each complementing the other. As you've noticed, CSON's verbatim syntax was intentionally designed so that it remains valid without any indentation, but your instinct really wants to align those lines anyway. (A similar approach can be seen in Zig's verbatim strings, which seem to have been designed independently of CSON and make me much more confident about this choice.)