I think the problem is that it's too onerous to run your own instance, but being on anything but the "default" instance means dealing with volunteer moderators imposing their worldview on the available discourse.
Creating a Mastodon account shouldn't mean supporting the particular political affiliation of the moderators, but I think it feels that way for many of the instances.
I think that Ladybird has driven a lot of the effort; otherwise we'd just see browsers continuing to use Chromium, with the only work being backports to keep v2 supported.
Ladybird was already progressing rapidly within SerenityOS well before it was officially launched, and I think that's given people new inspiration about how plausible it is to create a browser from scratch. I'm really pleased to see Servo having a resurgence too.
It’s indeed rapidly progressing feature-wise, but I have yet to see an explanation for how they intend to manage security once market adoption happens.
Ladybird is written in C++, which is memory-unsafe by default (unlike Rust, which is memory-safe by default). Firefox and Chrome also use C++, and each of them has 3-4 critical vulnerabilities related to memory safety per year, despite the massive resources Mozilla and Google have invested in security. I don’t understand how the Ladybird team could possibly hope to secure a C++ browser engine, given that even engineering giants have consistently failed to do so.
> Firefox and Chrome also use C++, and each of them has 3-4 critical vulnerabilities related to memory safety per year, despite the massive resources Mozilla and Google have invested in security.
And part of Firefox's and Chrome's security effort has been to use memory-safe languages in critical sections like file format decoders. They're far too deeply invested in C++ to move away entirely in our lifetimes, but they are taking advantage of other languages where they feasibly can, so writing a new browser in pure C++ is a regression from what the big players are already doing.
I just checked out Servo, and like all browsers it has a VERY large footprint of dependencies (notably GStreamer/GObject, libpng/jpeg, PCRE). Considering browsers already have quite decent process isolation (the privileged browser process vs. heavily sandboxed renderer processes), I wonder how tangible the Rust advantage turns out to be.
I just looked at the top CVEs for Chrome in 2025. There are five which allow escaping the sandbox, and the top ones seem to be V8 bugs where the JIT is coaxed into generating exploitable code.
One seems to be a genuine use-after-free.
So I can echo what you wrote about the JS engine being the most exploitable part, but how is Rust supposed to help with generating memory-safe JITed code?
I know they have said that. But it feels a bit strange to me to continue developing in C++ if they'll eventually have to rewrite everything in Swift. Wouldn't it be better to switch languages sooner rather than later in that case?
Or maybe it doesn't have to take so much time to do a rewrite if an AI does it. But then I also wonder why not do it now, rather than wait.
That is the plan, but the effort has stalled on difficulties getting Swift's memory model (reference counting) to play nicely with Ladybird's (garbage collection).
I think there was some work with the Swift team at Apple to fix this, but there haven't been any updates in months.
I know that that’s the plan, but I’ll believe it when I see it. Mozilla invented entire language features for Rust based on Servo’s needs. It’s doubtful whether a language like Swift, which is used mostly for high-level UI code, has what it takes to serve as the foundation of a browser engine.
C# has incomplete and often compromised versions of the constructs F# mostly took from OCaml, and as you extend those exhaustiveness guarantees towards formal verification you bump into F*.
C#'s adoption of language features shows their utility, but they’re not a replacement, per se. Without a clear functional answer in certain language and parallel-computing scenarios, MS would be ignored. Scala and Kotlin are comparable answers to comparable pressures on the JVM, and even keeping pace there with new and exciting tools/libraries requires some proper functional representation on the .NET platform.
F# will disappear when/if those other languages do, and it already has lots of what C# is chasing, with a more elegant syntax. It inherits VM and project improvements from C#, so the biggest threat to long-term investment is something like the crippling changes made to F# Interactive (FSI) during the .NET Core transition. Otherwise it seems to be in a safe place for the foreseeable future.
Supporting as in maintenance mode, at least for VB.NET. Thankfully F# is more community-driven, but the CLR ecosystem is definitely getting C#-centric in its use of idioms and features from newer C# versions, which increasingly affects F# interop while they catch up.
.NET has always been both the biggest blessing and the biggest curse for F#.
We have access to millions of libraries. I look at BEAM languages and OCaml every once in a while but can’t quite drag myself over there, knowing that in .NET, just as an example, I can choose between a dozen JSON serialisation libraries that have been optimised and tuned comprehensively for decades.
But then, those libraries are also our curse. If you consume them, everything is OO, so you either give up on functional purity and start writing imperative F# code, or you have to spend time writing and maintaining an idiomatic F# wrapper around it.
Similarly, I was recently working on a project to develop a library which was going to have downstream consumers. The problem lent itself really well to domain modelling in F#. But I knew that my downstream users would be C# devs. I could have invested the time and written my library as “functional core, imperative shell”. But then I decided that since the interface would be OO anyway, I might as well just write it in C#.
Thankfully, what keeps F# going is the wonderful community around it, not Microsoft. I know some people (outside of Microsoft) have worked on a standalone F# compiler, but it's still at a very early stage. Maybe one day.
Although you inevitably end up writing some OOP code in F# when interacting with the .NET ecosystem, F# is a really good OOP language. It's also concise, so I don't spend as much time jumping around files switching my visual context. Feels closer to writing Python.
The C# team admits to looking at how F# features work, but also keeps trying to make it clear that C# doesn't have a goal of entirely eating F#.
C# still doesn't see itself as a functional programming language, even as it has added so many features. It may never get first-class currying or broader ideas like generalized computation expressions, for instance. It certainly won't get F#'s cleaner syntax, with fewer mandatory semicolons and whitespace nesting rather than curly brackets.
F# probably isn't going to disappear, for a lot of the same reasons that GHC (the Glasgow Haskell Compiler) didn't disappear when F# was started (nor when key contributors left Microsoft). F# often already sees more contributions from outside open-source contributors than from Microsoft employees.
Is there a blog post anywhere that explains why it has taken so long to get search working? Ghostty is such a nice app but that's such a fundamental feature that there has to be a good reason... It's the only thing keeping me on iTerm2.
I don't want to presume your use case, but Ghostty has a command for dumping the buffer to a file, which I use for processing output "too late" to use grep.
Try a pager instead. Batcat is more feature-rich, but there's always the good old less (and more) command. Both work great with grep. I do things like the following multiple times a day:
$ cat foo.log | less
$ cat foo.log | $PAGER
$ cat foo.log | grep 09-23-2025 | less
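For what it's worth (not something the parent mentioned), less can also search on its own, so grep isn't always needed: type /pattern inside the pager and step through matches with n / N, or jump straight to the first match from the command line, e.g.

$ less +/09-23-2025 foo.log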
Side note / pro tip: on a new line in the terminal, press control-x control-e to open the current command line in your editor. This works out of the box in bash; if you're in zsh you need to edit your config.
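For reference, the zsh side is just wiring up the standard edit-command-line widget; a minimal sketch of what goes in ~/.zshrc (adjust the key binding to taste):

# enable editing the current command line in $EDITOR with ctrl-x ctrl-e
autoload -Uz edit-command-line
zle -N edit-command-line
bindkey '^x^e' edit-command-line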
I do. Setting up log files for each and every command seems tedious. I would rather just cmd+f to search and have it work for everything, since it's a property of the terminal rather than of a specific command.
I can appreciate that per-message replies are handy in modern messaging. But if you were having _such_ an in-depth conversation at the peak of IRC, you might have moved to email to make it async and allow for more fully thought-out replies.
Email is still a great way to communicate with people today.