Destructors are vastly superior to the finally keyword because they only require us to remember to release resources in one place (the destructor), as opposed to in every finally clause. For example, a file always closes itself when it goes out of scope instead of having to be explicitly closed by whoever opened it. The syntax is also less cluttered, with less indentation, especially when multiple objects are created that would require nested try... finally blocks. Not to mention how branching and conditional initialization complicate things. You can often pair up constructors with destructors in the code so that it becomes very obvious when resource acquisition and release do not match up.
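To illustrate the file case, a minimal C++ sketch: the stream closes on every path out of the function, no finally needed.

    #include <fstream>
    #include <string>

    std::string read_first_line(const std::string& path) {
        std::ifstream in(path);   // file opened here
        std::string line;
        std::getline(in, line);
        return line;
    }                             // 'in' is destroyed and the file closed on every exit path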
I couldn't agree more. And in the rare cases where destructors do need to be created inline, it's not hard to combine destructors with closures into library types.
To point at one example: we recently added `std::mem::DropGuard` [1] to Rust nightly. This makes it easy to quickly create (and dismiss) destructors inline, without the need for any extra keywords or language support.
In a function that inserts into 4 separate maps, and might fail between each insert, I'll add a scope exit after each insert with the corresponding erase.
Before returning on success, I'll dismiss all the scopes.
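Roughly like this, with a hand-rolled guard (names are made up, and only two maps are shown for brevity; this isn't Rust's DropGuard, just the same idea in C++):

    #include <functional>
    #include <map>
    #include <string>

    // Tiny dismissable guard: runs the stored closure in its destructor
    // unless dismiss() was called first.
    class RollbackGuard {
        std::function<void()> undo_;
    public:
        explicit RollbackGuard(std::function<void()> undo) : undo_(std::move(undo)) {}
        RollbackGuard(const RollbackGuard&) = delete;
        RollbackGuard& operator=(const RollbackGuard&) = delete;
        void dismiss() { undo_ = nullptr; }
        ~RollbackGuard() { if (undo_) undo_(); }
    };

    // Insert into two maps; if anything after an insert throws, the guards
    // erase what was already inserted. On success, dismiss them all.
    void insert_both(std::map<int, std::string>& by_id,
                     std::map<std::string, int>& by_name,
                     int id, const std::string& name) {
        by_id.emplace(id, name);
        RollbackGuard g1([&] { by_id.erase(id); });

        by_name.emplace(name, id);
        RollbackGuard g2([&] { by_name.erase(name); });

        // ... more work that might throw ...

        g1.dismiss();
        g2.dismiss();
    }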
I suppose the tradeoff vs RAII in the mutex example is that with the guard you still need to remember to actually create it every time you lock a mutex, so you can still forget it and end up with an unreleased mutex, whereas with RAII that is not possible.
Scope guards are neat, particularly since D has had them since 2006! (https://forum.dlang.org/thread/dtr2fg$2vqr$4@digitaldaemon.c...) But they are syntactically confusing since they look like function invocations with some kind of aliased magic value passed in.
A writable file closing itself when it goes out of scope is usually not great, since errors can occur when closing the file, especially when using networked file systems.
You need to close it and check for errors as part of the happy path. But it's great that in the error path (be that using an early return or throwing an exception), you can just forget about the file and you will never leak a file descriptor.
You may need to unlink the file in the error path, but that's best handled in the destructor of a class which encapsulates the whole "write to a temp file, rename into place, unlink on error" flow.
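Something along these lines (a rough sketch; all names are made up):

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Sketch of "write to a temp file, rename into place, unlink on error".
    // Closing and renaming happen explicitly on the happy path (so errors can
    // be reported); the destructor only cleans up if commit() never ran.
    class AtomicFileWriter {
        std::string tmp_path_;
        std::string final_path_;
        std::FILE* file_;
        bool committed_ = false;
    public:
        explicit AtomicFileWriter(std::string final_path)
            : tmp_path_(final_path + ".tmp"),
              final_path_(std::move(final_path)),
              file_(std::fopen(tmp_path_.c_str(), "wb")) {
            if (!file_) throw std::runtime_error("cannot open " + tmp_path_);
        }

        void write(const std::string& data) {
            if (std::fwrite(data.data(), 1, data.size(), file_) != data.size())
                throw std::runtime_error("short write to " + tmp_path_);
        }

        void commit() {
            if (std::fclose(file_) != 0)            // close errors surface here
                throw std::runtime_error("close failed for " + tmp_path_);
            file_ = nullptr;
            if (std::rename(tmp_path_.c_str(), final_path_.c_str()) != 0)
                throw std::runtime_error("rename failed for " + tmp_path_);
            committed_ = true;
        }

        ~AtomicFileWriter() {
            if (!committed_) {                      // error path: best-effort cleanup
                if (file_) std::fclose(file_);
                std::remove(tmp_path_.c_str());
            }
        }
    };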
Java solved it by letting exceptions carry secondary (suppressed) exceptions, in particular ones thrown while closing resources during stack unwinding (via try-with-resources).
The result is an exception tree that reflects the failures that occurred in the call tree following the first exception.
The entire point of the article is that you cannot throw from a destructor. Now how do you signal that closing/writing the file in the destructor failed?
You are allowed to throw from a destructor as long as there's not already an active exception unwinding the stack. In my experience this is a total non-issue for any real-world scenario. Propagating errors from the happy path matters more than situations where you're already dealing with a live exception.
For example: you can't write to a file because of an I/O error, and when throwing that exception you find that you can't close the file either. What are you going to do about that other than possibly log the issue in the destructor? Wait and try again until it can be closed?
If you really must force Java semantics into it with chains of exception causes (as if anybody handled those gracefully, ever) then you can. Get the current exception and store a reference to the new one inside the first one. But I would much rather use exceptions as little as possible.
> The entire point of the article is that you cannot throw from a destructor.
You need to read the article again because your assertion is patently false. You can throw and handle exceptions in destructors. What you cannot do is let those exceptions escape the destructor, because per the standard an exception escaping a destructor will terminate the application immediately.
> So inside a destructor throw has a radically different behaviour that makes it useless for communicating non-fatal errors
It's weird how you tried to frame a core design feature of the most successful programming language in the history of mankind as "useless".
Perhaps the explanation lies in how you tried to claim that exceptions had any place in "communicating non-fatal errors", not to mention that your scenario, handling non-fatal errors when destroying a resource, is fundamentally meaningless.
Perhaps you should take a step back and think whether it makes sense to extrapolate your mental models to languages you're not familiar with.
It's not the same thing at all because you have to remember to use the context manager, while in C++ the user doesn't need to write any extra code to use the destructor; it just happens automatically.
To be fair, that's just an artifact of Python's chosen design. A different language could make it so that acquiring the object whose context is being managed requires you to use the context manager. For example, in Python terms, imagine if `with open("foo") as f:` were the only way to call `open`, and it gave an error if you just called it on its own.
Destructors and finally clauses serve different purposes IMO. Most of the languages that have finally clauses also have destructors.
> Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks.
I think that's more of a point against try...catch (and maybe exceptions as a whole) rather than the finally block specifically. (Though I do agree with that. I dislike that aspect of exceptions, and generally prefer something closer to std::expected or Rust's Result.)
> Most of the languages that have finally clauses also have destructors.
Hm, is that true? I know of finally from Java, JavaScript, C# and Python, and none of them have proper destructors. I mean some of them have object finalizers which can be used to clean up resources whenever the garbage collector comes around to collect the object, but those are not remotely similar to destructors which typically run deterministically at the end of a scope. Python's 'with' syntax comes to mind, but that's very different from C++ and Rust style destructors since you have to explicitly ask the language to clean up resources with special syntax.
Which languages am I missing which have both try..finally and destructors?
In C# the closest analogue to a C++ destructor would probably be a `using` block. You’d have to remember to write `using` in front of it, but there are static analysers for this. It gets translated to a `try`–`finally` block under the hood, which calls `Dispose` in `finally`.
using (var foo = new Foo())
{
}
// foo.Dispose() gets called here, even if there is an exception
Or, to avoid nesting:
using var foo = new Foo(); // same, but scoped to the end of the enclosing scope
There is also `await using` in case the cleanup is async (`await foo.DisposeAsync()`).
I think Java has something similar called try with resources.
try (var foo = new Foo()) {
}
// foo.close() is called here.
I like the Java method for things like files because if there's an exception during the close of a file, the regular `IOException` block handles that error the same as it handles a read or write error.
void bar() {
    try (var f = foo()) {
        doMoreHappyPath(f);
    }
    catch (IOException ex) {
        handleErrors();
    }
}

File foo() throws IOException {
    File f = openFile();
    doHappyPath(f);
    if (badThing) {
        throw new IOException("Bad thing");
    }
    return f;
}
That said, I think this is a bad practice (IMO). Generally speaking I think the opening and closing of a resource should happen at the same scope.
Making it non-local is a recipe for an accident.
*EDIT* I've made a mistake while writing this, but I'll leave it up there because it demonstrates my point. The file is left open if a bad thing happens.
In Java, I agree with you that the opening and closing of a resource should happen at the same scope. This is a reasonable rule in Java, and not following it in Java is a recipe for errors because Java isn't RAII.
In C++ and Rust, that rule doesn't make sense. You can't make the mistake of forgetting to close the file.
That's why I say that Java, Python and C#'s context managers aren't remotely the same. They're useful tools for resource management in their respective languages, just like defer is a useful tool for resource management in Go. They aren't "basically RAII".
> You can't make the mistake of forgetting to close the file.
But you can make a few mistakes that can be hard to see. For example, if you put a mutex in an object, you can accidentally hold it for longer than you expect, since you've now bound the life of the mutex to the life of the object you attached it to. Or you can hold a connection to a DB or a file open for longer than you expected by merely leaking out the handle and not promptly closing it when you are finished with it.
Trying to keep resource open and close in the same scope is an ownership thing. Even for C++ or Rust, I'd consider it not great to leak out RAII resources from out of the scope that acquired them. When you spread that sort of ownership throughout the code it becomes hard to conceptualize what the state of a program would be at any given location.
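To make the mutex case concrete: you can still keep the lock's lifetime narrower than the object that owns the mutex by scoping the guard explicitly (rough sketch, names made up):

    #include <mutex>
    #include <vector>

    class Counter {
        std::mutex m_;
        std::vector<int> values_;
    public:
        void add(int v) {
            {
                std::lock_guard<std::mutex> lock(m_);  // held only for this block
                values_.push_back(v);
            }   // lock released here, not at the end of add()
            // ... slower work that should not run under the lock ...
        }
    };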
As someone coming from RAII to C#, you get used to it, I'd say. You "just" have to think differently. Lean into records and immutable objects whenever you can and IDisposable interface ("using") when you can't. It's not perfect but neither is RAII. I'm on a learning path but I'd say I'm more productive in C# than I ever was in C++.
I agree with this. I don't dislike non-RAII languages (even though I do prefer RAII). I was mostly asking a rhetorical question to point out that it really isn't the same at all. As you say, it's not a RAII language, and you have to think differently than when using a RAII language with proper destructors.
Pondering - is there a language similar to C++ (whatever that means, it's huge, but I guess a sprinkle of don't pay for what you don't use and being compiled) which has no raw pointers and such (sacrificing C compatibility) but which is otherwise pretty similar to C++?
Rust is the only one I really know of. It's many things to many people, but to me as a C++ developer, it's a C++ with a better template model, better object lifetime semantics (destructive moves <3) and without all the cruft stemming from C compat and from the 40 years of evolution.
The biggest essential differences between Rust and C++ are probably the borrow checker (sometimes nice, sometimes just annoying, IMO) and the lack of class inheritance hierarchies. But both are RAII languages which compile to native code with a minimal runtime, both have a heavy emphasis on generic programming through templates, both have a "C-style syntax" with braces which makes Rust feel relatively familiar despite its ML influence.
You can move the burden of disposing to the caller (return the disposable object and let the caller put it in a using statement).
In addition, if the caller itself is a long-lived object it can remember the object and implement dispose itself by delegating. Then the user of the long-lived object can manage it.
> You can move the burden of disposing to the caller (return the disposable object and let the caller put it in a using statement).
That doesn't help. Not if the function that wants to return the disposable object in the happy path also wants to destroy the disposable object in the error path.
Technically CPython has deterministic destructors: __del__ gets called immediately when the ref count goes to zero. But that's just an implementation detail, not a language spec thing.
I don't view finalizers and destructors as different concepts. The notion only matters if you actually need cleanup behavior to be deterministic rather than just eventual, or you are dealing with something like thread locals. (Historically, C# even simply called them destructors.)
There's a huge difference in programming model. You can rely on C++ or Rust destructors to free GPU memory, close sockets, free memory owned through an opaque pointer obtained through FFI, implement reference counting, etc.
I've had the displeasure of fixing a Go code base where finalizers were actively used to free opaque C memory and GPU memory. The Go garbage collector obviously didn't consider it high priority to free these 8-byte objects which just wrap a pointer, because it didn't know that the objects were keeping tens of megabytes of C or GPU memory alive. I had to touch so much code to explicitly call Destroy methods in defer blocks to avoid running out of memory.
For GCed languages, I think finalizers are a mistake. They only serve to make it harder to reason about the code while masking problems. They also have negative impacts on GC performance.
Suffice it to say I don't always agree with even some of the best in the field, and they don't always agree with each other, either. Anders Hejlsberg isn't exactly a random n00b when it comes to programming language design, and he still called the C# equivalent a "destructor", though it is now known as a finalizer in line with other programming languages. They are things that clean up resources at the end of the life of an object; the difference between GC'd languages and RAII languages is that in a GC'd runtime the lifespan of an object is non-deterministic. That may very well change the programming model, as it does in many other ways, but it doesn't make the two concepts "fundamentally different" by any means. They're certainly related concepts...
They are related but fundamentally different. It is a vital semantic difference (influencing the programming model itself) since destructors (C++ style) are synchronous and deterministic while finalizers (Java style) are asynchronous and non-deterministic.
I grasp the entirety of why people differentiate "finalizers" from "destructors", but in my opinion the practical differences in their application are not the result of the concept itself being fundamentally different; they're a result of object lifetimes being different between GC'd and non-GC'd languages. In my opinion, the concept itself is pretty close to identical: you want to clean up resources at the end of the lifetime of an object. And yes, it's practically a mess because the object lifetime ends at a non-deterministic point in the future and usually not even necessarily on the same thread. Being a big fan of Go and having had to occasionally make use of finalizers for lack of a better option in some limited scenarios, I really genuinely do grasp this, but I dispute that it has anything to do with whether or not a language has try...finally, any more than it has anything to do with a language having any other convenient structured control flow measure, like pattern matching or else blocks on for loops.
(I do also realize that finalizer behavior in some languages is weird, for performance reasons and sometimes just legacy reasons. Go is one such language.)
But I think we've both hit a level of digression that wouldn't be helpful even if we were disagreeing about the facts (which I don't really think we are. I think this is entirely about frames of reference rather than a material dispute over the facts.) Forgetting whether finalizers are truly a form of destructor or not, the point I was trying to make really was that I don't view RAII/scoped destructors as being equivalent or alternatives to things like `finally` blocks or `defer` statements. In C++ you basically use scope guards for everything because they are the only option, but I think C++ would still ultimately benefit from at least having `finally`. You can kind of emulate it, but not 100%: `finally` blocks are outside of the scope of the exception and can throw a new exception, unlike a destructor in an exception frame. Having more options in structured control flow can sometimes add complexity for little gain, but `finally` can genuinely be useful sometimes. (Though I ultimately still prefer errors being passed around as value types, like with std::expected, rather than exception handling blocks.)
I believe the reason why we don't have languages (that I can think of) that demonstrate this exact combination is specifically because try/catch exception blocks fell out of favor at the same time that new compiled/"low-level" programming languages started picking up steam. A lot of new programming language designs that do use explicit lifetimes (Zig, Rust, etc.) simply don't have try...catch style exception blocks in the first place, if they even have anything that resemble exceptions. Even a lot of new garbage collected languages don't use try...catch exceptions, like of course Go.
Now honestly I could've made a better attempt at conveying my position earlier in this thread, but I'm gonna be honest, once I realized I struck a nerve with some people I became pretty unmotivated to bother, sometimes I'm just not in the mood to try to win over the crowd and would rather just let them bury me, at least until the thread died down a bit.
> not the result of the concept itself being fundamentally different, it's a result of object lifetimes being different between GC'd and non-GC'd languages.
This is the fundamental misunderstanding. The RAII ctor/dtor pattern is a very general mechanism not limited to just managing object (in the OO sense) lifetimes. That is why you don't need finally/defer etc. in C++. You can get all of these policies using just this one mechanism.
The correct way to think about it is as scoped entry and exit function calls i.e. a scoped guard. For example, every C++ programmer writes a LogTrace class to log function (or other scope) entry and exit messages. This is purely exploiting the feature to make function calls with nothing whatever to do with objects (in the sense of managing state) at all. Raymond gives a good example when he points to how wil::scope_exit takes a user-defined lambda function to be run by a dummy object's dtor when it goes out of scope.
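For concreteness, a minimal sketch of that LogTrace idea:

    #include <cstdio>

    // The ctor/dtor pair is used purely to run code on scope entry and exit;
    // no state is really being "managed".
    class LogTrace {
        const char* name_;
    public:
        explicit LogTrace(const char* name) : name_(name) {
            std::printf("enter %s\n", name_);
        }
        ~LogTrace() {
            std::printf("exit  %s\n", name_);  // runs on return or exception
        }
    };

    void process() {
        LogTrace trace("process");
        // ... every exit path from this scope logs "exit process" ...
    }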
> I don't view RAII/scoped destructors as being equivalent or alternatives to things like `finally` blocks or `defer` statements. In C++ you basically use scope guards for everything because they are the only option, but I think C++ would still ultimately benefit from at least having `finally`.
Scope guards using ctor/dtor mechanism is enough to implement all the policies like finally/defer etc. That was the point of the article.
> You can kind of emulate it, but not 100%: `finally` blocks are outside of the scope of the exception and can throw a new exception, unlike a destructor in an exception frame. Having more options in structured control flow can sometimes add complexity for little gain, but `finally` can genuinely be useful sometimes.
Exception handling is always tricky to implement/use in any language since there are multiple models (i.e. Termination vs. Resumption) and a language designer is often constrained in his choice. Wikipedia has a very nice explanation - https://en.wikipedia.org/wiki/Exception_handling_(programmin... In particular, see the Eiffel contract approach mentioned in it and then the detailed rationale in Bertrand Meyer's OOSC2 book - https://bertrandmeyer.com/OOSC2/
> This is the fundamental misunderstanding. The RAII ctor/dtor pattern is a very general mechanism not limited to just managing object (in the OO sense) lifetimes. That is why you don't need finally/defer etc. in C++. You can get all of these policies using just this one mechanism.
> The correct way to think about it is as scoped entry and exit function calls i.e. a scoped guard. For example, every C++ programmer writes a LogTrace class to log function (or other scope) entry and exit messages. This is purely exploiting the feature to make function calls with nothing whatever to do with objects (in the sense of managing state) at all. Raymond gives a good example when he points to how wil::scope_exit takes a user-defined lambda function to be run by a dummy object's dtor when it goes out of scope.
Hahaha. It is certainly not a fundamental misunderstanding.
All scope guards are built off of stack-allocated object lifetimes, specifically the scope guard itself. That is not "my opinion" or "my perspective", it is the reality. Try constructing a scope guard that isn't based off of the lifetime of an object on the stack. You can't do this, because the fact that it is tied to an object's lifespan is the point. One of the few points in C++'s favor is the fact that this relatively elegant mechanism can do so much.
> Scope guards using ctor/dtor mechanism is enough to implement all the policies like finally/defer etc. That was the point of the article.
You can kind of implement Go-style defer statements. Since Go-style defer statements run at the end of the current function rather than the current scope, you'd probably want a scope guard that you instantiate at the beginning of a function, with a LIFO queue of std::functions that you can push to throughout the function. Seems like it works to me, though it's not particularly elegant to use. But can you emulate `finally`? Again, no. FTA:
> In Java, Python, JavaScript, and C# an exception thrown from a finally block overwrites the original exception, and the original exception is lost. Update: Adam Rosenfield points out that Python 3.2 now saves the original exception as the context of the new exception, but it is still the new exception that is thrown.
> In C++, an exception thrown from a destructor triggers automatic program termination if the destructor is running due to an exception.
C++'s behavior here is actually one of the reasons why I don't like C++ exceptions very much, and have spent a lot of my time on -fno-exceptions (among many other reasons.)
> The article already points out the main issues (in both non-GC/GC languages) here but it is actually much more nuanced. While it is advised not to throw exceptions from a dtor C++ does give you std::uncaught_exceptions() which one can use for those special times when you must handle/throw exceptions in a dtor. More details at ...
Again, you can't really 100% emulate `finally` behavior using C++ destructors, because you can't throw a new exception from a destructor. `std::uncaught_exceptions()` really has nothing to do with this at all. Choosing not to throw in the destructor is not the same as being able to throw a new exception in the destructor and have it unwind from there. C++ just can't do the latter. You can typically do that in `finally`.
When Java introduced `finally` (I do not know if Java was the first language to have it, though it certainly must have been early) it was intended for just resource cleanup, and indeed, I imagine most uses of finally ever were just for closing files, one of the types of resources that you would want to be scoped like that.
However, in my experience the utility of `finally` has actually increased over time. Nowadays there are all kinds of random things you might want to do regardless of whether an exception is thrown. It's usually in the weeds a bit, like adjusting internal state to maintain consistency, but other times it is just handy to throw a log statement or something like that somewhere. Rather than break out a scope guard for these things, most of the time when I see this need arise in a C++ program the logic is just duplicated at the end of both the `try` and `catch` blocks. I bet if I searched long enough, I could find it in the wild on GitHub search.
> All scope guards are built off of stack-allocated object lifetimes, specifically the scope guard itself. That is not "my opinion" or "my perspective", it is the reality. Try constructing a scope guard that isn't based off of the lifetime of an object on the stack. You can't do this, because the fact that it is tied to an object's lifespan is the point. One of the few points in C++'s favor is the fact that this relatively elegant mechanism can do so much.
You are still looking at it backwards. C++ chose to tie user-defined object lifetimes to lexical scopes (for automatic storage objects defined in that scope) via stack-based creation/deletion because it was built on C's abstract machine model. Thus the implicit function calls to ctor/dtor were necessitated, and these turned out to be a far more general mechanism usable for scope-based control via function calls.
But the lifetime of a user-defined object allocated on the heap is not limited to lexical scope and hence the connection between lexical scope and object lifetime does not exist. However the ctor/dtor are now synchronous with calls to new/delete.
So you have two things, viz. lexical scope and object lifetime, and they can be connected or not. This is why I insist on disambiguating the two in one's mental model.
Java chose the heap-based object lifetime model for all user-defined types, and thus there is no connection between lexical scope and object lifetimes. It is because of this that Java had to provide the finally block to give some sort of lexical scope control even though it is GC-based. The Java object model is also the reason that finalize in Java is fundamentally different from a dtor in C++, which I had pointed out earlier.
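To make the two axes concrete (rough sketch):

    #include <cstdio>
    #include <memory>

    struct Noisy {
        const char* name;
        explicit Noisy(const char* n) : name(n) { std::printf("ctor %s\n", name); }
        ~Noisy() { std::printf("dtor %s\n", name); }
    };

    int main() {
        {
            Noisy a("automatic");           // lifetime tied to this lexical scope
            Noisy* b = new Noisy("heap");   // lifetime tied to new/delete, not the scope
            auto c = std::make_unique<Noisy>("owned"); // heap object re-tied to the scope
            delete b;                       // dtor runs here, synchronously with delete
        }                                   // dtors of c, then a run here
        std::printf("after scope\n");
    }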
> You can kind of implement Go-style defer statements. Since Go-style defer statements run at the end of the current function rather than scope, you'd probably want a scope guard that you instantiate at the beginning of a function with a LIFO queue of std::functions that you can push to throughout the function. Seems like it works to me, not particularly elegant to use.
We started this discussion with your claim that dtors and finalize are essentially the same, which I have refuted comprehensively.
Now you want to discuss finally and its behaviour w.r.t exception handling. In the absence of exceptions RAII gives you all of finally-like behaviour.
In the presence of exceptions:
> C++'s behavior here is actually one of the reasons why I don't like C++ exceptions very much, ... Again, you can't really 100% emulate `finally` behavior using C++ destructors, because you can't throw a new exception from a destructor. `std::uncaught_exceptions()` really has nothing to do with this at all. Choosing not to throw in the destructor is not the same as being able to throw a new exception in the destructor and have it unwind from there. C++ just can't do the latter.
This is again a misunderstanding. I had already pointed you to the Termination vs. Resumption exception handling models with a particular emphasis on Meyer's contract-based approach to their usage. Now read Andrei Alexandrescu's classic old article Change the Way You Write Exception-Safe Code — Forever - https://erdani.org/publications/cuj-12-2000.php.html
Both C++ and Java use the Termination model, but because the object model of C++ vs. Java is so very different (C++ has two types of object lifetimes, viz. lexical scope for automatic objects and program scope for heap-based ones with no GC, while Java only has program scope for heap-based objects reclaimed by GC), their implementations are necessarily different.
C++ does provide std::nested_exception and a related API (https://en.cppreference.com/w/cpp/error/nested_exception.htm...) for chaining and handling exceptions in any function. However, ctors/dtors are special functions because of the behaviour of the object model detailed above. Thus the decision was made not to allow a dtor to throw while an uncaught exception is in flight. Note that this does not mean a dtor cannot throw (though it has been made implicitly noexcept since C++11), only that the programmer needs to take care about when to throw or not. An uncaught exception means there has been a violation of contract and hence the system is in an undefined state, so there is no point in proceeding further.
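For reference, a minimal sketch of that chaining API (ordinary functions, nothing dtor-specific; names are made up):

    #include <exception>
    #include <iostream>
    #include <stdexcept>
    #include <string>

    void low_level() {
        throw std::runtime_error("disk full");
    }

    void mid_level() {
        try {
            low_level();
        } catch (...) {
            // Wrap the in-flight exception inside a new, higher-level one.
            std::throw_with_nested(std::runtime_error("failed to save document"));
        }
    }

    void print_chain(const std::exception& e, int depth = 0) {
        std::cerr << std::string(depth * 2, ' ') << e.what() << '\n';
        try {
            std::rethrow_if_nested(e);          // rethrows the wrapped exception, if any
        } catch (const std::exception& nested) {
            print_chain(nested, depth + 1);
        }
    }

    int main() {
        try {
            mid_level();
        } catch (const std::exception& e) {
            print_chain(e);                     // prints both levels of the chain
        }
    }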
This is where std::uncaught_exceptions comes in; the Stack Overflow article I linked to earlier quotes Herb Sutter:
A type that wants to know whether its destructor is being run to unwind this object can query uncaught_exceptions in its constructor and store the result, then query uncaught_exceptions again in its destructor; if the result is different, then this destructor is being invoked as part of stack unwinding due to a new exception that was thrown later than the object’s construction.
Now the dtor can detect that it is running as part of stack unwinding and do proper logging/processing before exiting cleanly, instead of throwing.
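Roughly, the pattern Sutter describes (a sketch, names made up):

    #include <cstdio>
    #include <exception>

    class Transaction {
        int exceptions_at_ctor_ = std::uncaught_exceptions();  // C++17
    public:
        ~Transaction() {
            if (std::uncaught_exceptions() > exceptions_at_ctor_) {
                // Being destroyed because a newer exception is unwinding the
                // stack: log/roll back, but do not throw.
                std::fputs("rolling back due to exception\n", stderr);
            } else {
                // Normal scope exit: safe to report problems by some means
                // other than throwing (dtors are noexcept by default).
                std::fputs("committing\n", stderr);
            }
        }
    };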
Finally, note also that Java itself has introduced new constructs like try-with-resources which should be used instead of try-finally for resources etc.
The worst part about MS Office isn't the direct user experience, because I can usually choose to use other software. The worst part is that I and everybody else are subjected to the documents that Office produces. Their defaults and their UX inevitably produce stuff that is hard to read and inconsistent, unless you fight the software really hard and make sacrifices with your desired output. And there's no escape from it. Another specimen of Word's 2.5 cm margins, 200-character lines in poorly designed knockoff Helvetica will probably find its way to my mailbox before the end of the day.
I'm not the person you are replying to, but like all of technology, you just find the latest (or most public) change made, and then fire your blame-cannon at it.
Excel crashed? Must be that new WiFi they installed!
In the chain of events that led to Cloudflare's largest ever outage, code they'd rewritten from C to Rust was a significant factor. There are, of course, other factors that meant the Rust-based problem was not mitigated.
They expected a maximum config size but an upstream error meant it was much larger than normal. Their Rust code parsed a fraction of the config, then did ".unwrap()" and panicked, crashing the entire program.
This validated a number of things that programmers say in response to Rust advocates who relentlessly badger people in pursuit of mindshare and adoption:
* memory errors are not the only category of errors, or security flaws. A language claiming magic bullets for one thing might nonetheless be worse at another thing.
* there is no guarantee that if you write in <latest hyped language> your code will have fewer errors. If anything, you'll add new errors during the rewrite
* Rust has footguns like any other language. If it gains common adoption, there will be doofus programmers using it too, just like the other languages. What will the errors of Rust doofuses look like, compared to C, C++, C#, Java, JavaScript, Python, Ruby, etc. doofuses?
* availability is orthogonal to security. While there is a huge interest in remaining secure, if you design for "and it remains secure because it stops as soon as there's an error", have you considered what negative effects a widespread outage would cause?
This is generally BS apologetics for C. If that had been written in C, it would likely have just overrun the statically allocated memory and resulted in a segfault.
Rust did its job and forced them to return an error from the lower function. They explicitly called a function to crash if that returned an error.
We don't know how the C program would have coped. It could equally have ignored the extra config once it reached its maximum, which would cause new problems but not necessarily cause an outage. It could've returned an error and safely shut down the whole program (which would result in the same problem as Rust panicking).
What we do know is Cloudflare wrote a new program in Rust, and never tested their Rust program with too many config items.
You can't say "Rust did its job" and blame the programmer, any more than I can say "C did its job" when a programmer tells it to write to the 257th index of a 256 byte array, or "Java did its job" when some deeply buried function throws a RuntimeException, or "Python did its job" when it crashes a service that has been running for years because for the first time someone created a file whose name wasn't valid UTF-8.
Footguns are universal. Every language has them, including Rust.
You have to own the total solution, no matter which language you pick. Switching languages does not absolve you of this. TANSTAAFL.
> You can't say "Rust did its job" and blame the programmer,
You absolutely can. This is someone just calling panic in an error branch. Rust didn’t overrun the memory which would have been a real possibility here in C.
The whole point is that C could have failed in the exact same way, but it would have taken extra effort to even get it to detect the issue and exit. For an error the programmer didn't intend to handle, like in this case, it likely would have just segfaulted because they wouldn't bother to bounds check.
> TANSTAAFL
The way C could have failed here is a superset of how Rust would. Rust absolutely gives you free lunch, you just have to eat it.
“haha rust is bad” or something, it's a silly take. These things are hardly, if ever, due to programming language choice, and rather due to complicated interactions between different systems.
Cloudflare was crowing that their services were better because “We write a lot of Rust, and we’ve gotten pretty good at it.”
The last outage was in fact partially due to a Rust panic because of some sloppy code.
Yes, these complex systems are way more complex than just which language they use. But Cloudflare is the one who made the oversimplified claim that using Rust would necessarily make their systems better. It’s not so simple.
This is what reasonable people disagree on. My employer provides several AI coding tools, none of which can communicate with the external internet. It completely removes the exfiltration risk. And people find these tools very useful.
Are you sure? Do they make use of e.g. internal documentation? Or CLI tools? Plenty of ways to have Internet access just one step removed. This would've been flagged by the trifecta thinking.
Yes. Internal documentation stored locally in Markdown format alongside code. CLI tools run in a sandbox, which restricts general internet access and also prevents direct production access.
I see where you're coming from. But I often find that when I have some idea or challenge that I want to solve, I get bogged down in details (like how do I build that project)... before I even know if the idea I _wanted_ to solve is feasible.
It's not that I don't care about learning how to build Rust or think that it's too big of a challenge. It's just not the thing I was excited about right now, and it's not obvious ahead of time how sidetracked it will get me. I find that having an LLM just figure it out helps me to not lose momentum.
Maybe not slower once it has warmed up, though for memory-bandwidth bound use cases I would still say the lack of mutable records has you fighting the language to get reasonable cache locality (and everybody will hate your code for not being good Java). The fact that everything is a pointer kills the CPU execution pipeline and cache.
But even for I/O bound applications it still feels slow because excessive memory usage means more swap thrashing (slowing down your entire OS), and startup time suffers greatly from having to fire up VM + loading classes and waiting for the JIT to warm up.
I can start a C/C++/Rust based web server in under a second. The corresponding server in Java takes 10 seconds, or minutes once I have added more features.
The article got off on the wrong foot from the start by separating the purpose from the product. To my mind the purpose is the product and always will be.
> To my mind the purpose is the product and always will be.
A lot of industries would disagree with you. There are plenty of products where the physical form and direct purpose of the product itself is quite disjoint from the product they are actually selling.
For example, Hermès doesn't sell bags to carry stuff in - they sell status symbols. Restaurants don't sell food for sustenance - they sell a dining experience. Car companies only tangentially sell modes of transportation - they are treated more like fashion items in practice. Items like wedding rings have zero purpose - what they are selling is a physical manifestation of an emotion.
If everything we ever interacted with was 100% utilitarian, we'd be living in a very dull world.
You misunderstand my point. I'm not saying that the purpose of Hermès is to sell bags. I'm saying that the _product_ that Hermès sells is status, and the product of a restaurant is a dining experience.
We could instead say “promote a utopia where everyone is treated fairly and empathetically and everyone’s needs are met without destroying the planet or a need for government”. That’d “fix” the current problem and more, the issue is what exactly can we do to “promote” that change.
It's actionable if you have some imagination. Raise funds for a nonprofit. Start lobbying on both sides of the aisle. Enlist an advertising company to show the dystopian future if something like chat control comes into effect, poll for focus groups and target them. Find ways to undermine and expose the forces that are pushing for authoritarian legislation.