Honest truth? I didn't Google it first lol. I just checked if there was a crate with the same name and carried on. I only found out about the Ruby ones after I posted on Reddit for the first time. By then it was too late, since I had already reserved the crate name and it was already being downloaded.
I actually have a side project doing this. I mapped an actor ID in ractor to a PID in Erlang and used a Rust crate to implement the Erlang network protocol. I ran out of time on it (side project kind of deal), but it's like 90% there and can do a full cluster join and send and receive messages between remote actors on nodes.
The crate that helped with this is erl_dist. I'd love to open-source the code at some point, but it's not quite there yet.
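The core of the glue is just a bidirectional table between the two ID spaces. A hypothetical sketch (made-up names like `ErlPid` and `PidRegistry`, not the actual project code):

```rust
use std::collections::HashMap;

// Stand-in for ractor's actor identifier type.
type ActorId = u64;

// An Erlang PID as carried in the external term format: node, id, serial.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ErlPid {
    node: u32,
    id: u32,
    serial: u32,
}

// Bidirectional registry pairing each local actor with the PID it is
// advertised as on the Erlang distribution protocol.
#[derive(Default)]
struct PidRegistry {
    by_actor: HashMap<ActorId, ErlPid>,
    by_pid: HashMap<ErlPid, ActorId>,
}

impl PidRegistry {
    fn register(&mut self, actor: ActorId, pid: ErlPid) {
        self.by_actor.insert(actor, pid);
        self.by_pid.insert(pid, actor);
    }

    // Route an inbound distribution message to the local actor it targets.
    fn local_target(&self, pid: &ErlPid) -> Option<ActorId> {
        self.by_pid.get(pid).copied()
    }
}
```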
Yeah I do lol. It was the big motivation since we were writing more and more rust and missed the concurrency model. Please feel free to ping me for any further questions
It's come a long way since it started, and I'm thrilled I can talk about it publicly now, including its usage at Meta (some of it at least lol).
While the state is indeed a separate struct in ractor, there's actually a good reason for this: the state is constructed by the actor itself, and its construction is guaranteed to be managed by the startup flow and panic-safe.
Imagine opening a socket. If you have a mutable self, the caller who spawns that actor needs to open the socket themselves and risk the failure there, instead of the actor who would eventually be responsible for said socket. The motivation for this is outlined in our docs. Essentially, the actor is responsible for creating its own state and for any risks associated with that.
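To make that concrete, here's a minimal sketch of the shape this takes with ractor's `Actor` trait (simplified and version-dependent; `SocketActor` is just an illustrative name):

```rust
use ractor::{Actor, ActorProcessingErr, ActorRef};
use tokio::net::TcpStream;

struct SocketActor;

struct SocketState {
    stream: TcpStream,
}

#[async_trait::async_trait]
impl Actor for SocketActor {
    type Msg = ();
    type State = SocketState;
    type Arguments = String; // address to connect to

    // The actor, not the spawner, opens the socket. A failure here is
    // contained by the startup flow and surfaced as a spawn error,
    // rather than blowing up in the caller.
    async fn pre_start(
        &self,
        _myself: ActorRef<Self::Msg>,
        addr: Self::Arguments,
    ) -> Result<Self::State, ActorProcessingErr> {
        let stream = TcpStream::connect(addr).await?;
        Ok(SocketState { stream })
    }
}
```

The spawner just calls something like `Actor::spawn(None, SocketActor, addr).await` and, if the connect fails, gets it back as a spawn error instead of having to open the socket itself.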
- For the async trait point, we actually do support native async traits without the boxing-magic macro. It's a feature you can disable if you wish, but that impacts factories, since you can't box traits with native future returns: https://github.com/slawlor/ractor/pull/202
Thanks! I'm happy to see actors getting some solid use in the industry to provide better thread-management safety and remove a lot of concurrency headaches.
Question for you: I was poking around in the codebase, and how do you handle your Signal priorities? Like if a link died and there are 1000 messages in the queue already, or if it's a bounded mailbox, would the link-died (or even stop) messages be delayed by that much?
Have you looked into prioritization of those messages such that it's not globally FIFO?
Great question. I did some digging into the BEAM source code to help answer whether signals should have special priority, and the conclusion (with the help of someone else from the Elixir community) was that signals have no special priority over regular messages in BEAM. So I took the same approach: a regular message is just a `Signal::Message(M)` variant, and everything sent to the mailbox is a signal.
So gracefully shutting down an actor with `actor_ref.stop_gracefully().await` will process all pending messages before stopping, but the actor itself can be forcefully stopped with `actor_ref.kill()`.
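Very roughly, the mailbox type looks like this (an illustrative sketch, not the crate's exact definition):

```rust
// Everything in the mailbox is a Signal, and a user message is just
// one variant, so delivery is globally FIFO.
enum Signal<M> {
    Message(M),    // regular user message
    LinkDied(u64), // link/death notifications queue like anything else
    Stop,          // graceful stop: runs after all pending messages
    // kill() bypasses the mailbox and stops the actor immediately.
}
```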
So it does indeed guarantee that the server is acting in a trustworthy manner, thanks to the public auditing scenario. We will shortly be making our audit logs publicly available, to show that the verified crypto proof the client performs does indeed match the publicly available records. The academic works SEEMless (https://eprint.iacr.org/2018/607) and Parakeet (https://www.ndss-symposium.org/ndss-paper/parakeet-practical...) jointly outline how this all works from a technical perspective.
While we do maintain the directory, we are held to an honest standard by our audit logs. Should any auditor find invalid records, they can publicly hold us accountable.
WhatsApp announced that they're adding key transparency to enhance end-to-end encryption verification within their suite of apps. There's a blog post going into details on the engineering blog, including the announcement that the core logic for managing an auditable key directory is being open-sourced on GitHub: https://github.com/facebook/akd
It's phrased as if it's an answer to a question, which it would be if it were the result of a prompt to ChatGPT. Of course, a person could deliberately emulate that style as well, making this an unreliable way to determine whether a comment was written by a bot.
> Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
Using ChatGPT to make a comment isn't necessarily a bad thing. I'd say curiosity about how the comment was written is a good thing, as long as it's not a criticism.
Sure, but it's still pretty rude because it's usually baseless. And as you said, it doesn't even really matter. If the comment is bad, just downvote it. If it's good enough, why even bring it up?
"Don't ask rude questions", usually said by those with something to hide.
Obviously there's a current novelty factor with ChatGPT answers, and I for one am happy people 'challenge' them.
If the comment isn't what you like, ignore it?
This is starting witch hunts for AI-generated comments, attempting to discredit the comment by its format, not its content.
This violates the goodwill we all share in conversation on HN.
The weakest link here is that Facebook has to respect US laws.
They don't have a choice there.
So, if US law permits or requests in some way interception of communications, or that operators have to report certain activities, then your right to secrecy is done.
Of course, a random user won't have their dog-food or gardening communications intercepted, but once you trigger certain patterns, welcome to the new "user trials / feature flags / beta".
I'm not saying this specifically about WhatsApp; it's valid for any US-based app
-> and broadly for any app whose founders may eventually be arrested by the US (as the US has a lot of extraterritorial power).
(Think about it: how easy would it be to decrypt a Mega.nz file in a real-life scenario? One push of code to one URL to send back the part after the # sign, and done. Or to activate new trials in Google Chrome, or to push a Play Store update to individual users, etc...)
I'd be really surprised if Zuck took responsibility and ended up in jail because he refused to execute a legal request regarding an imminent terrorist attack (risking criminal charges for helping the criminals; well, there's a plus: more time to spend in the Metaverse).
The most likely scenario is that the US government is very powerful and capable of enforcing laws in its own country, and that you have to respect those laws if you want your company to continue.
You're describing exactly the problem that key transparency helps to solve.
With this rolled out, the WhatsApp app itself will be able to detect, by default and without any manual verification, if FB attempts to MITM the connection.
While this doesn't make it technically impossible for Facebook to modify the app and servers, it does make it organizationally almost impossible to do so secretly. Such a move would require the involvement of numerous individuals across multiple teams and would be noticeable to security researchers through changes to the app.
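As a toy sketch of the check the client ends up doing (illustrative names and a stand-in hash function, not WhatsApp's or akd's actual API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash standing in for a real cryptographic hash.
fn h(data: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

// A Merkle inclusion proof: sibling hashes on the path from the
// contact's key up to the signed directory root.
struct InclusionProof {
    siblings: Vec<(u64, bool)>, // (sibling hash, sibling is the left child?)
}

// The client accepts a contact's key only if it hashes up to the same
// directory root that public auditors watch. To substitute a key, the
// server must either publish it in the audited log (detectable by
// auditors) or fork the log for this one client (detectable by
// comparing roots out of band).
fn verify_contact_key(directory_root: u64, contact_key: &[u8], proof: &InclusionProof) -> bool {
    let mut acc = h(contact_key);
    for (sibling, sibling_is_left) in &proof.siblings {
        let mut buf = Vec::with_capacity(16);
        if *sibling_is_left {
            buf.extend_from_slice(&sibling.to_be_bytes());
            buf.extend_from_slice(&acc.to_be_bytes());
        } else {
            buf.extend_from_slice(&acc.to_be_bytes());
            buf.extend_from_slice(&sibling.to_be_bytes());
        }
        acc = h(&buf);
    }
    acc == directory_root
}
```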
This approach is taking off in a bunch of similar problem spaces (web PKI, code signing, etc), so very exciting to see it applied here.
Randomly, and somewhat weirdly, Facebook actually offered one of the first Certificate Transparency monitoring tools, which made it possible to monitor all certificates issued for your domain using a very similar approach: https://www.facebook.com/notes/3497286220327506/
You're making my point: some Chinese Skype variant did this, back in 2009, and got caught.
There's just no way, in real life, for Facebook to add what you're describing to one of the most prominent messaging apps in the world without somebody noticing.
I'm not here to tell you that your WhatsApp messages are perfectly secure. If the CIA wants to read your messages, they'll probably just hit you with the wrench rather than go through some FB exec. But I do think that transparency logs are deeply under-appreciated for their ability to make undetected mass surveillance dramatically more challenging.
> There's just no way, in real life, for Facebook to add what you're describing to one of the most prominent messaging apps in the world without somebody noticing.
That assumes somebody is digging through each update and its thousands of classes. FFS, the OG Facebook app was already blowing past the limits of Android in 2013 [1], and the current WhatsApp app isn't much better - just look at the current APK file:
25MB of already-compressed Dalvik code - probably double that restored to Java class files, and triple to quadruple that as Java source files. It's impossible to audit that there is no routine pushing keys to, say, the usual analytics backend they use - and to make it worse, according to APKMirror, they push updates every few days [2].
Although my biggest question is... it's a fucking messenger app. Why does it produce a larger binary than a full-blown Linux kernel?!
Also, conversely, the kernel doesn't do that much. Most of the Linux kernel source consists of device drivers, which compile to modules rather than get bundled into vmlinuz. Many of these modules are also rarely built if ever. The kernel itself really is a pretty small fraction of the complete software bundle that makes a Linux system functional.
> There's just no way, in real life, for Facebook to add what you're describing to one of the most prominent messaging apps in the world without somebody noticing
Your point moved from "key transparency is the defense" to "someone will notice". But if your defense is the hope of someone noticing, you're in for a big surprise. Sometimes things go unnoticed. Look no further than OpenSSL: open source, used by billions, deployed by companies worth as much as small countries, and yet nobody noticed Heartbleed for years.
So I'll stay very cynical that a development flag targeting a handful of people in an app like WhatsApp, and removed afterwards, would be noticeable enough to act as a strong defense.
I think you are trying to say "it's never 100% secure", and the parent agrees with you. The parent is just saying "this is making it more secure (but not 100% secure)".
Or just hack the phone of those few clients with another attack vector. Doesn't mean that security is entirely useless. It depends on the threat model.
There are also tons of ways to exfiltrate data through known channels in ways that are difficult for security researchers to distinguish from otherwise secure app analytics code.
A crash/exception logging system, say, might appear to researchers to anonymize data, but it would be very possible for code to be written that happens to raise a mundane exception when specific users or geofences see specific words on screen, in a way where that list of users/geofences/words could be controlled by non-technical teams. The log message itself doesn't even need to carry sensitive data; its existence alone, when the trigger conditions are known, can be used to carry out a highly targeted attack.
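A toy illustration of that pattern (hypothetical names, not any real app's code):

```rust
use std::collections::HashSet;

// Hypothetical watch list, tweakable by a non-technical team via an
// ordinary config push.
struct WatchList {
    users: HashSet<u64>,
    words: Vec<String>,
}

// Stand-in for a real crash/analytics reporter.
fn log_exception(name: &str) {
    eprintln!("analytics: exception {name}");
}

// The report carries no message content; firing it only under
// attacker-chosen conditions makes its mere existence the signal.
fn on_message_rendered(user_id: u64, text: &str, watch: &WatchList) {
    if watch.users.contains(&user_id)
        && watch.words.iter().any(|w| text.contains(w.as_str()))
    {
        log_exception("RenderCache::miss"); // looks like a mundane cache error
    }
}
```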
Even open-source systems can be vulnerable to this: see e.g. https://github.com/signalapp/Signal-iOS/blob/eaed4da06347a3a... and consider the ways it might be possible for a small group of people at Signal to cause a specific set of messages to be seen as corrupt without raising any flags to the community auditing the code.
Of course, a lack of visibility into runtime errors can lead to vulnerabilities as well. I don't think the solution is for us as a community to advocate for removing all error analytics in distributed systems. But we can't ever forget: all analytics surfaces are attack surfaces.
Somehow I think this is still possible.
The engineers behind WhatsApp seem to be very talented, and they may be able to convince Zuck that an open client would increase trust in Meta's brands and increase usage (which can then be used to promote other Meta products).
If they keep the server side closed, that's totally fair, I think.
This solves a real issue. Certificate Transparency, introduced years ago by Google, surfaced a lot of misissued SSL certificates and fixed a large hole in the whole system. Of course, this is for WhatsApp and the impact is smaller, but still. Congrats to the Meta team.
With local intel suggesting malware gets installed when admins go to the station for "registration". Remember, this was before all the state-sponsored malware came to light a few years ago, so locally we have known this for quite some time now.
> So, if US law permits or requests in some way interception of communications, or that operators have to report certain activities, then your right to secrecy is done.
Yep. FISA Section 702 allows that, but supposedly only if you're not in the US and not a US citizen. Will an American get caught up in the net? Maybe? Oh, and it doesn't require a warrant. It's set to expire at the end of this year, but they've been known to renew it. https://www.eff.org/702-spying
Actually, this is exactly what Pavel Durov (Mark Zuckerberg's counterpart and founder of the Russian Facebook, VKontakte) did when Russian authorities asked him to reveal who helped organize the Maidan protests in 2013/2014.
Pretty soon he started receiving the standard "tax evader" treatment (i.e. offices being ransacked, veiled personal threats, etc.); his shareholders pushed him out, and he and his brother fled the country and started Telegram.
Pavel is a true libertarian who's stood by his beliefs against his own government, and lost control of his company as a result. Unlike Moxie Marlinspike (founder of Signal), who claims he is an "anarchist", Pavel walked the walk.
When he started Telegram, he pissed off his US investors, who got heat for Telegram being used by ISIS to communicate. The investors were pissed off that they never made a profit on Telegram and were mentally associated with helping ISIS. Although Pavel eventually did take action: https://www.wsj.com/articles/telegram-app-tackles-islamic-st...
Pavel also claims his team was approached by the CIA multiple times and they successfully resisted it. Telegram offices are nowhere to be found in Dubai: https://m.youtube.com/watch?v=Pg8mWJUM7x4
That is how you run a free speech absolutist social network that governments all want to control. Telegram is probably the most secure and trusted centralized social network (with Signal a distant second).
But that is insane. We don’t have to trust Pavel or Moxie to be our “last line of defense.” Why do we rely on giant, centralized corporations to host all our private conversations?
This was my response to Moxie’s critique of Web3 and decentralization:
As mentioned in the linked article, E2EE group chats are more or less impractical due to the identity verification problem. This initiative is intended to help with that. I will also point out that large group chats are impractical for the simple reason that not everyone will know everyone else, so someone can just leak the messages.
The Telegram method of dealing with this is obviously not the only way, but it is a legitimate way.
>Not to mention that even when chats are e2e encrypted, they are encrypted using their proprietary algorithm?
The algorithm is public. It is a straightforward application of well known primitives. It is hardly proprietary.
> The algorithm is public. It is a straightforward application of well known primitives. It is hardly proprietary.
Note that its predecessor was very much not that (e.g. https://words.filippo.io/dispatches/telegram-ecdh/ was a vulnerability in it, and it stuck to some weird choices of crypto primitives/key sizes for a pretty long time). This colors my expectations about the current version slightly.
That's like Mehdi Hasan nitpicking small factual inaccuracies in the Twitter files last week while ignoring the main discussion with Matt Taibbi about government censorship around the world.
Look, if people want to encrypt their chats on Telegram, they start a secret chat. That’s how it should be. Why should it be the default? Because you think people are idiots?
If I make a secret chat on Telegram, I trust it more than a default chat on Signal. Both are good, but one company is much harder to pressure than another.
And this is all a moot point - like arguing which homeless person is richer. If you want real privacy and control — simply communicate without using the infrastructure and software provided by centralized corporations!
> Look, if people want to encrypt their chats on Telegram, they start a secret chat. That’s how it should be. Why should it be the default? Because you think people are idiots?
Because everyone is an idiot once in a while (just after waking up, when drunk, when stressed, when sick, ...). Also, because the very presence of a secret chat is something that can be observed and can be enough to raise suspicion.
I know this is a bit of a cop-out, but even writing in a non-secret chat that Telegram can read, and then totally deleting a message with no visual trace on the counterpart's side, is less worrying to me than doing the same on the "e2e encrypted" WhatsApp, which shows "Message deleted" and, if I wait too long, prevents me from deleting the message at all. Telegram lets me delete everyone's messages, and even the entire chat, at any time. That shows where their head is at.
That said, you are right that not-on-by-default-for-everyone makes the encrypted chats more suspicious.
I have to say that I have a nuanced view on encryption, one that doesn't match the orthodoxy on HN:
If I understand your proposed world correctly (I understand it as morally equivalent to the escrow of ~all keys with k-of-n split across some well-chosen entities/people), I expect a person holding that view to support encryption-by-default even more strongly, because in a world that looks that way (and that way actually works as described) there is no apparent downside to that. I'm curious whether you disagree with any part of this.
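(For concreteness, I read "k-of-n split" as something like Shamir secret sharing. Here's a toy sketch over a tiny prime field with fixed "randomness", purely illustrative; a real deployment would use a large field, fresh randomness, and a vetted library.)

```rust
const P: i64 = 257; // toy prime field; real schemes use ~256-bit fields

// Evaluate the polynomial (coeffs[0] = secret) at x, mod P.
fn eval(coeffs: &[i64], x: i64) -> i64 {
    coeffs.iter().rev().fold(0, |acc, &c| (acc * x + c) % P)
}

// Split `secret` into n shares; any k of them reconstruct it. The
// caller supplies the k-1 "random" coefficients (fixed here for brevity).
fn split(secret: i64, rand_coeffs: &[i64], n: i64) -> Vec<(i64, i64)> {
    let mut coeffs = vec![secret % P];
    coeffs.extend_from_slice(rand_coeffs);
    (1..=n).map(|x| (x, eval(&coeffs, x))).collect()
}

// Modular inverse via Fermat's little theorem (valid since P is prime).
fn inv(a: i64) -> i64 {
    let (mut base, mut e, mut r) = (a.rem_euclid(P), P - 2, 1);
    while e > 0 {
        if e & 1 == 1 {
            r = r * base % P;
        }
        base = base * base % P;
        e >>= 1;
    }
    r
}

// Lagrange interpolation at x = 0 recovers the secret from any k shares.
fn reconstruct(shares: &[(i64, i64)]) -> i64 {
    let mut secret = 0;
    for &(xi, yi) in shares {
        let mut term = yi;
        for &(xj, _) in shares {
            if xi != xj {
                term = term * (-xj).rem_euclid(P) % P * inv((xi - xj).rem_euclid(P)) % P;
            }
        }
        secret = (secret + term).rem_euclid(P);
    }
    secret
}

fn main() {
    // 2-of-3 split of the secret 123, with fixed "randomness" 42.
    let shares = split(123, &[42], 3);
    assert_eq!(reconstruct(&shares[0..2]), 123); // any 2 shares suffice
    assert_eq!(reconstruct(&shares[1..3]), 123);
    println!("recovered: {}", reconstruct(&shares[..2]));
}
```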
OT: Do you have anything more concrete written on the choice of holders of escrow shares (so that they can be trusted to actually follow the audit rules)?
Thanks for reading through what I wrote and grokking it! It means a lot to me. Now we can discuss it.
Yes, for all private interactions/conversations I support encryption, provided it can be decrypted in the way I said. Obviously there is room for innovation in making it harder and harder to do bulk decryption without a proper reason and audit trail, and in somehow making sure that cameras can prove they aren't sending unencrypted video, or that encryption keys are secure. It's hard to prove a negative, but it's possible if verifiers can search the entire signal. Those innovations are part technological and part societal... but the underlying technology (like blockchain) has to exist first. Has anyone built it yet?
Now, having said that, I don't think encryption keys should be that hard to get for conversations within a corporation, and they probably shouldn't exist at all for public servants on duty. Today we have the opposite: NATO's promises to Gorbachev are secret, the Normandy-format talks were closed for years, the Ukraine-Russia negotiations were behind closed doors, and we don't know why they all failed. And regular people have to go to war because of their failure. I think if the government wants to know where my $600 goes, I should be able to know where trillions go.
Can you give an example of what you're referring to? I don't know of anything limiting memory/CPU/etc. in Erlang, at least for any individual gen_server. We have the Factory processes, which can gracefully load-shed, but that doesn't stop you from having a memory leak.
At least Rust doesn't have a garbage collector, so when the actor is stopped + dropped, it'll clean up not only its state but also its message queues, flushing them so that all memory is released at the time of drop.
I mean, lunatic runs within Wasm, which isn't the end of the world, but we compile down to native. Lunatic generally seems to want to be the be-all-end-all of concurrency, whereas we fit more into an existing Tokio-based environment.
As far as deployment, do you mean like how we came to build this? Or like how you'd actually deploy it...? Because that's really up to the program being built imo
> As far as deployment, do you mean like how we came to build this? Or like how you'd actually deploy it...? Because that's really up to the program being built imo
I mean that deployment of Erlang code on a BEAM cluster is pretty simple, iirc. I wonder if you have a solution that handles that part as conveniently as Erlang/Elixir/BEAM.
Ah yeah, we don't have hot updates like Erlang does (and we may never; it depends on what Rust would let us do). That being said, we aren't running on a runtime; Rust code is native-compiled. You could take down and upgrade a node in a cluster as long as the network protocol doesn't change (or is at least backwards-compatible), but you wouldn't be able to, like, upgrade a single actor on a node.
I suspect that it would be possible to somehow implement hot-updates with native, if you're willing to use some container as your "runtime". I haven't investigated this seriously, though, so I may be wrong.