Hacker News | blintz's comments

Doesn’t rustc emit LLVM IR? Are there a lot of systems that LLVM doesn’t support?

rustc can use a few different backends. By my understanding, the LLVM backend is fully supported, the Cranelift backend is either fully supported or nearly so, and there's a GCC backend in the works. In addition, there's a separate project to create an independent Rust frontend as part of GCC.

Even then, there are still some systems that will support C but won't support Rust any time soon: systems with old compilers or compiler forks, and systems with unusual data types that violate Rust's assumptions (like non-8-bit bytes, IIRC).


There are a number of oddball platforms LLVM doesn't support, yeah.

Many organizations and environments will not switch themselves to LLVM just to ham-fist in compiled Rust code. Nor does the fact that LLVM supports something in principle mean that it's installed on the relevant OS distribution.

Using LLVM somewhere in the build doesn't require that you compile everything with LLVM. It generates object files, just like GCC, and you can link together object files compiled with each compiler, as long as they don't use compiler-specific runtime libraries (like the C++ standard library, or a polyfill compiler-rt library).

`clang-cl` does this with `cl.exe` on Windows.
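
A minimal sketch of that workflow (assuming gcc and clang are both installed on a Linux/x86-64 box, and two hypothetical files main.c and helper.c; the build-script wrapper is just for illustration):

    import subprocess

    # Compile one translation unit with each compiler...
    subprocess.run(["gcc", "-c", "main.c", "-o", "main.o"], check=True)
    subprocess.run(["clang", "-c", "helper.c", "-o", "helper.o"], check=True)

    # ...then link the mixed object files with either one. This works as
    # long as neither object pulls in a compiler-specific runtime library.
    subprocess.run(["gcc", "main.o", "helper.o", "-o", "prog"], check=True)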


If you're developing, you generally have control over the development environment (+/-) and you can install things. Plus, that already reduces the audience: the set of people with oddball hardware (as someone here put it) intersected with the set of people with locked-down development environments.

Not to mention that, conceptually, people with locked-down environments are precisely those who would really want the extra safety offered by Rust.

I know that real life is messy but if we don't keep pressing, nothing improves.


> If you're developing, you generally have control over the development environment

If you're developing something individually, then sure, you have a lot of control. When you're developing as part of an organization or a company, you typically don't. And if there's non-COTS hardware involved, you are even more likely not to have control.


> Since ML-KEM is supported by the NSA, it should be assumed to have a NSA-known backdoor that they want to be used as much as possible

AES and RSA are also supported by the NSA, but that doesn’t mean they were backdoored.


AES and RSA had enough public scrutiny to make backdooring them imprudent.

The standardization of an obviously weaker option than more established ones is difficult to explain with security reasons, so the default assumption should be that there are insecurity reasons.


There was lots of public scrutiny of Kyber (ML-KEM); DJB made his own submission to the NIST PQC standardization process. A purposely introduced backdoor in Kyber makes absolutely no sense; it was submitted by 11 respected cryptographers, and analyzed by hundreds of people over the course of standardization.

I disagree that ML-KEM is "obviously weaker". In some ways, lattice-based cryptography has stronger hardness foundations than RSA and EC (specifically, worst-case to average-case reductions).

ML-KEM and EC are definitely complementary, and I would probably only deploy hybrids in the near future, but I don't begrudge others who wish to do pure ML-KEM.


I don't think anyone is arguing that Kyber is purposefully backdoored. They are arguing that it (and basically every other lattice-based method) has lost a minimum of ~50-100 bits of security in the past decade (and half of the round 1 algorithms were broken entirely). The reason I can only give ~50-100 bits as the amount Kyber has lost is that attacks are progressing fast enough, and analysis of attacks is complicated enough, that no one has actually published a reliable estimate of how strong Kyber is with all known attacks put together.

I have no knowledge of whether Kyber at this point is vulnerable given whatever private cryptanalysis the NSA definitely has done on it, but if Kyber is adopted now, it will definitely be in use 2 decades from now, and it's hard to believe that it won't be vulnerable/broken then (even with only publicly available information).


Source for this loss of security? I'm aware of the MATZOV work but you make it sound like there's a continuous and steady improvement in attacks and that is not my impression.

Lots of algorithms were broken, but so what? Things like Rainbow and SIKE are not at all based on the hardness of solving lattice problems.


> AES and RSA had enough public scrutiny to make backdooring them imprudent.

Can you elaborate on the standard of scrutiny that you believe AES and RSA (which were standardized at two very different maturation points in applied cryptography) met that hasn't been applied to the NIST PQ process?


SHA-2 was designed by the NSA. Nobody is saying there is a backdoor.


I think it's established that the NSA backdoors things. It doesn't mean they backdoor everything. But scrutiny is merited for each new thing the NSA endorses: we have to wonder and ask why, and if we can't explain why something is a certain way and not another, we should be cautious of it and call it out. This is how they've operated for decades.


Sure. I'm not American either. I agree, maximum scrutiny is warranted.

The thing is these algorithms have been under discussion for quite some time. If you're not deeply into cryptography it might not appear this way, but these are essentially iterations on many earlier designs and ideas and have been built up cumulatively over time. Overall it doesn't seem there are any major concerns that anyone has identified.

But that's not what we're actually talking about. We're talking about whether creating an IETF RFC for people who want to use solely ML-KEM is acceptable or not - and given that the most famous organization proposing to do this is the US Federal Government, it seems bizarre in the extreme to accuse them of backdooring what they actually intend to use for themselves. As I said, though, this does not preclude the rest of the industry having and using hybrid KEMs, which, given what Cloudflare, Google etc. are doing, we likely will.


One does not place backdoors in hash algorithms. It's much more interesting to place backdoors in key agreement protocols.


How would NSA have "placed" a backdoor in Kyber? NSA didn't write Kyber.


TLS 1.3 did do that, but it also fixed the ciphersuite negotiation mechanism (and got formally verified). So downgrade attacks are a moot point now.


Standardizing a codepoint for a pure ML-KEM version of TLS is fine. TLS clients always get to choose what ciphersuites they support, and nothing forces you to use it.

He has essentially accused anyone who shares this view of secretly working for the NSA. This is ridiculous.

You can see him do this on the mailing list: https://mailarchive.ietf.org/arch/browse/tls/?q=djb


> standardizing a code point (literally a number) for a pure ML-KEM version of TLS is fine. TLS clients always get to choose what ciphersuites they support, and nothing forces you to use it.

I think the whole point is that some people would be forced to use it due to other standards picking previously-standardized ciphers. He explains and cites examples of this in the past.

> He has essentially accused anyone who shares this view of secretly working for the NSA. This is ridiculous.

He comes with historical and procedural evidence of bad faith. Why is this ridiculous? If you see half the submitted ciphers being broken, and lies and distortions being used to shove the others through, and historical evidence of the NSA using standards as a means to weaken ciphers, why wouldn't you equate that to working for the NSA (or something equally bad)?


> I think the whole point is that some people would be forced to use it due to other standards picking previously-standardized ciphers. He explains and cites examples of this in the past.

If an organization wants to force its clients or servers to use pure ML-KEM, they can already do this using any means they like. The standardization of a TLS ciphersuite is beside the point.

> He comes with historical and procedural evidence of bad faith. Why is this ridiculous?

Yes, the NSA has nefariously influenced standards processes. That does not mean that in each and every standards process (especially the ones that don't go your way) you can accuse everyone who disagrees with you, on the merits, of having some ulterior motive or secret relationship with the NSA. That is exactly what he has done repeatedly, both on his blog and on the list.

> why wouldn't you equate that to working for the NSA (or something equally bad)?

For the simple reason that you should not accuse another person of working for the NSA without real proof of that! The standard of proof for an accusation like that cannot be "you disagree with me".


> The standard of proof for an accusation like that cannot be "you disagree with me".

How is that the standard he's applying, though? Just reading his post, it's clearly "you're blatantly and repeatedly lying, and distorting the facts, and not even addressing my arguments". Surely "you disagree with me" is not an accurate characterization of this?


Let's invert that thinking. Imagine you're the "security area director" referenced. You know that DJB's starting point is assumed bad faith on your part, and that because of that starting point DJB appears bound in all cases to assume that you're a malicious liar.

Given that starting point, you believe that anything other than complete capitulation to DJB is going to be rejected. How are you supposed to negotiate with DJB? Should you try?


To start with, you could not lie about what the results were.


Your response focuses entirely on the people involved, rather than the substance of the concerns raised by one party and upheld by 6 others. I don't care if 1 of the 7 parties regularly drives busloads of orphans off a cliff, if the concerns have merit, they must be addressed. The job of the director is to capitulate to truth, no matter who voices it.

Any personal insults one of the parties lobs at others can be addressed separately from the concerns. An official must perform their duties without bias, even concerning somebody who thinks them the worst person in the world, and makes it known.

tl;dr: sometimes the rude, loud, angry constituent at the town hall meeting is right


Sunlight is the best disinfectant. I see one group of people shining it and another shading the first group.

Someone who wants to be seen as acting in good faith (and cryptography standards folks should want this), should be addressing the substance of what he said.

Consensus doesn't mean "majority rule", it requires good-faith resolutions (read: not merely responses like 'nuh-uh') to the voiced concerns.


Love atuin - it has saved my ass more times than I can count. The more you guys can monetize the better; it will help keep the base product good. Even pretty senior devs (who don't always love changing their workflows) can find a lot of value in it.

I would pay you guys for E2EE syncing, but I think it’s free at the moment. Charge me!


It’s symmetric keys, so quantum doesn’t matter.


<pedantry>

"On the other hand, symmetric algorithms such as AES are believed to be immune to Shor. In most cases, the best-known quantum key recovery attack uses Grover’s algorithm which provides a generic square-root speed-up over classical exhaustion in terms of the number of queries to the symmetric algorithm. In other words, Grover would recover the 256-bit key for AES-256 with around 2^128 quantum queries to AES compared to around 2^256 classical queries for exhaustion. "

- https://csrc.nist.gov/csrc/media/Events/2024/fifth-pqc-stand...

</pedantry>

The paper itself concludes that "the practical security impact of Grover with existing techniques on plausible near-term quantum hardware is limited."
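
For concreteness, a back-of-envelope sketch of that square-root speed-up (counting only queries, which the quoted paper argues understates the practical difficulty):

    # Grover halves the effective bit-security of a k-bit symmetric key.
    for k in (128, 256):
        print(f"AES-{k}: ~2^{k} classical queries, ~2^{k // 2} with Grover")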


> This page measures the concentration of the Fediverse and the Atmosphere according to the Herfindahl–Hirschman Index (HHI), an indicator from economics used to measure competition between firms in an industry. Mathematically, HHI is the sum of the squares of market shares of all servers.

I had not heard of this metric before - it’s neat and simple to understand. If you scaled it down to 0-100 (by dividing by 100), I think it would make the numbers more immediately understandable. I’d even consider inverting it (so 0 = centralized and 100 = decentralized), since the website title implies measuring progress ‘towards’ decentralization.
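
A quick sketch of the metric and the proposed rescaling (with a made-up three-server network; shares are percentages of posts/users):

    def hhi(shares_percent):
        # Sum of squared market shares; 10,000 = one server has everything.
        return sum(s * s for s in shares_percent)

    shares = [50, 25, 25]        # hypothetical fediverse
    raw = hhi(shares)            # 3750 on the usual 0-10,000 scale
    scaled = raw / 100           # 37.5 on a 0-100 scale
    inverted = 100 - scaled      # 62.5 if 100 = fully decentralized
    print(raw, scaled, inverted)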


OTOH the reason why they didn't normalize to 100 may be to not give people an idea that the measure is linear; seeing a score of 2500 makes you ask 'what does it mean?' whereas if you were presented with 25/100 you probably wouldn't think that it is 'highly concentrated'.


I'm surprised at how normal some of the unseen words are. I expected them to all be archaic or niche, but many are pretty reasonable: 'congregant', 'definer', 'stereoscope'.


For what it's worth, there's 1.7bn posts on Bluesky according to this: https://bsky.jazco.dev/stats

The dictionary site has only checked 4,920,000 posts, which is 0.28% of all messages.


It now claims to have checked 11 million posts but only seen "the" 16 thousand times. I'm not sure its numbers are entirely reliable.


It's likely that the commenter has read less than 5 million posts worth of text though. So perhaps this still points to a lack of diversity in content.


You got me wondering. Supposing the average post is 10 words, and a typical page of text is 250 words, that would only be ~50 pages of text a day over the last 10 years. Which I don't think I manage, but over 20 years I am probably in that window.
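
The arithmetic, spelled out (all three inputs are the rough guesses above):

    posts = 5_000_000             # posts checked by the site
    words = posts * 10            # ~10 words per post
    pages = words / 250           # 250 words/page -> 200,000 pages
    print(pages / (10 * 365))     # ~55 pages/day over 10 years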


dentel, exclaustrations, gryding, datolite, frabbing?


I can't keep up with all these new Pokemon.


I say this as a lover of FHE and the wonderful cryptography around it:

While it’s true that FHE schemes continue to get faster, they don’t really have hope of being comparable to plaintext speeds as long as they rely on bootstrapping. For deep, fundamental reasons, bootstrapping isn’t likely to ever be less than ~1000x overhead.

When folks realized they couldn't speed up bootstrapping much more, they started talking about hardware acceleration, but it's a tough sell at a time when every last drop of compute is going into LLMs. What $/token cost increase would folks pay for computation under FHE? Unless it's >1000x, it's really pretty grim.

For anything like private LLM inference, confidential computing approaches are really the only feasible option. I don’t like trusting hardware, but it’s the best we’ve got!


There is an even more fundamental reason why FHE cannot realistically be used for arbitrary computation: some computations have much larger asymptotic complexity on encrypted data than on plaintext.

A critical example is database search: searching through a database of n elements is normally done in O(log n), but it becomes O(n) when the search key is encrypted. This means that fully homomorphic Google search is fundamentally impractical, although the same cannot be said of fully homomorphic DNN inference.
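
A plaintext stand-in for why the encrypted case is linear (the equality test here is an ordinary comparison; in real FHE it would be a homomorphic circuit whose result the server cannot see, which is exactly why it cannot branch or stop early):

    def oblivious_lookup(db, key):
        # The server combines *every* record with a selection bit,
        # so the cost is O(n) per query no matter what the key is.
        result = 0
        for i, record in enumerate(db):
            match = 1 if i == key else 0   # would be an FHE equality circuit
            result += match * record
        return result

    print(oblivious_lookup([10, 20, 30, 40], key=2))  # 30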


There has been a theoretical breakthrough that makes search an O(log n) problem, actually (https://eprint.iacr.org/2022/1703), but it is pretty impractical (and not getting much faster).


Good point. Note however that PIR is a rather restricted form of search (e.g., with no privacy for the server), but even so, DEPIR has polylog(n) queries (not log n), and requires superlinear preprocessing and a polynomial blowup in the size of the database. I think recent concrete estimates are around a petabyte of storage for a database of 2^20 words. So as you say, pretty impractical.


Even without bootstrapping, FHE will never be as fast as plaintext computation: the ciphertext is about three orders of magnitude larger than the plaintext data it encrypts, which means you need more memory bandwidth and more compute. You can't bridge this gap.


Technically, there are rate-1 homomorphic encryption schemes, where ‘rate’ refers to the size ratio between the plaintext and the ciphertext. They’re not super practical, so your general point stands.


Oh, interesting. Can you point to a paper about one?



Thank you, I’ll give it a read.


That actually sounds pretty reasonable and feels almost standard at this point?

To pick one out of a dozen possible examples: I regularly read 500 word news articles from 8mb web pages with autoplaying videos, analytics beacons, and JS sludge.

That’s about 3 orders of magnitude for data and 4-5 orders of magnitude for compute.


Sure, but downloading a lot of data is not the same as computing on that data. With the web, you simply download the data and pass pointers to it around. With FHE, you have to compute on extremely large ciphertexts, using every byte of them. FHE is roughly 1000x more data to process, and it takes about 1000x more time.


I don't remember the last time I saw a news page that was <50mb


There’s still Drudge Report.

https://www.drudgereport.com


This is basically RSS.


Don't you think there is a market for people who want services that have provable privacy even if it costs 1,000 times more? It's not as big a segment as Dropbox but I imagine it's there.


FHE solves privacy-from-compute-provider and doesn't affect any other privacy risks of the services. The trivial way to get privacy from the compute provider is to run that compute yourself - we delegate compute to cloud services for various reasonable efficiency and convenience reasons, but a 1000-fold less efficient cloud service usually isn't competitive with just getting a local device that can do that.


???

For the equivalent of $500 in credit you could self host the entire thing!


You're not joking. If you're like most people and have only a few TiB of data in total, self hosting on a NAS or spare PC is very viable. There are even products for non-technical people to set this up (e.g. software bundled with a NAS). The main barrier is having an ISP with a sufficient level of service.


Sure, hardware is cheap.

However if you actually follow the 3-2-1 rule with your backups, then you need to include a piece of real estate in your calculation as well, which ain’t cheap.


I have true 3-2-1 backups on a server running Proxmox with 32 cores, 96GB of RAM, and 5TB of SSDs (2TB usable for VMs). Cost me $1500 for the new server hardware 2 years ago. Runs in my basement and uses ~30W of power on average (roughly $2.50/mo). The only cloud part is the encrypted backups at Backblaze, which cost about $15/mo.

It's a huge savings over a cloud instance of comparable performance. The closest match on AWS is ~$1050/mo and I still have to back it up.

The only outage in 2 years was last week, when there was a hardware failure of the primary SSD. I was back up and running within a few hours and had to leverage the full 3-2-1 backup depth, so I am confident it works.

If I was really desperate I could have deployed on a cloud machine temporarily while I got the hardware back online.
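
A rough sketch of those numbers (the only assumed figure is an electricity price of ~$0.115/kWh; the rest are from above):

    watts = 30
    power = watts / 1000 * 730 * 0.115     # ~$2.52/month at 730 h/month
    self_hosted = power + 15               # plus Backblaze backups
    cloud = 1050                           # comparable AWS instance
    print(1500 / (cloud - self_hosted))    # hardware pays back in ~1.5 months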


Only $1500? How much would this setup cost today?


Some quick checking on Newegg and I came up with this. https://newegg.io/64113a4 About $1200. I didn’t look into the power draw for this setup. Added bonus there is space for a GPU if you want to do some AI stuff.


Each 2TB of SSD is like $85, double it if you want local redundancy in your software RAID.

The rest is basically a nice custom PC minus a cool case and a high end GPU. A 9950X is $500, a 2x48GB kit is maybe $200. A few hundo more for a mobo, PSU and basic case.


If you self-host your NAS, then your server has access to the data in clear to do fancy stuff, and you can make encrypted backups to any cloud you like, right?


Some people I know make a deal with a friend or relative to do cross backups to each others' homes. I use AWS Glacier as my archival backup, costs like 3 bucks a month for my data; you could make a copy onto two clouds if you like. There are tools to encrypt the backups transparently, like the rclone crypt backend.


You don't need homomorphic encryption for a backup, normal encryption suffices.


I keep a small backup drive at my office which I bring home each month to copy my most sensitive documents and photos onto.

All my ripped media could be ripped again: I only actually have a couple of TB of un-lose-able data.


FHE is so much more expensive that it would still be cheaper.


But if you have a lot of data, self hosting is still cheaper.

It's always gonna be cheaper because you don't have the cloud provider's profit margin, which can be quite high.


It can be quite high, but it doesn't have to be. For instance, I have a 7TB storage server from Hosthatch that's $190 for 2 years. That's $7.92 per month, or £5.88 at today's exchange rates. That's under 20p per day.

Just on electricity costs alone, this is good value. My electricity costs are 22.86p/kWh which is pretty cheap for the UK. That means that if having that drive plugged in and available 24/7 uses more than 37W, it's more expensive to self host at home than rent the space via a server. Also, I've not needed to buy the drive or a NAS, nor do I have to worry about replacing hardware if it fails.
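
Spelling out that break-even calculation (assuming 730 hours in a month; slightly different month-length or exchange-rate assumptions give ~35W here vs the ~37W above):

    monthly_cost = 5.88                     # £/month for the rented server
    price_per_kwh = 0.2286                  # £/kWh
    per_watt_month = 0.730 * price_per_kwh  # ~£0.167 to run 1W for a month
    print(monthly_cost / per_watt_month)    # break-even at ~35W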


Do they offer deals like that often? List price is "from $24/month" for 6TB (no further details provided without registering an account).


They tend to do promotions, typically only valid for 24h and only advertised on certain forums like LET, a couple of times per year - typically at least around their company anniversary date or Black Friday.

There are others too, e.g. Servarica who keep their Black Friday offers running all year round.


> There are others too, e.g. Servarica who keep their Black Friday offers running all year round.

I don’t understand the logic here so I’m going to assume I’m being obtuse. Doesn’t that just mean that’s their standard price? Why or how would you ever pay more?


Yeah, I kind of agree in the latter case. Black Friday deals often have lower priority support etc.

I guess with Servarica, they have their standard deals, but their Black Friday deals are generally thin margins, though still enough to cover costs. Typically every year, they have special deals that are a bit different from their previous offerings. As a result, some people prefer the previous deals and some prefer the new ones, so they keep them all going. It's a bit unusual. They've also got a few interesting deals, like one where you start with N TB and it grows a bit every day. If you keep these more than about 3-4 years, they're probably better value for money, but I think you're paying too much in the first few years. It's interesting if your primary use case is incremental backups.

Hosthatch's deals are a bit different as they're usually preorders and at almost cost with basically minimal support, whereas they keep their normal stuff in stock and have higher support levels.

I should also add that I've not personally used Servarica, even though they look interesting - just because they only have a Canadian datacenter. I have 4 Hosthatch servers spread all over the globe so that I have more redundancy in my backups. I only buy them when they have deals, assuming I don't miss them as they're only for 24h.


For very large amounts of data, the cloud provider can hit economies of scale using tape drives ($$$$ to buy a tape drive yourself) or enterprise-class hard drives (very loud + high price of entry if you want redundancy + higher failure rate than other storage). That's why storing data in the slower storage classes in S3 and other object stores is so cheap compared to buying and replacing drives.


The statements made in the linked description of this cannot be true, such as Google not being able to read what you sent them and not being able to read what they responded with.

Having privacy is a reasonable goal, but VPNs and SSL/TLS provide enough for most, and at some point you're also just making yourself a target for someone with the power to undo your privacy and watch you more closely - why else would you go through the trouble unless you were hiding something? It's the same story with Tor, VPN services, etc. - those can be compromised at will. Not to say you shouldn't use them if you need some level of security functionally, but no one with adequate experience believes in absolute security.


> The statements made in the linked description of this cannot be true, such as Google not being able to read what you sent them and not being able to read what they responded with.

The beautiful thing is: they are :-)


If Google’s services can respond to queries, they must be able to read them.

If A uses a cereal-box cipher and B has a cereal-box cipher, B can make sense of encoded messages A sends them: A can ask about the weather, and B can reply with an encoded response that A can decode and read. B is able to read A's decoded query, B knew what the weather was, and B responded to A with that information.

Security is not magic.


What do you think fully homomorphic encryption is, then?


This is pointless, but I'll try anyway.

Yes, they can read both. But it's just gobbledygook to them. If you send them a "nonsense" query, they can reply with a "nonsense" response which is actually carefully computed to be something you can make sense of. But they can't make sense of it, other than knowing it should be relevant to the query you sent them.


The thing that you find magical is not only actually possible but implemented and in use! What a day for you! Enjoy it, this is a rare event :-D


If we are talking 1000x more latency, that is a pretty hard sell.

Something that normally takes 30 seconds now takes over 8 hours.


It's like, Python can be 400 times slower than C++, but people still use it.


If Python devs/users had to actually use all pure Python libraries, no C bindings or Rust bindings, no RPC to binaries written in faster languages, it would get dropped for a ton of use cases, absolutely including its most prominent ones (machine learning, bioinformatics, numeric analysis, etc.).


It would probably especially include those before most others. The best thing about Python IMO is the FFI and the ecosystem built around it.


Yeah, because people use Python when it doesn't matter and C++ when it does (including implicitly, by calling modules that are backed by C implementations).

That is not an option with FHE. You have to go all in.


Yes but with FHE it also depends on the use-case and how valuable the output is and who is processing it and decrypting the final output.

There are plenty of viable schemes like proxy re-encryption, where you operate on a symmetric key and not on a large blob of encrypted data.

Or financial applications where you are operating on a small set of integers, the speed is not an issue and the output is valuable enough to make it worth it.

It only becomes a problem when operating FHE on a large encrypted dataset to extract encrypted information. The data extracted will need to offset the costs. As long as companies don't care about privacy, this use-case is non-existent, so it's not a problem that it's slow.

For military operations, on the other hand, it might be worth the wait to run a long-running process.


And people will use FHE where it matters and plaintext where it doesn’t…


For compute, which is a small part of things computers do. Many things are I/O and network bound.

I’m not at all a fan of Python, but perf is the least of my concerns with it.


Or more like: something that normally takes 50ms, like an HTTP request, would take a minute.


For LLM inference, the market that will pay $20,000 for what is now $20 is tiny.


There is: it's called governments. However, this technology is so slow that using it in mission-critical systems (think communication/coordinates during warfare) is not feasible IMO.

The parent post is right; confidential compute is really what we've got.


Honestly, no? Unless you get everyone using said services, a market that is only viable for people trying to hide bad behavior becomes the place you look for people doing bad things.

This is a large part of why you have to convince people to hide things even if "they have nothing to hide."


For most, this would mean specially treating only a subset of all the sensitive data they have.


I get that there is a big LLM hype, but is there really no other application for FHE? Like, for example, trading algorithms (not the high-speed ones) that you can host on random servers knowing your stuff will be safe, or something similar?


I speak as someone who used to build trading algorithms (not the high-speed ones) for a living for several years, so I know that world pretty well. I highly doubt anyone who does that would host their stuff on random servers even if you had something like FHE. Why? Because it's not just the code that is confidential.

1) If you are a registered broker-dealer, you will just incur a massive amount of additional regulatory burden if you want to host this stuff on any sort of "random server".

2) Whoever you are, you need the pipe from your server to the exchange to be trustworthy, so no-one can MITM your connection and front-run your (client's) orders.

3) This is an industry where when people host servers in something like an exchange data center it's reasonably common to put them in a locked cage to ensure physical security. No-one is going to host on a server that could be physically compromised. Remember that big money is at stake and data center staff typically aren't well paid (compared to someone working for an IB or hedge fund), so social engineering would be very effective if someone wanted to compromise your servers.

4) Even if you are able to overcome #1 and are very confident about #2 and #3, even for slow market participants you need to have predictable latency in your execution or you will be eaten for breakfast by the fast players[1]. You won't want to be on a random server controlled by anyone else, in case they suddenly do something that affects your latency.

[1] For example, we used to have quite slow execution ability compared with HFTs and people who were co-located at exchanges, so we used to introduce delays when we routed orders to multiple exchanges so the orders would arrive at their destinations at precisely the same time. Even though our execution latency was high, this meant no-one who was colocated at the exchange could see the order at one exchange and arb us at another exchange.


But shouldn't proper FHE address most of these concerns? I mean, most of those extra measures are exactly because if you can physically access the server, it's game over. With FHE, if the code is trusted, even tampering with the hardware should not compromise the software.


How does FHE help with someone executing a process on the server that affects the latency of your trading algo? eg by sucking up the CPU resources you need to do FHE.

How does FHE help with the fact that regulators generally want single-tenant shared-nothing for registered broker/dealers? Have you tried to explain a technical mitigation like FHE to a financial regulator? I have, there are 2 standard responses:

1) (in the US) "We strongly prefer single-tenant shared nothing. I won't officially say whether or not we deem your technical mitigation of using FHE to be sufficient. If we think it's insufficient we may take regulatory action against you in the future. Us not taking action doesn't mean we think it's sufficient."

2) (in places like Switzerland) "We strongly prefer single-tenant shared nothing. I'm not sure I fully understand the technical mitigation of FHE you are putting in place, but I'm going to increase your regulatory capital reserves. Send us some more white papers describing the solution and we may not increase your capital reserves further".

Singapore is the only exception where you have a regulator who is tech-savvy and will give you a clear answer as to whether something or not is OK.


Why would latency matter if the trading we're talking about isn't high-speed?


I give a concrete example in the GP post but the reason is that the high-speed people can take advantage of you in certain circumstances if you don’t have extremely accurate timing of things like order placement.

As another example, imagine you are placing an options order on one exchange and a cash hedge on another exchange (eg for a delta hedge). If someone sees one half of your order and has faster execution than you, they can trade ahead of you on the other leg of your trade, which increases your execution cost. This is even more important if you’re doing something like an index options trade on one side and the cash basket (all the stocks in the index) on the hedge side.

The fix for this is to use hi-res exchange timestamps (which the exchange gives you on executed trades) to tune a delay on one leg of your execution so both halves hit at precisely the same time. This ensures that HFTs can’t derive an information advantage from seeing one half of your trade before you place the other half of the order.
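
A hypothetical sketch of that delay tuning (all latencies are made-up illustrative numbers; a real system would derive them from the exchange timestamps mentioned above and schedule far more precisely than time.sleep allows):

    import time

    latency_ms = {"options_venue": 4.2, "cash_venue": 1.1}  # measured one-way
    slowest = max(latency_ms.values())

    def send_matched(venue, order, send):
        # Hold the faster leg so both orders land at the same instant.
        time.sleep((slowest - latency_ms[venue]) / 1000)
        send(venue, order)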


I encountered a situation where one company had the data and considered it really valuable, and did not want to show/share it. Another company had a model, which was considered very valuable, and did not want to show it. So they were stuck in a catch-22. Eventually they solved the perceived risk via contracts, but it could have been solved technically if FHE were viable.


I think the only thing that could make FHE truly world-changing is if someone figures out how to implement something like multi-party garbled circuits under FHE where anyone can verify the output of functions over many hidden inputs since that opens up a realm of provably secure HSMs, voting schemes, etc.


I'd also like to comment on how everything used to be a PCIE expansion card.

Your GPU was, and we also used to have dedicated math coprocessor accelerators. Now most of the expansion card tech is done by general-purpose hardware, which, while cheaper, will never be as good as custom dedicated silicon focused on only one task.

It's why I advocate for a separate ML/AI card instead of using GPUs. Sure, there is hardware architecture overlap, but you're sacrificing so much because your AI cards are founded on GPU hardware.

I'd argue the only real AI accelerators are something like what goes into modern SXM sockets. This ditches the power issues and opens up more bandwidth. However, only servers have SXM sockets... and those are not cheap.


> most of the expansion card tech is all done by general purpose hardware, which while cheaper will never be as good as a custom dedicated silicon chip that's only focused on 1 task

I think one reason they can be as good as or better than dedicated silicon is that they can be adjusted on the fly. If a hardware bug is found in your network chip, too bad. If one is found in your software emulation of a network chip, you can update it easily. What if a new network protocol comes along?

Don't forget the design, verification, mask production, and other one-time costs of making a new type of chip are immense ($millions at least).

> Its why I advocate for a separate ML/AI card instead of using GPU's. Sure their is hardware architecture overlap but your sacrificing so much because your AI cards are founded on GPU hardware.

I think you may have the wrong impression of what modern GPUs are like. They may be descended from graphics cards, but today they are designed fully with the AI market in mind. And they are designed to strike an optimal balance between fixed functionality, for super-efficient calculations that we believe AI will always need, and programmability, to allow innovation in algorithms. Anything more fixed would be unviable immediately, because AI would have moved on by the time it could hit the market (and anything less fixed would be too slow).


Thx! I'm curious about your thoughts...

- FHE for classic key-value stores and simple SQL database tables?

- the author's argument that FHE is experiencing an accelerated Moore's law, and therefore will close the 1000x gap quickly?

Thx!


From your perspective: which FHE is actually usable? Or is only PHE actually usable?


Interesting! Can you provide some sources for this claim?


This isn’t some conspiracy, it’s just CYA. They know your general location from your IP and device APIs, they don’t encrypt business messaging, and they comply with subpoenas.


Search: