Hacker News: rbanffy's comments

This is a wonderful website.

Acorn’s UNIX had the IXI desktop, which was, back then, the absolute pinnacle of user-friendly Unix. IIRC, IBM’s AIX for the PS/2 also had it or something very similar.

OpenLook was always prettier, but Motif was more fashionable with all those 3D buttons.

Sad. No remote airline pilot positions…

Maybe for cargo in a few years.


If the photos on the phone are visible as files on a mounted filesystem, you can use rsync to copy them. If the connection drops but recovers by itself, you can wrap rsync in a retry loop and stop once a run completes with nothing left to transfer.

I’m using Dropbox for syncing photos from my phone to a Linux laptop, and mounting the SD card locally for cameras, so this is a guess.
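The retry idea above can be sketched in Python. This is a hypothetical wrapper, not a known tool; the rsync flags in the comment (`-a`, `--partial`, `--itemize-changes`) are standard, but the paths are placeholders:

```python
import subprocess
import time

def run_until_idle(cmd, max_tries=100, delay=5):
    """Re-run a sync command until it exits cleanly and reports no changes.

    Retries on non-zero exit (e.g. a dropped connection) and stops once a
    run succeeds with empty output, i.e. nothing was left to transfer.
    """
    for _ in range(max_tries):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0 and not result.stdout.strip():
            return True  # clean run, nothing left to copy
        time.sleep(delay)
    return False

# With rsync, --itemize-changes prints one line per changed file, so an
# empty output on a successful run means the copy is complete, e.g.:
# run_until_idle(["rsync", "-a", "--partial", "--itemize-changes",
#                 "/mnt/phone/DCIM/", "/home/user/photos/"])
```

`--partial` keeps partially transferred files around, so each retry resumes instead of restarting large photos from scratch.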


I have a cluster of 4 RPi Zero Ws and network reliability is not great. Since it is for the chaos, it’s fine, but it’s very common to have a node be offline at any given time.

Even worse, the control plane is exposed, but for something that runs three Hercules mainframe emulators and two Altairs with MP/M, it’s fine.


I have a bunch of HA WiFi-connected sensors, and I see them drop off and reconnect all the time, which is most annoying.

Not sure why this happens to you. I have HA with several dozen WiFi devices and only two (one relay, one sensor) that disconnect regularly; both have a poor WiFi signal, one in a basement and one far from the AP. Almost all are on 2.4 GHz, not by choice, but they work well.

Wouldn’t this be useful for clustering Macs over TB5? Wasn’t the maximum bandwidth over USB cables 5 Gbps? With a switch, you could cluster more than just 4 Mac Studios and have a couple of terabytes for very large models to work with.

I was hoping somebody would suggest that (and eventually try it out).

With TB5 and deep pockets, you could also benchmark it against a setup with dedicated TB5 enclosures (e.g., Mercury Helios 5S).

TB5 has PCIe 4.0 x4 instead of PCIe 3.0 x4 -- that should give you 50 GbE half-duplex instead of 25 GbE. You would need a different network card though (ConnectX-5, for example).
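The arithmetic behind that claim, as a back-of-the-envelope sketch (assuming 128b/130b line encoding and ignoring protocol overhead):

```python
def pcie_gbps(transfer_rate_gt, lanes, encoding=128 / 130):
    """Approximate usable PCIe bandwidth in Gbps, per direction.

    transfer_rate_gt: per-lane transfer rate in GT/s
    (8 GT/s for PCIe 3.0, 16 GT/s for PCIe 4.0).
    """
    return transfer_rate_gt * lanes * encoding

pcie3_x4 = pcie_gbps(8, 4)   # ~31.5 Gbps: enough headroom for a 25 GbE port
pcie4_x4 = pcie_gbps(16, 4)  # ~63.0 Gbps: enough headroom for a 50 GbE port
```

So a PCIe 3.0 x4 link comfortably feeds 25 GbE, while 50 GbE needs the doubled per-lane rate of PCIe 4.0.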

Pragmatically though, you could also aggregate (bond) multiple 25 GbE network card ports (with a Mac Studio, you have up to 6 Thunderbolt buses, so more than enough to saturate a 100 GbE connection).


Too bad Jeff Geerling returned his Mac Studios to Apple. Would be lovely to see how 5x faster RDMA impacts the performance.

25 Gbps isn't all that much. It would be good, but would be below the 40+ Gbps I was getting on the TB5 ring network.

I think where it would show more significant speed up is on the AMD Strix Halo cluster.

Except I haven't been able to get RDMA over Thunderbolt on there to work, so it'd be apples to oranges comparing ConnectX to Thunderbolt on Linux.


Oddly enough, that’s exactly what I’ve been benchmarking - different ways of linking Strix Halo machines - with respect to throughput & latency.

Posted a little bit re: the TB side of things on the Framework and Level1Techs forums but haven’t pulled everything together yet because the higher-speed Ethernet and Infiniband data is still being collected.

So far my observation re: TB is that, on Strix Halo specifically, while latency can be excellent, there seem to be some limits on throughput. My tests cap out at ~11 Gbps unidir (Tx|Rx), ~22 Gbps bidir (Tx+Rx). Which is weird, because the USB4 ports are advertised at 40 Gbps bidir, the links report as 2x20 Gbps, and are stable with no errors/flapping - so not a cabling problem.

The issue seems rather specific to TB networking on Strix Halo using the USB4 links between machines.

Emphasis to exclude the common explanations: other platforms (e.g. Intel users getting well over 20 Gbps); other mini PCs (e.g. MS-1 Max USB4v2); the local network stack (e.g. I’ve measured loopback at >100 Gbps); and external storage, where folks are seeing 18 Gbps+ - numbers that align with their devices.
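For anyone wanting to reproduce a loopback number like that, here is a minimal single-stream TCP throughput probe. It is only an illustrative sketch; tools like iperf3 do this properly with multiple streams and tuned socket buffers, and pure Python's per-syscall overhead keeps the result well below what C-based tools report:

```python
import socket
import threading
import time

def loopback_gbps(chunks=256, chunk_size=1 << 20):
    """Push `chunks` buffers of `chunk_size` bytes through a local TCP
    connection and return the observed throughput in Gbps."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # pick any free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink():
        conn, _ = srv.accept()
        while conn.recv(chunk_size):
            pass  # discard everything until the sender closes
        conn.close()

    receiver = threading.Thread(target=sink)
    receiver.start()

    cli = socket.create_connection(("127.0.0.1", port))
    buf = b"\0" * chunk_size
    start = time.perf_counter()
    for _ in range(chunks):
        cli.sendall(buf)
    cli.close()
    receiver.join()  # wait until the receiver has drained the stream
    elapsed = time.perf_counter() - start
    srv.close()
    return chunks * chunk_size * 8 / elapsed / 1e9
```

Even so, it is enough to sanity-check that the local stack is not the bottleneck before blaming the USB4 link.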

End goal is to get hard data on all reasonably achievable link types. Already have data on TB & lower-speed Ethernet (switched & P2P), currently doing setup & tuning on some Mellanox cards to collect data for higher-speed Ethernet and IB. P2P-only for now; 100GbE switching is becoming mainstream but IB switches are still rather nutty.

Happy to collaborate with any other folk interested in this topic. Reach out to (username at pm dot me).


Jobs was right.

That, and probably that the speed of light is given by the latency of information transfer through space.

Citizenship can be revoked in cases that involve serious offences.

It usually requires fraud in receiving the citizenship for it to be revoked. Once naturalized, if you commit a serious offense unrelated to the citizenship process itself, you'll keep your US citizenship.

Or ICE shows up at a naturalization hearing.

Don't hold your breath, Miller is big on denaturalization these days.

But he hasn't done anything yet, he just wants to. There's no legal standing for it at this point beyond what I said. Every case I've been able to find was tied to fraud associated with the naturalization process (either the process itself, or false statements given during the process).

Lack of legal standing is not something that has ever stopped this administration from doing something it has decided to do. Best case scenario is Miller gets thrown under the bus after the goal is accomplished.
