Hacker News | frankjr's comments

I think you're seeing the effects of the back/forward cache (bfcache).

https://web.dev/articles/bfcache
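
If you want to confirm it, the `pageshow` event exposes this (a minimal sketch in browser-side TypeScript; the log text is just illustrative):

    // `persisted` is true only when the page was restored from the
    // back/forward cache instead of being loaded fresh.
    window.addEventListener("pageshow", (event: PageTransitionEvent) => {
      console.log(event.persisted ? "restored from bfcache" : "normal load");
    });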


This makes a lot of sense, thanks


> undersea fiber-optic cables is that they need amplifiers every N kilometers

Huh, I had no idea it needed to be done so frequently.

> These days optical cable repeaters are photon amplifiers that operate at full gain at the bottom of the ocean for an anticipated service life of 25 years.

> Repeaters are a significant cost component of the total cable cost, and there is a compromise between a ‘close’ spacing of repeaters, every 60km or so, or stretching the inter-repeater distance to 100km and making significant savings in the number of repeaters in the system. On balance it is the case that the more you are prepared to spend on the cable system the higher the cable carrying capacity.

https://blog.apnic.net/2020/02/12/at-the-bottom-of-the-sea-a...

https://www.rp-photonics.com/propagation_losses.html

https://www.rp-photonics.com/fiber_amplifiers.html


> The Manjaro project is backed by Manjaro GmbH & Co. KG, an open source driven company.

I'm guessing it's more likely they are looking for investments and need to show some credible numbers.


> south America and Europe they are all 16 or 32 for no clear reason

I don't know where you're getting your data from, but it's clearly wrong or outdated. These are the best-selling routers in Czechia on Alza (the largest online retailer) under $100:

- TP-Link Archer AX53 (256MB)

- TP-Link Archer AX23 (128MB)

- TP-Link Archer C6 V3.2 (128MB)

- TP-Link Archer AX55 Pro (512MB?)

...

- Mercusys MR80X (256MB)

- ASUS RT-AX52 (256MB)

https://www.alza.cz/EN/best-sellers-best-wifi-routers/188430...


"Best sellers" usually means "best advertized because of worst sales".


...and is not even open source (EDIT: I'm wrong, see below). Zed's remote server, on the other hand, seems to be OSS. One could probably install it themselves without relying on the automatic download.

https://github.com/zed-industries/zed/tree/633b665379c18a069...


? The remote server (REH) is part of the vscode repo itself: https://github.com/microsoft/vscode/tree/main/src/vs/server/... There are VSCodium and Code OSS builds of it too (REH builds).

The extension that invokes it isn’t, but there are a few implementations of it that are, and I think VSCodium bundles one with their build. It’s just some shell scripts that download the REH build and run it with the correct args.


Hmm, are the docs outdated then or are they talking about something else?

> The Visual Studio Code Remote Development extensions and their related components use an open planning, issue, and feature request process, but are not currently open source.

https://code.visualstudio.com/docs/remote/faq#_why-arent-the...


That’s the extension which launches the server. The server itself, which is what runs on the remote machine, is part of the vscode repo. For an example and an explanation of what the extension does, see this open source one: https://github.com/xaberus/vscode-remote-oss

It mainly just sets up the UI/connect commands in vscode and launches REH (the vscode server) on the remote machine. The actual server is just a different binary built from the same repo. Non-Microsoft OSS builds of vscode work fine with SSH.


Thank you for pointing this out.


Interesting, I didn’t know the VS Code server component wasn’t open source.


it is though


Nonsense. People use ORMs because the vast majority of queries are trivial. If you need something the ORM doesn't provide sufficient control over, only then do you move to raw queries for those cases.


It has nothing to do with being simple and everything to do with what the database looks like at the end of the day.

Some ORMs are better than others, but if you have ever looked at a database created by an ORM, it always has weird lookup tables and funny names, and even with simple objects it is completely unusable without the ORM.

We live in a multi-language world. There is a high chance you are going to want to access this data from a different language. That language is not going to have the same ORM, so you will have a horrid time accessing the data.

ORMs often lay their data out in a way that is highly language dependent.

The fact of the matter is SQL is not hard, and bypassing the main interface to the database and hand-waving it away will always bring you regret when writing any sort of software other than a toy project.

Your best case is that you become an expert in the ORM, resulting in a skill set that does not transfer easily and is locked to one language. Worst of all, it bottlenecks any changes to the data layer through your ORM expert, who at first will be proud and happy and will end up smug and bitchy as all the data-layer change requests are simply redirected to them.

When people like me say ORMs are trash, it's about much more than any of the surface-level rebuttals listed here. It's about the whole life cycle of your project. Adding up all the places ORMs can fuck you just makes it a bad proposition.

God forbid you need to upgrade your database or the ORM version.


> ORMs often lay their data out in a way that is highly language dependent.

Which ORMs did you use? This doesn't sound normal at all. I never saw this with Rails, Ecto, EF, Sybase, or even the legacy project I once worked on that had 4 different ORMs for different parts of the same backend process, using the same database (some were very old and being slowly phased out over time). Maybe you have ORM confused with CMS (content management system). A CMS can do those things, but that is not an ORM.

> There is a high chance you are going to want to access this data with a different language.

There are tools for that, such as ETL, read replicas, data warehouses, code generators, raw SQL, stored procedures, and views, just off the top of my head.


Well, different people, different experiences.


Same experience, just different timelines.


`if you have ever looked at a database created by an ORM`

You do realize that you can make your own migrations using raw SQL, and still use the ORM with your tables?
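
For example (a minimal sketch assuming TypeORM; the table, column, and class names are hypothetical), a hand-written SQL migration paired with an entity mapped onto the resulting table:

    import { Entity, PrimaryColumn, Column, MigrationInterface, QueryRunner } from "typeorm";

    // Hand-written DDL: you control names, types, and constraints exactly,
    // so the schema stays plain and usable from any language.
    export class CreateUsers1700000000000 implements MigrationInterface {
      async up(queryRunner: QueryRunner): Promise<void> {
        await queryRunner.query(
          `CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL UNIQUE)`,
        );
      }

      async down(queryRunner: QueryRunner): Promise<void> {
        await queryRunner.query(`DROP TABLE users`);
      }
    }

    // The entity maps onto the existing table instead of generating it.
    @Entity({ name: "users" })
    export class User {
      @PrimaryColumn({ type: "int" })
      id!: number;

      @Column({ type: "text" })
      email!: string;
    }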


Traffic is cheap and has been for years. Providers do not pay for the amount of data sent/received but rather a fixed price for peak capacity, and this fee can easily be absorbed into the price of their service (they also don't pay any fees if they peer with the destination directly). This is why many companies (even smaller ones) are able to provide unlimited traffic for free, or free but limited to tens of terabytes (OVHcloud, Scaleway, netcup, Hetzner...).

How much would it cost to have unlimited 1 Gbps on a colocated server? A local datacenter I'm familiar with offers 2U with unlimited 1 Gbps for ~$60. Each additional 1 Gbps costs ~$20, up to 10 Gbps. The larger plans come with 10 Gbps by default (https://dc6.cz/cenik-server-housingu/). If these are the prices offered to end customers, how little do the large players pay?
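
Back-of-the-envelope, using the ~$20 per additional 1 Gbps figure above (my arithmetic, assuming a 30-day month and a fully saturated link):

    1 Gbps ÷ 8 bits/byte × 2,592,000 s/month ≈ 324 TB/month
    $20 ÷ 324,000 GB ≈ $0.00006 per GB

Compare that with the several cents per GB the big clouds charge for egress.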

Cloudflare is able to provide free egress because they are not (as?) greedy. The biggest scam the cloud industry has been able to pull off is to convince people that paying for traffic in 2024 is normal and expected.


IIRC Cloudflare gets it cheap because they store content at the very edge, which makes it cheaper for ISPs to move the traffic around.


> What I don't understand is the !void at the end of the declaration, if we're meant to be returning an int, surely !int would be the expected syntax (although I would prefer returns int instead).

`!void` means the function can return an error (e.g. return error.BadArgument) but doesn't produce any value itself. In the case of an error in the main function, the process will exit with 1, otherwise 0. The main function can also return a value directly (return type u8 / !u8).


> In the case of an error in the main function, the process will exit with 1, otherwise 0. The main function can also return a value directly (return type u8 / !u8).

We know that by convention, but how do we know that from

    pub fn main() !void {
If I write

    pub fn foo() !void {
will that function also get to return a u8?

Also, what happened to argv/argc?


> will that function also get to return a u8?

No, the main function (the entry point of the entire program) is special-cased. Have a look at the source code; there you can see it calling the user-defined main function and handling its return value / error.

https://github.com/ziglang/zig/blob/2d888a8e639856e8cb6e4c6f...

> Also, what happened to argv/argc?

You can access argv with std.os.argv, which is a slice of null-terminated strings. It's better to go with std.process.argsAlloc though (it requires an allocation but works on all supported platforms).



For context, this is indeed a breaking change, but one that brings Chromium in line with the URL spec and other engines. If your URLs are suddenly broken, it means they were invalid the whole time and only functioned in Chromium due to bugs in its URL parser.

From the linked issue, the URL "customprotocol://https://example.com/test" starts with // after the scheme so it gets parsed as URL protocol=customprotocol: hostname=https port=<empty> pathname=//example.com/test. When you click the link, the browser will open "customprotocol://https//example.com/test" without the colon after "https" because an empty port is omitted (try opening https://example.com: in your browser).

Another example from the thread is "windx://host:scs/<port>;secure/MIS_VL_WINDX/v1/<sid>". This is invalid because it gets parsed as protocol=windx: host=host port=scs. "scs" is not a valid port number.

What if you want the previous behavior? Make sure the part after the scheme doesn't start with //. In that case, the string will not be parsed as a hierarchical URL at all; the remainder is treated as an opaque path and passed along untouched.

1. customprotocol:https://example.com/test (protocol=customprotocol: hostname=<empty> pathname=https://example.com/test)

2. windx:host:scs/<port>;secure/MIS_VL_WINDX/v1/<sid> (protocol=windx: hostname=<empty> pathname=host:scs/<port>;secure/MIS_VL_WINDX/v1/<sid>)
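
You can verify all of this with any WHATWG-compliant URL implementation, e.g. a browser console, Node, or Deno (a quick sketch; the port value in the windx line is a made-up placeholder for the elided <port>):

    const u = new URL("customprotocol://https://example.com/test");
    // u.protocol === "customprotocol:", u.hostname === "https",
    // u.port === "" and u.pathname === "//example.com/test"
    console.log(u.href); // "customprotocol://https//example.com/test" (empty port dropped)

    // Without "//" after the scheme, the remainder is an opaque path and survives intact:
    const v = new URL("customprotocol:https://example.com/test");
    console.log(v.pathname); // "https://example.com/test"

    // An invalid port makes parsing fail outright:
    try {
      new URL("windx://host:scs/12345");
    } catch (e) {
      console.log(e); // TypeError; "scs" is not a valid port
    }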


This does bring Chromium in line with the URL spec, but Firefox still works properly with those callback links. Not sure which other engines you are speaking of, to my knowledge there really aren't any other viable Windows desktop browsers.

From an end-user point of view, Firefox remains not broken and Chromium broke things in the latest update. Your enterprise IT department cares very little about the standards compliance of the browser; they just want to avoid unplanned emergency upgrades if possible.

On the server side, this is dead-easy to fix. For example register customprotocol64: and pass all your data in a base64 encoded blob, to stop the browser from mangling it.
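
A minimal sketch of that in TypeScript (the scheme and payload are illustrative, and btoa/atob assume an ASCII payload); base64url avoids "/", "+", and "=", so the URL parser has nothing to reinterpret:

    // Server side: wrap the whole callback payload so the parser never
    // sees "//" or ":" inside it.
    const payload = "host:1234/secure/MIS_VL_WINDX/v1/abc"; // hypothetical data
    const b64url = btoa(payload)
      .replace(/\+/g, "-")
      .replace(/\//g, "_")
      .replace(/=+$/, "");
    const link = "customprotocol64:" + b64url;

    // Desktop app side: strip the scheme, then reverse the encoding.
    const decoded = atob(b64url.replace(/-/g, "+").replace(/_/g, "/"));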

Unfortunately every single offered workaround starts with deploying an upgraded version of your desktop application, so it understands the fixed link format. The challenge is not making the URLs spec compliant, it's getting the massive install bases upgraded to a version that properly parses the new URL format.

I wanted to push out this PSA to save everyone's time, because we, at least, spent quite a bit of time chasing server-side and templating-engine issues before we realized the link gets mangled by the browser after the data from the server has been rendered by the templating engine.

PS: Better to just bite the bullet and start planning those emergency upgrades now; the odds of the Chromium team restoring non-standards-compliant behaviour to fix thousands of enterprise apps are essentially zero.


> This does bring Chromium in line with the URL spec, but Firefox still works properly with those callback links. Not sure which other engines you are speaking of, to my knowledge there really aren't any other viable Windows desktop browsers.

Hm, you're right, they're still working on it (https://bugzilla.mozilla.org/show_bug.cgi?id=1876105). Is it Safari then? I don't remember and don't have a way to test it right now. Obviously Safari is not something you care about on Windows anymore.

> On the server side, this is dead-easy to fix. For example register customprotocol64: and pass all your data in a base64 encoded blob, to stop the browser from mangling it.

You don't need to obfuscate it that much; just don't start the part after the scheme with //, and the rest can still be human-readable/inspectable.

> Unfortunately every single offered workaround starts with deploying an upgraded version of your desktop application, so it understands the fixed link format. The challenge is not making the URLs spec compliant, it's getting the massive install bases upgraded to a version that properly parses the new URL format.

I agree. I'm a little surprised that they just dropped this and there wasn't any "grace" period where the browser would parse the URL as before but complain loudly in logs/console.

> PS: Better to just bite the bullet and start planning those emergency upgrades now; the odds of the Chromium team restoring non-standards-compliant behaviour to fix thousands of enterprise apps are essentially zero.

It wouldn't be unheard of. If a big important Google customer complained enough, they might release a patch version with the kill switch turned on (base::kStandardCompliantNonSpecialSchemeURLParsing).


> You don't need to obfuscate it that much; just don't start the part after the scheme with //, and the rest can still be human-readable/inspectable.

Fool me once, shame on you. Fool me twice, shame on me. I'll never again trust Chrome not to "help" with custom URLs.

