Hacker News | past | comments | ask | show | jobs | submit | octoberfranklin's comments

Antitrust laws don't work because they're subjective and are enforced by political appointees.

The simpler solution is a tax on scale -- a graduated corporate revenue tax, aggregated across any group of entities that meet the common control [1] criteria. Then it's just a tax, and you simply have to collect it. Very little wiggle room.

If splitting your company in half wouldn't impair any of its lines of business, the CEO has a powerful financial incentive (lower tax rates on the two halves) to do so.

[1] https://www.law.cornell.edu/cfr/text/26/1.52-1
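A sketch of how the graduated-revenue-tax incentive works -- the brackets and rates below are entirely invented for illustration, not a real proposal's numbers:

```python
# Hypothetical graduated revenue tax: marginal rates rise with
# aggregated group revenue, so two half-sized firms pay less in
# total than one combined firm. Brackets and rates are made up.
BRACKETS = [  # (upper revenue threshold in $B, marginal rate)
    (1, 0.00),
    (10, 0.01),
    (100, 0.03),
    (float("inf"), 0.07),
]

def revenue_tax(revenue_b: float) -> float:
    """Tax (in $B) on revenue_b billions of aggregated group revenue."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if revenue_b > lower:
            tax += (min(revenue_b, upper) - lower) * rate
        lower = upper
    return tax

# One $200B company vs. the same business split into two $100B halves:
whole = revenue_tax(200)      # reaches into the top bracket
split = 2 * revenue_tax(100)  # each half stops at the lower brackets
```

Because the rates are marginal, the combined entity always pays at least as much as the sum of its halves -- which is exactly the CEO's incentive to split.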



Yeah, unfortunately.

This is pervasive in China; the politician's kid is the middleman. With the one-child policy there's at most one kid, which makes it easy to know who to give the stock tips to.


That should make it incredibly easy to weed out then, surely?

Corruption is not weeded out because no matter how much people complain they WANT the corruption.

Bernie Sanders is not the president of the United States.


Equinix certainly is.

The dark fiber glut wasn't caused by DWDM suddenly appearing out of nowhere.

The telcos saw DWDM coming -- they funded a lot of the research that created it. The breakthrough that made DWDM possible was patented in 1991, long before the start of the dotcom mania:

  https://patents.google.com/patent/US5159601

It was a straight-up bubble -- the people digging those trenches really thought we'd need all that fiber even at dozens of wavelengths per strand.

They believed it because people kept showing them hockey-stick charts.


Gigawatts? Pshaw. We have SamaWatts.

Knowledge cutoff date is usually part of the system prompt.

Helps you get useful answers like "I don't know, that's too recent" when you ask questions like "who won the basketball game last night?"


I opened up the Gemini app and asked:

Who are the Dallas Stars playing tonight? Is there anyone that will be out of the lineup for the Stars? What have their recent injuries been like? Can you get the current score?

Its response:

The Dallas Stars are playing the New York Rangers tonight, Tuesday, December 2, 2025, at Madison Square Garden.

Here is the information available regarding the rest of your questions:

Out of the Lineup & Recent Injuries

* Lian Bichsel (Defenseman): He suffered a lower-body injury (ankle) in the recent game against the Senators and is expected to miss 6-8 weeks and require surgery. His absence will be felt on the blueline.

* Jake Oettinger (Goaltender): He is dealing with an illness and is not expected to start tonight; Casey DeSmith is projected to start in his place. Oettinger is expected to play tomorrow night in New Jersey.

* Thomas Harley (Defenseman): He remains week-to-week with a lower-body injury and is not with the team on their current road trip.

Other injured players who will likely be out of the lineup include Matt Duchene (Upper Body - on IR-LT, but joining the road trip), Adam Erne (Lower Body - on IR-LT), and Nils Lundkvist (Lower Body - on IR-LT).

Current Score

The game is in progress, and the score is currently New York Rangers 1 - 1 Dallas Stars.

Would you like an update on the final score later, or news on a specific player's status?

I ask it things like updates on teams all the time, and it usually pulls the right information.


The consumer apps use RAG and traditional search to give the LLM recent information in the prompt when it answers your query. This basically bridges over the knowledge gap between the end of training and today.
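A minimal sketch of that bridging step -- the search function and prompt layout here are placeholders for illustration, not any particular vendor's implementation:

```python
# Sketch of search-grounded prompting: fetch fresh snippets, then
# prepend them to the user's question so the model answers from the
# supplied context instead of stale training data. web_search is a
# stand-in for whatever search API or vector store the app uses.
def web_search(query: str) -> list[str]:
    # Placeholder result; a real app calls a live search backend here.
    return ["Rangers 1 - 1 Stars, 2nd period (live)"]

def grounded_prompt(question: str) -> str:
    snippets = web_search(question)
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using the search results below; say so if they don't "
        "cover the question.\n\n"
        f"Search results:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = grounded_prompt("What's the current score of the Stars game?")
# prompt now carries tonight's information; send it to any LLM API.
```

The model never "knows" the score; it just reads it out of the context the app stuffed into the prompt.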

I'm fully aware, I'm just wanting to point out to people that the actual AI apps they'll use can and do return recent information due to integrations like that. Lots of people think AI can only answer stuff in its training set, but it can answer anything from whatever data you hand to it, including any data on the internet.

Lots of AI tools can easily answer "who won the basketball game last night".


GCP is sort of blending this into their Gemini APIs.

https://ai.google.dev/gemini-api/docs/google-search

You don’t have to do RAG or use a SERP-scraper MCP; just add the Search Grounding tool to the API request and it does the rest at the model’s discretion, for $0.014 per search.

I think that’s generally a fair price for my time vs doing my own search queries at 1/100th the speed. It could get expensive for deep research type queries.


The pricing may be reasonable, but Grounding with Google Search has extremely restrictive terms of use.

https://ai.google.dev/gemini-api/terms#grounding-with-google...


Yup, I looked at GwGS and it's not interesting. I want actual results, not a summary plus some links to sites (not pages) that I then have to go crawl myself.

I don't want Google as the middleman between the internet and my agent. Their search has become so bad I don't even use it any more.


You don't train the next model by starting with the previous one.

A company's ML researchers are constantly improving model architecture. When it's time to train the next model, the "best" architecture is totally different from the last one. So you have to train from scratch (mostly... you can keep some small stuff like the embeddings).

The implication here is that they screwed up bigly on the model architecture, and the end result was significantly worse than the mid-2024 model, so they didn't deploy it.


I can't say how big ML companies do it, but from personal experience training vision models, you can absolutely reuse the weights of barely related architectures (add more layers, switch between different normalization layers, switch between separable/full convolution, change activation functions, etc.). Even if the shapes of the weights don't match, just do what you have to do to make them fit (repeat or crop). Of course the models won't work right away, but training will go much faster. I usually get over 10x faster convergence that way.
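The repeat-or-crop trick can be sketched in a few lines -- a generic illustration on plain 2-D weight matrices, not any framework's transfer routine:

```python
# Reusing pretrained weights when layer shapes change: crop when the
# new layer is smaller, tile (repeat) when it is larger. Illustration
# only; real code would operate on framework tensors, not lists.
def fit_weights(old, rows, cols):
    """Crop or repeat a 2-D weight matrix (list of lists) to rows x cols."""
    def resize(seq, n):
        reps = -(-n // len(seq))   # ceil division
        return (seq * reps)[:n]    # repeat, then crop to exact size
    return [resize(row, cols) for row in resize(old, rows)]

pretrained = [[1, 2, 3],
              [4, 5, 6]]
# New architecture wants a 3x5 layer: rows and columns get tiled/cropped.
adapted = fit_weights(pretrained, 3, 5)
```

As the parent says, the resulting model won't work immediately, but training starts much closer to a useful region of weight space than random initialization.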

It’s possible the model architecture influences the effectiveness of utilizing pretrained weights. i.e. cnns might be a good fit for this since the first portion is the feature extractor, but you might scrap the decoder and simply retrain that.

Can’t say whether the same would work with Transformer architecture, but I would guess there are some portions that could potentially be reused? (there still exists an encoder/feature extraction portion)

If you’re reusing weights from an existing model, then it seems it becomes more of a “fine-tuning” exercise as opposed to training a novel foundational model.


Huh - I did not know that, and that makes a lot of sense.

I guess "Start software Vnext off the current version (or something pretty close)" is such a baseline assumption of mine that it didn't occur to me that they'd be basically starting over each time.

Thanks for posting this!


> There's a sysctl that allows for unprivileged ping "net.ipv4.ping_group_range"

What are the risks of enabling this for all groups (i.e. sysctl net.ipv4.ping_group_range='0 4294967294')?

Note this allows unprivileged ICMP sockets, not unprivileged RAW sockets.
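The distinction matters because SOCK_DGRAM ICMP sockets only let you send well-formed echo requests, not arbitrary packets. A sketch of building one (the commented-out socket call is Linux-specific and assumes ping_group_range includes your group):

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def echo_request(ident=0, seq=1, payload=b"ping") -> bytes:
    """ICMP type 8 (echo request) with a valid checksum filled in."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

# On Linux, with net.ipv4.ping_group_range covering your group, this
# works without root (the kernel rewrites the id field for you):
#   s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_ICMP)
#   s.sendto(echo_request(), ("127.0.0.1", 0))
```

A checksum-valid packet sums to zero under the same checksum, which is a handy self-check.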


> The idea here is that rooms are abstracted from servers and sort-of exist ephemerally

No, that's not even remotely true. In fact the opposite is true. The domain name of the server used to create the room is perpetually and permanently embedded in the room name and can't be changed, ever.
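For reference, classic Matrix room IDs look like !opaque:origin.example, with the creating server's domain after the colon. A small sketch of pulling it out -- pure string handling, not a Matrix SDK call, and newer room versions may change the grammar:

```python
# Classic Matrix room IDs embed the creating server's domain:
#   !AbCdEf123:example.org  ->  origin server "example.org"
def origin_server(room_id: str) -> str:
    if not room_id.startswith("!") or ":" not in room_id:
        raise ValueError("not a classic room ID: %r" % room_id)
    # Everything after the first colon is the server name (may include a port).
    return room_id.split(":", 1)[1]
```

The opaque localpart and the domain are both fixed at creation time, which is the point being made here: the creating server's name is baked in forever.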


That doesn't have any practical effect, though. It's just part of the name and the room can still be used even if that server disappears forever.

No, not at all. Possession of a CA-signed TLS certificate for that domain, and the corresponding private key, is basically the root of trust for room operations.

I think their point is that the room doesn't depend on that server's continued availability.
