
Canada has no domestic automaker, and US automakers, under pressure from Trump, are closing some factories in Canada and relocating production to the US.

Yes, the Canadian auto industry will take a hit, but it already has from the US (and might take more).


Presumably a lot less expensive than pumped hydro to build.


Pretty much zero chance of that. The complexity (moving parts, machined parts, number of generators, number of electrical interconnects, etc.) is so much higher on a per-kilogram basis compared to pumped hydro. Much of the country already does half of pumped hydro (storing potential energy in water towers) and delivers it to your door for fractions of a penny per kg, a price that includes a complete distribution network and sourcing/purification of the water.
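To put rough numbers on the "potential energy in water towers" point, here is a back-of-envelope sketch; the ~40 m head and $0.10/kWh electricity price are my own illustrative assumptions, not figures from the comment:

    # Back-of-envelope: potential energy stored per kg of water at tower height.
    # The 40 m head and $0.10/kWh electricity price are illustrative assumptions.
    g = 9.81          # m/s^2
    height_m = 40.0   # assumed tower head
    mass_kg = 1.0

    energy_j = mass_kg * g * height_m      # E = m * g * h  ->  ~392 J
    energy_kwh = energy_j / 3.6e6          # 1 kWh = 3.6e6 J ->  ~0.00011 kWh
    value_usd = energy_kwh * 0.10          # at the assumed $0.10/kWh

    print(f"{energy_j:.0f} J per kg ({energy_kwh*1000:.2f} Wh), ~${value_usd:.6f} of electricity")

So a kilogram of water at that head holds on the order of 0.1 Wh, which is the scale the "per kg" comparison is operating at.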


My understanding is that water towers mostly exist as something to “pump against”, rather than being a vessel that gets filled and emptied repeatedly like a battery would. The level does vary a bit, often with a circadian rhythm (but also randomly pulsatile). I just don’t think the tower’s stored volume is a significant portion of the total water flow that its pressure supports.


You have to dredge pumped hydro though. Silt accumulation is a challenging engineering problem and erases capacity over time.


The actual title is pretty buzzy given how limited the task described is. In one specific, very constrained and artificial task, you can find something like detailed balance. And even then, their data are quite far from being a perfect fit for detailed balance.

Would love it if I could use my least-action-principle knowledge for LLM interpretability, but this paper doesn't convince me at all :)


Since it took me some minutes to find the description of the task, here it is:

We conducted experiments on three different models, including GPT-5 Nano, Claude-4, and Gemini-2.5-flash. Each model was prompted to generate a new word based on a given prompt word such that the sum of the letter indices of the new word equals 100. For example, given the prompt “WIZARDS(23+9+26+1+18+4+19=100)”, the model needs to generate a new word whose letter indices also sum to 100, such as “BUZZY(2+21+26+26+25=100)”
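To make the task concrete, a small sketch that checks the letter-index sum (A=1 ... Z=26) for the two words quoted above:

    def letter_index_sum(word: str) -> int:
        """Sum of letter indices with A=1 ... Z=26."""
        return sum(ord(c) - ord('A') + 1 for c in word.upper() if c.isalpha())

    for w in ("WIZARDS", "BUZZY"):
        total = letter_index_sum(w)
        print(w, total, "ok" if total == 100 else "not 100")
    # WIZARDS 100 ok
    # BUZZY 100 ok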


I don't understand the analogy.

If I'm using an MCMC algorithm to sample a probability distribution, I need to wait for my Markov chain to converge to a stationary distribution before sampling, sure.

But in no way is 'a good answer' a stationary state in the LLM Markov chain. If I continue running next-token prediction, I'm not going to start looping.


I think you're confusing the sampling process and the convergence of those samples with the warmup process (also called 'burn-in') in HMC. When doing HMC MCMC we typically don't start sampling right away (or, more precisely, we throw out those samples) because we may be initializing the sampler in a part of the distribution that involves pretty low probability density. After the chain has run a while it tends to end up sampling from the typical set which, especially in high-dimensional distributions, tends to more correctly represent the distribution we actually want to integrate over.
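To illustrate the burn-in idea concretely, here is a toy random-walk Metropolis sampler (not HMC; the target, start point, step size, and burn-in length are all made up for illustration). The chain starts far out in a low-density region, and the early draws are discarded:

    import random, math

    def log_density(x):
        return -0.5 * x * x   # standard normal target, up to a constant

    def metropolis(n_samples, burn_in, x0=50.0, step=1.0):
        """Random-walk Metropolis; discards the first `burn_in` draws."""
        x, samples = x0, []
        for i in range(n_samples + burn_in):
            proposal = x + random.gauss(0.0, step)
            accept_logprob = min(0.0, log_density(proposal) - log_density(x))
            if random.random() < math.exp(accept_logprob):
                x = proposal
            if i >= burn_in:          # keep only post-warm-up samples
                samples.append(x)
        return samples

    samples = metropolis(n_samples=5000, burn_in=2000)
    print(sum(samples) / len(samples))  # near 0 once the chain has forgotten x0=50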

So for language, when I say "Bob has three apples, Jane gives him four, and Judy takes two; how many apples does Bob have?" we're actually pretty far from the part of the linguistic manifold where the correct answer is likely to be. As the chain wanders this space it gets closer, until it finally statistically follows the path "this answer is..." and when it's sampling from this path it's in a much more likely neighborhood of the correct answer. That is, after wandering a bit, more and more of the possible paths are closer to where the actual answer lies than they would be if we had just forced the model to choose early.

edit: Michael Betancourt has a great introduction to HMC which covers warm-up and the typical set: https://arxiv.org/pdf/1701.02434 (he has a ton more content that dives much more deeply into the specifics)


No, I still don’t understand the analogy.

All of this burn-in stuff is designed to get your Markov chain to forget where it started.

But I don’t want to get from “how many apples does Bob have?” to a state where Bob and the apples are forgotten. I want to remember that state, and I probably want to stay close to it — not far away in the “typical set” of all language.

Are you implicitly conditioning the probability distribution or otherwise somehow cutting the manifold down? Then the analogy would be plausible to me, but I don’t understand what conditioning we’re doing and how the LLM respects that.

Or are you claiming that we want to travel to the “closest” high probability region somehow? So we’re not really doing burn-in but something a little more delicate?


You need to think about 1) the latent state, and 2) the fact that part of the model is post-trained to bias the MC towards abiding by the query, in the sense of the reward.

A way to look at it is that you effectively have two model "heads" inside the LLM: one that generates, one that biases/steers.

The MCMC is initialised based on your prompt: the generator part samples from the language distribution it has learned, while the sharpening/filtering part biases towards stuff that would be likely to make this chain earn a high reward in the end. So the model regurgitates all the context that is deemed possibly relevant based on traces from the training data (including "tool use", which then injects additional context), and all those tokens shift the latent state into something that is more and more typical of your query.

Importantly, attention acts as a selector and has multiple heads, and these specialize, so (simplified) one head can maintain focus on your query and "judge" the latent state, while the rest follow that Markov chain until some subset of the generated and tool-injected tokens gives enough signal to the "answer now" gate that the model flips into "summarizing" mode, which then uses the latent state of all those tokens to actually generate the answer.

So you very much can think of it as sampling repeatedly from an MCMC with a bias and a learned stopping rule, and then having a model create the best possible combination of the traces, except that all of this machinery is encoded in the same model weights, which get to reuse features between one another, for all the benefits and drawbacks that yields.

There was a paper from around when OF became a thing that showed that, instead of doing CoT, you could just spend that token budget on K parallel shorter queries (by injecting something like "ok, to summarize" and "actually" to force completion) and pick the best one / take a majority vote. Since then RLHF has made longer traces more in-distribution (although there's another paper showing that, as of early 2025, you were trading reduced variance and peak performance, as well as loss of edge cases, for higher performance on common cases, though this might be ameliorated by now), but that's about the way it broke down in 2024-2025.
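A minimal sketch of the "K parallel short completions + majority vote" idea described above, with sample_answer as a hypothetical stand-in for whatever model call you would actually use:

    from collections import Counter
    from typing import Callable
    import random

    def majority_vote(sample_answer: Callable[[str], str], prompt: str, k: int = 8) -> str:
        """Draw k independent short completions and return the most common answer."""
        answers = [sample_answer(prompt) for _ in range(k)]
        return Counter(answers).most_common(1)[0][0]

    # Stand-in sampler that is right most of the time:
    fake_sampler = lambda _prompt: random.choice(["5", "5", "5", "4", "6"])
    print(majority_vote(fake_sampler, "Bob has three apples...", k=15))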


The warmup process is necessary in order to try to find high-probability regions of the target distribution. That's not an issue for an LLM, since it's trained to sample directly from a distribution which looks like natural language.

There is some work on using MCMC to sample from higher-probability regions of an LLM distribution [1], but that's a separate thing. Nobody doubts that an LLM is sampling from its target distribution from the first token it outputs.

[1] https://arxiv.org/abs/2510.14901


> When doing HMC MCMC we typically don't start sampling right away (or, more precisely, we throw out those samples) because we may be initializing the sampler in a part of the distribution that involves pretty low probability density.

And how does that apply to LLMs? They don't do MCMC.


I used to be befuddled by this too. Then I lived in the U.S. for a few years.

I think the answer is that the democrats are shockingly bad too, in many parts of the US. People expect grift and corruption from both parties.

Perhaps they didn’t expect the scale of this admin’s grift.


Guessing you’re a physicist based on the name. You don’t think automatically doing RG flow in reverse has beauty to it?

There’s a lot of “force” in statistics, but that force relies on pretty deep structures and choices.


I quit Julia after running into serious bugs in basic CSV package functionality a few years back.

The language is elegant, intuitive and achieves what it promises 99% of the time, but that’s not enough compared to other programming languages.


FOAG is probably the shortest readable introduction to serious algebraic geometry anyone has written. That's the nature of algebraic geometry.


Yes, Schrödinger’s equation is entirely deterministic. There is no randomness inherent in quantum mechanics. The perceived randomness only arises when we have incomplete information. (It is another matter that quantum mechanics to some extent forbids us from having perfect information.)
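For reference, the time-dependent Schrödinger equation; given an initial state and a Hamiltonian, it fixes the state at all later times, and probabilities only enter via the Born rule at measurement:

    i\hbar \frac{\partial}{\partial t} \lvert \psi(t) \rangle = \hat{H}\, \lvert \psi(t) \rangle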

I mean no disrespect, but I don’t think it’s a particularly useful activity to speculate on physics if you don’t know the basic equations.

