Hacker News | wolttam's comments

If Fabrice explained what he wanted, I expect the LLM would respond in kind.

If Fabrice explained what he wanted the LLM would say it's not possible.

When the coding assistant LLMs load for a while, it's because they are sending Fabrice an email and he corrects it and replies synchronously.


There is also an immense amount of water vapour being produced by the combustion of a hydrocarbon.

Sure, but water vapor doesn't spontaneously transition to a liquid and accrete onto surfaces - there needs to be a super-saturation of water vapor, and given the temperatures of jet exhaust, that's not trivial to achieve. However, the super-saturation needed for water vapor to deposit onto surfaces as ice is much lower, hence the preference for ice crystal nucleation.
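A quick numerical illustration of the point above: saturation vapor pressure over ice is lower than over liquid water at the same sub-zero temperature, so air reaches ice saturation first. This sketch uses Magnus-type empirical fits; the coefficients and the -40 °C "cruise altitude" temperature are my own illustrative assumptions, not figures from the comment.

```python
import math

def sat_vapor_pressure_water(t_c):
    """Magnus fit: saturation vapor pressure over liquid water, in hPa."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def sat_vapor_pressure_ice(t_c):
    """Magnus-type fit: saturation vapor pressure over ice, in hPa."""
    return 6.112 * math.exp(22.46 * t_c / (272.62 + t_c))

t = -40.0  # roughly a cruise-altitude temperature, degrees Celsius
e_w = sat_vapor_pressure_water(t)
e_i = sat_vapor_pressure_ice(t)

print(f"saturation over water at {t:.0f} C: {e_w:.3f} hPa")
print(f"saturation over ice   at {t:.0f} C: {e_i:.3f} hPa")
# e_i < e_w: air becomes saturated with respect to ice well before it is
# saturated with respect to liquid water, favoring deposition onto ice nuclei.
```

So at these temperatures the bar for ice deposition is noticeably lower than the bar for liquid condensation, which is the preference the comment describes.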

Somehow I thought this was going to be about detecting AV1 based on the decoded video frames, which would have been interesting!


Yeah, I would think that the simulated grain of AV1 might be characterizable, even though, IIRC, it is pretty sophisticated.
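One hedged guess at why the synthesized grain might be characterizable: AV1's film grain synthesis filters noise through an autoregressive model, which imprints spatial correlation that ideal white sensor noise lacks. Below is a toy 1-D sketch of that idea only; the AR(1) coefficient `a` is made up, and real AV1 grain uses higher-order two-dimensional AR filters with per-intensity scaling.

```python
import random

random.seed(0)

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

N = 10_000

# White Gaussian noise: a crude stand-in for raw sensor grain.
white = [random.gauss(0, 1) for _ in range(N)]

# AR(1)-filtered noise: a toy 1-D analogue of AV1's autoregressive grain model.
a = 0.5  # hypothetical coefficient, chosen for illustration only
ar = [0.0]
for _ in range(N - 1):
    ar.append(a * ar[-1] + random.gauss(0, 1))

print(f"white noise lag-1 autocorrelation: {lag1_autocorr(white):+.3f}")
print(f"AR(1) noise lag-1 autocorrelation: {lag1_autocorr(ar):+.3f}")
```

The AR-filtered sequence's lag-1 autocorrelation lands near `a` while the white noise stays near zero, which is the kind of statistical fingerprint a detector could plausibly look for in decoded frames.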


This doesn’t look like it was intended to compete. The research appears interesting.


I really struggle to come up with a reason that transformers won't continue to deliver on additional capabilities that get fit into the training set.


> WebGPU isn't just a replacement for WebGL; it's a massive leap forward

Is it just satire at this point?


Sounds like something an LLM would say. It just needs an emoji


Anecdote: I became a parent at 25 and didn't feel these shifts until 30/31.


Yeah I could distinctly feel my brain shifting into its adult era over the last couple of years (I'm 31)

It was kind of odd. I'm more serious now (but at the same time.. less?). I'm way more easily able to focus on what actually matters in this life. (In saying that, I think it's more likely that my brain has finally decided what's important... in a way I feel like a passenger)


I have the exact same feeling. Turning 30 was like flipping a switch for me. Since then, I've even felt like I've become more intelligent, especially in math. It's like everything suddenly clicked. I struggled with math throughout my entire school life, but now that I've gone back to college, I'm amazed by how easily I grasp concepts. I've recently taken on challenges that I always thought were exclusive to "smart people," like systems programming in Rust and Zig or building compilers.


It's 1/3 the old price ($15/$75)


Not sure if that’s a joke about LLM math performance, but pedantry requires me to point out 15 / 75 = 1/5


$15/megatoken in, $75/megatoken out


Sigh, ok, I’m the defective one here.


There are so many moving pieces in this mess. We'll normalize on some 'standard' eventually, but for now, it's hard, man.


In case it makes you feel better: I wondered the same thing. It's not explained anywhere in the blog post. In that post they assume everyone already knows how the pricing works, I guess.


they mean it used to be $15/m input and $75/m output tokens


Just updated, thanks


This has been my intuition with these models since close to the beginning.

Any framework you build around the model is just behaviour that can be trained into the model itself

