
I think you might be confusing two concepts here.

It's definitely reasoning. We can watch that in action, whatever the mechanism behind it is.

But it's not doing long-term learning, it's not updating its model.



It does long-term learning too, to some extent. Admittedly I'm not very familiar with how it works, but it does create "memories", which appear to be personal details it deems might be relevant later. Then I assume it uses some type of RAG to apply previously stored memories to future conversations.

This makes me wonder if there is or could be some type of RAG for chains of thought…
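If that's right, the memory feature is just ordinary retrieval over a store of user facts. A minimal sketch of that assumption; score() here is a toy word-overlap stand-in for a real embedding model, and none of this is OpenAI's actual implementation:

    # Hypothetical memory-RAG sketch: retrieve the most relevant stored
    # "memories" for a query and prepend them to the prompt.

    def score(query: str, memory: str) -> float:
        # Toy Jaccard word overlap; a real system would use embeddings.
        q, m = set(query.lower().split()), set(memory.lower().split())
        return len(q & m) / (len(q | m) or 1)

    def retrieve(query: str, memories: list[str], k: int = 3) -> list[str]:
        return sorted(memories, key=lambda m: score(query, m), reverse=True)[:k]

    memories = [
        "User is allergic to peanuts",
        "User prefers Python over JavaScript",
        "User lives in Berlin",
    ]

    query = "Suggest a snack I can eat while coding"
    relevant = retrieve(query, memories)
    prompt = "Known facts about the user:\n" + "\n".join(relevant) + "\n\n" + query
    # `prompt` is then sent to the LLM as ordinary context.

The same pattern would presumably work for retrieving previously generated chains of thought instead of personal facts.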


>whatever the mechanism behind it is.

The mechanism is that an additional model essentially generates a chain of thought for the particular problem, and that chain of thought is then run through the core LLM (rough sketch below). This is no different from a complex forward map lookup.

I mean, it's incredibly useful, but it's still just information search.
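A rough sketch of that two-stage setup, purely as an assumption since the actual mechanism isn't public; call_model() is a hypothetical stand-in for whatever inference API is in use:

    # Assumed two-stage pipeline: a "reasoning" model expands the problem
    # into explicit steps, then the core LLM conditions on those steps.

    def call_model(model: str, prompt: str) -> str:
        # Stand-in for an actual inference call; not a real API.
        raise NotImplementedError("replace with a real model call")

    def answer_with_cot(question: str) -> str:
        # Stage 1: generate the chain of thought.
        cot = call_model("reasoner", f"Think step by step about:\n{question}")
        # Stage 2: the core LLM produces the answer given the chain of thought.
        return call_model("core-llm", f"{question}\n\nReasoning:\n{cot}\n\nAnswer:")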



