
so we are both agreeing on the same thing? :D


Conceptually, no. To me, your statement sounds like saying we did not solve flight because our planes have to land.


Well, I am in fact trying to argue that the self-feedback loop is what is missing to solve AGI, and even more so implying that the reason for that may be that there are different psychological ways to see a "human". Also that validity in data can only be achieved using a multimodal approach with source ranking.
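
To make "source ranking" concrete: one naive way to do it is to weight each claim by the reliability of the sources it comes from and only accept claims that clear a threshold. This is just a toy sketch of the idea; the source categories, weights, and threshold below are all made up for illustration, not taken from any real system.

    from collections import defaultdict

    # Hypothetical reliability weights per source type (illustrative only).
    SOURCE_RANK = {"peer_reviewed": 1.0, "news": 0.6, "forum": 0.3}

    def validate(claims, threshold=0.7):
        # claims: list of (statement, source_type) pairs,
        # possibly gathered across different modalities.
        score = defaultdict(float)
        for statement, source in claims:
            score[statement] += SOURCE_RANK.get(source, 0.1)
        # A claim counts as "valid" once enough source weight corroborates it.
        return {s: w >= threshold for s, w in score.items()}

    print(validate([
        ("water boils at 100 C", "peer_reviewed"),
        ("water boils at 100 C", "forum"),
        ("the moon is made of cheese", "forum"),
    ]))
    # -> {'water boils at 100 C': True, 'the moon is made of cheese': False}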

Psychology essentially has the same problem and is only a "science" where it is reproducible. Quantitative and qualitative psychology are two approaches to the same problem, where the latter is "reading between the lines" and the former is "bean counting", meaning statistical inference.

I am trying to say that a friend of mine created "Chicken Infinite" in 2014, which is basically an endless auto-generated cookbook. DeepL has also been around for a while. These applications lead me to believe that text applications trained on a large dataset do not have to be this resource-intensive.
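
I don't know how Chicken Infinite actually worked, but a word-level Markov chain is one way to get an "endless cookbook" with trivial compute, which is the point about cost. A minimal sketch, with a stand-in corpus instead of real recipe data:

    import random
    from collections import defaultdict

    # Stand-in corpus; a real version would be trained on scraped recipes.
    corpus = ("dice the chicken . fry the chicken in butter . "
              "season the chicken with salt . serve the chicken hot .").split()

    # Map each word to the list of words observed directly after it.
    chain = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        chain[a].append(b)

    def generate(start="dice", n=15):
        # Walk the chain, sampling a random observed successor each step.
        out = [start]
        for _ in range(n):
            out.append(random.choice(chain[out[-1]]))
        return " ".join(out)

    print(generate())

This generates plausible-looking recipe text forever from a lookup table, no GPU required, which is why the existence of such tools made large-model training costs surprising to me.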

Furthermore, what makes ChatGPT enticing is its chat interface, which uses a multi-model approach too. Have it create a detailed story for you with multiple prompts, then ask it to generate the prompt that would produce this story, and you will see various model instructions (or at least you could last month).

Put differently, there is no AGI because the understanding function is simply not present, and I think the reason for that is buried in our approach to the human mind.



