
I experienced ChatGPT confidently giving incorrect answers about the Schwarzschild radius of the black hole at the center of our galaxy, Sagittarius A*. Both when asked for "the Schwarzschild radius of a black hole with 4 million solar masses" (a calculation) and "the Schwarzschild radius of Sagittarius A*" (a simple lookup).

Both answers were orders of magnitude wrong, and vastly different from each other.
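For reference, the calculation being asked for is a one-liner: r_s = 2GM/c². A minimal sketch in JS (constants are standard CODATA/IAU values):

```javascript
// Schwarzschild radius r_s = 2GM/c^2 -- a quick sanity check.
const G = 6.674e-11;     // gravitational constant, m^3 kg^-1 s^-2
const c = 2.998e8;       // speed of light, m/s
const M_SUN = 1.989e30;  // one solar mass, kg

const M = 4e6 * M_SUN;   // ~4 million solar masses, roughly Sagittarius A*
const rs = (2 * G * M) / (c * c);
console.log(rs.toExponential(2) + " m"); // ~1.18e10 m, about 12 million km
```

Anything that isn't on the order of 10^10 m (a bit under 0.1 AU) for this mass is off.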

The JS code it suggested for a simple database connection had glaring SQL injection vulnerabilities.
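The vulnerability class in question: building the SQL string by interpolating user input, instead of passing the input as a bound parameter. A hypothetical sketch (function names are made up; the safe form mirrors the node-postgres style of separating statement text from values):

```javascript
// VULNERABLE: user input is spliced into the SQL string, so input like
// "' OR '1'='1" breaks out of the string literal and rewrites the query.
function unsafeQuery(userInput) {
  return `SELECT * FROM users WHERE name = '${userInput}'`;
}

// SAFER: parameterized query -- the driver sends the statement and the
// values separately, so input can never change the statement's structure.
function safeQuery(userInput) {
  return { text: "SELECT * FROM users WHERE name = $1", values: [userInput] };
}

const evil = "' OR '1'='1";
console.log(unsafeQuery(evil));
// SELECT * FROM users WHERE name = '' OR '1'='1'   <- matches every row
```

In the safe form the malicious string just arrives as an ordinary value, never as SQL.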

I think it's an OK tool for discovering new libraries and getting oriented quickly in languages and coding domains you're unfamiliar with. But the answers read more like a forum post from a novice who read a tutorial and otherwise has little experience.



My understanding is that ChatGPT (and similar systems) are purely language models; they have no "understanding" of anything like reality. Basically, they encode a complex statistical model of how words relate to one another.
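A toy illustration of what "a statistical model of how words relate" means: a bigram model that just predicts the most frequent next word seen in its training text. Real LLMs are vastly larger neural networks, but they share the objective of predicting the next token from prior tokens rather than modeling physical reality:

```javascript
// Count how often each word follows each other word in the training text.
function bigramCounts(text) {
  const words = text.toLowerCase().split(/\s+/);
  const counts = {};
  for (let i = 0; i < words.length - 1; i++) {
    const [a, b] = [words[i], words[i + 1]];
    counts[a] = counts[a] || {};
    counts[a][b] = (counts[a][b] || 0) + 1;
  }
  return counts;
}

// Predict the next word as the most frequently observed successor.
function predictNext(counts, word) {
  const next = counts[word] || {};
  return Object.keys(next).sort((a, b) => next[b] - next[a])[0];
}

const counts = bigramCounts("the cat sat on the mat the cat ran");
console.log(predictNext(counts, "the")); // "cat" (seen twice after "the")
```

A model like this can produce fluent-looking continuations with zero grounding in whether they're true, which is exactly the failure mode described above.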

I'm a bit surprised that it got a lookup wrong, but for any other domain, describing it as a "novice" is understating the situation a lot.



