Yes, you need to be aware of the possibility of confabulation by the AI. But you can minimize that problem by:

- Using an LLM that hallucinates less. GPT-4 is more reliable than GPT-3.5, and the just-released Claude 2.1 is reportedly better than its predecessor. In my experience, Bard confabulates too much to be useful for many purposes.

- Using the AI to explore relatively general topics. In my tests, GPT-4 is excellent for getting an overview of, say, linguistic theories, the history of ethics, or the differences between quantum and classical physics. The more focused the topic is--how a particular verb conjugates in Romanian, what David Hume said about the death penalty, how gravity affects neutrinos--the more you need to double-check with other sources.

- Focusing not on learning facts but on interactive exploration. One interesting exercise is to discuss counterfactuals: How might human civilization have developed if electricity had not been harnessed? What would have happened if a fifty-meter-diameter asteroid had struck the Rhine Valley in May 1944? There are no right or wrong answers to such questions, but exploring them with the AI can be very rewarding.

I agree with the OP: Interaction with LLMs can be a great way to learn, and it will only get better as their reliability improves and they become more customizable for the individual learner. What I want most now is for them to have a persistent memory of our past conversations. Better multimodal capabilities would also be nice.
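
(Incidentally, you can fake a crude version of that persistent memory today by saving the message list to disk and replaying it at the start of the next session. A minimal Python sketch, assuming the pre-1.0 openai package; the chat_history.json filename and the system prompt are just placeholders I made up:)

    import json
    import os
    import openai  # pre-1.0 openai package; reads OPENAI_API_KEY from the environment

    MEMORY_FILE = "chat_history.json"  # arbitrary filename for this sketch

    def load_history():
        # Reload the full conversation from the previous session, if any.
        if os.path.exists(MEMORY_FILE):
            with open(MEMORY_FILE) as f:
                return json.load(f)
        return [{"role": "system", "content": "You are a patient tutor."}]

    def chat(user_message):
        history = load_history()
        history.append({"role": "user", "content": user_message})
        # Replaying the entire history is what gives the model its "memory".
        response = openai.ChatCompletion.create(model="gpt-4", messages=history)
        reply = response["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        with open(MEMORY_FILE, "w") as f:
            json.dump(history, f)
        return reply

(Of course that's just context replay, not real memory: it stops working once the saved history outgrows the model's context window, which is why I'd rather the vendors solve it properly.)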


