
If the LLM’s user indicates that the input can and should be translated into a logic problem, and the user then runs the resulting definition in an external Prolog solver, what is the LLM really doing here? Probabilistically mapping a logic problem to Prolog? That’s not quite the same as the LLM solving the problem.
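To illustrate what I mean by “mapping”: a toy problem like “Alice is older than Bob, Bob is older than Carol; who is the oldest?” might get encoded roughly like this (a hypothetical sketch, the predicate names are made up for illustration, not something any particular model produced):

    % Hypothetical encoding of a toy age-ordering puzzle.
    older(alice, bob).
    older(bob, carol).

    % older_than/2: X is older than Y, directly or transitively.
    older_than(X, Y) :- older(X, Y).
    older_than(X, Y) :- older(X, Z), older_than(Z, Y).

    % The oldest person is someone nobody else is older than.
    oldest(X) :- older(X, _), \+ older_than(_, X).

    % ?- oldest(Who).
    % Who = alice.

The LLM produces text like this; the Prolog engine does the actual search and deduction.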


Do you feel differently if it runs the Prolog in a tool call?


Not the user you’re replying to, but I would feel differently if the LLM responded with “This is a problem I can’t reliably solve by myself, but there’s a logic programming system called Prolog for which I could write a suitable program that would solve it. Do you have access to a Prolog interpreter, or could you give me access to one? I could also just output the Prolog program if you like.”

Furthermore, the LLM does know how Prolog’s unification algorithm works (in the sense that it can provide an explanation of how Prolog and the algorithm work), yet it isn’t able to follow that algorithm by itself the way a human could (with pen and paper), even for simple Prolog programs whose execution would fit within the resource constraints.
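For example, a human can trace the standard two-clause append/3 definition with pen and paper (the trace below is just a hand-worked illustration):

    append([], L, L).
    append([H|T], L, [H|R]) :- append(T, L, R).

    % ?- append([1,2], [3], X).
    % Clause 2 unifies: H = 1, T = [2], L = [3], X = [1|R].
    % Recurse: append([2], [3], R) gives R = [2|R2] with append([], [3], R2).
    % Clause 1 unifies: R2 = [3], so R = [2,3] and X = [1,2,3].

That kind of step-by-step substitution is exactly what I mean by “following the algorithm”.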

This is part of the gap that I see to true human-level intelligence.


But the problem does get solved. It depends on what you care about.


Psst, don't tell my clients that it's not actually me but the syntax of the language I use that's solving their problem.



