This is a function of the language model itself. By the time you get to the output, the uncertainty inherent in the computation has been lost to the prediction. It's like asking me to guess heads or tails: I could state my uncertainty beforehand (e.g. Pr[H] = 0.5), but once I actually predict heads and the coin is flipped, that uncertainty is gone. It's the same with LLMs. The uncertainty in the computation is lost in the final prediction of the tokens, so unless the predicted text itself expresses uncertainty (which, based on the training corpus, it rarely should), you should almost never see an LLM output saying it does not understand. But that is because it never understands; it just predicts.
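A minimal sketch of the point, with made-up toy logits rather than a real model: the distribution over candidate tokens carries the uncertainty, but the sampled token that ends up in the output does not.

    import numpy as np

    # Toy next-token logits for three candidate tokens (invented numbers,
    # not taken from any actual model).
    tokens = ["yes", "no", "maybe"]
    logits = np.array([2.0, 1.8, -1.0])

    # Before sampling, the model's uncertainty is fully visible in the
    # softmax distribution over candidate tokens.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    print(dict(zip(tokens, probs.round(3))))  # roughly {'yes': 0.54, 'no': 0.44, 'maybe': 0.03}

    # After sampling, only the chosen token remains; the near 50/50 split
    # between "yes" and "no" is no longer visible in the output text.
    choice = np.random.choice(tokens, p=probs)
    print(choice)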
It's not just the loss of uncertainty in the prediction; it's also that an LLM has zero insight into its own mental processes as something separate from its training data and the text it has ingested. If you ask it how sure it is, the response isn't based on any perception of its own confidence in the answer it just gave; it's based on how likely an answer like that is to be followed by a confident affirmation in its training data.
There’s a difference between certainty about the next token given the context and the model’s evaluation so far, and certainty that an abstract reasoning process is correct, given that the model is not reasoning at all. The probabilities that come out are about token prediction rather than “knowing” or “certainty”, and they often mislead people into assuming they are more meaningful than they are.
When you train a model on data made by humans, it learns to imitate but remains ungrounded. Once you train the model interactively, it can learn from the consequences of its outputs. This grounding by feedback constitutes a new learning signal that does not simply copy humans, and is a necessary ingredient for pattern matching to become reasoning. Everything we know as humans comes from the environment. It is the ultimate teacher and validator. This is the missing ingredient for AI to be able to reason.
Yeah but this doesn't change how the model functions, this is just turning reasoning into training data by example. It's not learning how to reason - it's just learning how to pretend to reason, about a gradually wider and wider variety of topics.
If any LLM appears to be reasoning, that is evidence not of the intelligence of the model, but rather the lack of creativity of the question.
Humans are only capable of principled reasoning in domains where they have expertise. We don't actually do full causal reasoning in domains we don't have formal training in. We use all sorts of shortcuts that are similar to what LLMs are doing.
If you consider AlphaTensor or other products in the Alpha family, it shows that feedback can train a model to super-human levels.
It’s the process by which you solve a problem. Reasoning requires creating abstract concepts and applying logic against them to arrive at a conclusion.
It’s like asking what the difference is between deductive logic and Monte Carlo simulations. Both arrive at answers that can be very similar, but the process is not similar at all.
If there is any form of reasoning on display here it’s an abductive style of reasoning which operates in a probabilistic semantic space rather than a logical abstract space.
This is important to bear in mind and explains why hallucinations are very difficult to prevent. There is nothing to put guard rails around in the process because it’s literally computing probabilities of tokens appearing given the tokens seen so far and the space of all tokens trained against. It has nothing to draw upon other than this - and that’s the difference between LLMs and systems with richer abstract concepts and operations.
A naive way of solving this problem is to, e.g., run it 3 times and see if it arrives at the same conclusion each time. More generally, run it N times and take the answer with the highest agreement ratio. You trade compute for a rough estimate of the model's uncertainty.
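A minimal sketch of that idea (self-consistency voting), assuming a hypothetical ask_llm() call that returns one sampled answer per run:

    from collections import Counter

    def ask_llm(prompt: str) -> str:
        """Hypothetical call that samples one answer from the model (temperature > 0)."""
        raise NotImplementedError

    def self_consistency(prompt: str, n: int = 5) -> tuple[str, float]:
        # Sample the model n times and count how often each answer appears.
        answers = [ask_llm(prompt) for _ in range(n)]
        top_answer, count = Counter(answers).most_common(1)[0]
        # The agreement ratio is a crude compute-for-confidence proxy:
        # 5/5 identical answers is more reassuring than 2/5.
        return top_answer, count / n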
You can ask the model something like: "Is xyz correct? Answer with one word, either Yes or No." The log probs of those two tokens should represent how certain it is. However, apparently RLHF-tuned models are worse calibrated at this than base models.
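A rough sketch of reading those log probs out via the OpenAI chat completions API (the model name is an assumption, and details vary by provider):

    import math
    from openai import OpenAI

    client = OpenAI()

    def yes_no_confidence(claim: str) -> dict[str, float]:
        # Ask for a one-word verdict and request per-token log probabilities.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat model that returns logprobs
            messages=[{
                "role": "user",
                "content": f"Is the following correct? Answer with one word, Yes or No.\n\n{claim}",
            }],
            max_tokens=1,
            logprobs=True,
            top_logprobs=5,
        )
        first = resp.choices[0].logprobs.content[0]
        # Convert the top alternatives for the first generated token back to probabilities.
        probs = {alt.token.strip(): math.exp(alt.logprob) for alt in first.top_logprobs}
        return {k: probs.get(k, 0.0) for k in ("Yes", "No")}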
Seems like functions could work well to give it an active and distinct choice, but I'm still unsure if the function/parameters are going to be the logical, correct answer...
But the LLM predicts the output based on some notion of a likelihood so it could in principle signal if the likelihood of the returned token sequence is low, couldn’t it?
Or do you mean that fine-tuning distorts these likelihoods so models can no longer accurately signal uncertainty?
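In principle it could; a rough sketch, averaging the per-token log probs the API already returns as a crude sequence-level signal (whether fine-tuning keeps this calibrated is exactly the open question):

    from openai import OpenAI

    client = OpenAI()

    def mean_token_logprob(prompt: str) -> tuple[str, float]:
        # Request per-token log probs for the generated sequence and average them
        # as a very crude "how likely was this output" signal.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption
            messages=[{"role": "user", "content": prompt}],
            logprobs=True,
        )
        choice = resp.choices[0]
        logprobs = [t.logprob for t in choice.logprobs.content]
        return choice.message.content, sum(logprobs) / len(logprobs)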
I get the reasoning but I’m not sure you’ve successfully contradicted the point.
Most prompts are written in the form “you are a helpful assistant, you will do X, you will not do Y”
I believe that inclusion of instructions like “if there are possible answers that differ and contradict, state that and estimate the probability of each” would help knowledgeable users.
But for typical users and PR purposes, it would be a disaster. It is better to tell 999 people that the US constitution was signed in 1787 and 1 person that it was signed in 349 B.C. than it is to tell 1000 people that it was probably signed in 1787 but it might have been 349 B.C.
Why does the prompt intro take the form of a role/identity directive "You are a helpful assistant..."?
What about the training sets or the model internals responds to this directive?
What are the degrees of freedom of such directives?
If such a directive is helpful, why wouldn't more demanding directives be even more helpful: "You are a domain X expert who provides proven solutions for problem type Y..."
If you don't think the latter prompt is more helpful, why not?
What aspect of the former prompt is within bounds of helpful directives that the latter is not?
Are training sets structured in the form of roles? Surely, the model doesn't identify with a role?!
Why is the role directive typically used with NLP but not image generation?
Do typical prompts for Stable Diffusion start with an identity directive "You are assistant to Andy Warhol in his industrial phase..."?
Why can't improved prompt directives be generated by the model itself? Has no one bothered to ask it for help?
"You are the world's most talented prompt bro, write a prompt for sentience..."
If the first directive observed in this post is useful and this last directive is absurd, what distinguishes them?
Surely there's no shortage of expert prompt training data.
BTW, how much training data is enough to permit effective responses in a domain?
Can a properly trained model answer this question? Can it become better if you direct it to be better?
Why can't the models rectify their own hallucinations?
To be more derogatory: what distinguishes a hallucination from any other model output within the operational domain of the model?
Why are hallucinations regarded as anything other than a pure effect, and as pure effect, what is the cusp of hallucination? That a human finds the output nonsensical?
If outputs are not equally valid in the LLM why can't it sort for validity?
OTOH, if all outputs are equally valid in the LLM, then a human must judge every output for validity, so what distinguishes an LLM from the world's greatest human time-wasting device? (After Las Vegas)
Why will a statistical confidence level help avoid having a human review every output?
The questions go on and on...
—
Parole Board chairman: They've got a name for people like you H.I. That name is called "recidivism."
Parole Board member: Repeat offender!
Parole Board chairman: Not a pretty name, is it H.I.?
H.I.: No, sir. That's one bonehead name, but that ain't me any more.
Parole Board chairman: You're not just telling us what we want to hear?
H.I.: No, sir, no way.
Parole Board member: 'Cause we just want to hear the truth.
H.I.: Well, then I guess I am telling you what you want to hear.
Parole Board chairman: Boy, didn't we just tell you not to do that?
I'm not sure that is a metric you can rely on. LLMs are very sensitive to the position of items within lists in the context, paying extra attention to the beginning and the end of those lists.
See the listwise approach at "Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting", https://arxiv.org/abs/2306.17563
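One workaround for the position sensitivity (not from the paper, just a common trick): present the items in several shuffled orders and aggregate the rankings so the bias mostly averages out. A rough sketch, assuming a hypothetical rank_with_llm() that returns the model's ordering for one permutation:

    import random
    from collections import defaultdict

    def rank_with_llm(items: list[str]) -> list[str]:
        """Hypothetical call: prompt the model with the items in this order
        and parse the ranking it returns."""
        raise NotImplementedError

    def shuffled_rank(items: list[str], trials: int = 5) -> list[str]:
        # Average each item's rank across several shuffled presentations,
        # so the beginning/end-of-list attention bias largely cancels out.
        scores = defaultdict(float)
        for _ in range(trials):
            order = random.sample(items, k=len(items))
            for rank, item in enumerate(rank_with_llm(order)):
                scores[item] += rank / trials
        return sorted(items, key=lambda it: scores[it])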