When the tool isn't polarized. I wouldn't use a wrench with an objectionable symbol on it.
> You don't forecast weather with an image detection model
What do you do with a large language model? I think most people put language in and get language out. Plenty of people are going to look askance at statements like "the devil is really good at coding, so let's use him for that only". Do you think it should be illegal/not allowed to refuse to hire a person because they have political beliefs you don't like?
> I'm tired of this example everyone tests out; I think it undermines the researchers' and engineers' hard work.
It's completely valid, IMO. If the researchers and engineers want their work not to be judged on its political biases, they can take them out. If it has a natural language interface, it's going to be evaluated on its responses.
Basic informatics says this is objectively impossible. Every human language is pre-baked with its own political biases. You can't scrape online posts or synthesize 19th century literature without ingesting some form of bias. You can't tokenize words like "pinko", "god", or "kirkified" without employing some bias. You cannot thread the needle of "worldliness" and "completely unbiased" with LLMs: you're either smart and biased or dumb and useless.
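You can see a hint of this at the vocabulary level. A rough illustration (using the GPT-2 tokenizer purely as an example, not any particular model under discussion): how each of those words gets split is itself a product of what the training corpus contained.

```python
from transformers import AutoTokenizer

# Illustration only: the subword vocabulary reflects the corpus it was built from.
# "gpt2" is used here purely as an example tokenizer.
tok = AutoTokenizer.from_pretrained("gpt2")

for word in ["pinko", "god", "kirkified"]:
    # Common words tend to survive as single tokens; rarer or newer coinages
    # get split into subword pieces.
    print(f"{word!r} -> {tok.tokenize(word)}")
```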
I judge models on how well they code. I can use Wikipedia to learn about Chinese protests, but not to write code. Using political bias as a benchmark is an unserious snipe hunt, and researchers deliberately ignore it for good reason.
Normally I would say you're right, but I read the context the opposite way: I read the "fine" as a straight/literal statement. The author of "this is fine" is disputing their parent comment's claim that "this can be considered [a bug]".
Are you doing this with vLLM, or some other model-running library/setup?
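If it's vLLM, I'm picturing something like the offline-inference API; a rough sketch of what I mean (the model name is just a placeholder):

```python
from vllm import LLM, SamplingParams

# Placeholder model name; swap in whatever you're actually running.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)

outputs = llm.generate(["Write a Python function that reverses a string."], params)
print(outputs[0].outputs[0].text)
```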