echion's comments

> you can combine Spark with M3U, the former streaming the compute, lowering TTFT, the latter doing the token generation part

Are you doing this with vLLM, or some other model-running library/setup?


They're probably referencing this article: https://blog.exolabs.net/nvidia-dgx-spark/
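
For context, the scheme in that article (as I understand it) is prefill on the DGX Spark, which is compute-rich, and decode on the M3 Ultra, which is bandwidth-rich. A rough roofline sketch in Python of why that helps; every number below is an assumption for illustration, not a measured spec:

    # Prefill is compute-bound; decode streams the active weights once per
    # generated token, so it is memory-bandwidth-bound. Numbers assumed.

    active_params = 3e9      # ~3B active params/token for an A3B MoE (assumed)
    bytes_per_param = 0.5    # NVFP4 ~ 4 bits per weight
    prompt_tokens = 8000

    # Prefill: roughly 2 FLOPs per active parameter per prompt token.
    spark_flops = 500e12     # assumed sustained low-precision FLOP/s
    ttft = 2 * active_params * prompt_tokens / spark_flops

    # Decode: one full pass over the active weights per generated token.
    m3u_bandwidth = 800e9    # assumed memory bandwidth, bytes/s
    tok_per_s = m3u_bandwidth / (active_params * bytes_per_param)

    print(f"TTFT ~{ttft:.2f}s, decode ~{tok_per_s:.0f} tok/s (upper bounds)")

Both figures are best-case upper bounds; real numbers will come in lower, but the asymmetry (compute for prefill, bandwidth for decode) is the whole point of the split.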

> training smart LLMs like Qwen3-Next-80B-A3B-Instruct-NVFP4

Sounds interesting; can you suggest any good discussions of this (on the web)?


> with same success you can prove bias in western models.

What are some examples? (curious, as a westerner)

Are there "bias" benchmarks? (I ask, rather than just search, because: bias)


> when do we stop this kind of polarization?

When the tool isn't polarized. I wouldn't use a wrench with an objectionable symbol on it.

> You don't forecast weather with image detection model

What do you do with a large language model? I think most people put language in and get language out. Plenty of people are going to look askance at statements like "the devil is really good at coding, so let's use him for that only". Do you think it should be illegal/disallowed to refuse to hire a person because they have political beliefs you don't like?


> I'm tired of this example everyone tests out, I think it undermines the researchers and engineers hard work.

It's completely valid, IMO. If the researchers and engineers want their work not to be judged on its political biases, they can take them out. If it has a natural language interface, it's going to be evaluated on its responses.


And risk their own or their families' lives?

Or what should they do, give up their careers?


> they can take them out

Basic informatics says this is objectively impossible. Every human language is pre-baked with its own political biases. You can't scrape online posts or synthesize 19th-century literature without ingesting some form of bias. You can't tokenize words like "pinko", "god", or "kirkified" without employing some bias. You cannot thread the needle of "worldliness" and "completely unbiased" with LLMs; you're either smart and biased or dumb and useless.
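
You can see the frequency bias baked into a vocabulary directly. A quick check with the tiktoken library (assuming it's installed; any BPE tokenizer shows the same effect):

    import tiktoken

    # BPE vocabularies tend to give frequent words a single token and
    # split rarer or newer coinages into pieces -- corpus-frequency bias.
    enc = tiktoken.get_encoding("cl100k_base")
    for word in ["god", "pinko", "kirkified"]:
        ids = enc.encode(word)
        print(word, "->", [enc.decode([i]) for i in ids])

Common words tend to come back as one token; the coinages split into several. That's the training corpus's statistics showing through before any model even sees the text.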

I judge models on how well they code. I can use Wikipedia to learn about Chinese protests, but not to write code. Using political bias as a benchmark is an unserious snipe hunt that gets deliberately ignored by researchers for good reason.
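
Concretely, "how well they code" can be checked mechanically: generate, then run the output against tests. A minimal sketch; generate() here is a hypothetical stand-in for whatever model API or local runtime you use, hard-coded so the sketch runs on its own:

    def generate(prompt: str) -> str:
        # Hypothetical stand-in: call your model here.
        return "def add(a, b):\n    return a + b"

    def passes_tests(src: str) -> bool:
        ns = {}
        try:
            # Real harnesses sandbox this; never exec untrusted code directly.
            exec(src, ns)
            return ns["add"](2, 3) == 5 and ns["add"](-1, 1) == 0
        except Exception:
            return False

    print(passes_tests(generate("Write add(a, b) in Python.")))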


Can you elaborate a bit on your setup if you have time?



Ironically, this fell off the HN front page without enough upvotes... neither hackernews.coffee nor Claude suggested it to me...


Oh, itsdrewmiller pointed us in the right direction: https://news.ycombinator.com/item?id=44551579


Normally I would say you're right, but I read the context the opposite way; I took the "fine" as a straight/literal statement: the author of "this is fine" is disputing the parent comment's claim that "this can be considered [a bug]".


"Axial motors" don't seem to have been discussed much here; In 2022 there was some discussion of the flaws ( https://news.ycombinator.com/item?id=30816149 ).

