Hacker News

Yes, tokenization and embeddings are exactly how LLMs process input—they break text into tokens and map them to vectors. POS tags and SVOs aren't part of the model pipeline but help visualize structures the models learn implicitly.
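The tokenize-then-embed step can be sketched with a toy example. Everything below is illustrative: the six-word vocabulary, the whitespace "tokenizer", and the random 4-dimensional vectors are made up; real LLMs use learned subword tokenizers (e.g. BPE) and trained embedding tables with thousands of dimensions.

```python
import random

# Made-up vocabulary mapping words to token ids (real models: ~100k subwords).
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

random.seed(0)
dim = 4  # toy embedding dimension; real models use thousands
# One random vector per vocab entry (real embeddings are learned, not random).
embeddings = [[random.uniform(-1, 1) for _ in range(dim)] for _ in vocab]

def tokenize(text):
    """Whitespace 'tokenizer': map each lowercased word to a vocab id."""
    return [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]

def embed(token_ids):
    """Look up one embedding vector per token id."""
    return [embeddings[i] for i in token_ids]

ids = tokenize("The cat sat on the mat")
vectors = embed(ids)
print(ids)       # token ids: [0, 1, 2, 3, 0, 4]
print(len(vectors), len(vectors[0]))  # 6 tokens, each a 4-dim vector
```

Note that POS tags and SVO triples never appear anywhere in this pipeline: the model only ever sees token ids and their vectors, which is why such structures are described as learned implicitly.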

