Hacker News | ilmj8426's comments

It's impressive to see how fast open-weights models are catching up in specialized domains like math and reasoning. Has anyone tested this model on complex logic tasks in coding? Strong math performance sometimes correlates well with debugging and algorithm generation.

It makes complete sense to me: highly specific models don't have much commercial value, and at-scale LLM training favours generalism.

Kimi K2 is pretty decent at coding, but it's nowhere near the SOTA models from Anthropic/OpenAI/Google.

Are you referring to the new reasoning version of Kimi K2?


I've recently started using a similar approach for my own projects. Providing a high-level architecture overview in a single markdown file really helps the LLM understand the 'why' behind the code, not just the 'how'. Does anyone have a specific structure or template for Claude.md that works best for frontend-heavy projects (like React/Vite)? I find that's where the context window often gets cluttered.
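For what it's worth, here's the rough shape I've been experimenting with — not an established convention, just a sketch, and the section names and example paths are my own invention:

```markdown
# Project Overview
One paragraph: what the app does and who uses it.

## Architecture (the "why")
- SPA built with React 18 + Vite; routing via react-router.
- Server state lives in TanStack Query; UI state stays in components.
- Why: avoids a global store for data the server already owns.

## Directory Map
- src/features/*  — one folder per domain feature (components + hooks + api)
- src/components/ — shared, presentation-only components
- src/lib/        — framework-agnostic utilities

## Conventions
- Named exports only; co-locate tests as *.test.tsx.
- No direct fetch() in components — go through src/features/*/api.

## Out of Scope for Edits
- src/generated/ (codegen output), vite.config.ts
```

The "why" bullets and the out-of-scope list seem to do the most work — they stop the model from re-deriving (or violating) decisions on every prompt.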
