Hacker News | matt_cogito's comments

I recommend a multi-LLM library like the Vercel AI SDK. Anything more than that and you will be working with lots of abstraction layers that prevent you from learning how agents work; anything less and you are using the providers' APIs directly, which is still fine, but too limiting.

If you are an experienced engineer, you should be able to build the necessary primitives yourself pretty easily.
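To illustrate what "the necessary primitives" might look like, here is a minimal agent-loop sketch: the model proposes a tool call, the loop executes it and feeds the observation back, and stops when the model answers. Note that `callModel`, `ToolCall`, and the `upper` tool are all hypothetical stand-ins for a real provider call (e.g. via an SDK); only the loop structure is the point.

```typescript
// Minimal agent-loop primitive. `callModel` is a stub standing in
// for any real LLM provider call; in practice it would hit an API.

type ToolCall = { tool: string; args: string };
type ModelReply = { done: boolean; toolCall?: ToolCall; answer?: string };

// Tool registry: name -> implementation (here a single toy tool).
const tools: Record<string, (args: string) => string> = {
  upper: (s) => s.toUpperCase(),
};

// Stub model: first turn requests a tool, second turn answers
// with the last observation it was shown.
function callModel(history: string[]): ModelReply {
  if (history.length === 0) {
    return { done: false, toolCall: { tool: "upper", args: "hello" } };
  }
  return { done: true, answer: history[history.length - 1] };
}

function runAgent(maxSteps = 5): string {
  const history: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const reply = callModel(history);
    if (reply.done) return reply.answer ?? "";
    const { tool, args } = reply.toolCall!;
    history.push(tools[tool](args)); // execute tool, record observation
  }
  return "max steps reached";
}

console.log(runAgent()); // "HELLO"
```

Everything beyond this loop (streaming, retries, structured tool schemas) is what a library like the Vercel AI SDK handles for you.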


Am I reading this correctly: an AI researcher might get paid roughly the same amount of money as the largest transfer of a soccer player in history (Neymar at €220M)?

There is hope for humanity.

Jokes aside, how and why?


Why: Zuck knows exactly one trick, and that is to throw money at a problem.

I don't know what the current tally on his metaverse fiasco is, but if he can spend billions upon billions on that, then poaching AI researchers and engineers for a fraction of that isn't really out of character.


At least part of it is that the capex for LLM training is so high. Compute used to be extremely cheap compared to staff, but that's no longer the case for large-model training.


How: Meta has the money because they presumably have infinite margins.

Why: It's a bubble.


I do not have a use case for this _right now_. But I am 100% sure I will have one pretty soon.

Email has become this massive, constant influx of information that cannot be managed just by adding an AI agent to it. It takes so much more context knowledge to get right. None of the tools I have tried so far seem to solve this problem the way I would need it.

So sooner or later I might solve it for myself. And you guys will get a new customer.

Good luck, cowboys!


Nice, looking forward to hearing about it!


All production code will be written by AI. The question is not if. It is WHEN.

What we are seeing right in front of our eyes is how the boundary of what is possible in software programming has moved from AI/LLMs poorly writing simple scripts to being able to "zero-shot" and "vibe code" a complex system from a set of instructions written in natural language.

What we might be seeing in 2025 is how programming, the way it has been practiced for decades, disappears and becomes a rare artisanal craft, pursued not for productivity but to show off skill, for the intellectual entertainment of the programmer.

I know how hard this hits the real coders. I am one, too. But I cannot unsee what I have seen. The progress is so undeniable, there is no doubt left in me.


Statistically salient answers are not necessarily correct.

Real "AI" may happen some day, but it is unlikely to be an LLM. =3


I’m also surprised at the progress but don’t quite share the “AI is doing a good job” perspective.

It’s fine. Some things it’s awful at. The more you know about what you’re asking for, the worse the result, in my opinion.

That said, a lot of my complaints are about out-of-date APIs being referenced and other little nuisances. If AI is writing the code, why did we even need an ergonomic API update in the first place? Maybe APIs stabilize and AI just goes nuts.


LLMs are doing a great job at generating syntactically correct output related to the prompt or task at hand. The semantics, hierarchy, architecture, abstraction, security, and maintainability of a code base are not being handled by LLMs generating code.

So far, the syntax has gotten better in LLMs, and more tooling allows for even stronger validation of the syntax, but all those other things are still missing.

I feel like my job is still safe, but that of less experienced developers is in jeopardy. We will see what the future brings.


Yeah, I think it's over. It will take a while for the effects to ripple through society, but writing code will be seen as something like woodworking.


I'm more curious how society will look when half the population is living in tents.


what a time to be alive!


"all computer engineering will be replaced by software"

"all code will be generated by compilers from basic prose"

"enterprise software will be trivialized by simple SQL prompts that anyone can write"

...

The progress is so unreliable, there is doubt left in me.


Let's start by stating that Opus 4 + Sonnet 4 are a gift to humanity. Or at least to developers.

The two models are not just the best models for coding at this point (in areas like UX/UI and following instructions they are unmatched); they come packaged with possibly the best command-line tool today.

They invite developers to use them a lot. Yet for the first time ever, I can feel how I cannot fully rely on the tool, and I feel a lot of pressure when using it. Not because I don't want to pay, but because the options are either:

> A) Pay $200 and be constantly warned by the system that you are close to hitting your quota (very bad UX)

> B) Pay $$$??? via the API and watch your bill grow to over $2k per month (this is me this month via Cursor)

I guess Anthropic faces the great dilemma now: should they make the models more efficient to use and lower the prices to increase limits and boost usage, OR should they milk their cash cows while they can?

I am pretty sure no other model comes even close in terms of developer-hours at this point. Gemini would be my second-best guess, but Gemini is still lagging behind Claude and is not that good at agentic workloads.

