>I love (Neo)Vim. Like I wrote earlier, it's been my primary editor for the past 15 years. What I don't love is all the configuration that goes into it before I can use it to start writing code.
This resonates with me. I've spent so much time configuring Neovim in the terminal (kitty) and I've never had everything work 100% of the time. Simple things like seeing an entire TypeScript error are challenging to get working: the error just continues on one line off the edge of the screen.
With LLMs, the tradeoff tipped in favor of Cursor with the Neovim extension.
> 2. It just works
So I switched to Cursor last week from Neovim in the terminal, and this is how I feel. But I'm not going to invest more time checking out Zed now that I've just got Cursor set up the way I like it.
But it's great to see all the progress in IDEs lately.
I think this type of interaction is the future in lots of areas. I can imagine we replace APIs completely with a single endpoint where you hit it with a description of what you want back. Like, hit up 'news.ycombinator.com/api' with "give me all the highest rated submissions over the past week about LLMs"; a server-side LLM translates that to SQL, executes the query, and returns the results.
This approach is broadly applicable to lots of domains beyond FFmpeg. Very cool to see things moving in this direction.
HN and internet forums in general have a contagion of critique, where we mercilessly point out flaws and attempt to show our superiority. It's best to ignore them.
> I ask the LLM to build it. That way, by definition, the LLM has a built in understanding of how the system should work, because the LLM itself invented it.
I share the same belief, and as a rebuttal to EagnaIonat's comment: when you ask the LLM to create something, it finds the centroid of your request in its high-dimensional latent space. The output is congruent with what the model knows and believes. So yes, the output is statistical, but it is also embedded in the model's own subspace. For code you have written independently of the LLM, that isn't necessarily true.
I think there are many ways we could test this, even in smaller models through constructed tests and reprojection of output programs.
It's like asking an OO programmer to come up with a purely functional solution: it would be hard. And if I then asked them to refactor and extend an existing purely functional program, the result would likely be broken.
Solutions have to exist in the natural space; this is true for everyone.
The big protocol doing this is called the Model Context Protocol, and it should have been a widely read/discussed post, except HN has taken a broadly anti-AI stance.
Except you don't need an LLM to do any of this, and the direct way is already computationally cheaper. If you don't know what results you want, you should figure that out first, instead of asking a Markov chain to do it.
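The point can be made concrete: the hypothetical weekly-top-posts query from upthread is an ordinary parameterized SQL statement. Table and column names here are invented for illustration; no model is in the loop, and the result is deterministic and auditable.

```python
import sqlite3


def top_weekly(conn: sqlite3.Connection, topic: str):
    """Highest-rated submissions on a topic from the past week, no LLM needed."""
    return conn.execute(
        "SELECT title, score FROM submissions "
        "WHERE topic = ? AND created_at >= datetime('now', '-7 days') "
        "ORDER BY score DESC",
        (topic,),
    ).fetchall()
```

A fixed query like this costs microseconds per call and can be cached; the natural-language version pays model-inference latency and cost on every request for the same answer.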
I believe this approach is destined for a lot of disappointment. LLMs enable a LOT of entry- and mid-level performance, quickly. Rightfully, you and I worry about the edge cases and bugs. But people will trend towards things that enable them to do things faster.
This thing costs more than €45,000 with any kind of equipment and is massive; a far cry from something like a Renault Clio or a similar econobox.
And the average new home in the US is $480k, but at a median income of $31k I would never call it inexpensive. Let's save that label for the truly high-value purchases like $90 espresso machines, $400 phones, $400 PCs, and $300 TVs. Maybe a $10k 10-year-old Toyota with 150k miles left?
Median household income in the US is more like $70k. Median individual income is $40k. Median household also varies tremendously by state, Mississippi is around $44k and Maryland/DC area is $90k.