dirkc's comments

> a semantic quibble

I mean, all of philosophy can probably be described as such :)

But I reckon this semantic quibble might also be why a lot of people don't buy into the idea that LLMs will take over work in any context where agency, identity, motivation, responsibility, accountability, etc. play an important role.


Maybe this is why I still prefer working with a very vanilla vim setup. When I start a coding task, I have to think for a moment about which files I need to work in and start by opening one; there is no IDE with 5 files already open. I do sometimes cheat and put a TODO comment in some files so that `git status` helps me remember where I left off previously.
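
Concretely, the trick is just this (the file name and TODO text here are made up):

    # end of a session: leave a breadcrumb in the file I was editing
    echo '# TODO(dirkc): continue refactoring parse_config here' >> src/config.py

    # next session: the dirty file shows up straight away
    git status --short
    #  M src/config.py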

Another thing that I've enjoyed a lot is a browser plugin called OneTab. When I start a new task or context switch, I just hit the button and all my browser tabs are saved and closed. I then go through the list and only reopen the tabs relevant to the task, or I just start from scratch.


I also keep a 'lab notebook', but I must admit that a lot of what I used to document in my notes (setting up software, compiling 3rd-party deps) I now document in code (scripts, devops, etc).
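
For example, where an old notebook entry might have listed build steps, now it's more likely a script like this (everything here is hypothetical, just to show the shape):

    #!/bin/sh
    # setup-deps.sh - what used to be a notebook entry, now runnable
    set -eu

    # install the toolchain (hypothetical package list)
    sudo apt-get install -y build-essential cmake

    # fetch and build a third-party dependency (hypothetical repo)
    git clone https://github.com/example/somelib.git
    cmake -S somelib -B somelib/build
    cmake --build somelib/build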

I still find lots of value in keeping notes though! And I sometimes miss them when I haven't kept any.


I'm hazarding a guess that there are many AI startups focused on building datasets with the aim of selling them. It still doesn't make total sense, since doing it badly would only hurt them, but maybe they don't really care about the product or outcome; they're just capturing their bit of the AI gold rush?

I guess the AI companies finally figured out they’re supposed to buy their stolen datasets from a shell company spun up by the most unsavory character within two degrees of the CEO. Every CEO has a drug dealer, and every CEO drug dealer knows the greasy grey hat dude running a data laundry “startup.” The VCs usually know some private equity dons who run the same racket to do bust-out fraud, too.

It’s truly unbelievable that OpenAI and Anthropic were so sloppy. Pirating all that copyrighted media and not even bothering to hide behind one layer of indirection. Amateurs.

So yeah… it’s what, five years’ worth of pent up demand for organized crime, hitting the market everywhere all at once? I’m surprised the request volume isn’t higher!


You can always prune it down later (so the thinking goes, no doubt)

The thing that stands out in that animated graph is that the generated code far outpaces the other metrics. In the current agent-driven development hypepocalypse that seems about right - but I would expect it to lag rather than lead.

*edit* - seems in line with what the author is saying :)

> The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.


> I don't personally think there is magic in building a Docker container. Call me old-fashioned.

I still vividly remember setting up gcc in a Docker container to cross-compile custom firmware for my Canon camera, and thinking about the amount of pain my local system would have been in if I had done all the toolchain work in my host OS. I don't know if it felt like magic, but it sure didn't hurt like the alternative!
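
The pattern is roughly the one below; the image name and make invocation are illustrative, not the actual setup:

    # the whole cross-toolchain lives inside the image;
    # the host OS only ever sees the mounted source tree
    docker run --rm -v "$PWD":/src -w /src arm-cross-gcc:latest \
        make CROSS_COMPILE=arm-none-eabi-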


For sure. Docker is rad (sorry Docker!)... all I'm saying is that I am not proud of the fact that I can do it and I don't think it moves the awesome needle - but it's still hard to get right and a pain in the ass. It's just an example of something I appreciate that I can automate now.

Oh, the joy that awaits you when you come back home to discover how the gardener interpreted "please trim the hedge by the gate a little".

> Using anything other than the frontier models is actively harmful

If that is true, why should one invest in learning now rather than wait 8 months and learn whatever the frontier model is then?


So that you can be using the current frontier model for the next 8 months instead of twiddling your thumbs waiting for the next one to come out?

I think you (and others) might be misunderstanding his statement a bit. He's not saying that using an old model is harmful in the sense that it outputs bad code -- he's saying it's harmful because some of the lessons you learn will be out of date and not apply to the latest models.

So yes, if you use current frontier models, you'll need to recalibrate and unlearn a few things when the next generation comes out. But in the meantime, you will have gotten 8 months (or however long it takes) of value out of the current generation.


You also don't have to throw away everything you've learnt in those 8 months; there are some things you'll subtly pick up that you can carry over into the next generation as well.

Also a lot of what you learn is how to work around limitations of today's models and agent frameworks. That will all change, and I imagine things like skills and subagents will just be an internal detail that you don't need to know about.

snarky answer: so you can be that 'AI guy' at your office that everyone avoids in the snack room

Because you might want to use LLMs now. If not, it's definitely better to not chase the hype - ignore the whole shebang.

But if you do want to use LLMs for coding now, not using the best models just doesn't make sense.


It's not like you need to take a course. The frontier models are the best; just using them and their harnesses and figuring out what works for your use case is the 'investing in learning'.

How could it be actively harmful if it wasn't harmful last month when it was the frontier model?

There's not that much learning involved. Modern SOTA models are much more intelligent than they were not long ago. It's quite scary/amazing.

His phone number and email address are in his resume, so you could try to contact him

Please don't call people randomly. Unless you're offering him a job, which is what the resume is for...

When the LinkedIn is down, the Medium is left unattended, and the personal domain is not working, we can reasonably guess that he doesn't care (or is unable to care) about the project or his online presence anymore.


You don't have to call them.

You can use WhatsApp, Signal, or SMS. Drop them a message, see if you get a reply.


I think that's silly. Do we really live in an age where we feel it's better to simply not communicate with people in the slightest?

Give them a call, you're not harassing them. If they choose not to answer or to return your voicemail, then you can presume they don't want to be contacted.


> Give them a call, you're not harassing them.

Before posting this idea online... Maybe, possibly, but personally I still think it's a bad idea.

After posting this on HN - no! If you think it's a good idea, so will other people reading this. (And others have before you.) After the post reaches the front page - absolutely not - there's a bunch of socially awkward people already thinking about calling the author, and they really should NOT DO THAT.

The author owes us absolutely nothing, and if they want to disappear, that's their right. Calling them demands their time in a way that is not trivial to ignore. Just write an email that can be deleted async.


You are right: it is silly. But also, given the number of robocalls in the US, cold-calling someone you don't know is a good way to get auto-flagged as spam.

If you really want to reach out, his email seems to be the way he prefers to be reached, so that's what I'd recommend.

PS: He did some commits to his personal website about 1.5 years ago: https://github.com/Hopding/Hopding.github.io/commits/master


> I think that's silly. Do we really live in an age where we feel it's better to simply not communicate with people in the slightest?

I agree it’s silly. But it’s also the prevailing view that I’ve seen.

I still answer calls, even if 95% of them these days are either phishing attempts or vendors trying to sell me stuff. But my friends will text me first and say “can I call you” even if I say they can just call.


I think the post you're responding to would agree, but is trying to make the argument that it isn't worth the cost:

> spent insane amounts of money, labour, and opportunity costs of human progress on this

That said, I would 100% approve of certain people who are pouring all their energy into AI focusing on teaching squirrels chess instead!

