Hacker News | silentkat's comments

My work has required us all to be "AI Native". I'm AI-skeptical, but I'm the type of person who tries to do what's asked to the best of my ability. I can be wrong, after all.

There is some real power in AI, for sure. But as I have been working with it, one thing is very clear. Either AI is not even close to a real intelligence (my take), or it is an alien intelligence. As I develop a system where it iterates on its own contexts, it definitely becomes probabilistically more likely to do the right thing, but the mistakes it makes become even more logic-defying. It's the coding equivalent of a hand with extra fingers.

I'm only a few weeks into really diving in. Work has given me infinite tokens to play with. I'm building my own orchestrator system that's purely programmatic and spawns agents to do work. Treat them as functions: defined inputs, defined outputs. Don't give an agent more than one goal; I find that giving it the goal of building a system often leads it to assert that it works when it does not, so the verifier is a different agent. I know this is not new thinking; as I said, I'm new.
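A minimal sketch of the agents-as-functions idea. Everything here is hypothetical: the stub `builder_agent` and `verifier_agent` stand in for real LLM calls, and the point is just the shape of the loop (one goal per agent, verification done by a different agent than the one claiming success):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    goal: str
    payload: str

def builder_agent(task: Task) -> str:
    # Stand-in for a real LLM call; one goal only: produce an artifact.
    return f"patch for: {task.payload}"

def verifier_agent(task: Task, artifact: str) -> bool:
    # A separate agent checks the work instead of trusting the
    # builder's own claim that it succeeded.
    return task.payload in artifact

def orchestrate(task: Task, max_attempts: int = 3) -> Optional[str]:
    # Purely programmatic loop: spawn builder, then verifier, retry.
    for _ in range(max_attempts):
        artifact = builder_agent(task)
        if verifier_agent(task, artifact):
            return artifact
    return None

print(orchestrate(Task(goal="fix", payload="null deref in parser")))
```

With real models, `builder_agent` and `verifier_agent` would each wrap an API call with its own narrow prompt; the defined input/output contract is what lets the orchestrator stay dumb and deterministic.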

For me, the most useful way to think about it has been treating LLMs as a probabilistic programming language. It won't really error out; it'll just try to make it work. This attitude has made it fun for me again. I love learning new languages, and I also love making dirty scripts that make various tasks easier.
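A toy illustration of that framing, with a hypothetical `fake_llm` stub in place of a real model: since the "language" won't raise a hard error, you validate each sample and resample, treating bad output as just another draw from the distribution.

```python
import json
import random

def fake_llm(prompt: str) -> str:
    # Stub standing in for a real model call: sometimes well-formed
    # JSON, sometimes a chatty answer that "tries to make it work".
    if random.random() < 0.7:
        return '{"answer": 42}'
    return "Sure! The answer is 42."

def sample_until_valid(prompt: str, tries: int = 10) -> dict:
    # Treat the model like a probabilistic language: validate each
    # sample and resample, rather than expecting a hard error.
    for _ in range(tries):
        try:
            return json.loads(fake_llm(prompt))
        except json.JSONDecodeError:
            continue
    raise RuntimeError("no valid sample within budget")

random.seed(0)
result = sample_until_valid("What is the answer?")
print(result)
```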


Oh, no, I had these grand plans to avoid this issue. I had been running into it with various low-effort lifts, but now I'm worried it will stay a problem.


I’m at a big tech company. They proudly cited productivity measured in commits (already nonsense): 47% more commits, 17% less time per commit. That works out to about 22% more total time spent coding. Burning us out and acting like the AI slop is “unlocking” productivity.
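The two stated ratios compound multiplicatively; a quick sanity check using the comment's own numbers:

```python
# 47% more commits, 17% less time per commit, compounded:
commits_ratio = 1.47
time_per_commit_ratio = 0.83
total_time_ratio = commits_ratio * time_per_commit_ratio
print(f"total coding time: {(total_time_ratio - 1) * 100:.0f}% more")
# 1.47 * 0.83 ≈ 1.22, i.e. about 22% more total time coding
```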

There’s some neat stuff, don’t get me wrong. But every additional tool so far has started strong and then fallen over. Always.

Right now there’s this “orchestrator” nonsense. Cool in principle, but as someone who was already writing automation scripts all the time, it’s not impressive. I spent $200 to automate some bug finding and fixing. It found and fixed the easy stuff (still pretty neat), and then “partially verified” that it fixed the other stuff.

The “partial verification” was it justifying why it was okay it was broken.

The company has mandated we use this technology. I have an “AI Native” rating. We’re being told to put out at least 28 commits a month. It’s nonsense.

They’re letting me play with an expensive, super-high-level, probabilistic language. So I’m having a lot of fun. But I’m not going to lie, I’m very disappointed. Got this job a year ago. 12 years programming experience. First big tech job. Was hoping to learn a lot. Know my use of data to prioritize work could be better. Was sold on their use of data. I’m sure some teams here use data really well, but I’m just not impressed.

And I’m not even getting into the people gaming the metrics to look good while actually making more work for everyone else.


Management is just stupid sometimes. We had a similar metric at my last company and my manager's response was "well how else are we supposed to measure productivity?", and that was supposed to be a legitimate answer.


The benefits of AI accrue either toward incremental revenue generation or toward cost savings.

It's not rocket science to measure, actually. The issue is that most people don't know how to think clearly enough to invent the right proxies.


Lol, it's gonna take longer than it should for this to play out.

Sunk cost fallacy is very real, for all involved. Especially the model producers and their investors.

Sunk cost fallacy is also real for devs who are now giving up how they used to work: they've made a sunk investment in learning to use LLMs, etc. Hence the 'there's no going back' comments that crop up on here.

As I said in this thread, anyone who can think straight - I'm referring to those who adhere to fundamental economic principles - can see what's going on from a mile away.


It’s a form of contrastive reduplication, used to emphasize the realness of the experience, versus secondhand experience like interviewing those who have the actual experience.

Also consider a phrase like “work work” versus “school work”. For someone who both works a paid job and goes to school, clarifying that they need to do “work work” makes sense.

