>with a Luddism that somehow actually becomes a rather hilarious attempt at superiority
>The people who usually think LLMs aren't valuable to coders generally work on the most boring, copy/paste prattle.
Here you are doing the same thing, aren't you?
Instead of calling people names, the biggest tell of a weak argument, why don't you explain the type of work you do and how using an LLM is faster than if you coded it yourself, and/or faster than any current way of doing the same thing.
But...I'm not doing the same thing. In actuality I'm saying I'm a fairly typical programmer in a common situation: I work across a variety of languages, platforms, toolings, and projects, building solutions for problems. The truth is that extraordinarily few programmers are working on anything truly novel. Zero of the readers of this comment are, in all likelihood. The ridiculous notion that someone's needs are so unique that they've never been seen before is hilarious nonsense. It's the "I'm so random! Other girls aren't like me" bit.
>Instead of calling people names
Who called anyone a name? Luddism? Yes, many HN participants are reacting to AI in a completely common rejection of change / challenge, and it recurs constantly.
>how using an LLM is faster than if you coded it yourself
I am coding it myself. Similar to the other guy who talks about putting an LLM in "charge" of his precious, super-novel code, you're setting up a strawman where using an LLM implies some particular scenario that you envision. In reality I spend my day asking questions, getting broad strokes, getting code commented, asking for APIs or resources, etc.
>In reality I spend my day asking questions, getting broad strokes, getting code commented, asking for APIs or resources, etc.
Can you give me some concrete examples? I'd like to use it, but I'm currently of the mind:
1. If it's boring code, I can write it faster than asking an LLM to do it and fixing its issues.
2. If it's not boring code, like say a rules engine or something (see the toy sketch below), I'm not sure the LLM will give me a good result based on the domain.
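To give a concrete flavor of what I mean by a rules engine in point 2, here's an entirely hypothetical toy sketch, nothing like our real medical logic:

    # Toy rules engine: each rule is a predicate over a record plus an action label.
    # Hypothetical example only; real DSS rules are far more involved.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        applies: Callable[[dict], bool]   # predicate over a patient record
        action: str                       # label emitted when the rule fires

    rules = [
        Rule("high_bp", lambda p: p["systolic"] >= 140, "flag-hypertension"),
        Rule("low_hr", lambda p: p["heart_rate"] < 50, "flag-bradycardia"),
    ]

    def evaluate(record: dict) -> list[str]:
        # Return the actions for every rule that fires on this record.
        return [r.action for r in rules if r.applies(record)]

    print(evaluate({"systolic": 150, "heart_rate": 72}))  # ['flag-hypertension']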
I mainly stick to back end work, automation, building Web APIs and DSS engines for the medical field.
Maybe I'm underthinking and overthinking it at the same time. FWIW, I typically stick to a single main language; where I usually work, the companies dictate a general-purpose language for all our stuff: C# in my example. I do a small amount of Python for LLM training, but I'm just starting out with Python. I can see it being useful saying, "convert this C# to Python," but honestly, I'd rather just learn the Python.
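For illustration, the sort of thing that conversion gets you; a made-up snippet, not project code, with the C# original in a comment:

    # Hypothetical translation of a one-line C# LINQ filter.
    nums = [1, 2, 3, 4, 5, 6]
    # C#: var evens = nums.Where(n => n % 2 == 0).ToList();
    evens = [n for n in nums if n % 2 == 0]
    print(evens)  # [2, 4, 6]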
> Who called anyone a name? Luddism? Yes, many HN participants are reacting to AI in a completely common rejection of change / challenge, and it recurs constantly.
You should read up on what Luddism and the Luddites were actually about. They didn't think the machines were evil or satanic, which is the common cultural read. They assumed (correctly) that the managerial class would take full advantage of the increased productivity to flood the market with lower-quality goods, cheap shit that would put competitors out of business, and to fire 4/5 of their workforces while doing so.
And considering the state of the textile industry today, I think that was a pretty solid set of projections.
Luddites didn't oppose automation on the basis that machines are scary. They were the people who worked the machines that already existed at the time, after all. They opposed it on the basis that the greedy bastards who owned everything would be the only ones actually benefiting from automation while everyone else got one kind of shaft or another, which, again, is exactly what happened.
This, actually, is incredibly analogous to my opinions about LLMs. It's an interesting tech that has applications, but it's already being positioned as the sole domain of massive hyperscalers, subject to every ounce of enshittification that follows every tech that goes that way, while putting creatives, and yes some coders, out of a job.
So yes, it was name calling, but also I don't object to the association. In this case, I'm a Luddite. I am suspicious of the motivations and the beneficiaries of automation being forced into my industry and I'm not going to be quiet about it.
>And considering the state of the textile industry today, I think that was a pretty solid set of projections.
I think it's just about all industries these days.
Yes, so many sayings have had their meanings malformed over the years: "a rolling stone gathers no moss" is now taken as a good thing when it was originally a bad one. Same with "blood is thicker than water," "money is the root of all evil," etc.
>You should read up on what Luddism and the Luddites were actually about.
They were primarily opposed to automation because it devalued the work they did and the skills they held. That is the core essence of Luddism. They thought that if they destroyed the machines, automation could be stopped. There were some post-facto justifications like product quality, but if that were true they'd have had no problem out-competing the machines.
Yes, it is Luddism that drives a lot of the AI sentiment seen on HN, and it is utterly futile: basically people convincing themselves and each other while the world moves on. There is no "name calling", and that particular blend of pearl clutching is absurd.
IMHO a lot of the "Luddism" so labeled by pro-AI bros is just people furious about the shoddy artefacts that degenAI produces. It does compare to the original Luddism, the difference being that the 19th-century industrial revolution eventually proved its opponents wrong with improved quality, whereas genAI hasn't.
I suspect you're understating the degree to which an LLM might be unsuitable for some types of work. For example, I'm a data scientist who works primarily in the field of sales forecasting. I've found that LLMs are quite poor at this task, frequently providing answers that are inappropriate, misleading, or simply not a good fit for the data we're working with. In general I've found very limited use in engaging LLMs in discussion about my work.
I don't think I'm calling myself a super special snowflake here. These models are just ... bad at sales forecasting.
LLMs aren't entirely useless for me. I'll use ChatGPT to generate code to make plots. That's helpful.
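For example, asking for "monthly sales with a trend line" gets me back something like this (hypothetical data, nothing from our actual pipeline):

    # Hypothetical ChatGPT-style output: monthly sales with a fitted trend line.
    import numpy as np
    import matplotlib.pyplot as plt

    months = np.arange(1, 13)
    sales = np.array([120, 135, 128, 150, 162, 158, 170, 175, 169, 180, 192, 201])

    slope, intercept = np.polyfit(months, sales, 1)  # least-squares linear fit
    plt.plot(months, sales, "o-", label="monthly sales")
    plt.plot(months, slope * months + intercept, "--", label="trend")
    plt.xlabel("month")
    plt.ylabel("units sold")
    plt.legend()
    plt.show()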
Zero LLMs have been trained on sales forecasting, to my knowledge (and it isn't the right tool for that regardless). In contrast, many LLMs have been trained on enormous quantities of code across coding languages, platforms, and uses: billions and billions of lines of code covering just about every sort of project. Millions of projects.
If someone says "Well my software dev project is too unique and novel and they are therefore of no value to me, but I understand it works for those simple folk with their simple needs", there is an overwhelming probability they are...misinformed.
Would it help if I said most “normal folk” applications of LLMs are a waste of time and money too then? Because I’m also absolutely a believer that a huge bubble burst is coming for OpenAI and company.
>The people who usually think LLMs aren't valuable to coders generally work on the most boring, copy/paste prattle.
I'm assuming you are a senior+ level coder.