
A whole lot of people find LLM code to be strictly objectionable, for a variety of reasons. We can debate the validity of those reasons, but I think that even if those reasons were all invalid, it would still be unethical to deceive people by a deliberate lie of omission. I don't turn it off, and I don't think other people should either.


For the purpose of disclosure, it should say “Warning: AI generated code” in the commit message, not an advertisement for a specific product. You would never accept any of your other tools injecting themselves into a commit message like that.


My last commit is literally authored by dependabot.


Well, you 100% know what dependabot does.


Leaves you open to vulnerabilities in overnight builds of NPM packages that increasingly happen due to LLM slop?


You can set a minimum age for packages (https://docs.github.com/en/code-security/reference/supply-ch...), though that's not perfect (and becomes less effective if everyone uses it).
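For GitHub's Dependabot specifically, the minimum-age setting is the `cooldown` option in `dependabot.yml`. A sketch of what that looks like (field names as I understand the current docs; double-check against GitHub's reference before relying on it):

```yaml
# .github/dependabot.yml -- delay updates until a release has aged
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 7        # ignore releases younger than a week
      semver-major-days: 30  # be extra cautious with major version bumps
```

The idea is that most compromised releases get yanked within days, so waiting out a cooldown window lets other people hit the landmine first.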


> becomes less effective if everyone uses it

I don’t think that’s necessarily the case. Exposure and discovery aren’t that tightly correlated. Maybe there’s a small effect, but I think it is outweighed by the fact that blast radius and spread is reduced while buying time for discovery.


But how much AI-generated code counts? What if it's just a smallish function or two while most of the code was written by hand?


My tools just don't add such comments. I don't know why I would care to add that information. I want my commits to be what and why, not what editor someone used. It seems like cruft to me. Why would I add noise to my data to cater to someone's neuroticism?

At least at my workplace though, it's just assumed now that you are using the tools.


What editor you are using has no effect on things like copyright, while software that synthesises code might.

In commercial settings you are often required to label your products and inform about things like 'Made in China' or possible adverse effects of consumption.


Well, if I know a specific LLM has certain tendencies (e.g. some model is likely to introduce off-by-one errors), I know what to look for in code review.

I mean, of course I would read most of the code during review, but as a human, I often skip things by mistake


Tbh as long as the PR looks good, it's good to go for internal testing.


If a whole lot of people thought that running code through a linter or formatter was objectionable, I'd probably just dismiss their beliefs as invalid rather than add the linter or formatter as a co-author on every commit.


A linter or a formatter does not open you up to compliance and copyright issues.


Linters and formatters are different tools than LLMs. There is a general understanding that linters and formatters don't alter the behavior of your program. Even so, most projects require a particular linter and formatter to pass before a PR is accepted, and will flag a PR in the CI pipeline if that linter or formatter fails on the code you wrote. The particular linter and formatter is very likely mentioned somewhere in the project's configuration, or at least in its README.


Like frying a veggie burger in bacon grease. Just because somebody's beliefs are dumb doesn't mean we should be deliberately tricking them. If they want to opt out of your code, let them.


> frying a veggie burger in bacon grease

hmm gotta try that


I love black bean burgers (bongo burger near Berkeley is my classic), sounds like an interesting twist


Never fried one in bacon grease, but they are good with bacon and cheese. I have had more than one restaurant point out that their bacon wasn't vegetarian when ordering, though.


In your view, those who prefer veggie burgers are dumb. Am I misinterpreting?


I've heard similar things before. Frying a veggie burger in bacon grease to sneakily feed someone meat/meat-byproducts who does not want to eat it, like a vegan or a person following certain religious observances. As in, it's not ok to do this even if you think their beliefs are stupid.


In my view, vegans are dumb but it's still unethical to trick them into eating something they ordinarily wouldn't. Does that make sense to you? I am not asking you to agree with me on the merits of veganism, I am explaining why the merits of veganism shouldn't even matter when it comes to the question of deliberately trying to trick them.


Can you see a world where everyone has an AI Persona based on their prior work that acts like a RAG to inform how things should be coded? Meaning this is patent qualified code because, despite being AI configured, it is based on my history of coding?


Likewise. I don’t mind that people use LLMs to generate text and code. But I want any LLM generated stuff to be clearly marked as such. It seems dishonest and cheap to get Claude to write something and then pretend you did all the work yourself.


The reason I want it to be marked as such is because I review AI code differently than human code - it just makes different kinds of mistakes.


You can disclose that you used an LLM in the process of writing code in other ways, though. You can just tell people, you can mention it in the PR, you can mention it in a ticket, etc.


+1. If we’re at an early stage in the agentic curve where we think reading commit messages is going to matter, I don’t want those cluttered with meaningless boilerplate (“co-authored by my tools!”).

But at this point I am more curious whether git will continue to be the best tool.


I'm only beginning to use "agentic" LLM tools atm because we finally gained access to them at work, and the rest of my team seems really excited about using them.

But for me at least, a tool like Git seems pretty essential for inspecting changes and deciding which to keep, which to reroll, and which to rewrite. (I'm not particularly attached to Git but an interface like Magit and a nice CLI for inspecting and manipulating history seem important to me.)

What are you imagining VCS software doing differently that might play nicer with LLM agents?


Of course git is great!

Check out Mitchell Hashimoto's podcast episode on The Pragmatic Engineer. He starts talking about AI at 1:16:41. At some point after that he discusses git specifically, and how in some cases it becomes impossible to push because the local branch is always out of date.


So if I use Claude to write the first pass at the code, make a few changes myself, ask it to make an additional change, change another thing myself, then commit it — what exactly do you expect to see then?


A Co-Authored-By tag on the commit. It's a standard practice and the meaning is self-explanatory. This is what Claude adds by default too.
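For anyone unfamiliar, the tag is just a trailer line at the end of the commit message (the subject line here is made up, and the exact name/email Claude uses may vary by version):

```
Fix off-by-one in search result pagination

Co-Authored-By: Claude <noreply@anthropic.com>
```

GitHub and most forges parse `Co-Authored-By:` trailers and show the extra author on the commit, which is why it works as a lightweight disclosure mechanism.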


I make the commits myself, I don't let Claude commit anything.


I guess if enough people use it, doesn’t the tag become kind of redundant?

Almost like writing “Code was created with the help of IntelliSense”.


I don't think so. The tag doesn't just say "this was written by an LLM". It says which LLM - which model - authored it. As LLMs get more mature, I expect this information will have all sorts of uses.

It'll also become more important to know what code was actually written by humans.


I'm not really sure that's any of their business.



