Hacker News: redcobra762's comments

It's not likely you've actually gotten the opinion of the "majority of tech folks", just the most outspoken ones, and only in specific bubbles you belong to.


It's abusive and wrong to try and prevent AI companies from using your works at all.

The whole point of copyright is to ensure you're paid for your work. AI companies shouldn't pirate, but if they pay for your work, they should be able to use it however they please, including training an LLM on it.

If that LLM reproduces your work, then the AI company is violating copyright; but if the LLM doesn't reproduce your work, then you have not been harmed. Trying to claim harm when you haven't been harmed, over a philosophical difference of opinion with the AI company, is an abuse of the courts.


> It's abusive and wrong to try and prevent AI companies from using your works at all.

People don't view moral issues in the abstract.

A better perspective on this is the fact that human individuals have created works which megacorps are training on for free or for the price of a single book and creating models which replace individuals.

The megacorps are only partially replacing individuals now, but when the models get good enough they could replace humans entirely.

When such a future arrives, will you still be siding with them, or with individual creators?


> A better perspective on this is the fact that human individuals have created works which megacorps are training on for free or for the price of a single book and creating models which replace individuals.

Those damn kind readers and libraries, giving their single copy away when they only paid for a single copy.


Going to the library and reading a book takes hours, while AI companies chew through thousands of books per second. It's a different scale.


> The whole point of copyright is to ensure you're paid for your work.

No. The point of copyright is that the author gets to decide under what terms their works are copied. That's the essence of copyright. In many cases, authors will happily sell you a copy of their work, but they're under no obligation to do so. They can claim a copyright and then never release their work to the general public. That's perfectly within their rights, and they can sue to stop anybody from distributing copies.


We're operating under a model where the owner of the copyright has already sold their work. And while it's within their rights to stipulate conditions of the sale, they did not do that, and fair use of the work as governed under the laws the book was sold under encompasses its conversion into an LLM model.

If the author didn't want their work to be included in an LLM, they should not have sold it, just like if an author didn't want their work to inspire someone else's work, they should not have sold it.


Yeah, this is part of the ruling. The judge decided that the usage was sufficiently transformative and thus fair use. The issue is the authors were selling their works and the company went to a black market instead.


> fair use of the work as governed under the laws the book was sold under encompasses its conversion into an LLM model

If that were the case, then this court case would not be ongoing.


That seems to be a misunderstanding of what's disputed. One disputed question is whether the use of the work qualifies as fair use; the judge determined that it does, because the result is sufficiently transformative. Another is whether the books were acquired legally; the judge determined that they were not. The reason the case is still ongoing is to determine Anthropic's liability for illegally acquiring copies of the books, not to determine the legal status of the LLMs.


Current copyright law is not remotely sophisticated enough to make determinations on AI fair use. Whether the courts say current AI use is fair is irrelevant to the point most people on this side would agree with: we need new laws. The work the AI companies stole to train on was created under a copyright regime where the expectation was that, eh, a few people would learn from and be inspired by your work, and that feels great because you're empowering other humans. Scale does not amplify Good. The regime has changed. The expectations about what kinds of use copyright protects against have fundamentally changed. The AI companies invented New Horrors that no one could have predicted, Vader altered the deal, and no reasonable artist except the most forward-thinking sci-fi authors could have remotely guessed what their work would be used for, and thus could never have consciously and fairly agreed to this exchange. Very few would have agreed to it.


It is not wrong at all. The author decides what to do with their work. AI companies are rich and can simply buy the rights or hire people to create works.

I could agree with exceptions for non-commercial activity like scientific research, but AI companies are made for extracting profits and not for doing research.

> AI companies shouldn't pirate, but if they pay for your work, they should be able to use it however they please, including training an LLM on it.

It doesn't work this way. If you buy a movie it doesn't mean you can sell goods with movie characters.

> then you have not been harmed.

I am harmed because fewer people will buy the book if they can simply get an answer from an LLM. Fewer people will hire me to write code if an LLM trained on my code can do it. Maybe instead of books we should start making applications that protect the content and do not allow copying text or making screenshots. And instead of open-source code we should provide binary WASM modules.


If you reproduce the material from a work you've purchased then of course you're in violation of copyright, but that's not what an LLM does (and when it does I already conceded it's in violation and should be stopped). An LLM that doesn't "sell goods with movie characters" is not in violation.

And the harm you describe is not a recognized harm. You don't own information; you own creative works in their entirety. If your work is simply a reference, then the fact being referenced isn't something you own, so you are not harmed if that fact is shared elsewhere.

It is an abuse of the courts to attempt to prevent people who have purchased your works from using those works to train an LLM. It's morally wrong.


> It is worse than ineffective; it is wrong too, because software developers should not exercise such power over what users do. Imagine selling pens with conditions about what you can write with them; that would be noisome, and we should not stand for it. Likewise for general software. If you make something that is generally useful, like a pen, people will use it to write all sorts of things, even horrible things such as orders to torture a dissident; but you must not have the power to control people's activities through their pens. It is the same for a text editor, compiler or kernel.

Sorry for the long quote, but basically this, yeah. A major point of free software is that creators should not have the power to impose arbitrary limits on the users of their works. It is unethical.

It's why the GPL allows the user to disregard any additional conditions, why it's viral, and why the FSF spends so much effort on fighting "open source but..." licenses.


To load a printed book into a computer one has to reproduce it in digital form without authorization. That's making a copy.


Making a digital copy of a physical book is fair use under every legal structure I am aware of.

When you do it for a transformative purpose (turning it into an LLM model) it's certainly fair use.

But more importantly, it's ethical to do so, as the agreement you've made with the person you've purchased the book from included permission to do exactly that.


Per the ruling, the problem is the books were not purchased; they were downloaded from black market websites. It's akin to shoplifting: what you do later with the goods is a different matter.

Reasonable minds could debate the ethics of how the material was used, but this ruling judged the usage legal and fair use. The only problem is the material was, in effect, stolen.


> Maybe instead of books we should start making applications that protect the content and do not allow copying text or making screenshots.

https://en.wikipedia.org/wiki/Analog_hole


That would be "circumvention of DRM".


w: this is profoundly stupid

What would a virtue garnish be for that?


Gemini suggested some: "Seek first to understand, then to be understood", "Let go of judgment", "My goal is progress, not perfection or judgment" and so on. Sounds fine as virtue garnishes for your whisper.


You're presupposing that this blogvertisement ought not be considered profoundly stupid.

What would my virtue garnish be if I wanted to think of this blatant attempt at shilling a book as a waste of digital space?


Oh, I didn't realize your OP was actually satire. I intentionally ignored that part because, well, you can always take whatever meaningful information out of anything, including stupidly apparent ads. As the other comment said, the whisper part is a well-known strategy as well.


You genuinely read this ad and think there's something to be learned?

If I take this concept seriously for a few seconds, it's one massive exercise in begging the question. The argument boils down to "notice when you do something wrong" while simultaneously admitting that people don't do that. And the ad's advice for doing it? Do it. What?!

That's profoundly stupid as a concept, where "profoundly stupid" here is defined specifically as an argument with a clear reasoning issue.

You know one great way to stop smoking? By stopping. OP is stealing Bob Newhart's bit:

https://www.youtube.com/watch?v=bcSAQyzPcl0


g: That's just, like, your opinion, man.


g: Criticism is the price for success?


Sorry, to be clear: I want to reinforce my belief that this ad you've posted is profoundly stupid.

What would my virtue garnish be for that?


If you want to reinforce your current beliefs do the opposite. Write nothing down. Do what comes to your mind first. You do you.

If I had a garnish that turned anything objectively stupid into something that has objective value it would probably revolutionize chain-of-thought reasoning.


PayPal terrifies me; I would move off of their platform ASAP, or at least be ready to when they inevitably pull the "lock your account pending an investigation" move that kills so many new companies.


Yep, and don't forget they will lock up any funds during that period, so I recommend transferring them to your bank early and often. Money in your PayPal account is not your money.


Is that still going on? I remember this killing businesses left and right early on (early 2000s).


If you've been to one of these testing centers, you'd realize it's not easy to cheat, and the companies that run them take cheating seriously. The audacity of someone to cheat in that environment would be exceptionally high, and just from security theater alone I suspect almost no actual cheating takes place.


What is the human counterpart to an LLM? The article doesn't say.

And "LLM apologists" is so polemical it's hard to take seriously. We get it, you don't like GenAI. That's fine, but can we talk about it without getting normative?


The article doesn't, but you can go to the "more information" section and keep digging:

"To systematically assess differences between LLM-generated and human-written summaries, we also collected the corresponding expert-written summaries from NEJM Journal Watch (henceforth ‘NEJM JW’)"


Right, so not Scientific American. How convenient (and, may I add, scientific!)


Or maybe he could care less, but doesn't even bother to care less because caring less would exert effort and he doesn't care enough to exert any effort.


Own the grammar mistake, my dude :)


The grammar mistake was made by a different person.


Didn’t do it, Nobody saw me do it, Can’t prove anything.


It's not a grammar mistake. It's faulty logic.


The "criticism" of p-values was strange to me, because they're never taught as the be-all and end-all answer to a question; rather, they're taught as a way of determining statistical significance. Every stats or economic analytics course I took in college explained that hypothesis rejection involves critical thinking in addition to p-value analysis.
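For what it's worth, the mechanics are simple enough to sketch. Here's a toy two-sided z-test in Python (the function name and the example z values are mine, not from the thread) that illustrates the point: the number tells you how surprising the data would be under the null, and nothing more, so the rejection decision still needs judgment.

```python
import math

def two_sided_p_value(z: float) -> float:
    """P(|Z| >= |z|) under a standard normal null distribution."""
    return math.erfc(abs(z) / math.sqrt(2))

# z = 1.96 lands almost exactly on the conventional 5% threshold.
# The p-value says "unlikely under the null", nothing more; whether
# that justifies rejecting the hypothesis still depends on effect
# size, study design, and multiple comparisons.
print(round(two_sided_p_value(1.96), 3))  # ~0.05
print(round(two_sided_p_value(0.50), 3))  # ~0.617, nowhere near significant
```

The cutoff (0.05) is a convention, not a property of the math, which is exactly why courses teach p-values as one input to the decision rather than the decision itself.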


Yes, moving to a freshly 1.0 tool/library is often the best way to gain stability...


*with good vibes

