I support the idea of UBI with zero conditions, but not this. You didn't get royalties before AI when someone was heavily influenced by your work/content and converted that into money. If you expected compensation, then you shouldn't have given away your work for free.
> you shouldn't have given away your work for free.
Almost none of the original work I've ever posted online has been "given away for free", because it was protected by copyright law that AI companies are brazenly ignoring, except where they strike huge deals with megacorporations (e.g. OpenAI and Disney), because they do in fact know that what they're doing is not fair use. That's true whether or not I posted it in a context where I expected compensation.
> Almost none of the original work I've ever posted online has been "given away for free", because it was protected by copyright law that AI companies are brazenly ignoring.
I just don't think the AI is doing anything different from what a human does. It "learns" and then "generates". As long as the "generates" part is actually connecting dots on its own and not just copy & pasting protected material, I don't see why we should consider it any different from when a human does it.
And really, almost nothing is original anyway. You think you wrote an original song? You didn't. You just added a thin layer over top of years of other people's layers. Music has converged over time to all sound very similar (same instruments, same rhythms, same notes, same scales, same chords, same progressions, same vocal techniques, and so on). If you had never heard music before and tried to write a truly original song, you can bet that it would not sound anything like any of the music we listen to today.
Coding, art, writing...really any creative endeavor, for the most part works the same way.
Conjecture on the functional similarities between LLMs and humans isn't relevant here, nor are sophomoric musings on the nature of originality in creative endeavors. LLMs are software products whose creation involves the unauthorized reproduction, storage, and transformation of countless copyright-protected works—all problematic, even if we ignore the potential for infringing outputs—and it is simple to argue that, as a commercial application whose creators openly tout their potential to displace human creators, LLMs fail all four fair use "tests".
It's not just "heavily influenced" though, it's literally where the smarts are coming from.
I don't think royalties make sense either, but we could at least mandate some arrangement where the resulting model must be open. Or you can keep it closed for a while, but there's a tax on that.
> The work was given to other humans. They paid taxes.
Says who? I mean, what if Black artists said they gave the blues to Black people, and then white musicians took it and made rock'n'roll? Black people spent that money in Black communities; now it's white people making it and spending it in theirs.
In essence these are the same point about outflows of value from the originating community. How you define a community, and what counts as integral to it, is subjective.
I'm not convinced either way, but this line of reasoning feels dangerous.
I'd rather say that all ownership is communal, and as a community we allow people to retain some value to enable and encourage them further.
That is your distinction because you chose to draw the line around all humans. But who is to say the line shouldn't be drawn around Black people, or just men, or just Christians?
And no, taxes don't just magically benefit everyone. Being redistributive is actually the point of them.
Who is to say the line should be drawn using discrimination?
Taxes fund the state. The state provides a minimum set of services - law and order, border security, fire safety - to everyone regardless of ability to pay. That others may derive additional state benefits is beside the point. Everyone gets something.
Originally we all posted online to help each other with problems we mutually had. It was a community, and we always gave as much as we got back, in a free exchange.
Now an oligarchy is coming to compile all of that community's output and serve it back at a price. What used to be free with a bit of searching no longer is, and the government of the people is allowing the people no choice (in any capacity).
Once capital comes for these things at scale (with the full backing of the government), monetizes them, and treats them as "its own", I would consider that plagiarism.
How can we be expected to pay taxes on every microtransaction when we get nothing for equally traceable contributions to the new machine?
Curious, what is your solution to this situation? Imagine all labor has been automated and virtually all facets of life have been commoditized: how does the average person survive in such a society?
I would go further and ask how a person who is unable to work survives in our current society. Should we let them die of hunger? Send them to Ecuador? Of course not; only Nazis would propose such a solution.
Isn't this the premise of some sci-fi books and such?
(In some ways, in the developed world, we are already mostly there, in that the lifestyle of even a well-off person of a thousand years ago is now almost entirely supported by machines; less than 10% of labor is in farming. What did we do? We created more work, and some would say much busy-work.)
Trials show that UBI is fantastic and brings out the best in people, lifting them out of poverty and addiction and making them happier, healthier, and better educated.
It is awful for the extractive economy, as employees are no longer desperate.
Maybe I'm misreading this article, but where does it actually say that anything UBI-related failed? The titular "failure" of the experiment is apparently:
> While the Ontario’s Basic Income experiment was hardly the only one of its kind, it was the largest government-run experiment. It was also one of the few to be originally designed as a randomised clinical trial. Using administrative records, interviews and measures collected directly from participants, the pilot evaluation team was mandated to consider changes in participants’ food security, stress and anxiety, mental health, health and healthcare usage, housing stability, education and training, as well as employment and labour market participation. The results of the experiment were to be made public in 2020.
> However, in July 2018, the incoming government announced the cancellation of the pilot programme, with final payments to be made in March 2019. The newly elected legislators said that the programme was “a disincentive to get people back on track” and that they had heard from ministry staff that it did not help people become “independent contributors to the economy”. The move was decried by others as premature. Programme recipients asked the court to overturn the cancellation but were unsuccessful.
So according to the article, a new government decided to stop the experiment not based on the collected data, but on their political position and vibes. Is there any further failure described in the article?
People start announcing that they're using AI to do their job for them? Devs put "AI generated" banners all over their apps? No, because people are incentivised to hide their use of AI.
Businesses, on the other hand, announce headcount reductions due to AI and of course nobody believes them.
If you're talking about normal people using AI to build apps, those apps are all over the place, but I'm not sure how you would expect to find them unless you're looking. It's not like we really need that many new apps right now, AI or not.
Given the amount of progress in AI coding in the last 3 years, are you seriously confident that AI won't increase programming productivity in the next three?
This reminds me of the people who said that we shouldn't raise the alarm when only a few hundred people in this country (the UK) got Covid. What's a few hundred people? A few weeks later, everyone knew somebody who did.
Okay, so if and when that happens, get excited about it _then_?
Re the Covid metaphor; that only works because Covid was the pandemic that did break out. It is arguably the first one in a century to do so. Most putative pandemics actually come to very little (see SARS1, various candidate pandemic flus, the mpox outbreak, various Ebola outbreaks, and so on). Not to say we shouldn’t be alarmed by them, of course, but “one thing really blew up, therefore all things will blow up” isn’t a reasonable thought process.
AI codegen isn't comparable to a highly-infectious disease: it's been a lot more than a few weeks. I don't think your analogy is apt: it reads more like rhetoric to me. (Unless I've missed the point entirely.)
From my perspective, it's not the worst analogy. In both cases, some people were forecasting an exponential trend into the future and sounding an alarm, while most people seemed to be discounting the exponential effect. Covid's doubling time was ~3 days, whereas the AI capabilities doubling time seems to be about 7 months.
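As a rough back-of-the-envelope, here's a minimal sketch of what those two doubling times imply over a year (the ~3-day and ~7-month figures are just the estimates cited above, not established facts):

```python
# Growth-factor comparison for the two doubling times cited above.
# Illustrative only; both figures are contested estimates.

def growth_factor(days_elapsed: float, doubling_time_days: float) -> float:
    """How much a quantity multiplies after days_elapsed."""
    return 2 ** (days_elapsed / doubling_time_days)

print(growth_factor(365, 3))       # Covid-style: ~2^122, astronomically large
print(growth_factor(365, 7 * 30))  # AI-style: ~2^1.7, roughly 3.3x per year
```

Same exponential shape, wildly different timescales, which is probably why the analogy lands for some readers and grates on others.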
I think disagreement in threads like this can often be traced back to a miscommunication about the state of things today (or historically) versus the projected trend. Skeptics are usually saying: capabilities are not good _today_ (or worse: capabilities were not good six months ago, when I last tested them; see this OP, which is pre-Opus 4.5). Capabilities forecasters are asking: given the trend, what will things be like in 2026-2027?
The "COVID-19's doubling time was ≈3 days" figure was the output of an epidemiological model, based on solid and empirically-validated theory, based on hundreds of years of observations of diseases. "AI capabilities' doubling time seems to be about 7 months" is based on meaningless benchmarks, corporate marketing copy, and subjective reports contradicted by observational evidence of the same events. There's no compelling reason to believe that any of this is real, and plenty of reason to believe it's largely fraudulent. (Models from 2, 3, 4 years ago based on the "it's fraud" concept are still showing high predictive power today, whereas the models of the "capabilities forecasters" have been repeatedly adjusted.)
The article provides a few good signals: (1) an increase in the rate at which apps are added to the app store, and (2) reports of companies forgoing large SaaS dependencies and just building them themselves. If software is truly a commodity, why aren't people making their own Jiras and Figmas and Salesforces? If we can really vibe something production-ready in no time, why aren't industry-standard tools being replaced by custom vibe clones?
> If we can really vibe something production-ready in no time, why aren't industry-standard tools being replaced by custom vibe clones?
That's a silly argument. Someone could have made all of those clones before, but didn't. Why didn't they? Hint: it's not because it would have taken them longer without AI.
I feel like these anti-AI arguments are intentionally unrealistic. Just because I can use Nano Banana to create art does not mean I'm going to be the next Monet.
> Why didn't they? Hint: it's not because it would have taken them longer without AI.
Yes it is. "How much will this cost us to build" is a key component of the build-vs-buy decision. If you build it yourself, you get something tailored to your needs; however, it also costs money to make & maintain.
If the cost of making & maintaining software went down, we'd see people choosing more frequently to build rather than buy. Are we seeing this? If not, then the price of producing reliable, production-ready software likely has not significantly diminished.
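To make that concrete, here's a toy break-even sketch; every number in it is hypothetical, chosen only to illustrate the build-vs-buy logic:

```python
# Toy build-vs-buy comparison. All numbers are made up for
# illustration; the point is only that a genuine drop in build
# cost should visibly flip decisions like this one.

def build_cost(dev_months, monthly_rate, maintain_frac, years):
    """Up-front build cost plus ongoing maintenance over `years`."""
    upfront = dev_months * monthly_rate
    return upfront + upfront * maintain_frac * years

def buy_cost(seats, per_seat_month, years):
    """SaaS subscription cost over `years`."""
    return seats * per_seat_month * 12 * years

# Hypothetical in-house CRM vs. a 200-seat subscription, over 5 years:
print(buy_cost(200, 100, 5))                 # 1,200,000
print(build_cost(60, 15_000, 0.15, 5))       # 1,575,000 -> buy wins today
print(build_cost(60, 15_000, 0.15, 5) / 10)  # 157,500   -> build wins at a 10x cost cut
```

If AI genuinely cut the build-side numbers by an order of magnitude, comparisons like this would flip from "buy" to "build", and we'd expect to see that flip happening visibly and often.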
I see a lot of posts saying, "I vibe-coded this toy prototype in one week! Software is a commodity now," but I don't see any engineers saying, "here's how we vibe-coded this piece of production-quality software in one month, when it would have taken us a year to build it before." It seems to me like the only software whose production has been significantly accelerated is toy prototypes.
I assume it's a consequence of Amdahl's law:
> the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used.
Toy prototypes proportionally contain a much higher share of the kind of rote greenfield scaffolding that agents are good at writing. The stickier problems of brownfield growth and robustification are absent.
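A quick sketch of how Amdahl's law plays out here; the 80%/20% splits and the 10x factor are assumptions for illustration, not measurements:

```python
# Amdahl's law: overall speedup when a fraction p of the work is
# accelerated by a factor s. The splits below are assumed, not measured.

def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Toy prototype: ~80% rote greenfield scaffolding, agents 10x it:
print(amdahl_speedup(0.8, 10))  # ~3.6x overall

# Mature production codebase: only ~20% is that kind of scaffolding;
# the rest is design, review, debugging, robustification:
print(amdahl_speedup(0.2, 10))  # ~1.2x overall
```

Under those assumptions, the same 10x tool yields a ~3.6x speedup on a prototype but only ~1.2x on mature production work, which would explain why the loudest wins are toy projects.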
I would expect a general rise in productivity across sectors, with the largest gains concentrated in the tech sector given its focus on code generation. A proliferation of new apps, new features, and new functionalities at a quicker pace than pre-AI. Given the hype, one would expect an inflection point in this sector's productivity, but it mostly just appears linear.
I am very willing to believe that there are many obscure, low-quality apps being generated by AI. But this speaks to the fact that mere generation of code is not productive: generating quality applications requires other forms of labor that generative AI does not presently supply.
> A proliferation of new apps, new features, and new functionalities at a quicker pace than pre-AI
IMO you're not seeing this because nobody is coming up with good ideas; we're already saturated with apps. And apps are already releasing features faster than anyone wants them. How many app reviews have you read that say "Was great before the last update"? Development speed and ability isn't what's holding us back from great software releases.
I would expect a _big_ increase in the production of amateur/hobbyist games. These aren’t demand-driven; they’re basically passion projects. And that doesn’t seem to be happening; Steam releases are actually modestly _down_, say.
> This is a decent example of not buying, getting pulled, or being forced into any corporate pushed hype
It seems that maybe they did get hyped into Rust, because it's not clear why they believed Rust would make their JavaScript tool easier to develop, simpler, or more efficient in the first place.
Biome and oxc are developer tools. I don't know why in the world they would do this, but it sounds like they were using Rust at runtime to interact with the database?
> If you borrow $100 USD from the bank, and pay it off immediately after, it's clear no money was "created" as such
The bank "printed" money by handing out cash that it didn't have. It only had a fraction of it. That new money went free into the world with the same respect any other cash gets. You and I can't pull that off.
OK, fine, I'll agree to call it "creating money" rather than "printing money", since it's not the same mechanism the central bank uses to "print" permanent money (technically not printed either, but whatever). Still, money is created by the bank.
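A minimal double-entry sketch of what we're both describing; it's deliberately simplified, with no reserves, interest, or capital requirements:

```python
# Minimal sketch of commercial-bank money creation, simplified:
# a loan credits the borrower's deposit account, expanding the
# bank's balance sheet; repayment shrinks it again.

class Bank:
    def __init__(self):
        self.loans = 0      # asset: claims on borrowers
        self.deposits = 0   # liability: money circulating in the economy

    def lend(self, amount):
        self.loans += amount
        self.deposits += amount   # new deposit money is "created"

    def repay(self, amount):
        self.loans -= amount
        self.deposits -= amount   # that money is "destroyed"

bank = Bank()
bank.lend(100)
print(bank.deposits)  # 100 -- broad money went up by $100
bank.repay(100)
print(bank.deposits)  # 0   -- nets out to zero once the loan is repaid
```

Which is consistent with the quote upthread: deposit money is created at the moment of the loan and destroyed again on repayment, so a loan paid off immediately nets out to zero.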
AI is an excellent teacher for someone who wants to learn.