Anecdotally, the common theme I'm starting to hear more often now is that people who use “AI” at work despise it when it replaces humans outright, but love it when it saves them from mundane, repetitive crap that they have to do.
These companies are not selling the world on a vision where LLMs are a companion tool; instead, they are selling the world on the idea that this is the new AI coworker. That 80/20 rule you're calling out is explained away with words like “junior employee.”
I think it's also important to see that even IF there are those selling it as a companion tool, it's only for the meantime. That is, it's your companion now only because they need you next to it to make it better, so that it can become an "AI employee" once it's trained from your companionship.
There are hundreds of thousands of software engineers who, given FU amounts of money, would absolutely keep writing software and do it only for the love of it. The companies that hire us usually make us sign promises that we won't work on side projects. Even if there are legal workarounds to that, it's not quite so simple.
Even so, whatever high salaries they do give us just flow right back into the neighborhoods through insane property values and other cost-of-living expenses that negate any gains. So, it’s always just the few of us who can win that lottery and truly break out of the cycle.
> whatever high salaries they do give us just flow right back into the neighborhoods through insane property values and other cost-of-living expenses that negate any gains. So, it’s always just the few of us who can win that lottery and truly break out of the cycle.
You break out of the cycle by selling your HCOL home and moving to LCOL after a few years. That HCOL home will have appreciated fast enough, given the original purchase price, that the growth alone would easily pay for a comparable home in an LCOL area. This is the story of my village in Texas, where Cali people have been buying literal mansions after moving out of their shitboxes in LA and the Bay Area.
moonlighting is permitted by law in California (companies legally can't prevent you from doing it, iiuc), as long as there's no conflict of interest with your main job...
"no conflict of interest" is basically meaningless if your day job is writing software. These clauses you sign are quite broad in what that scope of conflict could be.
Every company I've worked for has had very explicit rules saying you must get written sign-off from someone at the director or VP level on your "side project," open source or not.
You might want to check your company guidelines around this just to make sure you're safe.
Sad, because before COVID, no one at Meta cared where you worked as long as you were getting your shit done. There were never any available meeting rooms, and the open floor plans were so loud that people would spread out all over the campus and use single-person VC rooms to communicate in.
Not directly but they do create an open and fair working environment for all.
Once you leave room for discrimination and bullying, everyone suffers, because it makes a healthy company culture harder to maintain.
And it's not just about "quotas". That's an extreme-right talking point. Diversity done properly doesn't involve quotas. Those are just a way for companies that don't actually care about it to get an easy 'fix' so their numbers look OK, but that's not actual diversity.
I'm part of a diversity team myself as a side role. In Europe luckily.
> And it's not just about "quotas". That's an extreme-right talking point. Diversity done properly doesn't involve quotas
At Microsoft, Google, Apple, and Meta, diversity programs were implemented as soft quotas. All this talk about "diversity done properly" is just so much noise when approximately all the largest companies aren't doing it that way.
Soft quotas are not great, but at my work (also a huge multinational, but headquartered in Europe) we just use stats as a guide. Obviously, if a country has 30% people of a certain ethnicity and in your employ it's 2%, you're doing something wrong. We use that to measure how effective we are at combating bias and prejudice, what works and what doesn't. I wonder if that's sometimes regarded as a 'soft quota', but it shouldn't be.
We don't fix this with hiring targets. We hit the root cause with training for HR and management (and also some for all employees in the yearly mandatory training package): recognising hidden bias, challenging people to consider their reactions.
And then we measure the results with stats, but don't just force the numbers. That's lazy and only fixes the problem on paper. Window dressing. Diversity is more than the hiring process anyway; a huge part is discrimination on the work floor, often not by managers but by co-workers, so we give management the skills to deal with that.
Maybe in US big tech this is common but those are all pretty immoral companies anyway. See how quickly they pivoted to sucking up to Trump. The world is much bigger than the US and big tech.
> Do you apply this to everything? Like say, a sports team?
No, at work where we have tens of thousands of people.
Sport is a voluntary thing, people just join it when they want (I guess; I'm not into sports, neither watching nor playing).
> Anything impressive to show for it? Because it really seems like all this focus on diversity is your downfall not your strength.
Yes, we have a great quality of life. It's not all about money.
In fact, I asked to move to a country where the wage levels are much lower, to have a better quality of life. Here around the Mediterranean the weather is better, people enjoy life more and take it slower. There are many more things to do in my free time that I enjoy. When I'm back in Holland I hate it, people are so materialistic. Always talking about their new car, how big their TV is lol. I don't even own a car or motorbike and my TV is tiny, but I'm much happier here.
Also, diversity is just one thing we do; we're not all about that. I am because I voluntarily spend part of my work time on it (LGBTIQ in particular). For most people in the company it's a message here or there, one little training per year, and maybe a talk from one of us at the town hall meetings, which are optional.
There are other similar programs in the company about sustainability and ethics.
They are merit based, especially with DEI. It just aims to remove the common human bias of trusting people more when they are more like us, with training and by hiding names/photos during CV preselection. So it is only based on merit.
Tech companies spent a decade (since 2010) driving towards some belief that the entire world was going to go online and stay there. They also amassed an insane amount of wealth in that time. Wealth that is now structurally tied to the stability of the entire financial system.
For whatever reason, investors get bored and want to move money around, so the tech companies, which built healthy, stable businesses, needed to keep that dopamine hit coming with new mega announcements. What else is there to build? "Efficiency" is the current corporate white-collar trend, because that's what investors are wooed with. AI is the other new-new thing, but instead of being that next reason to reverse hiring trends, AI itself is built and sold as an employee replacement.
Anyway, there is an entire class of people in the US who feel and believe it can't get any worse, and who are genuinely suffering in ways many of us here on this forum can't even imagine. Definitely think it's unfair to put these two concerns in the same bucket.
It's kind of annoying, right now at least, when an agent can see all the LSP noise and decides to go off on a tangent to address it in the middle of running the very task the LSP is responding to.
For this to work, the LLM has to be trained on the LSP, and the LSP has to know when to hold off on reporting changes and when to resume.
Gemini 2.5 and now 3 seem to continue their trend of being horrific at agentic tasks, but they almost always impress me on a single first-shot request.
Claude Sonnet is way better about following up and making continuous improvements during a long running session.
For some reason Gemini will hard freeze up on the most random queries, and when it is able to successfully continue past the first call, it only keeps a weird summarized version of its previous run available to itself, even though it's in the payload. It's a weird model.
My take is that it's world-class at one-shotting, and if a task benefits from that, absolutely use it.
I ended up using a container service on Azure for a small Rust project that I built in a Docker container and published to GitHub. GitHub Actions publishes to the Azure service, and in the 3 years I have been running it, it's been almost entirely free.
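The comment doesn't spell out the actual pipeline, but a minimal sketch of that kind of flow might look like the following. The Azure Container Apps target, the ghcr.io registry, and the resource names (my-rust-service, my-rg) are assumptions for illustration, not the commenter's real setup:

    # Hypothetical GitHub Actions workflow: build the Rust project's Docker
    # image, push it, then point an (assumed) Azure Container App at it.
    name: deploy
    on:
      push:
        branches: [main]
    permissions:
      contents: read
      packages: write   # needed to push to GitHub's container registry
    jobs:
      build-and-deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: docker/login-action@v3
            with:
              registry: ghcr.io
              username: ${{ github.actor }}
              password: ${{ secrets.GITHUB_TOKEN }}
          - uses: docker/build-push-action@v5
            with:
              push: true
              tags: ghcr.io/${{ github.repository }}:latest
          - uses: azure/login@v2
            with:
              creds: ${{ secrets.AZURE_CREDENTIALS }}   # assumed service-principal secret
          # Roll the container app over to the freshly pushed image.
          - run: |
              az containerapp update \
                --name my-rust-service \
                --resource-group my-rg \
                --image ghcr.io/${{ github.repository }}:latest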
I thought WASM was no_std since there's no built-in allocator?
Regardless, not sure why a Rust engineer would choose this path. The whole point of writing a service in Rust is that you trade 10x the build complexity and developer overhead for a service that can run in a low-memory, low-CPU VM. Seems like the wrong tool for the job.
Thanks for the confirmation.
I was confused as well.
I always thought that the real use of WASM is to run exotic native binaries in a browser, for example, running Tesseract (for OCR) in the browser.
Notarization is an automated process at the very least, and this is just speculation, but since entitlements are baked into the codesigning step, it seems meant to prevent software from granting itself entitlements Apple doesn't want third parties to have access to.
The world of literature is increasingly making itself inaccessible to broad audiences by turning this into a zero-sum game.
I wish OpenAI, Anthropic and Gemini would all figure out how to pay royalties to copyright holders anytime their content is used. I see absolutely no reason why they can't do this. It would really take all the steam out of these hardline anti-AI positions.
So ... every time a model is used? Because it has been trained on these works so they have some influence on all its weights?
> I see absolutely no reason why they can't do this
They didn't even pay to access the works in the first place; frankly, the chances of them paying now seem pretty minimal without being forced to by the courts.
Why go after the AI company? If someone is using the AI-generated content for commercial purposes and it’s based on a copyrighted work, they are the ones who should be paying the royalty.
The AI company is really more like a vector search company that brings you relevant content, kind of like Google, but that does not mean the user will use those results for commercial purposes. Google doesn’t pay royalties for displaying your website in search results.
I suspect, purely from a logistics standpoint, AI training is better off when it's free to ingest all the content it can, and for that freedom it pays some small royalty amount when that source is cited.
They'd simply pass that cost onto the customer. For universities or enterprises or law firms, or whatever, they would either include pre-existing agreements or pay for blanket access. Whatever terms OpenAI, Anthropic, and Gemini sign with these entities, they can work out the royalties there.
These are all solved problems for every other technology middleman company.
It's not quite the same though, when I search Google I'm generally directed to the source (though the summary box stuff might cross the line a bit).
With AI, copyrighted material is often not obvious to the end user, so I don't think it's fair to go after them.
I think it's non-trivial to make the AI company pay per use, though; they'd need a way to calculate what percent of the response comes from which source. Let them pay at training time with consent from the copyright holder, or just omit it.
This only makes sense if we have open access to the training data so we can verify whether it’s copyrighted or not. Otherwise, how am I supposed to know it’s replicated someone’s IP?
I think we will be seeing a lot more businesses pop up that cater to people who are unhappy with AI. Especially if you consider the large number of inevitable layoffs, people will begin to resent everything AI. The intelligent machine was never supposed to replace laborers; it was supposed to do your dishes and laundry.
I'm down for this, but only if the people who are getting paid by OpenAI/etc also turn around and pay any inspiration they've had, any artist they've copied from, etc over their entire life. If we're going to do this, we need to do it in a logically consistent way; anyone who's derived art from pre-existing art needs to pay the pre-existing artist, and I mean ALL of it, for anything derivative.
> I'm down for this, but only if the people who are getting paid by OpenAI/etc also turn around and pay any inspiration they've had, any artist they've copied from, etc over their entire life.
Why? Things at scale have different rules (different laws as well) from things done individually or for personal reasons.
What is the argument for AI/LLM stuff getting an exemption in this regard?
I don't see why AI/LLMs should get exemptions or special treatment.
If copying someone is bad, and they should be paid for it, that should be universal.
We already have copyright laws, they already prevent people from distributing AI outputs that infringe on intellectual property. If you don't like those laws in the age of AI, get them changed consistently, don't take a broken system and put it on life support.
I find it funny that many people are pro-library and pro-archive, and will pay money to causes that try to do that with endangered culture, but get angry at AI as having somehow stolen something, when they're fulfilling an archival function as well.
What I find funny about your argument is how completely degraded fair use has become when using anything by a corporation capable of delaying and running up legal fees. It sure feels like there are a separate set of rules.
> many people are pro-library and pro-archive, but get angry at AI as having somehow stolen something
Yes! They're angry that there are two standards, an onerous one making life hell for archivists, librarians, artists, enthusiasts, and suddenly a free-for-all when it comes to these AI fad companies hoovering all the data in the world they can get their paws on.
I.e. protecting the interests of capital at the expense of artists and people in the former, and the interests of capital at the expense of artists and people in the latter.
> If copying someone is bad, and they should be paid for it, that should be universal.
But we (i.e. society) don't agree that it is; the rules, laws, and norms we have are that some things are bad at scale!
As a society, we've already decided that things at scale are regulated differently than things for personal use. That ship has sailed and it's too late now to argue for laws to apply universally regardless of scale.
I am asking why AI/LLMs should get a blanket exemption in this regard.
I have not seen any good arguments for why we as a society should make a special exemption for AI/LLMs.
"Scale" isn't a justification for regulatory differences, that's a straw man. We take shortcuts at scale because of resource constraints, and sometimes there are more differences than just scale, and we're over simplifying because we're not as smart as we'd like to imagine. If there aren't resource constraints, and we have the cognitive bandwidth to handle something in a consistent way, we really should.
If we were talking algorithms, would you special case code because a lot of people hit it even if load wasn't a problem, or would you try to keep one unified function that works everywhere?
Distribution and possession are fundamentally different. Cops try to bust people who have large amounts for distribution even if they don't have any evidence of it, but that's a different issue.
Corporations are individuals and can engage in fair use (at least, as the law is written now). Neither corporations nor individuals can redistribute material in non-fair use applications.
School bake sales are regulated under cottage food laws, which are relaxed under the condition that a "safe" subset of foods is produced. That's why there are no bake sales that sell cured sausage, for instance. Food laws are in some part regulatory capture by big food, but thankfully there hasn't been political will to outlaw independent food production entirely.
You're misinformed about all the examples you cited, you should do more research before stating strong opinions.
> You're misinformed about all the examples you cited, you should do more research before stating strong opinions.
You've literally agreed with what I said[1]:
> School bake sales are regulated under cottage food laws, which are relaxed under the condition that a "safe" subset of foods is produced. That's why there are no bake sales that sell cured sausage, for instance. Food laws are in some part regulatory capture by big food, but thankfully there hasn't been political will to outlaw independent food production entirely.
Scale results in different regulation. You have, with this comment, agreed that it does, yet you are still pressing the point that there should be an exemption for AI/LLMs.
I don't understand your reasoning in pointing out that baking has different regulations depending on scale; I pointed out the same thing - the regulations are not universal.
-------------------
[1] Things I have said:
> Things at scale have different rules (different laws as well) from things done individually or for personal reasons.
> As a society, we've already decided that things at scale are regulated differently than things for personal use.
> You can hold a bake sale at school with fewer sanitation requirements than a cake store has to satisfy.
I've had this idea kicking around in my head now for a few months that this is an opportunity to update copyright / IP law generally, and use the size and scope of government to do something about both the energy costs of AI and compensation for people whose works are used. At a very rough draft and high level it goes something like this:
Update copyright to an initial 10-year limit, granted at publication without any need to register. This 10-year period also works just like copyright today: the characters, places, everything is protected. After 10 years, your entire work falls into the public domain.
Alternatively, you can register your copyright with the government within the first 3 years. This requires submitting your entire work in a machine-readable specified format for integration into official training sets and models. These data sets and models will be licensed by the government for some fee to interested parties. As a creator with material submitted to this data set, you will receive some portion of those licensing fees, proportional to the quantity and amount of time your material has been in the set, with some caps set to prevent abuse. I imagine this would work something like how broadcast licensing for radio works. You will receive these licensing fees for up to 20 years from the first date of copyright.
During the first 10 years, copyright is still enforced on your work for all the same things that would normally be covered. For the 10 years after that, in additional consideration for adding your work to the data sets, you will be granted an additional weaker copyright term. The details would vary by the work, but for a novel for example, this might still protect the specific characters and creatures you created, but no longer offer protection on the "universe" you created. If we imagine Star Wars being created under this scheme, while Darth Vader, Luke Skywalker and Leia Organa might still be protected from 1987-1997, The Empire, Tatooine, and Star Destroyers might not be.
What I envision here is that these government data sets would be known good, clean, properly categorized, and, in the case of models, the training costs have already been paid once. Rather than everyone doing a mad dash to scrape all the world's content, or buy up their own collection of books to be scanned and processed, all of that work could already have been done and it's just a license fee away. Additionally, because we're building up an archive of media, we could also license custom data sets. Maybe someone wants to make a model trained on only cartoons, or only mystery novels, or what have you. The data is already there; a nominal fee can get you that data, or maybe even have something trained up, and all the people who have contributed to that data are getting something for their work, but we're also not hamstringing our data sets to being decades or more out of date because Disney talked the government into century-long copyrights decades ago.
Hah. You say this in such a way that you leave out the possibility that robots are actually just coming for you. Robots can do you, better and/or faster than your partner. Who cares if they're coming for your partner if you can equally have a robot make you feel and experience things you could only imagine.