Hacker News | hansonkd's comments

I've been saying this for the past 2 years. Even think about the stereotypical "996" work schedule that is all the rage in SF and AI founder communities.

It just takes thinking about it for 5 seconds to see the contradiction. If AI is so good at reducing work, why does every company engaging with AI see its workload increase?

20 years ago SV was stereotyped for "lazy" or fun-loving engineers who barely worked but cashed huge paychecks. Now I would say the stereotype is overworked engineers who, at the mid-level, are making less than they did 20 years back.

I see it across other disciplines too. Everyone I know, from sales to lawyers, who engages with AI gets stuck in a loop where the original task is easier, but it reveals 10 more smaller tasks that fill up their time even more than before AI.

That's not to say productivity gains with AI aren't found. It just seems like the gains get people into a flywheel of increasing work.


Talking about "productivity" is a red herring.

Are the people leveraging LLMs making more money while working the same number of hours?

Are the people leveraging LLMs working fewer hours while making the same amount of money?

If neither of these is true, then LLMs have not made your life better as a working programmer.


Regardless of that, LLMs could be a Moloch problem.

That is, if anyone uses it your life will be worse, but if you don't use it then your life will be even worse than those using it.

Too bad you programmers didn't unionize when you had the chance so you could fight this. Guess you'll have to pull yourself up by your bootstraps.


Classical prisoner's dilemma.

Well, at least thus far, the only reason my life is worse due to AI is because of all the people who won't stop talking about how amazing it is for vibe-coding everything from scratch despite ample empirical evidence to the contrary.

Until and unless there are some more significant improvements in how it works with regard to creating code, having strong "manual" programming skills is still paramount.


Thank god there’s no programmers union ffs

>Are the people leveraging LLMs making more money while working the same number of hours?

Nobody is getting a raise for using AI. So no.

>Are the people leveraging LLMs working fewer hours while making the same amount of money?

Early adopters, maybe, as they offload some work to agents. As AI becomes a commodity and the baseline, that will invert, especially as companies shed people to have the remaining "multiply" their output with AI.

So the answer will be no and no.


Well they don't call it being a wage slave for nothing. You aren't getting a raise because you're still selling the same 40-60 hours of your time. If the business is getting productivity wins they'll buy less time via layoffs.

(USSR National Anthem plays) But if you owned the means of production and kept the fruits of your labor, say as a founder or as a sole proprietor side hustle, then it's possible those productivity gains do translate into real time gains on your part.


>But if you owned the means of production and kept the fruits of your labor, say as a founder or as a sole proprietor side hustle, then it's possible those productivity gains do translate into real time gains on your part.

Not even then, since it will commodify your field and make any rando able to replicate it.


What about coops? Or partnerships?

The very reason why we object to state ownership, that it puts a stop to individual initiative and to the healthy development of personal responsibility, is the reason why we object to an unsupervised, unchecked monopolistic control in private hands. We urge control and supervision by the nation as an antidote to the movement for state socialism. Those who advocate total lack of regulation, those who advocate lawlessness in the business world, themselves give the strongest impulse to what I believe would be the deadening movement toward unadulterated state socialism.

--Theodore Roosevelt


> Are the people leveraging LLMs making more money while working the same number of hours?

> Are the people leveraging LLMs working fewer hours while making the same amount of money?

Yes, absolutely. Mostly because being able to leverage LLMs effectively (which is not "vibe coding" and requires both knowing what you're doing and having at least some hunch of how the LLM is going to model your problem, whether it's been given the right data, directed properly, etc.) is a rare skill.


Can you name an example? Who do you know that made more money by using LLM?

Did high-level languages and compilers make life better for working programmers? Is it even a meaningful question to ask? Like what would we change depending on the outcome?

Lots of people have jobs today, thanks to high-level languages, who wouldn't have had a job before them; they don't need to know how to manage memory manually.

Maybe that will happen for LLM programming as well, but I haven't seen many "vibe coder wanted" job ads yet that don't also require regular coding skills. So today LLM coding is just a supplementary skill, not a primary skill, and not like higher-level languages, since those let you skip a ton of steps.


> Did high-level languages and compilers make life better for working programmers

Yes.


Of course not. In the world of capitalism and employment, money earned is not a function of productivity, it is a function of competency. It is all relative.

Oh you sweet summer child. Under capitalism money is a function of how low you can pay your fungible organic units before they look for other opportunities or worse, unionize (but that can be dealt with relatively easily nowadays). Except for a few exceptional locations and occupations, the scale is tilted waaay against the individual, especially in the land of the free (see H-1B visas, medical debt and workers on food stamps). (See also the record profits of big companies since Covid).

Lines of code are not a good metric for productivity.

Neither are the hours worked.

Nor is the money.

Just think of the security guard on site walking around, or someone who has a dozen monitors.


I feel this. Since my team has jumped into an AI everything working style, expectations have tripled, stress has tripled and actual productivity has only gone up by maybe 10%.

It feels like leadership is putting immense pressure on everyone to prove their investment in AI is worth it and we all feel the pressure to try to show them it is while actually having to work longer hours to do so.


I laughed at all the Super Bowl commercials showing frazzled office workers transformed into happy loafers after AI has done all their work for them...

I chuckled at the Genspark one while imaging what the internal discussions must have been.

Obviously, "take a day off" is not the value prop they're selling to buyers (company leadership), but they can't be so on the nose in a public commercial that they scare individual contributors.


As one of the AI people doing 996(7?) I will at least say I can watch youtube videos/play bass/etc while directing 4-5 agents without much trouble, I have my desktop set up into a terminal grid and I just hover the window I want to talk to and give voice instructions. Since I'm working on stuff I'm into the time passes pleasantly.

Can you describe what stack you're using for this?

Hyprland, Voxtype, Claude Code + Pi.

Yeah, why would billionaires sell us something that lets us chill out all day, instead of using it themselves and capturing the value directly? You claim to have a perpetual motion machine and a Star Trek replicator rolled into one, what do you need me for?

Those ads are not for workers, they're for the employers.

There's an old saying among cyclists attributed to Greg Lemond: "It doesn't get easier, you just go faster"

I don't think it's super complicated. I think that prompting takes generally less mental energy than coding by hand, so on average one can work longer days if they're prompting than if they were coding.

I can pretty easily do a 12h day of prompting but I haven't been able to code for 12h straight since I was in college.


For me it’s the opposite. Coding I enter flow and can do 5 hours at a stretch while barely noticing.

Prompting has so many distractions and context switches I get sick of it after an hour.


Same here. Context switching is a real flow-killer.

Isn’t the grander question why on earth people would tolerate, let alone desire, more hours of work every day?

The older I get, the more I see the wisdom in the ancient ideas of reducing desires and being content with what one has.

---

Later Addition:

The article's essential answer is that workers voluntarily embraced (and therefore tolerated) the longer hours because of the novelty of it all. Reading between the lines, this is likely to cause shifts in expectation (and ultimately culture) — just when the novelty wears off and workers realize they have been duped into increasing their work hours and intensity (which will put an end to the voluntary embracing of those longer hours and intensity). And the dreaded result (for the poor company, won't anyone care about it?!) is cognitive overload, hence worker burnout and turnover, and ultimately reduced work quality and higher HR transaction costs. Therefore, TFA counsels, companies should set norms regarding limited use of generative language models (GLMs, so-called "AI").

I find it unlikely that companies will limit GLM use or set reasonable norms: instead they'll crack the whip!

---

Even Later Addition:

As an outsider, I find it at once amusing and dystopian to consider the suggestions offered at the end of the piece: in the brutalist, reverse-centaur style, workers are now to be programmed with modifications to their "alignment … reconsider[ation of] assumptions … absor[ption of] information … sequencing … [and] grounding"!

The worker is now thought of in terms of the tool, not vice versa.


While I agree that prompting is easier to get started with, is it actually less work? More hours doesn't mean they're equally productive. More, lower-quality hours just make work-life balance worse with nothing to show for it.

I agree. However, for me, I'm finding that I'm drastically leveling up what I'm doing in my day to day. I'm a former founder and former Head of Engineering, back in an IC role.

The coding is now assumed "good enough" for me, but the problem definition and context that goes into that code aren't. I'm now able to spend more time on the upstream components of my work (where real, actual, hard thinking happens) while letting the AI figure out the little details.


> I can pretty easily do a 12h day of prompting

Do you want to though?


That's a bingo.

Additionally, I can eke out 4 hrs really deep diving nowadays, and have structured my workday around that, delegating low-mental-cost tasks to after that initial dive. Now diving is a low enough mental cost that I can do 8-12hrs of it.

It's a bicycle. Truly.


>so on average one can work longer days if they're prompting than if they were coding

It's 2026 for god's sake. I don't want to work __longer__ days, I want to work __shorter__ days.


If you're in the office for 12h it won't matter if you're proompting, pushing pens or working your ass off. You gave that company 12h of your life. You're not getting those back.

> If AI is so good at reducing work, why does every company engaging with AI see its workload increase?

Isn't it simple?

Because of competition, which has increased because the entry barrier for building new software products has been lowered a lot.

You output a lot, and so does your competition.


> If AI is so good at reducing work, why does every company engaging with AI see its workload increase?

Heavy machinery replaces shovels. It reduces the workload on the shovel holders. However, someone still needs to produce the heavy machinery.

Some of these companies are shovel holders, realizing they need to move upstream. Some of these companies are already upstream, racing to bring a product to market.

The underlying bet for nearly all of these companies is "If I can replace one workflow with AI, I can repeat that with other workflows and dominate"


I have seen this written many times and can't shake off this feeling myself; I feel more productive using LLMs, but I am not sure I really am. I even feel quite overloaded right now with all the ideas that I could do. In the past I also had many ideas, but they were quickly set aside with the understanding that there's not enough time for everything. Now it usually starts with prompting and I get into a rabbit hole. In the end, it feels like a lot of words have been exchanged but the results are nowhere to be found.

Same story with hardware and software. Hardware gets more efficient and faster, so software devs shove more CPU intensive stuff into their applications, or just go lazy and write inefficient code.

The software experience is always going to feel about the same speed perceptually, and employers will expect you to work the same amount (or more!)


I think you're missing the point. The folks pushing 996 (and willingly working 996) feel like they are in a land rush, and that AI is going to accelerate their ability to "take the most land." No one is optimizing for the "9 to 5" oriented engineer.

> If AI is so good at reducing work, why does every company engaging with AI see its workload increase?

Throughout human history, we have chosen more work over keeping output stable.


Throughout human history we were never given the choice. We were forced into it like cattle.

You could always choose to work less, but would have less as a result.

These days, that choice is more viable than ever, as the basic level of living supported by even a few hours a week of minimum wage affords you luxuries unimaginable 50 or 100 years ago.


See a lot of people on this site doing it willingly. I think a lot of people will always choose perceived convenience over anything

You are correct; however, it should be noted that even the top 1% overworks themselves to some extent (e.g. American CEOs work on average 63h per week). They do it for a different reason, though.

"Throughout human history", approximately 90% of all work was to produce food. More work meant more food, which meant more people could survive.

We don't have to do that anymore. We have enough food for everyone.

Now, we're just being whipped to work harder to produce more profits for the people who already have more than they will ever be able to spend. We're just increasing their dollar-denominated high scores.


Maybe ask the friendly AI about reducing project scope? But we probably won’t if we’re having too much fun.

Many people in Silicon Valley truly believe that AI will take over everything. Therefore, this is the last chance to get in, so you'd better be working really, really hard.

There's a palpable desperation that makes this wave different from mobile or cloud. It's not about making things better so much as it's about not being left behind.

I'm not sure of the reason for this shift. It has a lot of overlap with the grindset culture you see on Twitter where people caution against taking breaks because your (mostly imaginary) competition may catch up with you.


now everyone gets to be a manager !

Jevons Paradox applies to labor.

996 is a Chinese term, not American.

There is a lot of work to do, just because you are doing more work with your time doesn’t mean you can somehow count that as less work.


I've only seen it in job postings and linkedin posts from SF founders.

I'm not sure if you are serious, but 996 was invented by the Chinese tech industry in 2019 and I've never heard it to describe anything in the USA (well, until today). Wiki:

https://en.wikipedia.org/wiki/996_working_hour_system

Note all the examples are also Chinese. There is a recent edit at the bottom of the page though:

Occurrence in US tech companies: In 2025, amidst the AI boom, reports have emerged of startup tech companies in San Francisco / Silicon Valley requiring "9-9-6" work schedules, with the goal of building things quickly in a competitive market.[15][16][17] California Labor Code §515.5 exempts employers from providing many software engineers with overtime pay.


china outlawed it

What? 007 is the norm here now.

Dynasty | https://www.getdynasty.com | San Francisco, CA (in-person) | Full-Time

Building the next generation of trust & estate planning software

Dynasty is building a modern trust platform for startup founders and families, with a focus on QSBS trust stacking so founders can multiply the Section 1202 capital gains exclusion across beneficiaries. We make it possible to set up, fund, and maintain QSBS-eligible trusts with ongoing compliance through a vertically integrated model that includes trust administration.

We are based in San Francisco and work together in person.

Open Roles

- Sales

You will work directly with prospective customers, many of whom are founders, operators, and high net worth families. You will guide them through understanding our solutions, manage pipeline, and close new business. This role is ideal for someone who is consultative, comfortable discussing financial concepts, and excited to help people make important long term decisions.

- Onboarding

You will help new customers successfully set up their trusts through our platform. This includes collecting information, coordinating next steps, answering questions, and ensuring a smooth transition from signup to completed structure. This role is great for someone detail oriented who enjoys guiding people through complex processes with clarity and care.

What We Look For

    * Strong communication skills
    * Comfort working in person on a small, fast moving team
    * High ownership and follow through
    * Interest in startups, fintech, or estate planning

To apply: email your resume to kyle <at> getdynasty <dot> com

On some level it's insane that billion-dollar companies are pouring resources into something and the name was only relevant for like a couple of hours before things moved. Fast-paced world.

Singularity of AI project names, projects change their names so fast we have no idea what they are called anymore. Soon, openclaw will change its name faster than humans can respond and only other AI will be able to talk about it.

I'm surprised Google hasn't renamed Gemini yet, after Bard. Usually they rename these things a few times before shutting them down.

Bard was a bad name, Gemini is fine and it matches the name of the underlying models.

    f"{os.urandom(8).hex()}.ai"  # .hex(), or you get a bytes repr in the name

Some things get packaged up and distributed in just the right way to go viral.

There was always going to be a first DAO on the blockchain that got hacked, and there will always be a first mass network of AI hacking via prompt injection. Just a natural consequence of how things are. If you have thousands of reactive programs stochastically responding to the same stream of public input, it's going to get exploited somehow.

It's crazy to me after all these years that Django-like migrations aren't in every language. On the one hand they seem so straightforward and powerful, but there must be some underlying complexity in having them autogenerate migrations.

It was always a surprise when I went to Elixir or Rust and the migration story was more complicated and manual compared to just changing a model, generating a migration, and committing.

In the pre-LLM world, I was writing Ecto files, and it was super repetitive to define large database structures compared to Django.


Going from Django to Phoenix I prefer manual migrations. Despite being a bit tedious and repetitive, by doing a "double pass" on the schema I often catch bugs, typos, missing indexes, etc. that I would have missed with Django. You waste a bit of time on the simple schemas, but you save a ton of time when you are defining more complex ones. I lost count on how many bugs were introduced because someone was careless with Django migrations, and it is also surprising that some Django devs don't know how to translate the migrations to the SQL equivalent.

At least you can opt-in to automated migrations in Elixir if you use Ash.


Django doesn't force anyone to use the automatic migrations, you can always write them manually if you want to :)

There are some subtle edge cases in the django migrations where doing all the migrations at once is not the same as doing migrations one by one. This has bitten me on multiple django projects.

Can you give an example how this would happen?

Ok, from memory --

There's a pre, do and post phase for the migrations. When you run a single migration, it's: pre, do, post. When you run 2 migrations, it's: pre [1,2], do: [1,2], post: [1,2].

So, if you have a migration that depends on a previous migration's post phase, then it will fail if it is run in a batch with the previous migration.

Where I've run into this is with data migrations, or when adding/assigning permissions to groups.
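
A toy sketch of the batching behavior described above (hypothetical phase names, not Django internals): run one at a time, each migration's post phase fires before the next starts; run as a batch, the post phases all fire at the end.

```python
# Toy model of the two orderings (hypothetical phases, not Django's code).
def run_one_by_one(names):
    log = []
    for n in names:
        log += [f"pre:{n}", f"do:{n}", f"post:{n}"]
    return log

def run_batched(names):
    log = []
    for phase in ("pre", "do", "post"):
        log += [f"{phase}:{n}" for n in names]
    return log

solo = run_one_by_one(["0001", "0002"])
batch = run_batched(["0001", "0002"])

# One at a time, 0001's post phase fires before 0002 starts...
assert solo.index("post:0001") < solo.index("do:0002")
# ...but in a batch it fires after, so 0002 can't depend on it.
assert batch.index("post:0001") > batch.index("do:0002")
```

If this is the ordering at play, a migration that depends on a previous migration's post phase works on an incremental deploy but fails on a fresh database where everything runs in one batch.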


Did you mean migration signals (pre_migrate and post_migrate)? They are only meant to run before and after the whole migration operation, regardless of how many steps are executed. They don't trigger for each individual migration operation.

The only catch is they will run multiple times, once for each app, but that can also be prevented by passing a sender (e.g. `pre_migrate.connect(pre_migrate_signal_handler, sender=self)` if you are registering them in your AppConfig.ready method).


Does that affect the autogenerated migrations at all? The only time I ran into that issue was when I generated a table, created a data migration, and it failed because the table was created in the same transaction. Never had a problem with autogenerated migrations.

What a crazy design, why don't they just do pre1 do1 post1 pre2 do2 post2?

This doesn't sound at all familiar, are you sure you're not mixing it up with something else?

There's an `atomic` flag you can set to pull a migration out of the transaction. Solves a lot of these issues.

There is no way to autogenerate migrations that work in all cases. There are lots of things out there that can generate migrations that work for most simple cases.

Django manages to autogenerate migrations that work in the VAST majority of cases.

They don't need to work in every case. For the past ~15 years, 100% of the autogenerated migrations I have made for generating tables, columns, or column renames have just worked, and I have made thousands of migrations at this point.

The only things that need manual migrations are data migrations from one schema to the other.
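
A rough sketch of why autogeneration covers the common cases (a toy diff, not Django's actual `makemigrations` logic): adding or removing a column is a mechanical diff of model state, while a rename looks identical to a drop-plus-add, which is exactly where the manual data migrations come in.

```python
# Toy autogenerator: diff old vs. new model state and emit operations.
def autogenerate(old_fields, new_fields):
    ops = []
    for name in sorted(new_fields - old_fields):
        ops.append(("AddField", name))
    for name in sorted(old_fields - new_fields):
        ops.append(("RemoveField", name))
    return ops

# Adding an "email" column is trivially diffable:
print(autogenerate({"id", "name"}, {"id", "name", "email"}))

# But renaming "name" to "full_name" is indistinguishable from dropping
# one column and adding another, so a tool has to ask, guess, or hand the
# case (and any data copying) back to you as a manual migration:
print(autogenerate({"id", "name"}, {"id", "full_name"}))
```

Django's real autogenerator tracks far more (types, indexes, constraints, dependencies) and prompts you on ambiguous renames, but the diff-of-declared-state idea is the core of why the simple cases "just work."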


I end up needing to write a manual migration maybe once every other year in real world use.

That's why you can do your own migrations in Django for those edge cases.

Well, in Elixir you can have two schemas for the same table, which could represent different views, for example an admin view and a user view. This is not (necessarily) for security, but it reduces the number of columns fetched in the query to only what you need for the purpose.

Idk, that is terrible advice. I've known several people who got hired because they emailed the CEO of 5-20 person startups.

Heck, my CEO tells me all the time that people are messaging him, and asks if I think they are interesting enough to hire.


If it's a 5-person company, they likely don't have HR or recruiting, and the CEO is likely doing the hiring (for VPs/Directors/etc.). In that case, of course you would communicate with them directly; they are effectively the hiring manager and don't have HR to outsource the hiring to.

If the company has a person/group dedicated to hiring then going around them is counterproductive. IMHO of course!


Agreed. I've worked in startups most of my career. I've messaged CEOs, CEOs have been messaged, never a negative experience, and higher-quality candidates in my opinion.

Side note: You gotta hustle people!


The hidden text about financial markets is doubly so. I hate it every time I open the news and it's "$COMPANY stock falls after $EVENT happens," when often the event probably had no bearing on the stock price of a multi-trillion-dollar company at all. It just happened at the same time, and the news networks want to construct a narrative.


It's maddening that $100k purchases get totally nerfed by bad software. Absolutely crazy to me that I can go out and find a super nice car I want and have to walk away because of bad software or no CarPlay support.

