hnroo99's comments | Hacker News

What the hell, that was a good read. The ending was great (though the last line did confuse me).

Previously in the story it is mentioned that George as a child was curious about the etymology of the Olympics event and asked his father, only to be dismissed.

The callback at the end symbolizes his renewed curiosity. He is no longer ashamed of the way his mind works, or of whether it makes him look different.


50-100 PRs a week to me is insane. I'm a little skeptical and wonder how large/impactful they are. I use AI a lot and have seen significant productivity gains but not at that level lol.


I work at a FAANG and I'm the top reviewer on my team (in terms of number of PRs reviewed). I work on an internal greenfield project, so something really fast-moving.

For ALL of 2025 I reviewed around 400 PRs. And that already took me an extreme amount of time.

Nobody is reviewing this many PRs.

I've also raised around 350 PRs in the same year, which is also #1 for my team.

AI or not, nobody is raising upwards of 3,500 CRs a year. In fact, my WHOLE TEAM of 15 people has barely raised this number of CRs for the year.

I don't know why people keep believing those wild unproven claims from actors who have everything to gain from you believing them. Has common sense gone down the drain that much, even for educated professionals?


> I don't know why people keep believing those wild unproven claims from actors who have everything to gain from you believing them.

It's grifters all the way down. The majority of people pushing this narrative have vested interests, either because they own some AI shovelware company or are employed by one. Anthropic specifically is running guerrilla marketing campaigns fucking everywhere at the moment, which is why every single one of these spammed posts reads the same way. They've also switched it up a bit of late: they stopped going with the "It makes me a 10x engineer!" BS (though you still see plenty of that) and are instead going with this weird "I can finally have fun developing again!" narrative, I guess trying to cater to the ex-devs that are now managers or whatever.

What happens is you get juniors and non-technical people seeing big numbers and going "Wow, that's so impressive!" without stopping to think for 5 seconds about what a number like that actually means. 100 PRs a week is absurd unless they're tiny one-liners, and even if they were tiny changes, there's zero chance anyone is looking at the code being shat out here.


Reviewing PRs should be for junior engineers, architectural changes, brand new code, or broken tests. You should not review every PR; if you do, you're only doing it out of habit, not because it's necessary.

PRs originally come from the idea that there's an outsider trying to merge code into somebody's open source project, and the Benevolent Dictator wants to make sure it's done right. If you work on a corporate SWEng team, this is a completely different paradigm. You should trust your team members to write good-enough code, as long as conventions are followed, linters are used, acceptance tests pass, etc.


> You should trust your team members to write good-enough code...

That's the thing: I trust my teammates; I absolutely do not trust any LLM blindly. So if I were to receive 100 PRs a week and they were all AI-generated, I would have to check all 100, unless I just didn't give a shit about the quality of the code being shat out, I guess.

And regardless of whether I trust my teammates, it's still good to have a second pair of eyes on code changes, even simple ones. The majority of the PRs I review are indeed boring (boring is good, in this context) ones where I don't need to say anything, but everyone inevitably makes mistakes, and in my experience the biggest mistakes turn up in the simplest of PRs, because people get complacent in those situations.


It seems like LLMs really have made people insane.


(Not the OP)

For many years, all the projects I've been on have had mandatory code review, some in the form of PRs (a GitHub invention), most as review requests in other tooling.

This applies to everything from platform code, configuration, and tooling to production software.

Inside a component, we use review to share knowledge about how something was implemented and to reach consensus on the implementation details. Depending on developer skill level, this catches style issues, design issues, or even bugs. For skilled developers, it's usually comments on code-to-architecture mismatches, understandability, etc. Sometimes these are not entirely objective things, but they nevertheless contribute to developing and maintaining a team consensus and style. Discussions also happen outside and before review, but we've found reviews invaluable.

If a team has yearly turnover or different skill levels (typical for most teams), not reviewing every commit is sloppy. Which has an additional meaning now with AI slop :)


I am also skeptical about the need for such a large number of PRs. Are those opened because previous PRs didn't accomplish their goals?

It's frustrating because, being part of a small team, I absolutely fucking hate it when any LLM product writes or refactors thousands of lines of code. It's genuinely infuriating because now I am fully reliant on it to make any changes, even really simple ones. Just seems like a new version of vendor lock-in to me.


Yeah, 100 PRs a week is a PR every 24 minutes at standard working hours (5 days × 8 hours × 60 = 2,400 minutes, not including a lunch break). That would be crazy to even review.


Because he is working on a product that is hot, with demand from users for new features, bug fixes, and whatnot, and he also gets visibility for getting such things delivered. Most of us don't work on products like that on a daily basis.


In other words, nobody cares that the generated code is shit, because there is no human who can review that much code. Not even at a high level.

According to the discussion here, they don't even care whether the tests are real. They just care that it's green. What if the tests are useless in reality? Who cares, nobody has time to check them!

And who will suffer because of this? Who cares, they pray it won't be them!


> nobody cares that the generated code is shit

That is the case whether the code is AI-generated or not. Go take a look at some of the source code for tools you use every day, and you'll find a lot of shit code. I'd go so far as to say, after ~30 years of contributing to open source, that it's the rare jewel that has clean code.


Yeah, but there is a difference between at least one person having understood the code (or the specific part of it) at some point in time, and no one ever having understood it. Also, there are different levels. WildFly's code, for example, is utterly incomprehensible, because the flow constantly jumps up and down huge inheritance chains to random points; some Java Enterprise people are terrible about this. Anyway, the average for widely used tools is way better than that, so it's definitely possible to make things worse. Blindly trusting AI is one possible way to reach those new lows. So it would be good to prevent that before it's too late, rather than praising it and even throwing out one of the (broken, but better than nothing) safeguards. Especially since code review is obviously dead with this amount of generated code per week. (The situation wasn't great there before, either.) So it's a two-in-one bad situation.


For comparison, I remember doing 250 PRs in 2.5 months during my internship at FB (working on a full-stack web app), so 50-100 a week would be 2-4x faster than that. What's interesting is that it's Boris, not an intern (although the LLM can play an intern well).


Most likely:

* The rest of the team also reviews

* If you're the founder, chances are that people will just accept reviews without reading much and give you priority in reviews

So I can see this happening in a practical sense.


When people do PR counting like this, I assume the PRs are dependabot-style stuff.


50-100 is a lot, but 15 a week should be normal with continuous integration; you should be merging multiple times a day.


Where have you worked? I have been at a lot of places and I have never seen people consistently checking in 2 PRs a day.


IIRC he (or his colleague) did mention somewhere on X that most of the PRs are small.


Yeah, everything you said resonates; thanks for your input.

Your last paragraph is interesting, though: in what way do you think mobile dev is going to change?


Whoa, did not expect the CEO of Cloudflare to comment here! Thanks for the response. The extended periods of high latency were concerning, but I did some more digging and saw that your team is aware of this and working on it: https://www.answeroverflow.com/m/1409539854747963523 Hoping things work out!


Thanks for the response. After doing some more digging it looks like this is a known issue at Cloudflare and they're actively working on it: https://www.answeroverflow.com/m/1409539854747963523


Side-quest productivity is a great way to put it... It does feel like AI effectively enables the opposite of "death by a thousand cuts" (life by a thousand bandaids?)


Just last week I was about to integrate `ts-rest` into a project for the same reasons you mentioned above... before I realized they don't have express v5 support yet: https://github.com/ts-rest/ts-rest/issues/715

I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident enough to invest, even if I weren't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` set up and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs.

I ended up building a stripped-down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good, and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated TypeScript types. I would also watch out for tree-shaking and accidental zod imports on the client if you decide to go down this route.
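
To give a sense of scale, the core pattern really is small. Here's a rough sketch of the general shape (hypothetical names, not the actual ts-rest API or my real code): a typed contract object plus a thin fetch wrapper that infers request/response types from it and validates responses with zod.

```ts
import { z } from "zod";

// Describe each endpoint once; everything else is inferred from this.
const contract = {
  getPost: {
    method: "GET" as const,
    path: (id: string) => `/posts/${id}`,
    response: z.object({ id: z.string(), title: z.string() }),
  },
};

// A thin fetch wrapper: argument and response types are inferred
// from the contract, and responses are validated at runtime.
async function call<K extends keyof typeof contract>(
  key: K,
  ...args: Parameters<(typeof contract)[K]["path"]>
): Promise<z.infer<(typeof contract)[K]["response"]>> {
  const endpoint = contract[key];
  const res = await fetch(endpoint.path(...args), { method: endpoint.method });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return endpoint.response.parse(await res.json());
}

// Usage: `post` is fully typed as { id: string; title: string }.
const post = await call("getPost", "42");
console.log(post.title);
```

The real thing needs request bodies, headers, and server-side types on top of this, but that's the skeleton most of it hangs off.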

I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.


ts-rest doesn't see a lot of support these days. Its lack of adoption of modern TanStack Query integration patterns finally drove us to look for alternatives.

Luckily, oRPC had progressed enough to be viable now. I cannot recommend it enough over ts-rest. It's essentially tRPC, but with support for ts-rest-style contracts that enable standard OpenAPI REST endpoints.

- https://orpc.unnoq.com/

- https://github.com/unnoq/orpc


First time hearing about oRPC; I've never heard of or used ts-rest, and I'm a big fan of tRPC. Is the switch worth the time and energy?


If you're happy with tRPC and don't need proper REST functionality it might not be worth it.

However, if you want to lean in that direction, they recently added some tRPC integrations that let you run oRPC alongside an existing tRPC setup, either as a helpful addition or to support a longer-term migration.

- https://orpc.unnoq.com/docs/openapi/integrations/trpc


Do you need an LLM for this? I've made my own in-house fork of a Java library without any LLM help. I needed Apache POI's Excel handler to stream, which POI only supports in one direction. Someone had written a POI-compatible library that streamed in the other direction, but it had dependencies incompatible with mine. So I made my own fork with dependencies that worked for me. That got me out of Maven dependency hell.

Of course I'd rather not maintain my own fork of something that should always have been part of POI, but this was better than maintaining an impossible mix of dependencies.


For forking and changing a few things here and there, I can see how there might be less need for LLMs, especially if you know what you're doing. But in my case I didn't actually fork `ts-rest`; I built a much smaller custom abstraction from the ground up, and I don't consider myself a top-tier dev. Here it felt like LLMs provided a lot more value, not necessarily because the problem was overly difficult but more so because of the time saved. Had LLMs not existed, I probably would never have considered doing this, as the opportunity cost would have felt too high (i.e. DX work vs critical user-facing work). I estimate it would have taken me ~2 weeks or more to finish the task without LLMs, whereas with them it only took a few days.

I do feel we're heading in a direction where building in-house will become more common than defaulting to 3rd party dependencies, strictly because the opportunity costs have decreased so much. I also wonder how code sharing and open source libraries will change in the future. I can see a world where, instead of uploading packages for others to plug into their projects, maintainers upload detailed guides on how to build and customize the library yourself. This approach feels very LLM-friendly to me. I think a great example of this is `lucia-auth`[0], where the maintainer deprecated their library in favour of creating a guide. Their decision didn't have anything to do with LLMs, but I would personally much rather use a guide like this alongside AI (and I have!) than rely on a 3rd party dependency whose future is uncertain.

[0] https://lucia-auth.com/


nvm I'm dumb lol, `ts-rest` does support express v5: https://github.com/ts-rest/ts-rest/pull/786. Don't listen to my misinformation above!!

I would say this oversight was a blessing in disguise, though; I really do appreciate minimizing dependencies. If I could go back in time knowing what I know now, I still would've gone down the same path.


Unintuitively, fewer cores ended up being the fix... I did a small writeup here: https://news.ycombinator.com/item?id=44436679


Nut has been cracked! https://news.ycombinator.com/item?id=44436679

And yeah, I've been using the Prometheus client's `collectDefaultMetrics()` function so far to see event loop metrics, but it looks like node:perf_hooks might provide more detailed output... thanks for sharing.
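
For anyone curious, this is roughly what that looks like. A minimal sketch, assuming a reasonably recent Node version (`monitorEventLoopDelay` reports values in nanoseconds):

```ts
// Minimal sketch: sample event loop delay with node:perf_hooks.
import { monitorEventLoopDelay } from "node:perf_hooks";

const h = monitorEventLoopDelay({ resolution: 20 }); // sample every 20ms
h.enable();

// Log a summary every 10s; a sustained p99 spike usually means
// something is blocking the event loop with CPU-bound work.
setInterval(() => {
  console.log({
    meanMs: h.mean / 1e6,
    p99Ms: h.percentile(99) / 1e6,
    maxMs: h.max / 1e6,
  });
  h.reset();
}, 10_000);
```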


Big thanks to everyone who commented so far. I wasn't able to reply to everyone (busy trying to fix the issue!), but I'm grateful for all the insights.

I ended up figuring out a fix, but it's a little embarrassing... Optimizing certain parts of socket.io helped a little (e.g. installing bufferutil: https://www.npmjs.com/package/bufferutil), but the biggest performance gain I found was actually going from 2 Node.js containers on a single server to just 1! To be exact, I was able to go from ~500 concurrent players on a single server to ~3000+. I feel silly, because had I been load-testing with 1 container from the start, I would've clearly seen the performance loss when scaling up to 2 containers. Instead I went on a wild goose chase trying to fix things that had nothing to do with the real issue[0].

In the end, it seems like the bottleneck was indeed happening at the NIC/OS layer rather than the application layer. Apparently the NIC/OS prefers to deal with a single process screaming `n` packets at it rather than `x` processes screaming `n/x` packets each. In fact, it seems like the bigger `x` is, the more performance degrades. Perhaps something to do with context switching, but I'm not 100% sure. Unfortunately, given my lacking infra/networking knowledge, this wasn't intuitive to me at all - it didn't occur to me that scaling down could actually improve performance!
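
In hindsight, making the process count an explicit variable in load tests would have exposed this early. A hypothetical sketch using node:cluster (not my actual container setup): run it with WORKERS=1, then WORKERS=2, and compare throughput from an external load generator.

```ts
// Hypothetical harness: the same HTTP server run with a configurable
// number of worker processes sharing one port.
import cluster from "node:cluster";
import { createServer } from "node:http";

const workers = Number(process.env.WORKERS ?? 1);

if (cluster.isPrimary) {
  for (let i = 0; i < workers; i++) cluster.fork();
} else {
  // All workers listen on the same port; the primary hands
  // incoming connections to workers round-robin.
  createServer((req, res) => {
    res.end("ok");
  }).listen(3000, () => {
    console.log(`worker ${process.pid} listening on :3000`);
  });
}
```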

Overall a frustrating but educational experience. Again, thanks to everyone who helped along the way!

TLDR: premature optimization is the root of all evil

[0] Admittedly, AI let me down pretty badly here. So far I've found AI to be an incredible learning and scaffolding tool, but most of my LLM experiences have been in domains I feel comfortable in. This time around, though, it was pretty sobering to realize that I had effectively been punked by AI multiple times over. The hallucination trap is very real when working in domains outside your comfort zone, and I think I would've been able to debug more effectively had I relied more on hard metrics.

