Yes, that's all just as it was, and in places braces were not required / were interchangeable, so this is more of an optional compiler choice than a real change.
Probably so, but that doesn't mean their value can keep scaling without heavy diminishing returns. Softbank must assume they've taken 80%+ of the gains from this phase of NVIDIA's growth, and want to capture the next wave of growth.
I agree with you that OpenAI seems much more risky in terms of its actual viability as a business, but the risk:reward must be there for Softbank.
> It’s natural to feel anxious as we approach the inevitable automation of all human labor
This is sell-side idealist thinking and a blurred view of reality. We're not approaching it; we're not even seeing metrics to suggest that any sub-division of any business is making serious progress there at all.
Too many people are hyping something that will not happen in our lifetimes, and in doing so we risk looking past the terrible state of large global economies, poor business practice, and human exploitation at mass scale, towards a place we will never see. It's more fun to shape future possibilities for large profit that we'll probably never have to justify than to deal with current realities, which would mean going against the grain of today's investment trends for an uncertain benefit.
It's just a tool. Are the people that run Makita terrible? Who knows; I just use their tools to fix cars. I use tools to build apps for businesses that pay me. There is far too much ideology-based decision making in tech. Just build stuff with it or not.
Far too many smart people are putting their energy into discussions like this, which add a lot of drag to the process of society and humanity moving forward for no net gain at all.
As someone who isn't too familiar with Next and Vercel (having primarily used Nuxt, the Vue equivalent), it's useful for me to know what's going on in the React world. Discussions like the one above genuinely help people choose between the various frameworks and hosts.
Who is this endless cohort of developers who need to maintain a 'deep understanding' of their code? I'd argue a high % of all code written globally on any given day that is not some flavour of boilerplate, while written with good intentions, is ultimately just short-lived engineering detritus, if it even gets a code review to pass.
If you're on HN there's a good chance you've self-selected into "caring about the craft and looking for roles that require more attention."
You need to care if (a) your business logic requirements are super annoyingly complex, (b) you have hard performance requirements, or (c) both. (c) is the most rare, (a) is the most common of those three conditions; much of the programmer pay disparity between the top and the middle or bottom is due to this, but even the jobs where the complexity is "only" business requirements tend to be quite a bit better compensated than the "simple requirements, simple needs" ones.
I think there's a case to be made that LLM tools will likely make it harder for people to make that jump, if they want to. (Alternatively, they could advance to the point where the distinction changes a bit and becomes more purely architectural; or they could advance to the point where anyone can use an LLM to do anything - but there are so many conditional nuances to what the "right decision" is in any given scenario that I'm skeptical.)
A lot of the time, floor-raising things don't remove the levels; they just push everything higher. A cheap, crappy movie today will visually look "better" from a technology POV (sharpness, special effects, noise, etc.) than Jurassic Park from the 90s, but the craft parts won't (shot framing, deliberate shifts of focus, selection of the best takes). So everyone will just get more efficient and more will be expected, but still stratified.
And so some people will still want to figure out how to go from a lower-paying job to a higher-paying one. Hopefully there are still opportunities, and we don't just turn into other fields, picking people by university reputation and connections.
> You need to care if (a) your business logic requirements are super annoyingly complex, (b) you have hard performance requirements, or (c) both. (c) is the most rare
But one of the most fun things you can do is (c): creative game-development coding. With things like world simulations, you want to be very fast, but the rules and interactions are tightly coupled and complex compared to most regular enterprise logic, which is more decoupled.
So while most of the work programmers do fits (a), the work people dream about doing is (c), and that means LLMs don't help you make the fun things; they just remove the boring jobs.
In my experience the small percent of developers who do have a deep understanding are the only reason the roof doesn’t come crashing in under the piles of engineering detritus.
We also have to pretend that anyone has ever been any good at writing descriptive, detailed, clear, and precise specs or documentation. That might be a skillset that eventually appears in the workforce, but absolutely not within 2 years. A technical writer who deeply understands software engineering, so they can prompt correctly, but who is happy never actually looking at the code and just goes along with whatever the agent generates? I don't buy it.
This seems like a typical "engineer forgets people aren't machines" line of thinking.
I can't believe so many replies are struggling with the easy answer: privacy, security, "local first", "open source", "distributed", "open format", etc. etc. are developer goals projected onto a majority cohort of people who have never cared and never will, yet who hold all the potential revenue you need.
One thing I did notice, though, from looking through the examples is this:
> Uncaught errors automatically cause retries of tasks using your settings. Plus there are helpers for granular retrying inside your tasks.
This feels like one of those gotchas that is absolutely prone to a benign refactoring causing huge screw-ups; at the very least someone will find they've pinged a paid-for service 50x by accident without realising.
Ergonomics like your await retry.onThrow helper feel like they should be the developer-friendly default "safe" approach rather than just an optional helper, though granted it's not as magic-feeling when you're trying to convert eyeballs into users.
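To make the footgun concrete, here's a self-contained TypeScript sketch (not this library's actual API; retryOnThrow, callPaidApi and flakyWrite are made-up names) contrasting a whole-task retry on an uncaught error with a granular retry around only the flaky step:

    // Self-contained sketch, not the real SDK surface: contrasts whole-task
    // retries on uncaught errors with granular retrying of a single step.
    async function retryOnThrow<T>(fn: () => Promise<T>, maxAttempts: number): Promise<T> {
      let lastError: unknown;
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          return await fn();
        } catch (err) {
          lastError = err;
        }
      }
      throw lastError;
    }

    async function callPaidApi(id: string): Promise<string> {
      return `data-for-${id}`; // stand-in for a metered, per-call-billed API
    }

    async function flakyWrite(data: string): Promise<void> {
      if (Math.random() < 0.3) throw new Error(`transient failure writing ${data}`);
    }

    // Whole-task retry: if flakyWrite throws and the platform re-runs the
    // task from the top, callPaidApi is billed again on every attempt.
    async function syncTask(id: string): Promise<void> {
      const data = await callPaidApi(id);
      await flakyWrite(data);
    }

    // Granular retry: the paid call happens once; only the flaky step retries.
    async function saferSyncTask(id: string): Promise<void> {
      const data = await callPaidApi(id);
      await retryOnThrow(() => flakyWrite(data), 3);
    }

With the default behaviour the whole task body is the retry unit, so anything non-idempotent inside it gets repeated on every attempt.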
Yep, you do need to be careful with uncaught errors.
When you set up your project you choose the default number of retries and back-off settings. Generally people don't go as high as 50, and they set up alerts for when runs fail. Then you can use the bulk replaying feature when things do go wrong, or if services you rely on have long outages.
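For readers unfamiliar with this kind of setup, a rough illustration of what those project-level defaults tend to look like (the option names here are placeholders, not the actual config keys):

    // Hypothetical retry defaults; names are illustrative, not real config keys.
    const retryDefaults = {
      maxAttempts: 3,        // well below the accidental-50x scenario above
      minDelayMs: 1_000,     // initial back-off delay
      maxDelayMs: 60_000,    // cap on back-off growth
      factor: 2,             // exponential back-off multiplier
      randomize: true,       // jitter so failed runs don't retry in lockstep
    };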