
2025, the world rediscovers simple static caching. You could do the same with varnish/nginx or wp-cache with 10% of the complexity. Or a CDN.

“Incremental Static Regeneration” is also one of the funniest things to come out of this tech cycle.
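
For the CDN variant, all it really takes is a cache header. A minimal sketch in Node/TypeScript (the render function and the 60-second window are made up):

    import { createServer } from "node:http";

    // Hypothetical page renderer - stands in for whatever produces your HTML.
    function renderPage(): string {
      return `<html><body>rendered at ${new Date().toISOString()}</body></html>`;
    }

    const server = createServer((req, res) => {
      // Any CDN or caching proxy in front will serve this from cache and
      // refresh it at most once per 60s - "incremental static regeneration".
      res.setHeader("Cache-Control", "public, s-maxage=60, stale-while-revalidate=300");
      res.setHeader("Content-Type", "text/html");
      res.end(renderPage());
    });

    server.listen(3000);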


Every day I have an existential crisis about having joined a company so deeply bought into NextJS dark patterns.

Cooking utensils are mostly one piece; where they aren't, the wood glue is PVA, the same as school glue, which is about as non-toxic as you can get. I'd be more concerned about some kind of supply-chain issue contaminating the raw wood - hopefully they do frequent quality checks on the material.

They aren’t one piece. See the dark seams? https://www.everythingkitchens.com/totally-bamboo-all-natura...

These are strips glued together, aka laminated. The binder is not PVA (which is water-soluble and not suitable for the task); it’s most commonly a formaldehyde resin such as phenol-, urea-, or melamine-urea formaldehyde.


That’s plain bamboo; the dark areas are the nodes/rings in the plant.

I don’t build cutting boards myself, but I have never heard of using anything but food-safe PVA glue. Those resins are used for laminating plywood etc., and are probably not even legal to use in kitchen utensils, at least in the EU.


They are raw wood, unfinished. I usually give them a little sanding and a layer of beeswax - doesn't last very long but makes them feel new for a while :)

PU is about the last coating I'd like to see on my food utensils. Not very interested in a daily dose of microplastics injected directly into my food...

I believe that in the original Amazon service architecture, which grew into AWS (see the “Bezos API mandate” from 2002), backwards compatibility is expected for all service APIs. You treat internal services as if they were external.

That means consumers can keep using old API versions (and their types) with a very long deprecation window. This results in loose coupling. Most companies doing microservices do not operate like this, which leads to these lockstep issues.
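
Concretely, the pattern looks something like this - a minimal TypeScript sketch with made-up order types:

    // The v1 shape is frozen: consumers pinned to v1 keep working
    // unchanged while v2 evolves. Types and fields here are hypothetical.
    type OrderV1 = { id: string; total: number };
    type OrderV2 = { id: string; total: number; currency: string };

    function getOrderV2(id: string): OrderV2 {
      return { id, total: 100, currency: "USD" }; // stand-in for the real lookup
    }

    // The old endpoint adapts the new shape down to the old contract,
    // so v1 consumers are decoupled from v2 changes.
    function getOrderV1(id: string): OrderV1 {
      const { id: orderId, total } = getOrderV2(id);
      return { id: orderId, total };
    }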


Yeah, that's a bad thing, right? Maintaining backwards compatibility until the end of time in the name of safety.

I'm not saying monoliths are better than microservices.

I'm saying that for THIS specific issue, you don't even need to think about API compatibility with monoliths. It's a concept you can throw out the window, because type checkers and integration tests catch this FOR YOU automatically (see the sketch at the end of this comment), and the single deployment ensures that compatibility never breaks.

If you choose monoliths you are CHOOSING this convenience; if you choose microservices you are CHOOSING the possibility for things to break. AWS chose the latter, and chose to introduce a backwards compatibility restriction to deal with this problem.

I use "choose" loosely here. More likely AWS ppl just didn't think about this problem at the time. It's not obvious... or they had other requirements that necessitated microservices... The point is, this problem in essence is a logical consequence of the choice.


> or they had other requirements that necessitated microservices

Scale

Both in people, and in "how do we make this service handle the load". A monolith is easy if you have few developers and not a lot of load.

With more developers it gets hard as they start affecting each other across this monolith.

With more load it gets difficult as the usage profile of a backend server becomes very varied and performance issues become hard to even find. What looks like a performance loss in one area might just be another unrelated part of the monolith eating your resources.


Exactly, performance can make it necessary to move away from a monolith.

But everyone should know that microservices are more complex systems, are harder to deal with, and come with a bunch of safety and correctness issues as well.

The problem here is that not many people know this. Some people think going to microservices makes your code better, when, as I’m clearly saying here, you give up safety and correctness as a result.


> Yeah, that's a bad thing, right? Maintaining backwards compatibility until the end of time in the name of safety.

This is what I don't get about some comments in this thread. Choosing internal backwards compatibility for services managed by a team of three engineers doesn't make a lot of sense to me. You (should) have the organizational agility to make big changes quickly; not a lot of consensus building should be required.

For the S3 APIs? Sure, maintaining backwards compatibility on those makes sense.


Backwards compatibility is for customers. If customers don’t want to change APIs… you provide backwards compatibility as a service.

If you’re using backwards compatibility for safety, and that prevents you from making a desired upgrade to an API, that’s an entirely different thing. That is backwards compatibility as a restriction, and a weakness in the overall paradigm, while the other is backwards compatibility as a feature. Completely orthogonal things imo.


> dx defaults to --allow-all

And just like that, there goes 50% of its reason to exist


I had one of these around 2011! Used it to host a websocket server - a novelty at the time - during a conference talk, and it held up with 30+ clients before dying.

+1 to this. I seriously believe frontend was more productive in the 2010-2015 era than now, despite the flaws in legacy tech. Projects today have longer timelines, are more complex, slower, harder to deploy, and a maintenance nightmare.

I'm not so sure those woes are unique to frontend development.

I remember maintaining webpack-based projects, and those were not exactly a model of simplicity. Nor was managing a fleet of pet dev instances with Puppet.

Puppet isn’t a front end problem, but I do agree on Webpack - which is one reason it wasn’t super common. A lot of sites either didn’t try to bundle things or had simple Make-level workflows, which were at least very simple, and at the time I noted that these often performed similarly. People did, and still do, want to believe there’s a magic go-faster switch for their front end which obviates the need to reconsider their architectural choices, but anyone who actually measured it knew that bundlers just didn’t deliver savings on that scale.

I do kind of miss gulp and wish there was a modern TS version. Vite is mighty powerful, but pretty opaque.
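
Something like this is the shape I miss - gulp 4's exports from a TypeScript gulpfile (assumes ts-node and @types/gulp; globs and task names are placeholders):

    // gulpfile.ts
    import { src, dest, series, watch } from "gulp";

    // Each task is just a function returning a stream.
    function styles() {
      return src("src/**/*.css").pipe(dest("dist"));
    }

    function scripts() {
      return src("src/**/*.js").pipe(dest("dist"));
    }

    export default series(styles, scripts);
    export const dev = () => watch("src/**/*", series(styles, scripts));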

Webpack came out in late 2012 and took a few years to take over, thankfully. I was lucky to avoid it at dayjob™ until around 2019.

    brew list sqlite
gives you the installed path, works for any formula.

Neat. What I wasn't able to find was the dynamic library, just the `litestream` executable. Was there some secret you used for the litestream dylib? Thanks in advance!

Looks like you need to build it yourself: https://litestream.io/guides/vfs/

Got it... you beat me to it. I had just figured that part out!! That's the bit I didn't understand. ALSO, and this might be helpful for others: in the releases section, if you expand the little blue link at the bottom that says something like "see the other 35 assets", the VFS extension will be downloadable from there!

Thanks for humouring me! :D


Not long ago I turned on my original iPod touch (2007) to see how the keyboard felt and if I was romanticising the past, and guess what?

An absolutely perfect typing experience, better responsiveness, and almost entirely free of mistakes. It's mind-boggling.


This!
