Reminds me of the unfortunate book "Vibe Coding" by Steve Yegge, whom I otherwise enjoy. While it contained an okay, if very light on actionable details, overview of the broad ideas behind LLM-assisted coding (how much of it was vibe coding, though?), much of it was co-written using an LLM book-editing pipeline, proudly advertised throughout the book. A treatise that would otherwise be one-tenth of the final length was blown up to the size of a full volume, not unlike a piece of meat pumped with water to make it appear fattier.
Every time I see a title like this, I ask myself whether I'm being open enough, whether my biases are interfering with any potential progress I could be making when it comes to utilising AI. Then I find out that the content is just more slop, and it further solidifies my position on all of this. What a waste of energy. It really saddens me.
That's funny. I read an article about how em-dash prevalence in content is used as an indicator that it was AI generated. I should tell my agents to stop using them :)
As the project progressed, multiple significant revisions to Servo were released, and the Verso browser was unable to keep pace with these updates due to limited manpower and funding. Therefore, we will be archiving the repository for now and look forward to a future opportunity to revitalize the project and continue contributing to the Servo ecosystem. [0]
Surprisingly, it was the UI that set off the detectors for me.