
I love this, but also wonder how this plays out when tooling designed to de-enshittify is owned by a YC startup that must have some sort of future exit.


De-enshittify with a subscription.


Making the world (or even the internet) a better place doesn't even seem to register on the priority scale for YC startups. I personally don't need to spend any time wondering how this plays out.


These folks get $500k to run an experiment. I love that for them: experiments are great, and if someone else will pay for it, even better. YC can afford it out of the capital they have available for investment. But what they build will have no moat, so if traction is found it can be copied in the future, under a license that prohibits commercial use. My first thought is a directed donation to the EFF for a clone, but there are likely other paths to success (yt-dlp is incredibly effective at empowering people to rip content from 1000+ media sites, and runs on free open-source dev time and a handful of contributions).

The last crucial component, cheap local models for inference, remains to be solved, but the trajectory is clear: local, efficient models will come. For people who can pay, a config dialog to specify your LLM provider and their API endpoint probably works too (something like the sketch at the end of this comment), but it won't scale for the masses imho. Worst case, they fold or are acqui-hired, but will have taught us something on someone else's dime. Could be worse, right?

User-owned and -controlled inference, running in the user's own compute context, is what beats enshittification: it equalizes the power asymmetry Big Tech holds over users, or at least keeps it in check. And so, I wish this team much luck and await the results of their experiment. Many thanks to YC for funding them.
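
To make the bring-your-own-provider idea concrete, here is a minimal sketch of what such a config could look like. Everything in it (the environment variable names, the default local endpoint, the model names) is my own illustration, not anything this team has announced; the only assumption is an OpenAI-compatible API, which local servers like Ollama also expose.

    # Hypothetical "bring your own LLM" config: the user supplies a base URL,
    # API key, and model name. Anything speaking the OpenAI-compatible API
    # works, including a local server (e.g. Ollama's /v1 endpoint) with a dummy key.
    import os
    from dataclasses import dataclass
    from openai import OpenAI  # pip install openai

    @dataclass
    class LLMConfig:
        base_url: str  # e.g. "https://api.openai.com/v1" or "http://localhost:11434/v1"
        api_key: str   # local servers typically accept any non-empty string
        model: str     # e.g. "gpt-4o-mini" or "llama3.1:8b"

    cfg = LLMConfig(
        base_url=os.environ.get("LLM_BASE_URL", "http://localhost:11434/v1"),
        api_key=os.environ.get("LLM_API_KEY", "not-needed-locally"),
        model=os.environ.get("LLM_MODEL", "llama3.1:8b"),
    )
    client = OpenAI(base_url=cfg.base_url, api_key=cfg.api_key)

The point is that the user, not the vendor, decides where inference runs and who sees the data.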


Frankly, this wouldn't be possible without the investment/cloud credits. And that is a shame because I think this is something that should exist in the world (even if I'm not the one building it). We're trying to make the most of the system.

I'm honestly not certain myself how we'll monetize this, but I have had a lot of fun building it and using it myself, and seeing how others use it. As you said, if we continue down this path without success, then worst case, what we built will still exist.

Re: local models, I am a big proponent, but they aren't there yet. This task is non-trivial. Try taking raw HTML from a webpage (minified, bundled, abstracted variable names, no comments, etc.) and using it as the basis for useful edits. It's tough, and very impressive that any model can do it reasonably well. It tentatively looks like general models are starting to plateau and open-weight models are catching up, but I know the big labs/companies are aggressively capturing massive data and squeezing everything they can out of RL for more task-specific tuning. I hope open weights can continue to compete!
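
For anyone who wants to feel the difficulty firsthand, here is roughly the shape of the experiment. This is an illustration, not our pipeline: it assumes a local OpenAI-compatible server (Ollama at its default port in this sketch), a placeholder model name, and a crude context-length truncation.

    # Toy version of the task: hand a model raw, minified HTML and ask for an edit.
    # Assumes a local OpenAI-compatible server (Ollama at its default port here)
    # and a placeholder model name.
    import urllib.request
    from openai import OpenAI  # pip install openai

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
    html = urllib.request.urlopen("https://news.ycombinator.com").read().decode("utf-8")

    resp = client.chat.completions.create(
        model="qwen2.5-coder:7b",  # placeholder: any local model you have pulled
        messages=[
            {"role": "system",
             "content": "You edit raw HTML. Return only the full modified HTML, no commentary."},
            {"role": "user",
             "content": "Add style=\"display:none\" to every element whose class attribute "
                        "contains 'ad'. Here is the page:\n\n" + html[:20000]},  # crude context guard
        ],
    )
    print(resp.choices[0].message.content)

Even a toy version like this surfaces the hard parts quickly: fitting a minified page into the context window, getting back valid HTML, and touching only the elements you asked about.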


I wish you all the best, genuinely. Enjoy the work, the learnings, and the experience. I hope to be taught something by what you discover.


Appreciate it!



