I felt the same way reading the linked webpage. Reads like minimally edited LLM output, which makes me question how much effort was put into the research itself. Was the research all LLM too? How much of the paper was LLM?
I don't think a healthy society has anything close to our level of wealth concentration, but even if he's made mistakes, he's saved many millions of lives.
Compare that to Elon Musk, who uses his Musk Foundation as a tax shelter, spending from it only on a private school for his children.
And how many people would have been saved if he hadn't forcibly extracted that money from society to begin with?
Because it's almost impossible not to help someone if you just throw wads of money around at random. What matters is how many people weren't saved because he decided to be a middleman in all of it.
Way, way fewer. Any billionaire you've heard of is almost certainly a net creator of a huge amount of value, by successfully leading a company in a capitalist system that made enough money selling products or services to make its shareholders worth billions of dollars. This isn't forcibly extracting money from society, this is exactly what net-value-creation looks like in the world.
Having an issue with users uploading CSAM (a problem for every platform) is very different from giving them a tool to quickly and easily generate CSAM, with apparently little-to-no effort to prevent this from happening.
Well, it's worth noting that with the nonconsensual porn (child and otherwise) Grok was generating, X would often rapidly punish the user who posted the prompt but leave the Grok-generated content up. It wasn't an issue of not having control, it was an issue of how the control was used.
Sorry, misread your original comment. But it seems to me that young people in the 50s and 60s (aka the golden age most Americans think of) were much more dissatisfied than older people; the 60s were notorious for protests.
I think we all knew this would happen quickly. Clearly there's a demand for personal AI agents - does anyone have thoughts on what it would take to make a more secure one? Would current services like email need to be redesigned to accommodate AI agents?
* Clear labeling of action types (read/get vs write/post); see the rough sketch after this list
* A better way of describing what an agent is potentially about to do (based purely on the functions the agent is about to call)
* More occurrences of AI agents hurting more than helping in the current ecosystem
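For the first two points, here's a rough sketch of what that could look like. This is plain Python with made-up tool names, not any real agent framework's API: every tool carries an action type, and the approval prompt for write actions is built purely from the function call the agent is about to make, not from anything the model says about it.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Tool:
        name: str
        action_type: str  # "read" or "write"
        fn: Callable[..., object]

    # Hypothetical tools, purely for illustration
    def search_inbox(query: str) -> str:
        return f"results for {query!r}"

    def send_email(to: str, body: str) -> str:
        return f"email sent to {to}"

    TOOLS = {
        "search_inbox": Tool("search_inbox", "read", search_inbox),
        "send_email": Tool("send_email", "write", send_email),
    }

    def call_tool(name: str, **kwargs):
        tool = TOOLS[name]
        if tool.action_type == "write":
            # The description comes only from the function name and arguments,
            # so injected text can't dress the action up as something harmless.
            args = ", ".join(f"{k}={v!r}" for k, v in kwargs.items())
            if input(f"[WRITE] Allow {tool.name}({args})? [y/N] ").strip().lower() != "y":
                return "blocked by user"
        return tool.fn(**kwargs)

Read actions go through silently; anything that writes or posts stops for a yes/no, which of course runs straight into the approval-fatigue problem mentioned below.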
You might be okaying actions hundreds or thousands of times before you encounter an injection attack, at which point you probably aren't reading things before you approve.
I agree, that's the main issue with this approach. Long-term, it should only be used for truly sensitive actions. More mundane things like replying to emails will need a better solution.
> No matter what we do, the coming wars will be horrific. Billions will die. But that’s what is beautiful; diversity is messy. On a cosmic scale, this period is just a blip, it isn’t what matters.
Does anyone know what geohot is talking about, or did he join a doomsday cult after he got tired of self-driving cars?
Either way, a largely unregulated tech industry is arguably what got us into this mess, with social media apps that encourage conspiracies and partisanship. Maybe billions won't die if we stop taking such a business-friendly laissez-faire approach.
why on earth would geniuses be the future of humankind? lost me on that line of reasoning. secondly, what's wrong with ubi? there are plenty of people who aren't able to function in the world we've created, and they'd benefit greatly from being left to do their own thing. i get that you're trying to be one of those geniuses (lol), but you could stop trying to tell other people what's good for them, thanks. and the world will be saved by open source software? that's it? cool.
edit: jmc - not replying to you so much as just commenting on the blog in general. i agree with your comment.
I interpret the UBI paragraph as one of those "create the problem, sell the solution" cases. First make a system where you need a yo-yo to live and you die if you don't have a yo-yo. Then give everyone a free yo-yo. Geohot says: well, how about just stopping this "if you don't have a yo-yo you die" thing instead? It seems a bit silly.
I interpret the geniuses in the post as an analogy for future LLMs that substantially outperform those we have now, and the post as an argument for open-weight models.
I see his apparent belief that we're heading for some kind of war as something separate. Why he connects this with UBI I don't understand, though.
Where's the money/resources coming from? (Even Musk's putative net worth would be drained in less than 3 months at $1,000/month just for the USA, not even considering how badly stock values would crash if he were forced to cash out; rough numbers after these questions.)
Where's the money going to? (You want inflation outside of collectibles and crypto? Give people in need money to spend without commensurate requirements to improve supply)
Who's included in "universal"?
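Rough numbers behind the first question, with my own assumptions filled in (roughly 335 million US residents, net worth somewhere around $400 billion):

    population = 335_000_000        # rough US population (assumption)
    ubi_per_month = 1_000           # dollars per person per month
    net_worth = 400_000_000_000     # putative net worth, order of magnitude (assumption)

    monthly_cost = population * ubi_per_month   # ~$335 billion per month
    print(net_worth / monthly_cost)              # ~1.2 months before it's gone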
--
The experiments so far that have called themselves UBI apparently use "universal" rather loosely, and if there is one thing we have found out about humanity, it's that results for a few don't often play out the same at a large scale.
The positive-sum game started to collapse more than three decades ago, so it's not as simple as just "unregulated tech industry". They made it worse for sure, but they're not the only reason.