
> This year with the introduction of ChatGPT-4 we may have seen the invention of something with the equivalent impact on society of explosives, mass communication, computers, recombinant DNA/CRISPR and nuclear weapons – all rolled into one application.

I just can't stand this kind of language. ChatGPT is quite useful, but have you tried asking it something serious that isn't Twitter-worthy? We are not there yet. And in any case, this is not the first superhuman tool humans have made. Nukes have existed for 70 years and have probably become much more accessible. Biotech could create thousands of humanity-ending viruses today. These are fears that we live with and will forever live with, but we can't live our lives only in fear.



> Nukes have existed for 70 years and have probably become much more accessible. Biotech could create thousands of humanity-ending viruses today. These are fears that we live with and will forever live with, but we can't live our lives only in fear.

Building nukes and bioweapons isn't as good a business model as AGI, though. The government was incentivised to at least take some precautions with nukes, and nukes can't be developed and launched by individual bad actors. AGI isn't that comparable to nukes for numerous reasons. Bioweapons maybe, but I wouldn't support companies researching bioweapons without regulation.

It's not a choice between living in fear and going full steam ahead. Both are idiotic positions to take. The reasonable approach here would be to publicly fund alignment research while slowing and regulating AI capability research to ensure the best possible outcomes and minimise risk.

You're basically arguing in favour of a free market approach to developing what has the potential to be a dangerous technology. If you wouldn't allow the free market to regulate something as mundane as automobile safety, then why would you trust the free market to regulate AI safety?

Companies that wish to develop state-of-the-art AI models should be required to demonstrate they are taking reasonable steps to ensure safety. They should be required to disclose state-of-the-art research projects to the government. They should be required to publish alignment research so we can learn...


> Building nukes and bioweapons isn't as good a business model as AGI, though

I agree. It's quite possible that humanity-ending AI is also not a good business, don't you agree?

I think the whole apocalypse discussion is a premature distraction for the moment. A more important discussion is what kinds of AI will end up making money. We have already seen how the internet turned from an infinite frontier into a more modern version of TV, dominated by a few networks with addictive buttons. Unfortunately we will see the same with AI, because such is the nature of money today, and capitalism is one thing that AI will not change. The applications of AI that make the most money will dominate, to the detriment of applications that benefit only small groups of people (such as the disabled).

> to publicly fund alignment research while

We don't really know whether alignment research is what we need. Governments should fund AI research in general; otherwise it would be like the EU's early attempts to regulate AI. In fact, any kind of funding of AI ethics at the moment is dubious because the field is changing so fast. Stopping it for six months will not solve those ethical issues either; it will just delay their obsolescence by six months. This is stupid on the face of it.


For public funding of AI research to work, it would need to overwhelm private research AND not be exploited by bureaucrats.

Neither of these seems remotely realistic.


Excellent point about living in fear.

Though take the examples - nuclear weapons and biotech - as you say, both have huge potential for harm.

However both are regulated and relatively inaccessible to the average person.

While training models like ChatGPT is still relatively inaccessible for the average person, using them is potentially not.

One of the features of software is the almost zero cost of copying - making proliferation much more of an issue than for nukes or custom-made virus tech [1].

ChatGPT is over-hyped of course, but I think the genie and bottle issue is more real here than for military tech or biotech.

Having said all that I do think the solution is largely around applying existing laws to these new tools.

[1] OK, and if they escape, then they can self-replicate...


There's the optimistic scenario that GPT-17 will build me a spaceship to escape this blue planet and its nuclear dangers.


Directly or in the Jeff Bezos sense of making you enough money? :-)


If anything, this might be like a web 1.0.

It is better than the promises we had in the 1980s, when we went through an AI winter.

But it is going to take some time for people and corporations to figure out whether it is all hype, the next crypto, or whether there are some real applications for this new technology.

Look at the cloud: S3 was launched in 2006, but you did not see much about it in Harvard Business Review until 2011. And even then, it was potential promises of what the cloud could do. Things did not really pick up until 2016.


I asked it how to transition from nation states to local ownership at scale and was very happy with its answer. It was better and more comprehensive than I think anyone around me would have answered - in 5 seconds - and it introduced me to new concepts like time banks and community currencies, which I could ask follow-up questions about.

I think it’s truly mind blowing a computer can now simulate some of the best conversations I’ve ever had on a variety of topics.


How about asking it how to go about rolling back the Citizens United decision? [1]

That would be new, useful, but not really twitter-worthy.

[1] https://en.wikipedia.org/wiki/Citizens_United_v._FEC


Soon enough they'll argue it's a better invention than oxygen


Oxygen wasn't invented.


Well - apparently according to a newly published theory by Stephen Hawking - it evolved.


I'm sorry, but that statement is not accurate. Oxygen is an element that exists naturally in the universe, and it was not invented or created by humans or any other living organism. However, it is true that oxygen has played a crucial role in the evolution of life on Earth. Photosynthesis, a process by which plants and other photosynthetic organisms produce oxygen, has had a profound impact on the composition of Earth's atmosphere and the development of complex life forms.


Nope - because you jumped to a wrong assumption.

I was referring to the new idea that the laws of physics were not set at the dawn of the universe, but rather evolved - and as the existence of oxygen depends on the laws of physics - ergo oxygen evolved.


>These are fears that we live with and will forever live with, but we can't live our lives only in fear.

But we can't lie to ourselves about reality in order to prevent fear either.

The opinions of Elon Musk, of Sam Altman, and even of the person who started it all, Geoffrey Hinton, are actually in line with the blog post.

Hinton even says things like: these ChatGPT models can literally understand what you tell them.

Should we call climate scientists fear mongers because they talk about a catastrophic but realistic future? I think not, and the same can be said for the people I mentioned.

I personally think these experts are right, but you are also right in that "we are not there yet". But given the trajectory of the technology for the past decade we basically have a very good chance of being "there" very soon.

AGI that is perceptually equivalent to a person more intelligent than us is now a very realistic prospect within our lifetimes.


> Should we call climate scientists fear mongers

But they have evidence, measurements and a quantitative model etc.

Where is the AGI FUD people's evidence? It's largely very opinionated arguments of rectal origin. But modern AI is a quantitative model that is completely known and can be readily analyzed. If there is some proof or even substantial quantitative or empirical evidence that those numbers are imminently dangerous, then we are talking.


>But they have evidence, measurements and a quantitative model etc.

There's no evidence for something that hasn't happened yet. We have a model for increasing temperature, but even that is not entirely accurate. Did we predict the heavy rain in CA as a result of warming? The evidence is somewhat solid, but there is an aspect to it that is speculative as well. What we do know is that huge changes in the climate will occur.

Additionally, can we make a projection about the climate with the alternative-energy initiatives in place? Not an accurate one. We don't have a mathematical model that can accurately predict what will happen. We may have models, but those models will likely be off.

The effects of climate change on civilization are the main course here. These claims are basically all pure speculation. We have no idea what's going to happen with rising temperatures and how they will change society as we know it. Should we just clamp down on all speculation and doom-saying when it could be a realistic scenario?

For AI there's tons of evidence about the increase in capabilities. If you want to quantify it into a model, though, you would have to create some sort of numerical scale: say 1 for logic gates, 2 for math calculations, 3 for chess AI, and so on and so forth. Each number on the scale is some milestone of intelligence that must be surpassed by machines.

If you graph this scale on the Y axis with time on the X axis, you get an increasing curve, where milestone after milestone is surpassed over time. You may get some AI "winters", but overall the trend is up, and from this projection the evidence, while still highly speculative, is very similar to the climate model in terms of an increasing projection. If AI capabilities continue to increase indefinitely, as the projections show, you eventually hit the AGI point on that scale.

I mean, this is what a model is. Typically common sense is enough here, but since you want a model, you just do this and boom: your common sense is now a numerical model, and you have your "evidence", plastered with enough technical numbers and graphs to satisfy your appetite for "numbers" and "science", as if that's all there is to logic, reasoning and evidence.

There's your evidence for AGI, just as strong as climate change in terms of projection. We know temperatures will rise, and we know the capabilities of AI will increase over time. And the speculation about the apocalyptic effects of powerful AI on society? Same as the speculation about a climate apocalypse. All made up, but all within the realm of probability and realism.
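To make the hand-waving concrete, here is a minimal sketch of the kind of toy projection I'm describing. Every number in it - the milestones, the dates, and the level you'd call "AGI" - is a placeholder assumption, not a measurement:

    # Toy "milestone scale" model - all numbers are illustrative placeholders.
    import numpy as np

    # (year, capability level on an arbitrary ordinal scale)
    milestones = [
        (1950, 1),  # programmable logic
        (1970, 2),  # symbolic math
        (1997, 3),  # chess (Deep Blue)
        (2016, 4),  # Go (AlphaGo)
        (2022, 5),  # fluent language models (ChatGPT)
    ]
    years = np.array([y for y, _ in milestones], dtype=float)
    levels = np.array([lvl for _, lvl in milestones], dtype=float)

    # Fit a straight line, level ~ slope * year + intercept, and extrapolate
    # to whatever level you decide to label "AGI".
    slope, intercept = np.polyfit(years, levels, 1)
    agi_level = 8
    projected_year = (agi_level - intercept) / slope
    print(f"naive linear fit crosses level {agi_level} around year {projected_year:.0f}")

The output is only as good as the made-up scale and the assumption that the trend stays roughly linear - which is exactly the point: it's a projection of the same speculative kind, not a measurement.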

The difference between climate change and AGI is that AGI had an observable inflection point with ChatGPT. The sudden shift was so drastic and jarring that we get a lot of people like you who just want to call everything BS, even if it's a realistic possibility. With climate change it's like: yeah, apocalyptic temperature changes are just around the corner, you'd be stupid not to agree, but you're still driving your car and using energy that causes global warming.

It's like we're honest with ourselves about the climate, but we don't act honestly, because the doom encroaching on our society is happening really slowly. Too slowly to make us act. Just handle it later.

With AGI the change was so sudden and drastic that we can't even be honest with ourselves. What if I spent years honing my software engineering skills... does all that skill go to waste? I have to lie to myself in order to protect all those years I spent honing my craft. I have to suppress the speculation, even if it's a realistic projection.


Agreed that all models are wrong, but some model is better than no model and arbitrary FUD. Climate alarmists at least have a model of the dangers.

> For AI there's tons of evidence about the increase in capabilities

That is not evidence that AI will destroy humanity. The fact that AI is increasing in capabilities also means that it is increasing its capability to align with humans, no? I don't get why the reverse is considered the sole and inevitable conclusion.

I also don't agree about ChatGPT being the inflection point. The capabilities of GPTs were known for years, but ChatGPT popularized them because it made them so easy to use. If these scientists failed to see the capabilities of the model, it was because they did not care until the media brought it up. That means they are not very good scientists.


>I also don't agree about ChatGPT being the inflection point. The capabilities of GPTs were known for years, but ChatGPT popularized them because it made them so easy to use

You mean LLMs, not GPTs. ChatGPT was an inflection point in terms of publicity, but also in terms of the additional reinforcement training that made the model highly usable. ChatGPT was the first model that was incredibly usable (and I don't mean usable in terms of a GUI or UI; usable in the sense that the AI was actively trying to assist you).

>That is not evidence that AI will destroy humanity. The fact that AI is increasing in capabilities also means that it is increasing its capability to align with humans, no? I don't get why the reverse is considered the sole and inevitable conclusion

The article didn't say humanity will be destroyed by AI. It's more that society will dramatically shift, and it implies that a lot of humans will suffer as a result. The shift is dramatic enough that, while apocalyptic may be too extreme a word, it's not entirely unfitting.



