
> I'd say in 10-20 years we'll not see anything revolutionary, as in AI smart enough to continue working on improving AI

You do realize we've just at least doubled the amount of cognitive bandwidth on Earth, right? For every one brain, there are now two. A stepwise change in bandwidth is most definitely going to have some very interesting implications, and probably much sooner than 10 years from now.



> You do realize we've just at least doubled the amount of cognitive bandwidth on Earth, right?

What do you mean? Human population is now past the exponential part of the curve and entering the saturation point.

Or do you mean adding ChatGPT? Then it's not doubled. Pretty sure that's centralized.


> Or do you mean adding ChatGPT? Then it's not doubled. Pretty sure that's centralized.

Is it though? Think about it. If I have a medical or legal conversation with it that I derive value from, that's akin to a fully trained professional, a human who took decades to grow into an adult and then undergo training, popping into existence in seconds during inference and then just as quickly disappearing. Just a few short months ago, the only way I could have had that conversation was to monopolize the cognitive bandwidth of a real human being, who then couldn't use it for other things.

So, when I say doubled, what I mean is that in theory every person who can access a sufficiently powerful LLM can effectively create cognitive bandwidth out of thin air every time they use the model to do inference.

Look at what happened every time there's been a stepwise change in bandwidth throughout history: the printing press, the telephone, dial-up internet, broadband, fiber optics. The system becomes capable of vastly more throughput, and latencies decrease significantly. The same thing is happening here. This is a revolution.


> You do realize we've just at least doubled the amount of cognitive bandwidth on Earth, right?

I don't realize that. What do you mean exactly?


LLMs can perform cognitive labor. Moreover, they can increasingly perform it at a level that matches or surpasses that of humans. Humans can use them to augment their own capabilities and execute faster on more difficult tasks with a higher rate of success. In addition, cognitive labor can be automated.

When bandwidth increases, latency decreases. We're going to get significantly higher throughput than we were previously capable of. This is already happening.


Nope, it's now another voice, a colleague whose work you constantly have to check over because you can't trust it enough to just say, "go ahead, I know you're trustworthy".

> This is already happening.

??


Sort of just proves my point, no? It's faster for me to just check its work than to do the work from scratch myself, and that work isn't monopolizing the wetware of another human being. Cognitive bandwidth has increased. The system is capable of more throughput than before. In fact, how much more throughput could you get if you employed enough instances of LLMs to saturate the cognitive bandwidth of a human whose sole job is to simply verify the outputs?
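
To make that concrete, here's a rough sketch of that setup in Python: many model instances drafting in parallel, one human pulling finished drafts off a queue to verify. (call_llm() is a made-up stand-in for whatever model API you'd actually use, not a real library call.)

    import queue
    from concurrent.futures import ThreadPoolExecutor

    review_queue = queue.Queue()

    def call_llm(task):
        # Stand-in for a real model API call.
        return f"draft answer for: {task}"

    def worker(task):
        draft = call_llm(task)           # cheap, parallel machine work
        review_queue.put((task, draft))  # queued for the single human

    tasks = ["summarize contract A", "draft reply B", "triage bug C"]
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(worker, tasks))

    # The human is the bottleneck: drain the queue and verify.
    while not review_queue.empty():
        task, draft = review_queue.get()
        print(f"verify: {task} -> {draft}")

The constraint shifts from how fast a human can produce work to how fast a human can verify it, and the second is usually much faster.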

If you look at the leap from GPT-3 -> GPT-4 in terms of hallucinations, capabilities, etc., and you combine that with advances in cognitive architectures like Reflexion and AutoGPT, it's pretty clear the trajectory is one of becoming increasingly competent and trustworthy.
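
For anyone who hasn't looked at Reflexion: the core idea is just a generate/critique/revise loop, where the model inspects its own output before a human ever sees it. A minimal sketch of the pattern (call_llm() and the prompts are placeholders I'm making up for illustration, not the paper's actual implementation):

    def call_llm(prompt):
        # Stand-in for a real model API call; assume it returns a string.
        raise NotImplementedError

    def reflexion(task, max_rounds=3):
        answer = call_llm(f"Solve this task: {task}")
        for _ in range(max_rounds):
            critique = call_llm(
                f"Task: {task}\nAnswer: {answer}\n"
                "List concrete errors, or reply with exactly OK."
            )
            if critique.strip() == "OK":
                break  # the model found nothing left to fix
            answer = call_llm(
                f"Task: {task}\nPrevious answer: {answer}\n"
                f"Critique: {critique}\nWrite a corrected answer."
            )
        return answer

Self-critique like this doesn't eliminate the need for verification, but it cuts the error rate before a human ever looks, which is the trajectory I mean.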

The degree to which you need to check its work depends on your use case, your risk tolerance, and your methods for verification. I think one of the reasons AI art has absolutely exploded is that there are no consequences for a generation that fails, and it can be verified instantly. Compare that to doing your taxes, where the stakes are high if you get it wrong; you're far less likely to rely on it there. There is a landscape of usefulness with different peaks and valleys.


What is one of the professional use cases where you would just feel comfortable YOLOing some ChatGPT-generated code into prod? Publishing a journal article without verification, etc.?

You should also take note of the warnings in the GPT-4 manual: it's a much more convincing liar than GPT-3. The manual says that quite explicitly.

My fear is that I just get lazy and trust it all the time.

> I think one of the reasons AI art has absolutely exploded is that there are no consequences for a generation that fails, and it can be verified instantly.

What are you talking about exactly?


What's with the assumption that anyone needs to YOLO anything? Your coworkers don't let you YOLO your code to prod, and you don't let them YOLO their code to prod. Trust but verify, right?

My point with the AI art comment is that not every output of these models is something that needs to go to production! There's a continuum of how much something matters if it's wrong, and it depends on who is consuming the output and what it is they need to do with it, and the degree to which other stakeholders are involved.



