
Infinite Conversation[1] was linked on HN a while back and I think it's a good example of this.

I'm not sure if it's GPT-3, but the "conversations" the two philosophers have are littered with wrong information, such as attributing ideas to the wrong people; i.e., it wouldn't be too far-fetched if they suggested that Marx was a film director.

The trouble with that incorrect information (and The Infinite Conversation is an extreme example of this because of the distinctive voices) is that it is presented with such authority that it isn't hard at all to perceive it as perfectly credible. Zizek sitting there and telling me, without even a slight hint of sarcasm, that Marx was the greatest romcom director of all time could easily gaslight me into believing it.

Now, the example here isn't two robot philosophers having coffee, but throw in a convincing-looking chart or two and... well, it works well enough when the communicator is human, telling us that climate change isn't real.

[1] https://infiniteconversation.com/
