I had a lot of fun with ChatGPT’s wholly fabricated but entirely legitimate-sounding descriptions of different Emacs packages (and their quite detailed elisp configuration options) for integrated cloud storage, none of which exist.
I’m not sure that fabricated nonsense would actually make Bing’s results any worse than they are today.
“It’s okay I don’t mind verifying all these answers myself” is an odd sort of sentiment, and also inevitably going to prove untrue in one sense or another.
If it generated the code, I would have to audit that code for correctness/safety/etc.
Or, more likely, I would just lazily assume everything is fine and use it anyway, until one day the unexamined flaws destroyed something costly in a manner difficult to diagnose because I didn't bother to actually understand what it was doing.
There really should be more horror at the imminent, brief stint of humans as editors, code reviewers, whatever, over generative AI mechanisms (brief because that role will either be automated or rendered moot next). I'm unaware of any functional human societies that have actually reached the "no one actually has to work unless they want to do so, because technology" state, so this is an interesting transition, for sure.
> Or, more likely, I would just lazily assume everything is fine and use it anyway, until one day the unexamined flaws destroyed something costly in a manner difficult to diagnose because I didn't bother to actually understand what it was doing.
Well yeah, I'm right there with you. But that feels a lot like any software, open or closed source. Human programmers are, on average, still better at programming than AI today, but human programmers aren't improving as fast as AI is. Ten years from now, AI code will be able to destroy your data in far more unpredictable and baroque ways than some recent CS grad.
> I'm unaware of any functional human societies that have actually reached the "no one actually has to work unless they want to do so, because technology" state, so this is an interesting transition, for sure.
This is a really interesting thought. Are we seeing work evaporate, or just move up the stack? Is it still work if everyone is just issuing natural language instructions to AI? I think so, assuming you need the AI's output in order to get a paycheck which you need to live.
Then again, as a very long time product manager, I'm relatively unfazed by the current state of AI. The hundreds of requirements docs I've written over decades of work were all just prompt engineering for human developers. The exact mechanism for converting requirements to product is an implementation detail ;)