> “AI still remains, I would argue, completely unproven. And fake it till you make it may work in Silicon Valley, but for the rest of us, I think once bitten twice shy may be more appropriate for AI,” he said. “If AI cannot be trusted…then AI is effectively, in my mind, useless.”
This quote is enough to dismiss the whole article.
“Unproven”? I don’t get how anyone can use LLMs and come away with this opinion. There is simply no better way to do a fuzzy search than by typing a vague prompt into an LLM.
I was trying to find a movie title the other day; I only remembered that it had Lime in the title and a Jack-the-Ripper setting. ChatGPT found it easily. Sure, you have to fact-check the results, but there’s undeniable value there.
I was trying to remember the name of a book series I read as a child in the late 80s/early 90s. I gave ChatGPT part of a title (it had Scorpion and a few other words in it), a few plot points, and the decade it was published, and asked for an ISBN. It confidently returned a book with Scorpion in the title, a short plot summary, a 1983 publication date, an ISBN, even an author, and it was all entirely made up. It took me a few minutes to realize this when my searches on Amazon and library websites turned up nothing.
If I asked a job candidate any question and they confidently replied with a set of facts that were entirely made up, I would not consider them for any position under any circumstances.
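For what it's worth, a fabricated ISBN like that is one of the easier hallucinations to check quickly. A minimal sketch, assuming Open Library's public /isbn/ endpoint (which, as far as I know, returns a 404 for ISBNs it has no record of):

```python
import urllib.request
import urllib.error

def isbn_exists(isbn: str) -> bool:
    """Return True if Open Library has a record for this ISBN."""
    url = f"https://openlibrary.org/isbn/{isbn}.json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # Unknown ISBNs come back as HTTP 404.
        return False

# A known-real ISBN should pass; a made-up one should not.
print(isbn_exists("9780140328721"))   # Fantastic Mr Fox -> True
print(isbn_exists("9780000000000"))   # fabricated -> expected False
```

It doesn't catch a hallucinated title attached to a real ISBN, but it would have flagged my case in seconds instead of minutes.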
I’m not sure this is the best example of the power of an LLM. I’m not denying there are actual use cases, but simply searching Google for “Lime movie Jack the Ripper” displays the answer in the first result (and I imagine it would have been able to do that for the past 10+ years).
I put "movie title the other day, only remembered it had Lime in the title and had a Jack-the-Ripper" in Google and the first answer is "The Limehouse Golem".
Is that it? Dunno if behind the scenes it was using an LLM or "classical" search.
The issue is that LLMs are constantly hallucinating and are not capable of following long-term rules. Since use is still niche, it's not a big problem, but if professionals like lawyers or doctors start using them day to day, then we are in trouble. I wouldn't go as far as saying they're useless, but their effectiveness is very close to zero in most fields not related to spamming.
If you try to replace your whole job with an LLM, yes, you will have problems.
I work in IT. I use ChatGPT daily to spit out scripts, come up with function names, convert code from one technology to another, and do minor refactoring that I don't know how to do.
I can immediately validate the output, learn from it, and even work with techs I'm not familiar with.
Of course, I don't try to replace my whole job with it.
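To make the "convert code from one technology to another" use case concrete, here's a minimal sketch using the OpenAI Python client (the model name and the Bash snippet are just placeholders; the same prompt works pasted into the ChatGPT UI). The point is that the output is small enough to read and test before trusting it:

```python
# pip install openai; assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

bash_snippet = """
for f in *.log; do
  grep -c ERROR "$f"
done
"""

# Ask for a conversion, then review and test the result yourself before using it.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user",
         "content": "Convert this Bash snippet to PowerShell. Output only code:\n" + bash_snippet},
    ],
)
print(response.choices[0].message.content)
```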
ChatGPT is 'good' at doing statistical analysis with Python on a given dataset, which can help with the harder task you quoted: "distinguishing between ideas that are correct, and ideas that are plausible-sounding but wrong".
The job itself cannot be easily verified, but you can use an LLM on a subset of these tasks, as in the sketch below.
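This is the kind of snippet ChatGPT typically produces for "run a quick statistical analysis on this dataset", and it's easy to sanity-check line by line. The file name and the columns ("group", "value") are invented for illustration:

```python
# A sketch of the sort of analysis script an LLM will happily generate;
# dataset path and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("measurements.csv")

# Basic descriptive statistics per group.
print(df.groupby("group")["value"].describe())

# Welch's two-sample t-test between groups A and B.
a = df.loc[df["group"] == "A", "value"]
b = df.loc[df["group"] == "B", "value"]
t, p = stats.ttest_ind(a, b, equal_var=False)
print(f"Welch's t-test: t={t:.3f}, p={p:.4f}")
```

Every line here is individually verifiable against the data, which is exactly why this use works while "do my whole analysis for me" does not.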