How many hours have you spent trying ChatGPT out? I've spent at least high tens, maybe even hundreds. You're absolutely wrong. Yes, it hallucinates; yes, it's wrong about obscure topics - but calling success with it luck is absolutely wrong. It's very consistently good, especially about things like programming, physics, and math - and now I'm using it as my teaching assistant for my pilot training, and it's perfect (and I can very simply verify the answers are good with my FAA Pilot Handbook, don't worry).
> I can very simply verify the answers are good with my FAA Pilot Handbook
Thank you for agreeing with my point.
If you need to check the answers with your FAA Pilot Handbook, wouldn't it be simpler just to read the FAA Pilot Handbook? That handbook, unlike the text generated by ChatGPT, was written by a process that was aware of the semantic relationship between the text being written and the things in the world that the text was referring to. That is what makes the FAA Pilot Handbook a reliable source of information about flying. ChatGPT is not any such process. That's why it is not a reliable source of information - as you agree, since you need to check what it says about flying with the FAA Pilot Handbook.
> If you need to check the answers with your FAA Pilot Handbook, wouldn't it be simpler just to read the FAA Pilot Handbook?
No, absolutely not. It's much easier to verify a couple of specific pieces of information that you're unsure of than to go hunting through a large corpus of information trying to pick out the bits that are specific to what you want.
I've used ChatGPT across lots of different areas and I find it incredibly useful. I'm not blindly trusting what it spits out, but it's pretty simple to verify what it's saying. I definitely do have concerns about the impacts of ChatGPT at a societal level, and about what will happen when so much computer-generated content can flood the Internet, but, at a personal level, the complaint that ChatGPT "bullshits with confidence" is not really much of a problem for what I use it for.
Edit: To give a specific, real-world example, there was a post recently about using ChatGPT to replace a SQL analyst. Now, ChatGPT definitely will and did churn out wrong answers, but it was incredibly useful as a starting point for some complex queries. When it failed, it tended to fail in pretty obvious ways, and as for the complaint that it can fail in more subtle ways that look correct: I've certainly dealt with tons of human-generated queries that had the same issues. Are those all useless?
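To make the "subtle failure" point concrete, here is a minimal sketch (toy tables and made-up names, not from the post in question) of the classic trap in both human- and ChatGPT-generated SQL: a join that fans out and silently double-counts, while still looking perfectly reasonable.

```python
import sqlite3

# Toy schema: one order can have several shipments.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
CREATE TABLE shipments (order_id INTEGER, carrier TEXT);
INSERT INTO orders VALUES (1, 'alice', 10.0), (2, 'bob', 20.0);
INSERT INTO shipments VALUES (1, 'UPS'), (1, 'FedEx'), (2, 'UPS');
""")

# Looks correct, but order 1 has two shipments, so the join
# duplicates its row and its amount is counted twice.
subtle = con.execute("""
    SELECT SUM(o.amount)
    FROM orders o
    JOIN shipments s ON s.order_id = o.id
""").fetchone()[0]

# Correct total: sum the orders table before any fan-out join.
correct = con.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

print(subtle, correct)  # 40.0 30.0
```

Both queries run without errors and return plausible numbers, which is exactly why this class of mistake survives review regardless of who wrote the query.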
I work with all available material in many different ways (Anki cards, videos, different explanations of the same thing, etc), and ChatGPT is another way to learn and help me generate learning material. For example I have it ask me questions like a tutor would. Or I ask various questions when I'm unsure about the wider context - e.g. it provides much more about the underlying physics than the Pilot Handbook itself. If I don't understand, I can ask for clarification, or an explanation like I am 5.
Reading the Pilot Handbook is a big part of learning, but being limited to it would be hard. I'm very happy about having ChatGPT available.
Perhaps you and the poster are approaching your evaluations from different points of view. I've found that if I set out to break ChatGPT I can very easily do it. If my goal is to look for mistakes or find a failure case it's almost trivial to do so.
At the same time if I'm looking for success I normally find it.
Essentially if you work cooperatively with the tool then you'll find it useful, if you are antagonistic towards it you can also have success in breaking it.
I asked it to write some simple code to do a task. It confidently told me to use a library and some functions in it.
Couldn't get it to work. Couldn't find any examples on Google of it being used that way. Eventually I looked through the code of the library and found that while some functions worked in the way ChatGPT was trying, the functions it had selected didn't work, didn't support those arguments, and never had.
> Especially about things like programming, physics, math
I routinely find ChatGPT giving me completely made-up APIs and incorrect explanations when it comes to programming. And I haven't found it much better with math. Sorry, I don't buy it. Maybe it's good at training pilots, and if so, great, but it's wrong enough for me that it's hard to trust in general.
The tax/tariff laws might dictate 30 minutes; however, in this case it's also a practical limit for recording on your phone. Manufacturers need to set some boundary in case someone makes an accidental pocket recording that could fill up the storage, drain the battery, and crash the device. If you want to record for more than 30 minutes, you are better off buying dedicated recording equipment.