One thing that I think people need to understand is the distinction between science and the practice of medicine.
The practice of medicine is not science. The practice of medicine can be informed by science - to a degree that is almost entirely up to the practitioner - but it remains more art than science. The distinction is similar to that between aviation maintenance and aerospace engineering.
Being a doctor is like debugging a unique, humongous software application with thousands of years of organic patchwork added by a godlike AI which just generates random code and watches the failed experiments die. There's no source code to any of it, just the final binary, and no debugger that lets you kill the process and dump its memory or trace its execution - just a bunch of test cases that cost lots of money to run and may yield false positives and false negatives, making every bug some sort of heisenbug which may or may not show up when you try to reproduce it, and reducing you to guessing with some percentage of certainty, fuzzy logic. And even if you do manage to figure anything out in your desperate attempt to fix it, you're reduced to essentially inserting event objects into the software's internal event system and hoping they produce the desired effects as they circulate throughout the entire system, while hoping they won't adversely affect anything else, and it's not even guaranteed to make anything better. Or alternatively you pry the program open very carefully while it's running and cut code sections which grew tumorous, or stitch together other sections which are leaking precious data, or remodel the pipes so the data flows more efficiently, all to hopefully fix the problem while praying to god your attempt to fix it didn't actually make it worse because some malware randomly made it into the program while you worked on it.
Yeah. It's like a never-ending stream of bug reports coming in constantly, and you have like ten minutes to make sense of each and every one amidst that chaos and do something about all of it, because if you don't there isn't gonna be enough time to see them all, and the godlike AI's operating system is gonna start killing the buggiest of the processes assigned to you before you've even figured out what's wrong. And when that happens the report is closed forever and there's nothing you can do anymore. The same bugs show up in the programs over and over again every single day and there's no permanent solution, no permanent fix at the source code level that would end the suffering of all of those programs. Just constant swimming against the current. You don't even fully understand the system you're trying to fix; it's like ancient technology left to you by a long-gone alien civilization, and the best you can do is reverse engineer bits and pieces of it from the outside in and top down, so you can do some very educated guesswork based on research that's way too expensive to be reproduced or replicated in any way - research that doesn't produce exact predictions like physics but rather purely statistical statements like "in case of A associated with conditions B, if you do C you'll fix D% of the programs". Everything is framed like this, everything. You're never fully certain, everything has risk, and there's even a chance the research you're basing all of this on is just completely made up or tainted by conflicts of interest or something equally stupid.
ChatGPT correctly identified my wife's condition in a couple seconds when a dozen doctors failed to for over a decade. Less than half a percent of people suffer from this condition, but enough do that doctors should have thought about it. All I did was write down the symptoms and poof. It was actually really frustrating that doctors were mystified but this tech gave us a new (and ultimately correct) path of inquiry. I felt robbed of all that time. My trust in doctors is at an all-time low, but her medication is effective, so something in the system works.
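For what it's worth, the base-rate math on a sub-1% condition is brutal, which is part of why the "percentage of certainty" framing in the debugging analogy above matters. Here's a back-of-the-envelope sketch in Python; the ~0.5% prevalence comes from this comment, while the 90% sensitivity and 95% specificity are invented purely for illustration, not figures for any real test:

    # Prevalence from the comment above (~0.5% of people);
    # sensitivity and specificity are made-up illustrative numbers.
    prevalence = 0.005
    sensitivity = 0.90    # P(test positive | condition)    - assumed
    specificity = 0.95    # P(test negative | no condition) - assumed

    # Bayes' theorem: chance of actually having the condition
    # given a positive test result.
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    p_condition_given_positive = sensitivity * prevalence / p_positive

    print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")
    # -> roughly 8%: even a decent test mostly yields false positives
    #    when the base rate is this low.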
It took four years after graduation for me to obtain enough knowledge and training to help my own mother with her chronic pain. I literally did not know what to do until relatively recently. Other doctors didn't either and they tried everything.
I also nearly died myself about 2 years ago. Appendicitis of all things. Unusual presentation. I didn't see it. Surgeons didn't see it. Gastroenterologists didn't see it. Ultrasounds were inconclusive. I couldn't believe it when I saw the CT scan. I underwent surgery. Twice. The infection would not go away despite 5 intravenous antibiotics 24/7. They wanted to operate a third time and I became convinced I would die if I went under the knife again. Then they somehow managed to fly in an interventional radiologist who found and drained a few abscesses. 40 days I was at the hospital.
Local models integrated into EMR systems could be a great tool for doctors but not ChatGPT. I really don't recommend feeding confidential medical information into a corporation's computer. At least doctors are ethically obligated to maintain confidentiality.
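As a rough sketch of what "local" could look like in practice: something along these lines, assuming a model served by an Ollama-style endpoint on the clinician's own machine, so nothing ever leaves the box. The model name, endpoint, and symptom list below are placeholders, not a recommendation of any specific stack:

    import requests  # only ever talks to localhost

    def local_differential(symptoms):
        """Ask a locally hosted model for a differential diagnosis.

        Assumes an Ollama-style server on localhost:11434, so no data
        leaves the machine. Model name and symptoms are placeholders.
        """
        prompt = (
            "Patient symptoms:\n- " + "\n- ".join(symptoms)
            + "\nList plausible differential diagnoses, most likely first."
        )
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3", "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(local_differential(["chronic fatigue", "joint pain", "recurring low-grade fever"]))

Wired into an EMR, the same kind of call would stay entirely inside the hospital network.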
To be clear, I'm just using ChatGPT as a (slightly tongue-in-cheek) shorthand for LLMs in general, but I do think there is a lot of potential for them within medicine.
They are so unreasonably effective for being, fundamentally, word predictors.
Add to that the fact that experience matters a lot too. The greater the variety of cases a doctor sees, the better the chances that their diagnosis is going to be useful.
Yes and no: the more often a doctor sees a particular diagnosis, the less likely they are to consider anything else.
Honestly of all the professions that could be enhanced by AI, medicine has the most to gain. An AI can remember a much larger data set than any human and look for correlations among many more dimensions.
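To make "correlations among many more dimensions" concrete, here's a toy sketch in Python on randomly generated stand-in data; a real system would use actual patient records and far more careful statistics than a raw correlation scan:

    import numpy as np
    import pandas as pd

    # Randomly generated stand-in for a wide patient dataset:
    # 1,000 patients x 200 lab values / vitals / history flags.
    rng = np.random.default_rng(0)
    data = pd.DataFrame(
        rng.normal(size=(1000, 200)),
        columns=[f"feature_{i}" for i in range(200)],
    )

    # Check every pairwise correlation and surface the strongest pairs:
    # roughly 20,000 comparisons, trivial for a machine, hopeless to hold in a head.
    corr = data.corr()
    upper = np.triu(np.ones(corr.shape, dtype=bool), k=1)
    strongest = (
        corr.where(upper)   # keep each pair once (upper triangle only)
        .stack()            # drop the masked entries
        .abs()
        .sort_values(ascending=False)
        .head(10)
    )
    print(strongest)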
Yes, that's what I think as well. And it's sad that medical data being weaponized against you gets more sound bites than how medical data collection can improve diagnosis and healthcare. But with the way the medical industry, insurance industry, etc. are, we can't blame people for worrying about data collection, especially in a country like the US where you can become destitute without medical insurance.
The great thing is that even LLMs which only take text input can still yield better results than doctors. Think about a healthcare system which utilizes transformers working on images etc.
“Have you tried turning it off and on again?” Doesn’t have quite the same applicability. I imagine quite a lot of medical issues could be solved if reboots were simpler.
For sure. A friend of mine went to med school and chose the general practitioner path. One time I was hanging out at their place, so I leafed through some of the GP journals on the coffee table. It seemed much more akin to car repair than to science.
yeah, I was just thinking about this today. Doctors seem less like engineers and more like QA testers or PMs trying to fix bugs since there's nobody better suited to the job available.
Hopefully medical science will keep advancing rapidly here.
I doubt doctors will get better at this in our lifetimes. As we discover ever more about the human body, how are doctors expected to keep up with the research on top of everything else?
I’m hoping AI systems can help. A good medical AI should be able to stay abreast of all the research, know your personal medical history and suggest and evaluate any and all medical tests that are needed. Doctors will hate it, but I expect AIs will give much more accurate diagnoses than humans before long.
Just some statistics can help. My family doctor has some kind of awful green screen practice software but amazingly it has prognosis statistics for certain lab diagnostic results. Makes for better discussion about treatment plans.
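Even a plain lookup table goes a long way. A sketch of the idea, with completely invented bands and percentages (a real system would pull these from published cohort data, which is presumably what that green-screen software does):

    # Invented example: map a lab result to a stored prognosis statistic
    # to anchor the treatment discussion. The cutoffs and percentages
    # below are placeholders, not real clinical figures.
    PROGNOSIS_TABLE = [
        # (upper bound of lab value, stored outcome statistic)
        (1.5, "92% favourable outcome at 5 years"),
        (3.0, "74% favourable outcome at 5 years"),
        (float("inf"), "51% favourable outcome at 5 years"),
    ]

    def prognosis_for(lab_value):
        """Return the stored statistic for the band a result falls into."""
        for upper_bound, statistic in PROGNOSIS_TABLE:
            if lab_value <= upper_bound:
                return statistic

    print(prognosis_for(2.1))  # -> 74% favourable outcome at 5 years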
I agree. Doctors' performance will remain the same as it has for thousands of years: overall, not so good.
But they've been using AI systems for nearly a decade, especially in triage and real-time transcription. I haven't met many who don't already love AI in their day-to-day work.
They are bound by regulations and liability; your typical GPs and even specialists can't greenlight experiments outside of big research hospitals. Medicine and clinical diagnosis are heavily regulated. That's also why you rarely see single-author papers in practical clinical medicine. Doctors are busy people with horrendous work-life balance. Most software engineers have it way easier: you get to choose your tools and you get to choose where you want to work. In the business of software engineering, a sufficiently senior engineer can get away with a lot of nonsense without risk of lawsuits.
Which is which though? As I understand it, aviation maintenance is quite rigorous, systematic, and informed by and taking into account every mistake made in the past.
Sometimes doctors are just out of their depth.