
Being a doctor is like debugging a unique, humongous software application: thousands of years of organic patchwork added by a godlike AI that just generates random code and watches the failed experiments die. There's no source code to any of it, just the final binary. No debugger that lets you kill the process and dump its memory or trace its execution, just a bunch of test cases that cost lots of money to run and may yield false positives and false negatives, making every bug a sort of heisenbug that may or may not show up when you try to reproduce it, and reducing you to guessing with some-percentage-of-certainty fuzzy logic. Even if you do manage to figure anything out in your desperate attempt at a fix, you're reduced to essentially inserting event objects into the software's internal event system and hoping they produce the desired effects as they circulate through the entire system, hoping they won't adversely affect anything else, with no guarantee of making anything better. Or, alternatively, you pry the program open very carefully while it's running and cut out code sections that grew tumorous, stitch together sections that are leaking precious data, or remodel the pipes so the data flows more efficiently, all to hopefully fix the problem, while praying your attempt didn't actually make things worse because some malware randomly made it into the program while you worked on it.
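
To put a toy number on that "percentage of certainty" part, here's a quick Bayes'-rule sketch. Every figure below is a made-up assumption for illustration, not a real test statistic:

  # Why a "failing test case" on a rare bug is still mostly noise.
  # All numbers are illustrative assumptions.
  prevalence = 0.005   # 0.5% of programs actually have the bug
  sensitivity = 0.90   # P(test positive | bug present)
  specificity = 0.95   # P(test negative | bug absent)

  p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
  ppv = sensitivity * prevalence / p_positive
  print(f"P(bug | positive test) = {ppv:.1%}")  # ~8.3%

Even a decent test is wrong roughly nine times out of ten here, purely because the bug is rare.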


Not only that, but you need to provide expertise on the timescale of a customer support agent rather than that of a software developer or researcher.


Yeah. It's like a never-ending stream of bug reports coming in constantly, and you have maybe ten minutes to make sense of each one amidst the chaos and do something about it, because if you don't there won't be enough time to see them all, and the godlike AI's operating system will start killing the buggiest of the processes assigned to you before you've even figured out what's wrong. When that happens, the report is closed forever and there's nothing you can do anymore. The same bugs show up in the programs over and over, every single day, and there's no permanent solution, no fix at the source-code level that would end the suffering of all those programs. Just constant swimming against the current. You don't even fully understand the system you're trying to fix. It's like ancient technology left to you by a long-gone alien civilization, and the best you can do is reverse engineer bits and pieces of it from the outside in and top down, so you can do some very educated guesswork based on research that's way too expensive to be reproduced or replicated in any way. Research that doesn't produce exact predictions like physics, but purely statistical statements like "in case of A associated with conditions B, if you do C you'll fix D% of the programs". Everything is framed like this. Everything. You're never fully certain, everything has risk, and there's even a chance the research you're basing all of this on is just completely made up or tainted by conflicts of interest or something equally stupid.
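
For a sense of what that "fix D% of the programs" framing means in practice, here's a tiny sketch with invented numbers (none of them from any real study):

  # Treatment effects arrive as statistics, not guarantees.
  crash_rate_untreated = 0.20  # 20% of programs fail without intervention C
  crash_rate_treated = 0.15    # 15% fail with it
  arr = crash_rate_untreated - crash_rate_treated  # absolute risk reduction
  nnt = 1 / arr                                    # number needed to treat
  print(f"ARR = {arr:.0%}; treat {nnt:.0f} programs to fix one extra")

So even a genuinely effective intervention only moves the percentages; you never know in advance which program it will save.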


And if you make a mistake that other people feel you really shouldn't have and it damages the thing, you get sued into oblivion.


And much like software development, it will all soon be done by ChatGPT.


ChatGPT correctly identified my wife's condition in a couple seconds when a dozen doctors failed to for over a decade. Less than half a percent of people suffer from this condition, but enough do that doctors should have thought about it. All I did was write down the symptoms and poof. It was actually really frustrating that doctors were mystified but this tech gave us a new (and ultimately correct) path of inquiry. I felt robbed of all that time. My trust in doctors is at an all-time low, but her medication is effective, so something in the system works.


It took four years after graduation for me to obtain enough knowledge and training to help my own mother with her chronic pain. I literally did not know what to do until relatively recently. Other doctors didn't either and they tried everything.

I also nearly died myself about two years ago. Appendicitis, of all things. Unusual presentation. I didn't see it. Surgeons didn't see it. Gastroenterologists didn't see it. Ultrasounds were inconclusive. I couldn't believe it when I saw the CT scan. I underwent surgery. Twice. The infection would not go away despite five intravenous antibiotics running 24/7. They wanted to operate a third time, and I became convinced I would die if I went under the knife again. Then they somehow managed to fly in an interventional radiologist, who found and drained a few abscesses. I spent 40 days in the hospital.

I feel so grateful to be alive.


Thanks, gonna keep this in mind!


Local models integrated into EMR systems could be a great tool for doctors, but not ChatGPT. I really don't recommend feeding confidential medical information into a corporation's computers. At least doctors are ethically obligated to maintain confidentiality.
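
For instance, a minimal sketch of keeping everything on-prem, assuming a model served locally through Ollama's HTTP API (the model name, prompt, and any EMR wiring are placeholders, not a real integration):

  import json
  import urllib.request

  def suggest_differentials(symptoms: str) -> str:
      """Send de-identified symptom text to a local model; nothing leaves the machine."""
      payload = json.dumps({
          "model": "llama3",  # placeholder local model
          "prompt": "List differential diagnoses to discuss with a clinician:\n"
                    + symptoms,
          "stream": False,
      }).encode("utf-8")
      req = urllib.request.Request(
          "http://localhost:11434/api/generate",  # Ollama's default local endpoint
          data=payload,
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          return json.loads(resp.read())["response"]

The point is the URL: localhost, not someone else's cloud.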


To be clear, I'm just using ChatGPT as a (slightly tongue-in-cheek) shorthand for LLMs in general, but I do think there is a large potential for them within medicine.

They are so unreasonably effective for being, fundamentally, word predictors.


> I do think there is a large potential for them within medicine

As do I.


That's why I majored in biomedical engineering but stopped short of going to medical school.

A lot of the "magic" around being human evaporates when you have to quantify the "worth" of any particular organ.

Untangling this closed-source blob always reveals secrets: 2023 brought an update to anatomy! https://www.urmc.rochester.edu/news/story/newly-discovered-a...



