Yeah, but that goes into the realm of personal preference, right?
I suspect that if you give people a choice in the future, they're going to flock to the human doctors, especially the ones with good recommendations.
There has to be something more than personal preference if you want to sway the masses on AI physicians: some way to measure outcomes in a valid, verifiable, and public fashion. Even then, some human doctors will do worse than the AIs and some will do better, and at that point you can expect people, given a choice, to flock to the humans who did better.
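To make "measure outcomes" concrete, here's a toy Python sketch of the bare-minimum comparison: raw success rates with Wilson score confidence intervals. All the numbers are made up, and real measurement would also need risk adjustment for case mix, which this ignores:

    # Toy comparison of two providers' outcome rates.
    # Counts are invented; real data would need risk adjustment
    # for case mix before any comparison is meaningful.
    from math import sqrt

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score interval for a proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - margin, center + margin

    # Hypothetical: successful outcomes out of comparable cases.
    human_ok, human_n = 870, 1000
    ai_ok, ai_n = 905, 1000

    for name, ok, n in [("human", human_ok, human_n), ("AI", ai_ok, ai_n)]:
        lo, hi = wilson_interval(ok, n)
        print(f"{name}: {ok/n:.1%} success, 95% CI [{lo:.1%}, {hi:.1%}]")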
We'd need to get to the point where AIs do consistently better than, say, 60 to 70% of the human doctors for insurance companies to feel even semi-comfortable saying "we use AI doctors". An even higher percentage would be necessary for an insurance company to feel comfortable mandating AI doctors. And we'd need AIs to do consistently better than nearly all the humans before patients would choose AI doctors on their own, without their insurers forcing the issue.
> We'd need to get to the point where AIs do consistently better than, say, 60 to 70% of the human doctors for insurance companies to feel even semi-comfortable saying "we use AI doctors".
I feel like, at the rate AI is developing, we will rapidly get to this point and then surpass it. Doctors will also probably be "enhanced" by AI. Imagine feeding all of a patient's data (more than a human could digest, especially for every patient) into an LLM and letting it diagnose...
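As a rough sketch of what that "enhanced doctor" workflow could look like, here's a minimal Python example using OpenAI's chat-completions client. The model name, the chart file, and the prompt are all placeholders, and a real system would obviously need de-identification, clinical validation, and regulatory sign-off:

    # Minimal sketch: hand an LLM a full patient chart and ask for a
    # ranked differential diagnosis. Everything here is illustrative.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical: the patient's entire chart as structured data --
    # labs, imaging reports, med history, notes -- more than a
    # clinician could realistically review for every visit.
    with open("patient_chart.json") as f:
        chart = json.load(f)

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a clinical decision-support assistant. "
                        "Given a patient chart, produce a ranked "
                        "differential diagnosis, citing the chart data "
                        "that supports each candidate."},
            {"role": "user", "content": json.dumps(chart)},
        ],
    )
    print(response.choices[0].message.content)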