1. PhD defenses in the humanities have both a written thesis and an on-the-spot oral defense. You can't ChatGPT that easily, because a professor knows how to throw curveballs: questions that seem unrelated but that a human can draw comparisons to.
2. As an MA student I was solving semantic problems for engineers' analyses, problems they couldn't fathom themselves. They were very smart at technical things (and great writers), but when language problems came up, it was a challenge. You can be a great communicator without understanding language itself.
3. Most people in AI-adjacent positions are being evaluated on things AI is good at. So as a candidate with a very good understanding of language, an AI wouldn't know how to evaluate my ability. I would really have to outline a language problem AI faces, explain it to a human, and then get them to understand its value.
I think all PhDs have an oral defense. Not only that, it's common to have quals where you have to present the state-of-the-art research in your field and answer oral questions about it. Even if you had ChatGPT it would be tough, because a lot of questions are like, "Why did XYZ do ABC after seeing result 123?" The catch is that it's often an untrue premise they are trying to get you to justify -- XYZ didn't do ABC, or didn't do it after seeing 123. This is something I imagine most LLMs would struggle with today: saying "No, that didn't happen -- here's what actually happened" when it's in a nuanced field.
Maybe it was just a legacy of my department (and this was 20+ years ago -- I think trick questions like these are probably less accepted now).
Funny story: a year or so after my quals I was at a conference talking about them (at a lunch table, not in a presentation). I mentioned that I had answered one of these questions correctly by explaining why what they were asking wouldn't work. One of the senior professors at the table chuckled and said he had been on the committee of one of my own committee members, and that my committee member had fallen hook, line, and sinker for one of these questions and wrapped himself in circles during his quals.