DS theory has never been popular. Even Bayesianism (cf. "Bayesian nonparametrics" two decades ago) took a back seat after LLMs came out. DS theory, with its greater computational complexity, is therefore even less attractive.
Yes, and IMHO we are worse off for it: without a proper theory of reasoning under uncertainty, it's difficult to understand what LLMs are doing (or even to define what they should be doing).
I agree that DS is computationally prohibitive, but another way out (aside from probability, which I don't like either) is via the various systems of fuzzy logic (or you can just go with the most expressive one, under the lovely name ŁΠ1/2).
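To make that less abstract, here is a minimal sketch (my own, not a library) of the connectives the ŁΠ1/2 family builds on: the Łukasiewicz and product (Goguen) t-norms with their residual implications, plus the truth constant 1/2. Truth values live in [0, 1] instead of {0, 1}, so inference comes with a grade attached.

```python
# Basic connectives underlying Lukasiewicz-product fuzzy logic (sketch only).

def luk_and(a, b):        # Lukasiewicz (strong) conjunction
    return max(0.0, a + b - 1.0)

def luk_implies(a, b):    # Lukasiewicz residuum
    return min(1.0, 1.0 - a + b)

def prod_and(a, b):       # product conjunction
    return a * b

def prod_implies(a, b):   # Goguen residuum
    return 1.0 if a <= b else b / a

def neg(a):               # involutive (Lukasiewicz) negation
    return 1.0 - a

HALF = 0.5                # the extra truth constant that gives the "1/2"

# e.g. grading a chained inference instead of a hard true/false verdict:
rain, wet = 0.8, 0.6
print(luk_implies(rain, wet))   # 0.8  -- strength of "rain implies wet"
print(prod_and(rain, wet))      # 0.48
```

Everything here is a pointwise operation on the unit interval, which is exactly why these systems stay cheap compared to DS-style set combinatorics.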
(BTW, I am also exploring an approach to uncertainty based on untyped lambda calculus, where each term is interpreted as a kind of "model of the world". The degree of uncertainty is given by whether the term has a normal form or not: if it does not, the model is certain, while if it does have a normal form, additional assumptions/arguments need to be supplied to specify the model further.)
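Since having a normal form is only semi-decidable, the best a program can do is probe for it. Here is a toy sketch along those lines (my own illustration, not the parent's actual system): a normal-order reducer that runs up to a step budget and reports "normal form found" or "gave up".

```python
# Tiny untyped lambda calculus: terms are ('var', x), ('lam', x, body), ('app', f, a).
import itertools

fresh = (f"_v{i}" for i in itertools.count())

def free_vars(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, v):
    """Capture-avoiding substitution t[x := v]."""
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, v), subst(t[2], x, v))
    y, body = t[1], t[2]
    if y == x:
        return t                       # x is shadowed under this binder
    if y in free_vars(v):              # rename to avoid capture
        z = next(fresh)
        body, y = subst(body, y, ('var', z)), z
    return ('lam', y, subst(body, x, v))

def step(t):
    """One normal-order (leftmost-outermost) beta step, or None if t is normal."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':
            return subst(f[2], f[1], a)
        s = step(f)
        if s is not None:
            return ('app', s, a)
        s = step(a)
        return None if s is None else ('app', f, s)
    if t[0] == 'lam':
        s = step(t[2])
        return None if s is None else ('lam', t[1], s)
    return None

def has_normal_form(t, budget=1000):
    """True if a normal form is reached within budget; None if we give up."""
    for _ in range(budget):
        nxt = step(t)
        if nxt is None:
            return True
        t = nxt
    return None

I = ('lam', 'x', ('var', 'x'))
omega = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
Omega = ('app', omega, omega)            # famously has no normal form

print(has_normal_form(('app', I, I)))    # True
print(has_normal_form(Omega))            # None (budget exhausted)
```

Under the interpretation above, Omega would count as a "certain" model, while (I I) normalises and thus still awaits further arguments.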
I think that's overly reductive. In the worst case, DS operates on up to 2^M focal sets, where M is the cardinality of the hypothesis space. But that blow-up doesn't materialise if the hypotheses are hierarchical, if the evidence keeps bearing on the same few sets, or if there simply isn't enough evidence to fuse to reach 2^M.
And even in the worst-case scenario, there are efficient approximation methods that can be used.
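A rough sketch of both points, under the usual formulation of Dempster's rule of combination (the pruning step is just a naive stand-in for the more principled approximation methods in the literature): combination only ever creates intersections of existing focal sets, so when evidence keeps pointing at the same few sets, the table stays far below 2^M.

```python
# Mass functions are dicts mapping frozensets (focal sets) to mass.
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination with conflict renormalisation."""
    raw, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            raw[C] = raw.get(C, 0.0) + a * b
        else:
            conflict += a * b
    return {C: v / (1.0 - conflict) for C, v in raw.items()}

def prune(m, k):
    """Keep the k largest-mass focal sets and renormalise (crude approximation)."""
    kept = dict(sorted(m.items(), key=lambda kv: -kv[1])[:k])
    total = sum(kept.values())
    return {A: v / total for A, v in kept.items()}

# Three pieces of evidence over {a, b, c, d} that mostly concern the same sets:
theta = frozenset("abcd")
m1 = {frozenset("ab"): 0.7, theta: 0.3}
m2 = {frozenset("ab"): 0.6, theta: 0.4}
m3 = {frozenset("a"):  0.5, theta: 0.5}

m = combine(combine(m1, m2), m3)
print(m)             # only 3 focal sets, nowhere near 2^4 = 16
print(prune(m, 2))   # ...and pruning trims it further if it ever grows
```

The combined result here has three focal sets ({a}, {a,b}, and the full frame), so the exponential bound is a ceiling, not the typical cost.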