The most moneyed and well-coordinated organizations have honed a large hammer, and they are going to use it for everything. So, almost certainly, for future big findings in the areas you mention, probabilistically inclined models coming from ML will be the new gold standard.
And yet the only thing that can save us from ML will be ML itself, because ML has the best chance of extracting patterns from these black-box models and distilling them into human-interpretable models. I hope we dedicate explicit effort to this endeavor, and so continue the advance and expansion of human knowledge, human ingenuity working in tandem with computers as our assistants.
Spoiler: "Interpretable ML" will optimize for output that either looks plausible to humans, reinforces our preconceptions, or appeals to our aesthetic instincts. It will not converge with reality.
> One strong theme is the prevalence of context features (e.g. DNA, base64) and token-in-context features (e.g. the in mathematics – A/0/341, < in HTML – A/0/20). These have been observed in prior work (context features e.g. [38, 49, 45]; token-in-context features e.g. [38, 15]; preceding observations [50]), but the sheer volume of token-in-context features has been striking to us. For example, in A/4, there are over a hundred features which primarily respond to the token "the" in different contexts. Often these features are connected by feature splitting (discussed in the next section), presenting as pure context features or token features in dictionaries with few learned features, but then splitting into token-in-context features as more features are learned.
> [...]
> The general the in mathematical prose feature (A/0/341) has highly generic mathematical tokens for its top positive logits (e.g. supporting the denominator, the remainder, the theorem), whereas the more finely split machine learning version (A/2/15021) has much more specific topical predictions (e.g. the dataset, the classifier). Likewise, our abstract algebra and topology feature (A/2/4878) supports the quotient and the subgroup, and the gravitation and field theory feature (A/2/2609) supports the gauge, the Lagrangian, and the spacetime.
I don't think "hundreds of different ways to represent the word 'the', depending on the context" is a priori plausible, in line with our preconceptions, or aesthetically pleasing. But it is what falls out of ML interpretation techniques, and it does a quantitatively good job (as measured by the fraction of log-likelihood loss recovered) of explaining what the examined model is doing.
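To make that metric concrete, here's a minimal Python sketch (with made-up numbers; the exact definition in the quoted paper may differ) of how "fraction of log-likelihood loss recovered" is typically computed in the dictionary-learning literature:

    # All numbers below are hypothetical, for illustration only.
    loss_original   = 2.10  # model's loss with its normal activations
    loss_ablated    = 5.40  # loss when the layer's activations are zero-ablated
    loss_dictionary = 2.45  # loss with activations replaced by the
                            # sparse-dictionary reconstruction

    # 1.0 would mean the learned features explain the layer perfectly;
    # 0.0 would mean they explain nothing beyond zero-ablation.
    fraction_recovered = (loss_ablated - loss_dictionary) / (loss_ablated - loss_original)
    print(f"fraction of loss recovered: {fraction_recovered:.2f}")  # ~0.89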
In that case it is not considered interpretable, and I think most people working in the field are aware of this gotcha.
IIRC, when the EU required banks to have interpretable rules for loans, a plain explanation was not considered enough. What was required was a clear process that was used from the beginning - i.e. you can use an AI to develop an algorithm that makes a decision, but you can't use an AI to make a decision and explain the reasons afterwards.
Spoiler: basic / hard sciences describe nature mathematically.
Open a random physics book and you will find lots and lots of derivations (using more or less acceptable assumptions, depending on the circumstances under consideration).
Derivations and assumptions can be formally verified; see for example https://us.metamath.org
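For a flavor of what that looks like, here's a minimal sketch in Lean 4 (Lean rather than Metamath, purely for illustration): the assumptions are explicit hypotheses, and the kernel rejects any step that does not follow from them.

    -- Commutativity of addition on naturals, discharged by a library lemma.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- A derivation from explicit assumptions: chaining two implications.
    -- No hand waving survives the type checker.
    theorem chained (p q r : Prop) (hpq : p → q) (hqr : q → r) : p → r :=
      fun hp => hqr (hpq hp)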
Ever more intelligent machine learning algorithms and data structures, replacing human heuristic labor, will simply shift the expected minimum deliverable from associations to ever more rigorous proofs resting on fewer and fewer assumptions.
Machine learning systems will ultimately be used as automated theorem provers, and their output will then be explainable by definition.
When do we classify an explanation as explanatory? When it succeeds in deriving a conclusion from acceptable assumptions without hand waving. Any hand waving would result in the "proof" failing formal verification.