My own reflection on this idolatry concerns how readily some people negate their own fundamental agency, and humanity's in general. Having AGI, SAI, etc. is completely meaningless if we, as our own agents, are not there to value it. In a sense, people preaching the coming dominance of AI are suicidal or homicidal, since they are pursuing their own demise by technical means.
Pope Francis addresses exactly this in the letter:
> 38. ... The Church is particularly opposed to those applications that threaten the sanctity of life or the dignity of the human person.[78] Like any human endeavor, technological development must be directed to serve the human person and contribute to the pursuit of “greater justice, more extensive fraternity, and a more humane order of social relations,” which are “more valuable than advances in the technical field.” ...
> 39. To address these challenges, it is essential to emphasize the importance of moral responsibility grounded in the dignity and vocation of the human person. This guiding principle also applies to questions concerning AI. In this context, the ethical dimension takes on primary importance because it is people who design systems and determine the purposes for which they are used.[80] Between a machine and a human being, only the latter is truly a moral agent—a subject of moral responsibility who exercises freedom in his or her decisions and accepts their consequences.[81] It is not the machine but the human who is in relationship with truth and goodness, guided by a moral conscience that calls the person “to love and to do what is good and to avoid evil,”[82] bearing witness to “the authority of truth in reference to the supreme Good to which the human person is drawn.”[83] Likewise, between a machine and a human, only the human can be sufficiently self-aware to the point of listening and following the voice of conscience, discerning with prudence, and seeking the good that is possible in every situation.[84] In fact, all of this also belongs to the person’s exercise of intelligence.
He even brings up x-risk at one point, which gives me some hope of this message reaching those members of the faith who have influence on the new administration.
The existential risk that AI poses is first and foremost the threat that it be centralized and controlled by a closed company like OpenAI, or a small oligopoly of such companies.
I don’t think centralization is the real threat. As James Currier [1] pointed out, AI will be commoditized through open-source and model convergence, making oligopoly control unlikely.
The real challenge is standardizing safety across open models and countering malignant AI use, especially amid demographic challenges like declining fertility.
AI + VR will most probably create addictive, lifelike experiences that may affect real-world relationships. Like TikTok and Instagram algorithms, this could reduce the desire for intimacy and worsen declining fertility rates.
You are entitled to prioritize that concern, but it reduces the term "existential risk" to a metaphor. The literal existential risk is the risk that AI destroys all humans in pursuit of goals that have nothing in common with human values.
He is also not cheering its "coming" but is worried about the misuse of its power. You could say the same about other powerful inventions and their inventors.
Hinton's views on human consciousness would seem remarkably "unhuman" from your point of view, if I understand you correctly. I think his position is based more on self-preservation than on idolatry. From my observations, he does like AI and welcomes AGI. He does not think we humans, as a species, are anything special.