Some of the leading artificial intelligence CEOs have warned about the existential threat posed by the technology they have done so much to promote. Together with a sizable number of experts and professors in the field, they have gathered more than 350 signatures for a statement urging action against “the risk of extinction from AI.”
Sam Altman, the CEO of OpenAI, the company behind ChatGPT, was joined by DeepMind co-founder Demis Hassabis, Anthropic AI lab head Dario Amodei, and hundreds of others in describing the potential harm of AI as comparable to that of nuclear war. Computer science professor Geoffrey Hinton, widely known as the ‘Godfather of AI’, who quit Google this month expressing regret about his life’s work, was also among the signatories.
The collective clout of the signatories did not, however, result in a long and thoughtful statement, but rather in a 22-word advisory for policymakers, which read:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Neither the specific harms of AI (which it would have been helpful to spell out, given the apparent ignorance of the technology among policymakers) nor possible methods of mitigating the risks were mentioned. Responding to the statement, published by the Center for AI Safety, tech writer Jürgen Geuter wrote in a post on Twitter that “if the people signing this document about ‘AI risk’ were serious, they wouldn’t keep building these systems and selling them (actually renting them out) on the open market.”
Panelists at The European Conservative’s latest event, on the “coding of a new Europe,” agreed that the digital transition should be “put on ice” at least until the position of the human has been fully realised. Political analyst Carlos Perona Calvete stressed that “the human subject has to be the locus.”
Mr. Altman will be meeting with European Commission President Ursula von der Leyen later this week.