Consortium of Tech Luminaries and Scientists Urges Halt to AI Research

Without guardrails in place, Elon Musk and over 1,300 AI experts and executives foresee a precarious future for mankind.
Tristan Vanheuckelom — March 30, 2023

Over 1,300 scientists and AI experts are calling for a six-month pause in the development of powerful AI systems, asking labs engaged in such work not to proceed until the risks have become more manageable.

Several major players in the tech sector have signed an open letter, issued by the Future of Life Institute, which calls on AI labs to pause experiments with AI, effective immediately.

The EU’s transparency register shows that the non-profit is primarily funded by the Musk Foundation; concern about the future of AI has been a running theme in past comments by SpaceX, Tesla, and Twitter CEO Elon Musk.

Musk and the more than 1,300 other signatories are worried about the training of systems more powerful than OpenAI’s newly launched GPT-4, the most advanced language model to date.

According to them, such advanced AI systems could pose great risks to society and humans if not handled with care. AI labs, they argue, are in such a race to “deploy ever more powerful digital minds” that the planning and management of those systems are in danger of becoming only secondary considerations.

They believe that AI labs and experts should therefore use the proposed pause to develop and implement much-needed safety protocols.

If this is not done, creators of such systems—which can already compete with humans on certain tasks—will eventually be unable to “understand, predict, or reliably control” them, they warn.

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter asks, remarking that “such decisions must not be delegated to unelected tech leaders.”

The spread of propaganda and misinformation by these AI systems is another cited risk. “Should we let machines flood our information channels with propaganda and untruth?” reads the next rhetorical question.

Sam Altman, chief executive at OpenAI, was not among those who signed the letter. Notably, other heavyweights, such as Alphabet CEO Sundar Pichai and Microsoft CEO Satya Nadella, did not sign either.

The letter did, however, garner the support of Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI luminaries Yoshua Bengio (known as one of the ‘godfathers of AI’) and Stuart Russell, a pioneer in the field.

Tristan Vanheuckelom is a Belgian journalist, a book and film reviewer for various Dutch-language publications, and a writer for The European Conservative. His other interests include history, political science, and theology.