Elon Musk, Steve Wozniak sign letter seeking a halt in AI experiments due to significant dangers

The Tesla CEO and the Apple co-founder join other tech leaders in calling for a pause in AI development, citing ‘profound risks to humanity’


As the AI race heats up, with ChatGPT fueling wide interest among users, tech leaders Elon Musk and Steve Wozniak recently raised a flag of caution over its potential threats.

The Tesla boss and the Apple co-founder joined other tech leaders on Thursday in signing an open letter that seeks a pause on experiments with AI systems possessing human-competitive intelligence, which the letter says can pose profound risks to society and humanity.

The group calls on AI labs to immediately pause, for at least the next six months, the training of AI systems more powerful than GPT-4.

GPT-4, the latest iteration of the language model behind OpenAI’s ChatGPT, is astonishingly capable, offering greater creativity, visual input, and a larger context window than its predecessor.

Trained on a vast amount of data from across the internet, OpenAI’s fourth-generation model can produce human-sounding writing and give users in-depth answers to their questions, as seen recently in Microsoft’s Bing chat and search features.

Payment processor Stripe Inc. is also testing whether the model can help detect and prevent fraud, and Morgan Stanley is now using it to organize its wealth-management data.

Other tech giants have embraced these AI systems in recent months as well: Microsoft launched its revamped Bing search and its productivity-focused Office tool Copilot, both powered by GPT-4, while Google introduced Bard to rival OpenAI’s popular software.

But despite these astonishing capabilities, many of the industry’s biggest names want development paused for a few months, arguing that the full-throttle race into AI technology could do more harm than good.

“As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” the open letter, posted on the Future of Life Institute (FLI) website, said.

The signatories also stressed that such powerful AI systems should be welcomed only once their developers are confident the outcomes will be positive and the risks manageable.

“We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

The group added that AI labs and independent experts should use this proposed halt in AI experiments “to jointly develop and implement a set of shared safety protocols” for advanced AI design and development, rigorously audited and overseen by independent outside experts.

The open letter currently has more than 2,000 signatories, including professors, tech executives, and scientists, with FLI individually verifying each one.



JM Agreda
JM Agreda is a freelance journalist with more than 12 years of experience writing for numerous international publications, research journals, and news websites. He mainly covers business, tech, transportation, and political news for Businessner.