
AI Poses Major Societal Risks, Says Statement Signed by Industry Leaders



Artificial intelligence industry leaders are concerned about the potential threats advanced AI systems pose to humanity. On Tuesday, industry leaders including OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis, along with other scientists and notable figures, signed a statement warning of the risks of AI.

The terse, one-sentence statement was posted on the website of the nonprofit Center for AI Safety. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads.

Nearly every major tech company has released an AI chatbot or other generative AI tools in recent months, following the launch of OpenAI’s ChatGPT and Dall-E last year. The technology has begun to seep into everyday life and could change everything from how you search for information on the web to how you create a fitness routine. The rapid release of AI tools has also spurred scientists and industry experts to voice concerns about the technology’s risks if development continues without regulation.

The statement is the latest in a series of recent warnings about the potential threats of the advanced technology. Last week, Microsoft, an industry leader in AI and an investor in OpenAI, released a 40-page report saying AI regulation is needed in order to stay ahead of bad actors and potential risks. In March, Tesla and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak and a thousand other tech industry figures signed an open letter demanding that companies halt development of advanced AI projects for at least six months, or until industry standards and protocols have caught up.

“Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders,” reads the letter, which was published March 22. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Some critics have noted that the attention tech leaders are giving to the future risks of the technology fails to address current problems, like AI’s tendency to “hallucinate,” the opaque ways an AI chatbot arrives at an answer to a prompt, and data privacy and plagiarism concerns. There is also the possibility that some of these tech leaders are requesting a halt on their competitors’ products to buy time to build AI products of their own.

Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.
