AI May Bring ‘Risk of Extinction’: AI Architects Caution

The AI revolution’s leaders are calling on regulators to implement preventative measures, comparing the potential disaster to pandemics and nuclear war.

While AI can prove itself helpful, the very same technology could drive humankind to extinction.

Hundreds of AI industry leaders and researchers have joined forces to warn of the dangers AI could pose in the future.

A letter published Tuesday by the Center for AI Safety – signed by executives from Microsoft, Google, and OpenAI – claims that the very artificial intelligence technology they are developing could pose a real danger to humanity’s existence.

The danger, suggests the group, may be of a similar magnitude to the horrors of pandemics and nuclear wars.

But what can humankind do to alleviate the risks presented by AI?

In the letter, the AI specialists offered a single, very brief statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Despite emphasising the severity of the damage AI could potentially cause, the letter’s signatories do not elaborate on their warning any further.

As the letter paints AI as a looming threat, people are left wondering: What action should be taken? How would such events unfold? What should we expect? Why would AI – an object of human creation – even turn on its creators? Where are the guardrails that would prevent such an event from happening?

And if AI presents such a threat, why should it take priority over existing efforts against climate change, geopolitical conflict, and perhaps even alien invasion?


The real risk

OpenAI CEO Sam Altman has been vocal in calling for regulation of the AI industry in the US and EU – and his recent remarks in Europe left many wondering whether OpenAI would leave the jurisdiction (so far, OpenAI says it has no plans to leave the region).

Meanwhile, experts have already been cautioning about AI’s risks – with a previous open letter, signed by 31,810 endorsers including Elon Musk, Steve Wozniak, Yuval Noah Harari, and Andrew Yang, calling for a pause on the training of powerful AI models.

“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt,” the letter says, clarifying that “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

These experts fear a potential AI “foom” – a scenario in which an AI becomes capable of improving its own systems, compounding its capabilities until it no longer needs human intelligence to guide it – and have been warning the public about it for years.

Despite those efforts, it is only recently that the rapid pace of technological development, amplified by media attention, has propelled this conversation into the global spotlight.

And now, opinions are diverging on how AI will shape the future of social interaction.

Some foresee an idyllic scenario in which technological advancement stays at the forefront and humans interact with AI whose interests are aligned with their own. Others see it differently: in this alternate vision, humans will have to keep pace with AI as new jobs are created around the technology, much as happened with the invention of the automobile.

Still others take a more extreme and pessimistic view – that the technology could grow and mature until even its own developers can no longer control it, posing a threat to human society.

Until that time comes, developers (and users) are still working to improve AI’s capabilities. And even if a Frankenstein’s monster scenario arises, in which misused technology turns on us, we cannot know whether the next generation of AI software will prompt the world’s AI wizards to spin yet another slew of myths about the world’s destruction.
