Artificial intelligence threatens to cause the extinction of humanity. These are not the ramblings of a fringe doomsayer, but the terrifying conclusions of a report commissioned by the American government, based on interviews with more than 200 AI experts, including executives from OpenAI and Google DeepMind and specialists in weapons of mass destruction…

Does the general public really grasp the implications of artificial intelligence? Nothing is less certain, if we are to believe a report commissioned by the US State Department.

This document highlights a “catastrophic” risk that the rapid evolution of AI poses to the national security of the United States. The government has only a short time left to prevent disaster… and we can imagine that France is just as concerned.

In order to produce this report, Gladstone AI spent over a year interviewing 200 people, including some of the AI industry’s most senior executives, cybersecurity researchers, weapons of mass destruction experts, and government officials.

Their conclusion sends shivers down the spine: the most advanced AI systems could, in the worst case, “pose an extinction-level threat to the human species”.

It might seem like a hoax, but the State Department confirmed to CNN that it commissioned this report as part of an ongoing evaluation of how AI aligns with its goal of protecting American interests.

However, the spokesperson specified that this report does not represent the American government’s official position. Its conclusions are drawn from interviews with numerous experts.

AI can save the world… but also become uncontrollable

While AI creates buzz among the general public and seduces investors, this frightening warning reminds us that the risks are also very real…

According to Gladstone AI’s CEO and co-founder, Jeremie Harris, “AI is a technology that can transform the economy; it can cure diseases, make scientific discoveries, and overcome challenges that were thought to be insurmountable”.

But, because there is a but, “it could also bring serious risks, including catastrophic risks, that we must be aware of”.

He adds that a growing body of evidence, including empirical research and analyses published at the world’s largest AI conferences, suggests that beyond a certain capability threshold, AI could get out of control.

The two big dangers linked to AI

According to the report, there are two main dangers linked to AI. The first is that the most advanced systems could be weaponized to inflict potentially irreversible damage.

The second is that AI labs could lose control of the very systems they developed, with “potentially devastating consequences for global security”.

Still according to the document, “the rise of AI and AGI (artificial general intelligence) has the potential to destabilize global security in a manner reminiscent of the introduction of nuclear weapons”.

There is a severe risk of an AI arms race that could lead to conflict and accidents on the scale of weapons of mass destruction. This is why Gladstone is calling for further action against this threat.

The firm calls in particular for the creation of a new AI agency, the imposition of emergency regulatory safeguards, and limits on the computing power that can be used to train AI models.

They believe there is a “clear and urgent need” for the American government to intervene. For its part, the EU has just adopted the first AI Act.

Concern is growing behind the scenes of the AI industry

The Gladstone researchers explain that they were able to draw these alarming conclusions thanks to an unprecedented level of access to information from both the public and private sectors.

In particular, they were able to speak with technical teams and managers from OpenAI, Google DeepMind, Meta, and Anthropic, which has just launched its super-powerful Claude 3 AI.

In a presentation video, CEO Jeremie Harris explains that “along the way, we learned some thought-provoking things”.

Thus, “behind the scenes, the safety situation around advanced AI seems quite inadequate compared to the risks that AI could very soon pose to national security”.

In the past, many eminent experts have warned about the risks associated with AI. About a year ago, Geoffrey Hinton, considered the godfather of AI, quit his job at Google and claimed that there is a 10% chance that AI will lead to the extinction of humanity within 30 years.

At the 2023 Yale CEO Summit, 42% of CEOs believed that AI has the potential to destroy humanity within five to ten years.

Other AI bigwigs cited in the report include Elon Musk, FTC’s Lina Khan, and a former OpenAI executive.

Furthermore, Gladstone adds that some employees of AI companies share the same concerns privately.

This still secret AI terrifies experts

For example, “an individual from a well-known AI lab opined that if a specific next-generation AI model were ever released as open access, it would be horribly bad”.

For good reason, this model “has such powers of persuasion that it could destroy democracy if it were used to influence elections or manipulate voters”.

The report’s authors asked experts to privately share their personal estimates of the probability that an AI incident would lead to irreversible global effects in 2024. Responses ranged from 4% to 20%.

As usual, the greatest danger mentioned is the emergence of an AGI that could match or even surpass human learning capabilities. Such an AI is considered “the main risk factor for catastrophic loss of control”.

Furthermore, the document states that OpenAI, Google DeepMind, Anthropic, and Nvidia have all publicly estimated that an AGI could see the light of day by 2028.

It could introduce risks “like none the United States has ever encountered”, comparable to “weapons of mass destruction” if it were militarized.

For example, it could be used to design and carry out cyberattacks capable of destroying critical infrastructure such as the power grid, from a simple prompt.

Another risk would be the spread of disinformation campaigns designed to destabilize society and erode trust in institutions.

We could also fear drone swarm attacks, psychological manipulation, the militarization of science, or even an uncontrollable AI seeking to take power over humans.

How is the American government preparing?

White House spokesperson Robyn Patterson says President Joe Biden’s executive order on AI is “the most significant action any government has taken to seize the promise and manage the risks of artificial intelligence”.

She adds that the president and vice president will continue to work with their international partners and press Congress to pass legislation aimed at managing the risks associated with these emerging technologies.

When capitalism bites its own tail

Competitive pressure pushes companies to accelerate AI development at the expense of safety, creating the risk that the most advanced AI systems could be stolen and weaponized against the United States.

A statement reminiscent of the internal conflict that erupted at OpenAI in November 2023, when Sam Altman was removed from his position as CEO for a few days. The reason remains secret, but rumor has it that the firm had made a very dangerous discovery.

By stepping back to analyze the situation, we can see that capitalism risks driving our societies to self-destruction.
