7 February 2025

Yoshua Bengio (L) and Max Tegmark (R) discuss the development of artificial general intelligence during a live podcast recording of CNBC's "Beyond the Valley" in Davos, Switzerland, in January 2025.

CNBC

Two of the world's leading artificial intelligence scientists told CNBC that AI built as "agents" could be dangerous because its creators may lose control of the systems.

In the latest episode of CNBC's "Beyond the Valley" podcast, released on Tuesday, Max Tegmark, a professor at the Massachusetts Institute of Technology and president of the Future of Life Institute, and Yoshua Bengio, often called one of the "godfathers of AI" and a professor at the University of Montreal, spoke about their concerns regarding artificial general intelligence, or AGI. The term broadly refers to AI systems that match or exceed human intelligence.

Their concerns come as the world's largest companies now talk about "agentic AI" — systems that companies claim will allow AI chatbots to act as assistants or agents, helping with work and daily life. Industry estimates vary on when AGI will come into existence.

Implicit in that concept is the idea that AI systems could have "agency" and thoughts of their own, according to Bengio.

"AI researchers have been inspired by human intelligence to build machine intelligence, and, in humans, there's a mix of both the ability to understand the world, like pure intelligence, and the agentic behavior, meaning ... to use your knowledge to achieve goals," Bengio told CNBC's "Beyond the Valley."

"Right now, this is how we're building AGI: we are trying to make them agents that understand a lot about the world, and then can act accordingly. But this is actually a very dangerous proposition."

Bengio added that pursuing this approach would be like "creating a new species or a new intelligent entity on this planet" without "knowing if they're going to behave in ways that agree with our needs."

"Instead, we can ask: what are the scenarios in which things go badly, and they all depend on agency? In other words, it is because the AI has its own goals that we could be in trouble."

Bengio said the goal of self-preservation could also kick in as AI gets smarter.

"Do we want to be in competition with entities that are smarter than us? It's not a very reassuring gamble, right? So we have to understand how self-preservation can emerge as a goal in AI."

'Tool AI' the key

For MIT's Tegmark, the key lies in so-called "tool AI": systems created for a specific, narrow purpose, rather than agents.

Tegmark said a tool AI could be a system that tells you how to cure cancer, or something that possesses "some agency," like a self-driving car, "where you can prove or get some really high, really reliable guarantees that you're still going to be able to control it."

"I think, on an optimistic note here, we can have almost everything that we're excited about with AI ... if we simply insist on having some basic safety standards before people can sell powerful AI systems," Tegmark said.

"They have to demonstrate that we can keep them under control. Then the industry will innovate rapidly to figure out how to do that better."

Tegmark's Future of Life Institute in 2023 called for a pause on the development of AI systems that can compete with human-level intelligence. While that has not happened, Tegmark said people are now talking about the topic, and that it is time to take action to figure out how to put guardrails in place to control AGI.

"So at least now a lot of people are talking the talk. We have to see if we can get them to walk the walk," Tegmark told CNBC's "Beyond the Valley."

"It's clearly insane for us to build something way smarter than us before we've figured out how to control it."

There are many views on when AGI will arrive, driven in part by differing definitions.

OpenAI CEO Sam Altman has said his company knows how to build AGI and that it will arrive sooner than people think, though he downplayed the technology's impact.

"My guess is we will hit AGI sooner than most people in the world think and it will matter much less," Altman said in December.
