7 January 2025

US President-elect Donald Trump and Elon Musk watch the launch of the sixth test flight of the SpaceX Starship rocket in Brownsville, Texas, on November 19, 2024.

Brandon Bell | Via Reuters

The American political landscape is set to undergo some shifts in 2025, and these changes will have some major implications for the regulation of AI.

President-elect Donald Trump will take office on January 20. He will be joined at the White House by a group of senior advisers from the business world, including Elon Musk and Vivek Ramaswamy, who are expected to influence policy thinking on emerging technologies such as artificial intelligence and cryptocurrencies.

Across the Atlantic, a tale of two jurisdictions has emerged, with the United Kingdom and the European Union diverging in their regulatory thinking. While the EU has taken a tougher stance on the Silicon Valley giants behind the most powerful artificial intelligence systems, Britain has adopted a more light-touch approach.

In 2025, the state of AI regulation globally could be in for a major overhaul. CNBC takes a look at some of the key developments to watch, from the evolution of the EU's landmark AI Act to what the Trump administration could do for the U.S.

Musk's influence on American politics

Elon Musk walks on Capitol Hill on the day of a meeting with Senate Republican leader-elect John Thune (R-SC), in Washington, US, December 5, 2024.

Benoit Tessier | Reuters

Although it was not an issue that featured significantly during Trump's election campaign, artificial intelligence is expected to be one of the key sectors to benefit under the incoming US administration.

For example, Trump appointed Musk, CEO of electric car maker Tesla, to co-lead the Department of Government Efficiency alongside Ramaswamy, an American biotech entrepreneur who dropped out of the 2024 presidential race to endorse Trump.

Appian CEO Matt Calkins told CNBC that Trump's close relationship with Musk could stand the United States in good stead when it comes to artificial intelligence, citing the billionaire's experience as a co-founder of OpenAI and CEO of xAI, his own AI lab, as positive indicators.

“We finally have one person in the administration who really knows about artificial intelligence and has an opinion about it,” Calkins said in an interview last month. Musk has been one of Trump's most prominent supporters in the business community, even appearing at some of his campaign rallies.

There is currently no confirmation of what Trump has planned in terms of possible presidential directives or executive orders. But Calkins thinks it is likely Musk will look to propose guardrails to ensure that AI development doesn't endanger civilization, a risk he has warned about several times in the past.

“He has an unquestionable reluctance to allow AI to cause catastrophic human outcomes,” Calkins told CNBC. “He's certainly concerned about that. He's been talking about it long before he had a political position.”

Currently, there is no comprehensive federal legislation on AI in the United States, but rather a patchwork of regulatory frameworks at the state and local levels, with several AI bills introduced in 45 states as well as Washington, D.C., Puerto Rico, and the US Virgin Islands.

The EU's AI Act

The EU is so far the only jurisdiction globally to push forward comprehensive rules for AI through its AI Act.

Jack Silva | NurPhoto | Getty Images

The European Union has so far been the only jurisdiction globally to move forward with comprehensive legal rules for the AI industry. Earlier this year, the EU AI Act, the first regulatory framework of its kind for AI, officially entered into force.

The law has not yet fully taken effect, but it is already causing tension among major US tech companies, which worry that some aspects of the regulation are too stringent and could quash innovation.

In December, the EU AI Office, a newly created body overseeing models under the AI Act, published a second draft of the code of practice for general-purpose AI (GPAI) models, which refers to systems such as OpenAI's GPT family of large language models, or LLMs.

The second draft included exemptions for providers of certain open-source AI models, which are typically made publicly available so that developers can build their own custom versions. It also included a requirement for developers of “systemic” GPAI models to undergo rigorous risk assessments.

The Computer and Communications Industry Association, whose members include Amazon, Google and Meta, warned that the draft “contains measures that go beyond the agreed scope of the law, such as far-reaching copyright measures.”

The EU AI Office was not immediately available for comment when contacted by CNBC.

It should be noted that the EU's AI Act is still a long way from full implementation.

As Shelley McKinley, chief legal officer at popular code repository platform GitHub, told CNBC in November, “The next phase of the work has begun, which may mean we have more ahead of us than behind us at this point.”

For example, in February, the first provisions of the Act become enforceable, covering “high-risk” AI applications such as remote biometric identification, loan decision-making and educational enrollment. A third draft of the code for GPAI models is scheduled to be published the same month.

European tech leaders worry that the EU's punitive measures on US tech companies risk sparking a response from Trump, which could in turn lead to the bloc softening its approach.

Take antitrust regulation, for example. The EU has been active in taking action to limit the dominance of US tech giants, but that is something that could trigger blowback from Trump, according to Swiss company Proton's CEO, Andy Yen.

“(Trump's) view is that he probably wants to regulate his own technology companies,” Yen told CNBC in a November interview at the Web Summit technology conference in Lisbon, Portugal. “He doesn't want Europe to get involved.”

Copyright review in the United Kingdom

British Prime Minister Keir Starmer gives a media interview while attending the 79th session of the United Nations General Assembly at the United Nations Headquarters in New York, United States, on September 25, 2024.

Leon Neal | Via Reuters

One country to watch is the United Kingdom. Britain has previously refrained from introducing statutory obligations for AI model makers out of fear that new legislation might prove too restrictive.

However, Keir Starmer's government has said it plans to legislate for AI, although details remain scant at the moment. The general expectation is that the UK will take a more principles-based approach to regulating AI, rather than the EU's risk-based framework.

Last month, the government gave its first major indication of where regulation is heading, announcing a consultation on measures to regulate the use of copyrighted content for training AI models. Copyright is a big issue for generative AI, and for rights holders in particular.

Most LLMs are trained on public data from the open web, but that often includes examples of artwork and other copyrighted material. Artists and publishers such as the New York Times claim that these systems are unfairly scraping their valuable content without consent to generate output.

To address this issue, the UK government is considering an exception to copyright law for AI model training, while still allowing rights holders to opt out of having their works used for training purposes.

Appian's Calkins said the UK could end up being a “global leader” on the issue of copyright infringement through AI models, adding that the country “is not subject to the same overwhelming pressure campaign from domestic AI leaders as the US.”

Relations between the United States and China are a potential point of tension

US President Donald Trump, right, and Chinese President Xi Jinping, walk past members of the People's Liberation Army during a welcoming ceremony outside the Great Hall of the People in Beijing, China, on Thursday, November 9, 2017.

Chilai Shen | Bloomberg | Getty Images

Finally, as world governments seek to regulate rapidly growing AI systems, there is a risk that geopolitical tensions between the United States and China could escalate under Trump.

In his first term as president, Trump imposed a number of tough policy measures toward China, including the decision to add Huawei to a trade blacklist restricting it from doing business with American technology suppliers. He also launched an effort to ban TikTok, which is owned by Chinese company ByteDance, in the US, although he has since softened his stance on TikTok.

China is racing to overtake the United States for dominance in AI. At the same time, the US has taken measures to restrict China's access to key technologies, particularly chips like those designed by Nvidia, which are required to train the most advanced AI models. China has responded by trying to build its own domestic chip industry.

Technology experts worry that the geopolitical divide between the United States and China over artificial intelligence could lead to other risks, such as the possibility of one of the two countries developing a form of AI more intelligent than humans.

Max Tegmark, founder of the non-profit Future of Life Institute, believes that the United States and China could in the future create a form of artificial intelligence that can improve itself and design new systems without human oversight, which could force the governments of both countries to individually come up with rules about AI safety.

“The optimistic way forward is for the United States and China to unilaterally impose national safety standards to prevent their companies from doing harm and building uncontrollable AGI, not to appease the rival superpower, but simply to protect themselves,” Tegmark told CNBC in a November interview.

Governments are already trying to work together to figure out how to create regulations and frameworks around AI. In 2023, the United Kingdom hosted a global AI Safety Summit, attended by both the US and Chinese administrations, to discuss potential guardrails around the technology.

— CNBC's Arjun Kharpal contributed to this report
