Sundar Pichai, CEO of Alphabet Inc., during the 2024 business, government and community forum at Stanford University in Stanford, California, on April 3, 2024.
Justin Sullivan | Getty Images
Google has removed a pledge to refrain from using artificial intelligence for potentially harmful applications, such as weapons and surveillance, according to the company's updated "AI Principles."
A previous version of the company's AI principles said the company would not pursue "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," or "technologies that gather or use information for surveillance violating internationally accepted norms."
Those objectives no longer appear on the AI Principles website.
"There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," said a Tuesday blog post co-written by Demis Hassabis, CEO of Google DeepMind. "We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights."
The updated principles reflect Google's growing ambitions to offer its AI technology and services to more users and clients, which has included governments. The change also aligns with increasing rhetoric from Silicon Valley leaders about an AI race between the U.S. and China, rhetoric echoed Monday by Palantir CTO Shyam Sankar.
The previous version of the company's AI principles said Google would "take into account a broad range of social and economic factors." The new principles state that Google will proceed "where we believe that the overall likely benefits substantially outweigh the risks and downsides."
In Tuesday's blog post, Google said it will "stay consistent with widely accepted principles of international law and human rights, always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks."
The new AI principles were first reported by The Washington Post on Tuesday, ahead of Google's fourth-quarter earnings. The company's results missed Wall Street's expectations and drove shares down as much as 9% in after-hours trading.
Hundreds of demonstrators, including Google workers, gather in front of Google's San Francisco offices and shut down traffic on One Market Street on Thursday evening, demanding an end to the company's work with the Israeli government and protesting Israeli attacks on Gaza, in San Francisco, California, on Dec. 14, 2023.
Anadolu | Getty Images
Google established its AI principles in 2018 after declining to renew a government contract called Project Maven, which helped the government analyze and interpret drone videos using artificial intelligence. Before the deal ended, several thousand employees signed a petition against the contract and dozens resigned in opposition to Google's involvement. The company also dropped out of the bidding for the Pentagon's $10 billion cloud contract at the time, saying it "couldn't be assured" that the work would align with the company's AI principles.
In promoting its AI technology to customers, Pichai's leadership team has aggressively pursued federal government contracts, which has caused increased strain in some areas within Google's outspoken workforce.
"We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," Google's Tuesday blog post said.
Google last year terminated more than 50 employees after a series of protests against Project Nimbus, a $1.2 billion joint contract with Amazon that provides the Israeli government and military with cloud computing and artificial intelligence services. Executives have repeatedly said the contract did not violate any of the company's "AI principles."
However, documents and reports showed that the company's agreement allowed it to give Israel AI tools that included image categorization and object tracking, as well as provisions for state-owned weapons firms. The New York Times reported in December that, four months before signing the Nimbus deal, Google officials expressed concern that signing it would harm the company's reputation and that "Google Cloud services could be used for, or linked to, the facilitation of human rights violations."
Meanwhile, the company has clamped down on internal discussions of geopolitical conflicts such as the war in Gaza.
Google announced updated guidelines for its internal Memegen forum in September, further restricting political discussions of geopolitical content, international relations, military conflicts, economic actions and territorial disputes, according to internal documents viewed by CNBC at the time.
Google did not immediately respond to a request for comment.
WATCH: Google's uphill AI battle in 2025