5 February 2025

Jack Silva | SOPA Images | LightRocket | Getty Images

Google on Wednesday released Gemini 2.0, its “most capable” artificial intelligence model suite yet, to everyone.

In December, the company gave early access to developers and trusted testers, and wrapped some features into Google products, but this is a “general release,” according to Google.

The suite of models includes 2.0 Flash, described as “the workhorse model, optimal for high-volume, high-frequency tasks at scale”; 2.0 Pro Experimental, which is largely focused on coding performance; and 2.0 Flash-Lite, which Google bills as its “most cost-efficient model yet.”

Gemini Flash costs 10 cents per million tokens for text, image and video inputs, while Flash-Lite, its more cost-effective version, costs 0.75 of a cent for the same.
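Taken at face value, those per-million-token rates make input costs straightforward to estimate. A minimal sketch of the arithmetic, using the rates as quoted above; the constant and function names here are illustrative only and are not part of any Google API:

```python
# Illustrative arithmetic based on the rates quoted in this article
# (USD per 1 million input tokens); verify current pricing before relying on it.
FLASH_RATE_USD = 0.10          # Gemini 2.0 Flash: 10 cents per million tokens
FLASH_LITE_RATE_USD = 0.0075   # Gemini 2.0 Flash-Lite: 0.75 of a cent per million tokens

def input_cost_usd(tokens: int, rate_per_million: float) -> float:
    """Estimated cost of a prompt of `tokens` input tokens at a per-million-token rate."""
    return tokens / 1_000_000 * rate_per_million

# Example: a 50,000-token prompt
print(f"Flash:      ${input_cost_usd(50_000, FLASH_RATE_USD):.6f}")
print(f"Flash-Lite: ${input_cost_usd(50_000, FLASH_LITE_RATE_USD):.6f}")
```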

The continued releases are part of Google's broader strategy of investing heavily in “AI agents” as the AI arms race heats up among tech giants and startups alike.

Meta, Amazon, Microsoft, OpenAI and Anthropic are also moving toward agentic AI, or models that can complete complex multi-step tasks on a user's behalf, rather than the user having to walk them through every individual step.

Read more CNBC reporting on artificial intelligence

“Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision,” Google wrote in a December blog post, adding that Gemini 2.0 has “new advances in multimodality – like native image and audio output – and native tool use,” and that the family of models “will enable us to build new AI agents that bring us closer to our vision of a universal assistant.”

Anthropic, the Amazon-backed AI startup founded by former OpenAI executives, is a key rival in the race to develop AI agents. In October, the startup said its AI agents were able to use computers like humans do in order to complete complex tasks. The startup said the computer use capability allows its technology to interpret what is on a computer screen, select buttons, enter text, navigate websites and execute tasks through any software and real-time internet browsing.

The tool, Anthropic said, can “use computers in the same way we do” and can complete tasks with “dozens or even hundreds of steps.”

OpenAI recently released a similar tool, introducing a feature called Operator that will automate tasks such as planning vacations, filling out forms, making restaurant reservations and ordering groceries. The Microsoft-backed startup described Operator as “an agent that can go to the web to perform tasks for you.”

Earlier this week, OpenAI announced another tool called deep research, which allows an AI agent to compile complex research reports and analyze questions and topics of the user's choosing. In December, Google launched a similar tool of the same name, Deep Research, which acts as a “research assistant, exploring complex topics and compiling reports on your behalf.”

CNBC first reported in December that Google would make multiple AI features available early in 2025.

“In history, you don't always need to be first, but you have to execute well and be the best in class as a product,” CEO Sundar Pichai said at a strategy meeting at the time. “I think that's what 2025 is all about.”
