Google Gemini 2.0 Flash series AI models debut, taking programming and reasoning performance to a new level


Source: IT Home

Google published a blog post yesterday (February 5) announcing that all Gemini app users can now access the latest Gemini 2.0 Flash model, and releasing the experimental 2.0 Flash Thinking reasoning model.

2.0 Flash: Newly Updated and Generally Available

The 2.0 Flash model was first unveiled at Google I/O 2024 and quickly became a popular choice among developers for its low latency and high performance. The model is suited to large-scale, high-frequency tasks, can handle context windows of up to 1 million tokens, and demonstrates strong multimodal reasoning capabilities.

The Gemini 2.0 Flash model can interact with apps including YouTube, Google Search, and Google Maps, helping users discover and expand their knowledge across a range of scenarios.

Gemini 2.0 Flash Thinking Model

Gemini 2.0 Flash Thinking builds on the speed and performance of 2.0 Flash. The model is trained to break prompts down into a series of steps, strengthening its reasoning capabilities and producing higher-quality responses.

The 2.0 Flash Thinking Experimental model shows its thinking process: users can see why it responded a certain way and what assumptions it made, and can trace the model's reasoning. This transparency gives users deeper insight into the model's decision-making.

Google has also launched a version of 2.0 Flash Thinking that interacts with apps such as YouTube, Search, and Google Maps. These connected apps make Gemini a uniquely capable AI assistant, and Google says it will explore how to combine the new reasoning capabilities with user applications to help users complete more tasks.

2.0 Pro Experimental: Best Coding Performance and Handling of Complex Prompts

Google also launched Gemini 2.0 Pro Experimental, which it says offers its best coding performance yet and excels at handling complex prompts. The model has a 2-million-token context window, allowing it to analyze and understand large volumes of information, and supports calling tools such as Google Search and code execution.

Developers can now try this experimental model in Google AI Studio and Vertex AI, and Gemini Advanced users can also access it on desktop and mobile.
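In Google AI Studio, these models are reached through the Gemini API's generateContent endpoint. A minimal sketch of building such a request follows; the model IDs in the table are illustrative assumptions (experimental model names change between releases), so check the current model list before use.

```python
import json

# Illustrative model IDs (assumptions -- verify against the current
# Google AI Studio model list; experimental names change over time).
MODELS = {
    "flash": "gemini-2.0-flash",
    "flash-thinking": "gemini-2.0-flash-thinking-exp",
    "pro": "gemini-2.0-pro-exp",
    "flash-lite": "gemini-2.0-flash-lite",
}

BASE_URL = "https://generativelanguage.googleapis.com/v1beta/models"


def build_generate_request(model_key: str, prompt: str) -> tuple:
    """Build the endpoint URL and JSON body for a generateContent call.

    Returns (url, body); the caller would POST the body with an API key
    supplied in the request header or query string.
    """
    model_id = MODELS[model_key]
    url = f"{BASE_URL}/{model_id}:generateContent"
    body = json.dumps({
        "contents": [{"role": "user", "parts": [{"text": prompt}]}]
    })
    return url, body


url, body = build_generate_request("pro", "Write a Python quicksort.")
print(url)
```

The request body shape (a `contents` list of role/parts messages) is the same across the Flash, Flash Thinking, Pro, and Flash-Lite variants, so switching models is just a matter of swapping the model ID in the URL.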

2.0 Flash-Lite: The Most Cost-Effective Model

Google AI Studio also launched the Gemini 2.0 Flash-Lite model, which Google says is its most cost-effective model to date. It is designed to keep costs low and responses fast while delivering higher quality than 1.5 Flash.

This model also supports a 1-million-token context window and multimodal input. For example, on Google AI Studio's paid tier it can generate a one-line caption for each of 40,000 unique photos for less than 1 dollar.
