To no one's surprise, there have been a lot of AI-related announcements at the Google Cloud Next event. Even less surprising: Google's annual cloud computing conference has focused on new versions of its flagship Gemini model and advances in AI agents.
So, for those following the whiplash competition between AI heavy hitters like Google and OpenAI, let's unpack the latest Gemini updates.
On Wednesday, Google announced Gemini 2.5 Flash, a "workhorse" model adapted from its most advanced model, Gemini 2.5 Pro. Gemini 2.5 Flash has the same build as 2.5 Pro but has been optimized to be faster and cheaper to run. The model's speed and cost-efficiency come from its ability to adjust or "budget" its processing power based on the task at hand. This concept, known as "test-time compute," is the emerging technique that reportedly made DeepSeek's R1 model so cheap to train.
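The "budgeting" idea can be illustrated with a toy sketch. To be clear, the function and parameter names below are hypothetical, not Gemini's actual API: the point is just that a model spends extra reasoning steps only up to a configurable cap, trading answer quality for speed and cost.

```python
# Toy illustration of a "thinking budget": refinement steps run only
# while budget remains, so a smaller budget means a faster, cheaper
# (but possibly rougher) response.

def answer_with_budget(steps, thinking_budget):
    """Run at most `thinking_budget` refinement steps (hypothetical API)."""
    used = 0
    result = "draft"
    for refine in steps:
        if used >= thinking_budget:
            break  # budget exhausted: return the best answer so far
        result = refine(result)
        used += 1
    return result, used

# Each step "improves" the draft; with a budget of 2, only two run.
steps = [lambda s: s + "+pass1", lambda s: s + "+pass2", lambda s: s + "+pass3"]
result, used = answer_with_budget(steps, thinking_budget=2)
print(result, used)  # draft+pass1+pass2 2
```

A larger budget lets all steps run; a budget of zero returns the cheap draft immediately, which is the trade-off the article describes.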
Gemini 2.5 Flash isn't available just yet, but it's coming soon to Vertex AI, AI Studio, and the standalone Gemini app. On a related note, Gemini 2.5 Pro is now available in public preview on Vertex AI and the Gemini app. This is the model that has recently topped the leaderboards in the Chatbot Arena.
Google is also bringing these models to Google Workspace for new productivity-related AI features. That includes the ability to create audio versions of Google Docs, automated data analysis in Google Sheets, and something called Google Workspace Flows, a way of automating manual workflows like managing customer service requests across Workspace apps.
Agentic AI, a more advanced form of AI that reasons across multiple steps, is the main driver of the new Google Workspace features. But it's a challenge for all models to access the requisite data to perform tasks. Yesterday, Google announced that it's adopting the Model Context Protocol (MCP), an open-source standard developed by Anthropic that enables "secure, two-way connections between [developers'] data sources and AI-powered tools," as Anthropic explained.
"Developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers," read a 2024 Anthropic announcement describing how it works. Now, according to Google DeepMind CEO Demis Hassabis, Google is adopting MCP for its Gemini models.
Adopting the protocol will effectively let Gemini models quickly access the data they need, producing more reliable responses. Notably, OpenAI has also adopted MCP.
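Under the hood, MCP messages are JSON-RPC 2.0, and the spec defines methods such as `tools/list`, which a client (an AI application) sends to ask a server what tools it exposes. A minimal, hand-rolled sketch of building such a request (real clients would use an MCP SDK rather than constructing messages by hand):

```python
import json

def make_tools_list_request(request_id):
    """Build an MCP-style JSON-RPC 2.0 request asking a server to
    enumerate the tools it exposes to connected AI applications."""
    return json.dumps({
        "jsonrpc": "2.0",          # MCP messages follow JSON-RPC 2.0
        "id": request_id,           # lets the client match the reply
        "method": "tools/list",     # standard MCP method name
        "params": {},
    })

print(make_tools_list_request(1))
```

The server replies with a JSON-RPC result listing its tools, which is how a model like Gemini can discover what data sources and actions are available to it.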
And that was just the first day of Google Cloud Next. Day two will likely bring even more announcements, so stay tuned.