Google has unveiled a round of 'agentic' AI features capable of taking actions on behalf of users. The development, powered by its flagship Gemini model, is designed to make the connection between its products and users more intelligent and personalised.
At its I/O 2025 event, held on Tuesday and Wednesday, the tech giant said the latest innovations underscore its commitment to making AI broadly more useful and intuitive across a wide range of activities. Google I/O is the company's flagship event, featuring its latest announcements and updates in technology.
As AI becomes a larger part of everyday activities, its growing adoption highlights how it is evolving from a complex theoretical concept into practical, helpful tools that solve real-world problems and enhance human capabilities.
In recognition of this wave, Google noted that Gemini has now been incorporated into all 15 of its products that collectively serve more than 500 million users. The company said the innovations unveiled at the event reflect a focus on expanding the scope of AI applications to a wider audience.
“This year’s proceedings highlighted the ongoing advancements in Google’s AI capabilities, with its most advanced models, Gemini, being extensively integrated across its product offerings and research initiatives,” the company said.
Among the announcements from the leading search engine company: Gemini 2.5 Flash becomes Google's default model, offering speed and efficiency; Imagen 4 and Veo 3 bring enhanced image and video generation; Flow arrives as an AI filmmaking tool; an advanced-reasoning AI Mode comes to Search; Gemini is introduced to Chrome and Google apps; and users gain the ability to delegate complex tasks to AI under 'Agent Mode'.
These innovations point to a flexible future for AI, showcasing how the technology is moving from the lab into everyday tools designed to serve a global audience.
At Google's I/O 2025, the company unveiled a number of tools, features, and advancements aimed at expanding the use of AI across its products.
The tech giant is integrating ‘agentic’ AI, designed to help users save time by performing tasks on their behalf, such as purchasing tickets for events, making restaurant reservations, or booking local appointments. According to the company, the AI Mode, which will start rolling out by the end of 2025, will scan websites, analyse options, handle complex form-filling, and present options that meet users’ criteria.
Gemini 2.5 Flash, now Google's default model, combines quality with efficiency, while a new Deep Think mode in Gemini 2.5 Pro is tailored towards enhanced reasoning and tackling complex challenges. In addition, new text-to-speech previews are being introduced, featuring native audio output and multi-speaker support for two voices.
For high-quality image and video generation, the company is introducing Imagen 4 and Veo 3. Imagen 4, described as a frontier-pushing image generation model, is now accessible within the Gemini app. The new model features improved colour generation and is up to 10x faster than its predecessor.
Veo 3 offers improved visual generation and makes video creation on Gemini more accessible than before. It includes native audio generation capabilities, allowing users to add sound effects and background noise, with dialogue creation coming soon. The product, described as Google's state-of-the-art video generation model, is now available in the Gemini app for Gemini subscribers in the United States.
Another innovative product is Flow, an AI filmmaking tool designed to help users seamlessly create cinematic clips, scenes, and stories with consistency. Flow allows users to create story elements like cast, locations, objects, and styles using natural language, simplifying complex filmmaking tasks.
Built for advanced reasoning, AI Mode in Search is described as the company's most powerful Search experience. Gemini 2.5 will be incorporated into Search for AI Mode and AI Overviews across the U.S. this week, with other markets to follow. The tech giant also said that Gemini in Chrome will begin rolling out on desktop to Google AI Pro and Google AI Ultra subscribers in the US who use English as their main Chrome language on Windows and macOS. The feature allows users to ask Gemini to clarify complex information on any web page.
Google is also introducing Deep Search capabilities that use an advanced "query fan-out" technique. The feature can initiate hundreds of searches, reason across diverse information, and produce a fully referenced report within minutes, saving users hours of research and helping them quickly grasp complex topics. Gemini Live, meanwhile, will soon integrate with Google services like Maps, Calendar, Tasks, and Keep for deeper, more seamless daily assistance.
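Google has not published how Deep Search implements its "query fan-out", but the general idea of fanning a broad question out into sub-queries, running them concurrently, and merging the results can be sketched in a few lines. The `decompose` and `search` functions below are hypothetical stand-ins, not Google's actual components:

```python
# Toy illustration of a "query fan-out": split one broad query into
# sub-queries, run them concurrently, and merge the results into a
# single referenced summary. All components here are illustrative.
from concurrent.futures import ThreadPoolExecutor

def decompose(query: str) -> list[str]:
    # Hypothetical: a real system would likely use an LLM to
    # generate the sub-queries rather than fixed templates.
    aspects = ["overview", "pros and cons", "recent developments"]
    return [f"{query} {a}" for a in aspects]

def search(sub_query: str) -> str:
    # Stand-in for a real search backend; returns a placeholder snippet.
    return f"result for: {sub_query}"

def fan_out(query: str) -> str:
    subs = decompose(query)
    # Issue all sub-queries in parallel, then merge in order.
    with ThreadPoolExecutor(max_workers=len(subs)) as pool:
        snippets = list(pool.map(search, subs))
    return "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))

print(fan_out("solid-state batteries"))
```

The merge step here is a simple numbered list; a production system would instead have a model reason over the gathered snippets to write the cited report.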
On work-in-progress features, Google said it is developing AI Mode in Labs, which will offer personalised suggestions based on past searches, making the search experience more relevant. For Google AI Ultra subscribers, an experimental version of Agent Mode will be introduced in the Gemini app, allowing users to delegate complex planning and tasks; it combines features like live web browsing for in-depth research.
In addition, the shopping experience has been upgraded with Gemini's capabilities: users can use an agentic checkout feature to make purchases with Google Pay when the price is right, with user guidance and oversight throughout. This shopping experience is expected to arrive in AI Mode in the U.S. in the coming months.