
Google's Gemini AI Evolution: Ordering Food and Booking Uber Rides Directly From Your Phone

Google's Gemini AI has been deeply integrated into Samsung's devices, marking a significant milestone in the partnership between the two tech giants. This close collaboration ensures that whenever Google introduces cutting-edge advancements to its AI suite, Galaxy users are often the first in line to experience them. The latest developments suggest that Gemini is moving beyond simple conversation and into the realm of "Agentic AI," where it can perform real-world tasks on your behalf.

  • ✨ Gemini is evolving into an "Agentic AI" capable of executing complex tasks within third-party applications.
  • ✨ New beta features suggest automation for booking Uber rides and ordering food deliveries without manual input.
  • ✨ Users will maintain full oversight, with the ability to monitor, stop, or take over any task the AI performs.
  • ✨ This update represents the natural progression of AI integration on mobile devices, focusing on practical productivity.
[Image: A Samsung Galaxy Z Flip device showing the Google Gemini AI interface for task automation]

Gemini appears set to gain new capabilities that would let it perform tasks within apps for you, such as ordering an Uber or arranging a food delivery, without you handling those steps manually. This shift toward agentic behavior means your Samsung Galaxy device will become more of a personal assistant and less of a simple tool.

The Rise of Agentic AI: The Next Frontier for Mobile Devices

The experts at 9to5Google have discovered intriguing details within the Google app 17.4 beta. Their investigation revealed references to a new feature called “Get tasks done with Gemini.” Based on the findings, it is highly likely that this will initially be offered as an experimental feature within Google Labs, allowing early adopters to test the boundaries of AI automation.

The beta version explains the feature clearly: “Gemini can help with tasks, like placing orders or booking rides, using screen automation on certain apps on your device.” This suggests that the AI isn't just communicating via APIs but is actually interacting with the app's interface to navigate through menus and selections.

Google is being transparent about the limitations of this technology. The company cautions users that Gemini can make mistakes and emphasizes that "You're responsible for what it does on your behalf, so supervise it closely." To ensure user safety and accuracy, Galaxy owners will have the power to stop the AI agent at any moment and manually complete the process.

In a practical scenario, you could simply tell Gemini to "get me an Uber to the office." Using its agentic AI capabilities, Gemini would launch the Uber app, input your destination, and prepare the booking for your final approval. The same logic applies to food delivery services. For instance, you could instruct Gemini to order a specific dish from your favorite Thai restaurant, and it would handle the navigation through Uber Eats or other supported platforms.
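The workflow described above can be pictured as a supervised loop: the agent plans a sequence of on-screen steps, performs them one at a time, and checks before each step whether the user has chosen to stop and take over. The sketch below is purely illustrative; the class name `AgentTask` and every step string are hypothetical and do not come from Google's implementation.

```python
# Hypothetical sketch of a supervised agentic task loop.
# AgentTask, its step strings, and should_stop are illustrative
# assumptions, not Google's or Gemini's actual API.

from dataclasses import dataclass, field


@dataclass
class AgentTask:
    goal: str
    steps: list            # ordered UI actions the agent plans to perform
    log: list = field(default_factory=list)
    stopped: bool = False

    def run(self, should_stop) -> str:
        """Perform each planned step, but let the supervising user
        interrupt before any step and finish the task manually."""
        for step in self.steps:
            if should_stop(step):
                self.stopped = True
                return f"Stopped before '{step}'; user takes over."
            self.log.append(step)  # stand-in for a real screen action
        return f"Done: {self.goal} (awaiting final user confirmation)"


task = AgentTask(
    goal="book an Uber to the office",
    steps=["open Uber app", "enter destination", "select ride", "prepare booking"],
)
# The user never intervenes, so all steps run and the booking
# waits for a final manual confirmation.
result = task.run(should_stop=lambda step: False)
print(result)
```

The key design point, matching the article, is that the loop never completes a payment on its own: the final confirmation always stays with the user, and the `should_stop` hook models the ability to halt the agent mid-task.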

This capability won't be available in every app immediately. Google is expected to roll out support for a handful of popular services initially, gradually expanding the list as the AI's reliability improves. This is the natural progression for Galaxy AI and mobile intelligence in general. While we don't have a definitive release date, the presence of these features in the beta version suggests that the future of hands-free app interaction is closer than ever.

What exactly is meant by "Agentic AI" in Gemini?

Agentic AI refers to the AI's ability to act as an "agent" that can perform multi-step tasks within applications on your behalf. Unlike a standard chatbot that only provides information, Agentic AI can actually interact with the user interface of an app to complete actions like booking a ride or placing an order.

Will this feature work on all Samsung Galaxy devices?

While Google and Samsung haven't confirmed the full list of compatible devices, it is expected that this feature will be available on modern Galaxy devices that support the latest versions of the Google app and Gemini, specifically those optimized for Galaxy AI.

Is it safe to let an AI order food or rides for me?

Google has built in several safeguards, including a requirement for user supervision. You can see the AI navigating the screen in real-time and can stop the process at any point. Final confirmation for payments will likely still require a manual tap or biometric authentication from the user.

Which apps will Gemini support for screen automation?

Initially, support is expected for major service platforms like Uber and Uber Eats. Google will likely expand this to other high-utility apps such as travel booking sites, grocery delivery services, and potentially even social media scheduling tools in the future.

🔎 The evolution of Gemini into a proactive agent marks a transformative moment for mobile technology. By bridging the gap between voice commands and app interaction, Google and Samsung are redefining what it means to have a "smart" phone. As these agentic capabilities move from beta testing to a wider release, the convenience of having an AI that can manage your daily chores will likely become an indispensable part of the Galaxy ecosystem.