Apple has announced a major overhaul of Siri, powered by Google’s Gemini AI models, as part of a multi‑year partnership designed to turn Siri into a full‑fledged conversational assistant on par with leading chatbots.
What Apple officially announced
- In a joint statement, Apple and Google said that Apple’s new “Foundation Models” for Siri will be built on top of Google’s Gemini models and Google Cloud infrastructure, while still using Apple’s Private Cloud Compute framework for security and data handling.
- Tech reports indicate that the first Gemini‑powered Siri experience will roll out as a preview in an iOS 26.4 update, possibly as early as February, with a broader next‑generation Siri release tied to iOS 27 later in the year.
What will change in Siri
- The new Siri will be far more context‑aware, able to understand what is on your screen and work deeply across apps like Mail, Messages, Calendar and third‑party apps, handling multi‑step tasks such as summarising emails, scheduling meetings, and performing in‑app actions from a single natural‑language request.
- Siri is also set to gain chatbot‑like capabilities: web search with natural‑language answers, text and image generation, document and page summaries, and language correction, all powered by a customised Gemini model integrated into iOS, macOS and iPadOS.
Other AI models and privacy
- Even as Gemini becomes the default intelligence behind Siri, Apple is expected to keep optional integrations with other large models like ChatGPT for certain complex queries, rather than replacing those partnerships outright.
- Apple stresses that AI processing will run either on‑device or via an end‑to‑end encrypted private cloud; user data from Siri interactions will not be used to train Google’s public Gemini models, in line with Apple’s long‑standing privacy stance.