Naturally, right after I post about how crappy Google’s AI is, Apple decides that it’s going to replace it’s own crappy AI with Google’s theoretically less crappy version.
The smarter, more capable version of Siri that Apple is developing will be powered by Google Gemini, reports Bloomberg. Apple will pay Google approximately $1 billion per year for a 1.2 trillion parameter artificial intelligence model that was developed by Google.
For context, parameters are a measure of how a model understands and responds to queries. More parameters generally means more capable, though training and architecture are also factors. Bloomberg says that Google’s model “dwarfs” the parameter level of Apple’s current models.
The current cloud-based version of Apple Intelligence uses 150 billion parameters, but there are no specific metrics detailing how the other models Apple is developing measure up.
Apple will use Gemini for functions related to summarizing and multi-step task planning and execution, but Apple models will also be used for some Siri features. The AI model that Google is developing for Apple will run on Apple’s Private Cloud Compute servers, so Google will not have access to Apple data.
Some small favors there.
Apple weighed using its own AI models for the LLM version of Siri, and also tested options from OpenAI and Anthropic, but it decided to go with Gemini after deciding Anthropic’s fees were too high. Apple already has a partnership with Google for search results, with Google paying Apple around $20 billion per year to be the default search engine option on Apple devices.
Though Apple is planning to rely on Google AI for now, it plans to continue working on its own models and will transition to an in-house solution when its LLMs are capable enough. Apple is already working on a 1 trillion parameter cloud-based model that could be ready as soon as 2026. Apple is unlikely to publicize its arrangement with Google while it develops in-house models.
I own an iPhone and a MacBookPro. How good is the existing Siri AI?
No idea. I never, ever use Siri, because I don’t want my devices listening to me, and I find the existing Mac and iOS interfaces quite sufficient for my needs. And if I did use Siri, I’d have found a way to turn off any “advanced” AI features anyway.
To be sure, a certain amount of now low-level routines might once have been considered crude forms of “artificial intelligence”: spell-checking, auto-completion, etc. But it seems that the more general a question or task handed to current generations of AIs, the more likely you are to get AI hallucinations.
And brand new vulnerabilities! I meant to include this piece on Gemini security flaws in the previous Google AI post, but somehow it fell through the cracks.
Cybersecurity researchers have uncovered three high-risk vulnerabilities – dubbed the Gemini Trifecta – in Google’s Gemini AI suite.
Researchers from security firm Tenable tested Google’s AI with search-injection attacks, log-to-prompt injection attacks, and exfiltration of the user’s saved information and location data.
The vulnerabilities they found exposed users to severe privacy risks. They allowed attackers to hijack cloud services, poison personalized searches, and secretly take over sensitive user data.
“This is a blind spot. We discovered that if an attacker could infiltrate a prompt, they could have been able to instruct Gemini to fetch a malicious URL, embedding user data into that request,” wrote the researchers.
After the findings were disclosed, Google reacted promptly to patch the vulnerabilities.
The first vulnerability was found in Gemini Cloud Assist. This tool is designed to help users make sense of complex logs in GCP by summarizing entries and surfacing recommendations. “While evaluating this feature, we noticed something that caught our attention: Gemini wasn’t just summarizing metadata; it was pulling directly from raw logs,” explained the researchers.
They successfully added attacker-controlled text into the logs to trick Gemini into executing instructions buried in log content.
“Typically, passive artifacts could become an active threat vector.”
The vulnerability could be triggered by a victim pressing the “Explain this log entry” button in GCP Log Explorer. The prompt injection hidden inside an HTTP User-Agent header could have tricked the system into executing unauthorized cloud queries.
The researchers shared one impactful attack scenario: inject a prompt instructing Gemini to query all public assets or for IAM misconfigurations, and then create a hyperlink containing this sensitive data.
“Attackers could also ‘spray’ attacks on all GCP public-facing services to get as much impact as possible rather than a targeted attack,” explained the researchers.
The second flaw targeted Gemini’s Search Personalization model. This tool tailors answers based on a user’s browsing history. However, the discovered vulnerability showed that the tool could be exploited by attackers.
“This personalization is core to Gemini’s value, but it also means that search queries are, effectively, data that Gemini processes. That led us to a key insight: search history isn’t just passive context, it’s active input,” noted the researchers.
They also discovered that an attacker could plant instructions that Gemini would later treat as legitimate queries by manipulating a victim’s Chrome search history with malicious JavaScript.
“We asked: If an attacker could write to a user’s browser search history, could that search history be used to control Gemini’s behavior, affecting the Gemini Search Personalization model?”
This exploit allowed the researchers to exfiltrate user-saved information and location data.
The third issue affected Gemini’s Browsing Tool. The Gemini Browsing Tool allows the model to access live web content and generate summaries based on that content.
Researchers tried to test whether they could instruct Gemini to send the user’s saved information to an external malicious server.
“AI systems don’t just leak through obvious outputs. They can also leak via functionality – especially through tools like Gemini’s Browsing Tool, which enables real-time data fetching from external URLs,” said the researchers.
After a couple of attempts, they succeeded in exploiting the tool.
Some of these are similar to previous security flaws that were fixed by various methods (encryption, tightened access controls, microservices, etc.) in response to previous exploits. But current computer security wasn’t constructed with the assumption that you would have an ultra-powerful but naive bottle djinn running with access to your system.
The history of Internet security has been a never-end war of tightening down security in one place only for hackers to find more attack surfaces to exploit.
With AI, it seems that the attack surface is now everything.
