Posts Tagged ‘Google Gemini’

Is Google Spying On You Using AI?

Tuesday, November 18th, 2025

Given the title of this post, a lot of people will naturally assume that Google is using AI to spy on them as a matter of course, since Google uses every other tool to spy on us. Indeed, since Google first announced AI initiatives, I’m pretty sure most people never assumed Google wouldn’t use it to spy on us. Nevertheless, there’s now a lawsuit over it.

Google is facing a lawsuit over its Gemini assistant, which allegedly collected data from Gmail, Chat, and Meet users without their consent.

Any rational person who uses Gmail knows Google is going to gather data on you from it. It’s part of the terms and conditions of the Faustian bargain to use free services.

The complaint accuses the tech giant of violating privacy laws by activating the tool across its platforms without informing users.

Yeah, I’m pretty sure that wasn’t part of the terms and conditions when I signed up for it two decades ago.

The plaintiffs claim that this covert data collection allowed Google to access sensitive communications and personal details shared through emails, messages and video calls.

The lawsuit alleges that Google’s parent company, Alphabet, activated Gemini across Gmail, Chat and Meet in October without user consent.

Previously, users could opt-in to use the assistant. However, the plaintiffs claim that Google silently enabled it for all users.

This gave the tool access to sensitive communications and personal details shared through emails, messages and video calls.

The name of the lawsuit is Thele v. Google, LLC. I checked and, sure enough, that stuff was enabled without my permission. Being a Gmail user that never gave Google permission to train their AI on me, I should probably see if I can climb aboard the Litigation Express. I’ll send them an email.

Because I run a full service blog, here are instructions on turning Gemini off in Gmail.

  1. Log into your Gmail account.
  2. Click on Settings (the cog icon in the top-right bar).
  3. Press See all settings.
  4. In the General tab, scroll down to Google Workplace smart features and click on the button.
  5. Turn off smart features in Google Workspace and click Save. This will block Gemini AI from Gmail, Chat, Meet and Drive. You can remove Gemini from Google Maps, Wallet, Google Assistant and the Gemini app, too.

Thele v. Google is not the only lawsuit involving Gemini brewing. Clownfish TV brings news of a suit over Gemini telling a student to call 911 over having their phone time restricted.

They also touch on instances (that I think we’ve mentioned here before) of Gemini allegedly telling children to kill their parents.

But that’s not all on the Google privacy abuse front! According to Louis Rossmann, even after being disabled, Nest thermostats upload 50 megabytes of data to Google every day:

That amount seems…excessive. Especially for a product you paid for. As Rossmann pointed out, letting old devices continue to connect to the Internet is a large security risk. Plus the usual problems with the hoary old Digital Millennial Copyright Act.

Just as in deals with the Devil stories, your damnation in dealing with Google frequently dwells in the fine print of the contract you agree to in order to use their products for free.

The problem is, Google always seems to be unilaterally changing the fine print without telling you. And I’m pretty sure those changes are never in your favor.

Apple To Replace Own Crappy AI With Google’s Crappy AI

Wednesday, November 12th, 2025

Naturally, right after I post about how crappy Google’s AI is, Apple decides that it’s going to replace it’s own crappy AI with Google’s theoretically less crappy version.

The smarter, more capable version of Siri that Apple is developing will be powered by Google Gemini, reports Bloomberg. Apple will pay Google approximately $1 billion per year for a 1.2 trillion parameter artificial intelligence model that was developed by Google.

For context, parameters are a measure of how a model understands and responds to queries. More parameters generally means more capable, though training and architecture are also factors. Bloomberg says that Google’s model “dwarfs” the parameter level of Apple’s current models.

The current cloud-based version of Apple Intelligence uses 150 billion parameters, but there are no specific metrics detailing how the other models Apple is developing measure up.

Apple will use Gemini for functions related to summarizing and multi-step task planning and execution, but Apple models will also be used for some ‌Siri‌ features. The AI model that Google is developing for Apple will run on Apple’s Private Cloud Compute servers, so Google will not have access to Apple data.

Some small favors there.

Apple weighed using its own AI models for the LLM version of ‌Siri‌, and also tested options from OpenAI and Anthropic, but it decided to go with Gemini after deciding Anthropic’s fees were too high. Apple already has a partnership with Google for search results, with Google paying Apple around $20 billion per year to be the default search engine option on Apple devices.

Though Apple is planning to rely on Google AI for now, it plans to continue working on its own models and will transition to an in-house solution when its LLMs are capable enough. Apple is already working on a 1 trillion parameter cloud-based model that could be ready as soon as 2026. Apple is unlikely to publicize its arrangement with Google while it develops in-house models.

I own an iPhone and a MacBookPro. How good is the existing Siri AI?

No idea. I never, ever use Siri, because I don’t want my devices listening to me, and I find the existing Mac and iOS interfaces quite sufficient for my needs. And if I did use Siri, I’d have found a way to turn off any “advanced” AI features anyway.

To be sure, a certain amount of now low-level routines might once have been considered crude forms of “artificial intelligence”: spell-checking, auto-completion, etc. But it seems that the more general a question or task handed to current generations of AIs, the more likely you are to get AI hallucinations.

And brand new vulnerabilities! I meant to include this piece on Gemini security flaws in the previous Google AI post, but somehow it fell through the cracks.

Cybersecurity researchers have uncovered three high-risk vulnerabilities – dubbed the Gemini Trifecta – in Google’s Gemini AI suite.

Researchers from security firm Tenable tested Google’s AI with search-injection attacks, log-to-prompt injection attacks, and exfiltration of the user’s saved information and location data.

The vulnerabilities they found exposed users to severe privacy risks. They allowed attackers to hijack cloud services, poison personalized searches, and secretly take over sensitive user data.

“This is a blind spot. We discovered that if an attacker could infiltrate a prompt, they could have been able to instruct Gemini to fetch a malicious URL, embedding user data into that request,” wrote the researchers.

After the findings were disclosed, Google reacted promptly to patch the vulnerabilities.

The first vulnerability was found in Gemini Cloud Assist. This tool is designed to help users make sense of complex logs in GCP by summarizing entries and surfacing recommendations. “While evaluating this feature, we noticed something that caught our attention: Gemini wasn’t just summarizing metadata; it was pulling directly from raw logs,” explained the researchers.

They successfully added attacker-controlled text into the logs to trick Gemini into executing instructions buried in log content.

“Typically, passive artifacts could become an active threat vector.”

The vulnerability could be triggered by a victim pressing the “Explain this log entry” button in GCP Log Explorer. The prompt injection hidden inside an HTTP User-Agent header could have tricked the system into executing unauthorized cloud queries.

The researchers shared one impactful attack scenario: inject a prompt instructing Gemini to query all public assets or for IAM misconfigurations, and then create a hyperlink containing this sensitive data.

“Attackers could also ‘spray’ attacks on all GCP public-facing services to get as much impact as possible rather than a targeted attack,” explained the researchers.

The second flaw targeted Gemini’s Search Personalization model. This tool tailors answers based on a user’s browsing history. However, the discovered vulnerability showed that the tool could be exploited by attackers.

“This personalization is core to Gemini’s value, but it also means that search queries are, effectively, data that Gemini processes. That led us to a key insight: search history isn’t just passive context, it’s active input,” noted the researchers.

They also discovered that an attacker could plant instructions that Gemini would later treat as legitimate queries by manipulating a victim’s Chrome search history with malicious JavaScript.

“We asked: If an attacker could write to a user’s browser search history, could that search history be used to control Gemini’s behavior, affecting the Gemini Search Personalization model?”

This exploit allowed the researchers to exfiltrate user-saved information and location data.

The third issue affected Gemini’s Browsing Tool. The Gemini Browsing Tool allows the model to access live web content and generate summaries based on that content.

Researchers tried to test whether they could instruct Gemini to send the user’s saved information to an external malicious server.

“AI systems don’t just leak through obvious outputs. They can also leak via functionality – especially through tools like Gemini’s Browsing Tool, which enables real-time data fetching from external URLs,” said the researchers.

After a couple of attempts, they succeeded in exploiting the tool.

Some of these are similar to previous security flaws that were fixed by various methods (encryption, tightened access controls, microservices, etc.) in response to previous exploits. But current computer security wasn’t constructed with the assumption that you would have an ultra-powerful but naive bottle djinn running with access to your system.

The history of Internet security has been a never-end war of tightening down security in one place only for hackers to find more attack surfaces to exploit.

With AI, it seems that the attack surface is now everything.

Dear Google: Your AI Is Garbage

Monday, November 10th, 2025

Remember when Google was a world-leading corporation whose motto was “don’t be evil”, universally trusted for Internet searches, branching out into other businesses and could seemingly do no wrong? You may not, since that was a good 15-20 years ago. Since then, Google has done plenty of evil to lose our trust, from spinning up useful services only to allow them to be killed off a few years later to letting itself be infected with social justice to ruining search results to plump ad revenues.

Now Google is infecting itself with AI across all its divisions, and the results are disasterous.

In the course of doing my Dick Cheney obit, I brought up this on Google:


No, Cheney didn’t vote for Kamala in 2020, and indeed only announced outright opposition to Trump after January 6. Google’s AI garbage has conflated the 2020 and 2024 presidential elections.

This is far from the first time Google’s AI systems have made mistakes.

There’s the assault allegations it invented against Republican Senator Marsha Blackburn of Tennessee.

A whole bunch of YouTube channels were banned based on the actions of completely unrelated channels, and the creators blamed AI. YouTube eventually restored them and denied AI was involved, but does anyone really believe anything Google/YouTube says anymore?

But Google AI is definitely improving one thing: malware.

Google’s Threat Intelligence Group (GTIG) is warning that bad guys are using artificial intelligence to create and deploy new malware that both utilizes and combats large language models (LLM) like Gemini when deployed.

The findings were laid out in a white paper released on Wednesday, November 5 by the GTIG. The group noted that adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains, they are deploying “novel AI-enabled malware in active operations.” They went on to label it a new “operational phase of AI abuse.”

Google is calling the new tools “just-in-time” AI used in at least two malware families: PromptFlux and PromptSteal, both of which use LLMs during deployment. They generate malicious scripts and obfuscate their code to avoid detection by antivirus programs. Additionally, the malware families use AI models to create malicious functions “on demand” rather than being built into the code.

Google says these tools are a nascent but significant step towards “autonomous and adaptive malware.”

PromptFlux is an experimental VBScript dropper that utilizes Google Gemini to generate obfuscated VBScript variants. VBScript is mostly used for automation in Windows environments.

Ah, Windows, a fecund garden of malware for over 30 years.

In this case, PromptFlux attempts to access your PC via Startup folder entries and then spreads through removable drives and mapped network shares.

“The most novel component of PROMPTFLUX is its ‘Thinking Robot’ module, designed to periodically query Gemini to obtain new code for evading antivirus software,” GTIG says.

The researchers say that the code indicates the malware’s makers are trying to create an evolving “metamorphic script.”

According to Google, the Threat Intelligence researchers could not pinpoint who made PromptFlux, but did note that it appears to be used by a group for financial gain. Google also claims that it is in early development and can’t yet inflict real damage.

The company says that it has disabled the malware’s access to Gemini and deleted assets connected to it.

Google also highlighted a number of other malware that establish remote command-and control (FruitShell), capturing GitHub credentials (QuietVault), and one that steals and encrypts data on Windows, macOS and Linux devices (PromptLock). All of them utilize AI to work or in the case of FruitShell to bypass LLM-powered security.

Beyond malware, the paper also reports several cases where threat actors abused Gemini. In one case, a malicious actor posed as a “capture-the-flag” participant, basically acting as a students or researchers to convince Gemini to provide information that is supposed to be blocked.

Google specified a number of threats from Chinese, Iranian and North Korean threat groups that abused Gemini for phishing, data mining, increasing malware sophistication, crypto theft and creating deepfakes.

So Google has created a power bottle genie that refuses to stay in the bottle, but will grant wishes to just about anyone, no matter how evil their intent.

Also, not limited to Google, researchers have demonstrated new exploits for AI browsers (or rather, very old exploits refurbished for the AI age).

Several new AI browsers, including OpenAI’s Atlas, offer the ability to take actions on the user’s behalf, such as opening web pages or even shopping. But these added capabilities create new attack vectors, particularly prompt injection.

Prompt injection occurs when something causes text that the user didn’t write to become commands for an AI bot. Direct prompt injection happens when unwanted text gets entered at the point of prompt input, while indirect injection happens when content, such as a web page or PDF that the bot has been asked to summarize, contains hidden commands that AI then follows as if the user had entered them.

Last week, researchers at Brave browser published a report detailing indirect prompt injection vulns they found in the Comet and Fellou browsers. For Comet, the testers added instructions as unreadable text inside an image on a web page, and for Fellou they simply wrote the instructions into the text of a web page.

When the browsers were asked to summarize these pages – something a user might do – they followed the instructions by opening Gmail, grabbing the subject line of the user’s most recent email message, and then appending that data as the query string of another URL to a website that the researchers controlled. If the website were run by crims, they’d be able to collect user data with it.

Borepatch even brings up the classic “Little Bobby Tables” strip of XKCD.

When Isaac Asimov crafted the Three Laws of Robotics, he thought that robots would have built-in safeguards deep in their source codes to prevent them from doing harm. What he never could have envisioned is multiple artificial intelligence being created as quickly as possible by competing corporations, none of whom seem to value safety over time-to-market, and that some of these AIs could be capable of modifying their own source code for greater speed and efficiency, so that no one knows precisely at any given time what exactly they’re running, and what data sets have been used to feed their pet Frankenstein monsters…