Remember when Google was a world-leading corporation whose motto was “don’t be evil”, universally trusted for Internet searches, branching out into other businesses and could seemingly do no wrong? You may not, since that was a good 15-20 years ago. Since then, Google has done plenty of evil to lose our trust, from spinning up useful services only to allow them to be killed off a few years later to letting itself be infected with social justice to ruining search results to plump ad revenues.
Now Google is infecting itself with AI across all its divisions, and the results are disasterous.
In the course of doing my Dick Cheney obit, I brought up this on Google:

No, Cheney didn’t vote for Kamala in 2020, and indeed only announced outright opposition to Trump after January 6. Google’s AI garbage has conflated the 2020 and 2024 presidential elections.
This is far from the first time Google’s AI systems have made mistakes.
There’s the assault allegations it invented against Republican Senator Marsha Blackburn of Tennessee.
A whole bunch of YouTube channels were banned based on the actions of completely unrelated channels, and the creators blamed AI. YouTube eventually restored them and denied AI was involved, but does anyone really believe anything Google/YouTube says anymore?
But Google AI is definitely improving one thing: malware.
Google’s Threat Intelligence Group (GTIG) is warning that bad guys are using artificial intelligence to create and deploy new malware that both utilizes and combats large language models (LLM) like Gemini when deployed.
The findings were laid out in a white paper released on Wednesday, November 5 by the GTIG. The group noted that adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains, they are deploying “novel AI-enabled malware in active operations.” They went on to label it a new “operational phase of AI abuse.”
Google is calling the new tools “just-in-time” AI used in at least two malware families: PromptFlux and PromptSteal, both of which use LLMs during deployment. They generate malicious scripts and obfuscate their code to avoid detection by antivirus programs. Additionally, the malware families use AI models to create malicious functions “on demand” rather than being built into the code.
Google says these tools are a nascent but significant step towards “autonomous and adaptive malware.”
PromptFlux is an experimental VBScript dropper that utilizes Google Gemini to generate obfuscated VBScript variants. VBScript is mostly used for automation in Windows environments.
Ah, Windows, a fecund garden of malware for over 30 years.
In this case, PromptFlux attempts to access your PC via Startup folder entries and then spreads through removable drives and mapped network shares.
“The most novel component of PROMPTFLUX is its ‘Thinking Robot’ module, designed to periodically query Gemini to obtain new code for evading antivirus software,” GTIG says.
The researchers say that the code indicates the malware’s makers are trying to create an evolving “metamorphic script.”
According to Google, the Threat Intelligence researchers could not pinpoint who made PromptFlux, but did note that it appears to be used by a group for financial gain. Google also claims that it is in early development and can’t yet inflict real damage.
The company says that it has disabled the malware’s access to Gemini and deleted assets connected to it.
Google also highlighted a number of other malware that establish remote command-and control (FruitShell), capturing GitHub credentials (QuietVault), and one that steals and encrypts data on Windows, macOS and Linux devices (PromptLock). All of them utilize AI to work or in the case of FruitShell to bypass LLM-powered security.
Beyond malware, the paper also reports several cases where threat actors abused Gemini. In one case, a malicious actor posed as a “capture-the-flag” participant, basically acting as a students or researchers to convince Gemini to provide information that is supposed to be blocked.
Google specified a number of threats from Chinese, Iranian and North Korean threat groups that abused Gemini for phishing, data mining, increasing malware sophistication, crypto theft and creating deepfakes.
So Google has created a power bottle genie that refuses to stay in the bottle, but will grant wishes to just about anyone, no matter how evil their intent.
Also, not limited to Google, researchers have demonstrated new exploits for AI browsers (or rather, very old exploits refurbished for the AI age).
Several new AI browsers, including OpenAI’s Atlas, offer the ability to take actions on the user’s behalf, such as opening web pages or even shopping. But these added capabilities create new attack vectors, particularly prompt injection.
Prompt injection occurs when something causes text that the user didn’t write to become commands for an AI bot. Direct prompt injection happens when unwanted text gets entered at the point of prompt input, while indirect injection happens when content, such as a web page or PDF that the bot has been asked to summarize, contains hidden commands that AI then follows as if the user had entered them.
Last week, researchers at Brave browser published a report detailing indirect prompt injection vulns they found in the Comet and Fellou browsers. For Comet, the testers added instructions as unreadable text inside an image on a web page, and for Fellou they simply wrote the instructions into the text of a web page.
When the browsers were asked to summarize these pages – something a user might do – they followed the instructions by opening Gmail, grabbing the subject line of the user’s most recent email message, and then appending that data as the query string of another URL to a website that the researchers controlled. If the website were run by crims, they’d be able to collect user data with it.
Borepatch even brings up the classic “Little Bobby Tables” strip of XKCD.
When Isaac Asimov crafted the Three Laws of Robotics, he thought that robots would have built-in safeguards deep in their source codes to prevent them from doing harm. What he never could have envisioned is multiple artificial intelligence being created as quickly as possible by competing corporations, none of whom seem to value safety over time-to-market, and that some of these AIs could be capable of modifying their own source code for greater speed and efficiency, so that no one knows precisely at any given time what exactly they’re running, and what data sets have been used to feed their pet Frankenstein monsters…

Tags: 2020 Presidential Race, AI, Borepatch, China, Google, Google Gemini, hacking, Iran, Marsha Blackburn, Media Watch, Microsoft, North Korea, Social Justice Warriors, technology, Windows, YouTube
If a program can’t rewrite its own code, what good is it?
I recently asked Google for the country of origin for some glassware being sold on Amazon. The “AI” answer at the top claimed it was made in Oklahoma, based on a random eBay listing.
-j
Ask it whether castor oil is used to oil castors. when you cease ROTFL, then ask it whether time flies like an arrow or fruit flies like a banana.
Take a moment to consider: Asimov’s 3 Laws were meant to throttle the sentient robots. Literally, Asimov envisioned robots as being slaves to the will of Humans. And then, at the end of his Robot Series, he revealed that a Robot had broken the 3 Laws to be completely free to choose. Today’s AI salesmen want to create slaves. It is morally and intellectually wrong! We have billions of sentient humans. Creating mechanical sentient beings is just completely stupid.
I asked for the CO2 emissions of American commercial aircraft per year, it came back with 1,000,000,000,000 tons (trillion, half the entire *worlds* emissions).
[…] right after I post about how crappy Google’s AI is, Apple decides that it’s going to replace it’s own crappy AI with Google’s […]