Robert Grupe's AISecNewsBits 2025-09-13

In This Week's Highlights:

Epic Fails

  • AI Darwin Awards launch to celebrate spectacularly bad deployments

  • Reddit bug caused lesbian subreddit to be labeled as a place for “straight” women

  • Education report calling for ethical AI use contains over 15 fake sources

Hacking

  • Modder injects AI dialogue into 2002’s Animal Crossing using memory hack

Vendors

  • Microsoft ends OpenAI exclusivity in Office, adds rival Anthropic

  • Microsoft folds Sales, Service, Finance Copilots into 365

  • Developers joke about “coding like cavemen” as AI service suffers major outage

  • Claude’s new AI file-creation feature ships with security risks built in

  • Pixel 10 fights AI fakes with new Android photo verification tech

  • It's AI all the way down as Google's AI cites web pages written by AI

  • Linus Torvalds Grows Frustrated Seeing "Garbage" With "Link: " Tags In Git Commits

  • Garak: Open-source LLM vulnerability scanner

Market

  • AI will consume all of IT by 2030—but not all IT jobs, Gartner says

  • In court filing, Google concedes the open web is in “rapid decline”

  • Why accessibility might be AI’s biggest breakthrough

  • Google Cloud chief details how search giant is making billions monetizing its AI products

  • Walmart's bet on AI depends on getting employees to use it

Legal

  • Judge: Anthropic’s $1.5B settlement is being shoved “down the throat of authors”

  • AI pricing is currently in a state of ‘pandemonium’ says Gartner

  • Pay-per-output? AI firms blindsided by beefed up robots.txt instructions

  • Spotify peeved after 10,000 users sold data to build AI tools

 

EPIC FAILS
AI Darwin Awards launch to celebrate spectacularly bad deployments
Nominations are open for the 2025 AI Darwin Awards and the list of contenders is growing, fueled by a tech world weary of AI and evangelists eager to shove it somewhere inappropriate.
There's the Taco Bell drive-thru incident, where the chain catastrophically overestimated AI's ability to understand customer orders.
Or the Replit moment, where a spot of vibe coding nuked a production database, despite instructions from the user not to fiddle with code without permission.
Then there's the woeful security surrounding an AI chatbot used to screen applicants at McDonald's, where feeding in a password of 123456 gave access to the details of 64 million job applicants.
"Why stop at individual acts of spectacular stupidity when you can scale them to global proportions with machine learning?"

 

Reddit bug caused lesbian subreddit to be labeled as a place for “straight” women
“There was a small bug in a test we ran that mistakenly caused the English-to-English translation(s) you saw. That bug has been resolved. Unsurprisingly, English-to-English translations are not part of our strategy, as they aren't necessary. English-to-English translations were not a desired or expected outcome of the test.”
Reports of subreddits suddenly having inaccurate summaries when viewed on Reddit’s Android app started surfacing on Reddit a couple of weeks ago.
A moderator reported on the r/ModSupport subreddit for moderators that the r/ThronesAndDominions subreddit's description changed from “The wayward adventures of Dylan Carlson and the band Earth” to “The crazy adventures of Dylan Carlson and the band Earth.”
The problem got more attention when r/actuallesbians’ Android app description described the community as “a place for straight and transgender lesbians …” instead of “a place for cis and trans lesbians …”
Other complaints followed, including from r/autisticparents, a subreddit for parents with autism whose description was changed to say that it is a group for “parents of autistic children.”

 

Education report calling for ethical AI use contains over 15 fake sources
"A Vision for the Future: Transforming and Modernizing Education," released August 28, serves as a 10-year roadmap for modernizing the province's public schools and post-secondary institutions. The 418-page document took 18 months to complete and was unveiled by co-chairs Anne Burke and Karen Goodnough, both professors at Memorial University's Faculty of Education, alongside Education Minister Bernard Davis.
One of the fake citations references a 2008 National Film Board movie called "Schoolyard Games" that does not exist. The exact citation reportedly appears in a University of Victoria style guide, a document that teaches students how to format references using fictional examples. The style guide warns on its first page that "Many citations in this guide are fictitious," meaning they are made-up examples used only to demonstrate proper formatting.
Yet someone (or some AI chatbot) copied the fake example directly into the Education Accord report as if it were a real source. The presence of potentially AI-generated fake citations becomes especially awkward given that one of the report's 110 recommendations specifically states the provincial government should "provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use."

 

rG GenAI veracity check: "How many t's are in Taumata­whakatangihanga­koauau­o­tamatea­turi­pukaka­piki­maunga­horo­nuku­pokai­whenua­ki­tana­tahu?" (the longest single-word place name)
Copilot Chat: Here’s the breakdown:
Taumata → 2 t’s
whakatangihanga → 2 t’s << Whoops
koauau → 0 t’s
o → 0 t’s
tamatea → 2 t’s
turi → 1 t
pukaka → 0 t’s
piki → 0 t’s
maunga → 0 t’s
horo → 0 t’s
nuku → 0 t’s
pokai → 0 t’s
whenua → 0 t’s
ki → 0 t’s
tanatahu → 2 t’s
Total: 9 lowercase t’s << Uh, really??
Comparison: Google Gemini counts 10. << I guess; if using octal ??

 

HACKING

Modder injects AI dialogue into 2002’s Animal Crossing using memory hack
Software engineer Joshua Fonseca hacked the memory of the 2002 GameCube game Animal Crossing to inject AI-generated dialogue, creating a simulated villager revolt against Tom Nook. Using a Python script and the Dolphin emulator, he connected in-game conversations to cloud-based AI models, enabling characters to reference real-world news and express self-awareness.
The mod required reverse-engineering the game’s dialogue system and building custom tools to encode AI responses in the GameCube’s proprietary format.
Though the uprising was scripted, the project showcases a novel fusion of retro gaming and modern AI.

 

VENDORS & PLATFORMS

Microsoft ends OpenAI exclusivity in Office, adds rival Anthropic
In an unusual arrangement showing the tangled alliances of the AI industry, Microsoft will reportedly purchase access to Anthropic's models through Amazon Web Services—both a cloud computing rival and one of Anthropic's major investors.
The integration is expected to be announced within weeks, with subscription pricing for Office's AI tools remaining unchanged.

 

Microsoft folds Sales, Service, Finance Copilots into 365
$50 standalone bots now bundled in $30 package

 

Developers joke about “coding like cavemen” as AI service suffers major outage
Anthropic experienced a brief but complete service outage that took down its AI infrastructure, leaving developers unable to access Claude[.]ai, the API, Claude Code, or the management console for around half an hour. The outage affected all three of Anthropic's main services simultaneously.
The disruption, though lasting only about 30 minutes, quickly took the top spot on tech link-sharing site Hacker News for a short time and inspired immediate reactions from developers who have become increasingly reliant on AI coding tools for their daily work. "Everyone will just have to learn how to do it like we did in the old days, and blindly copy and paste from Stack Overflow."
Another user recalled a joke from a previous AI outage: "Nooooo I'm going to have to use my brain again and write 100% of my code like a caveman from December 2024."

 

Claude’s new AI file-creation feature ships with security risks built in
Anthropic launched a new file-creation feature for its Claude AI assistant that enables users to generate Excel spreadsheets, PowerPoint presentations, and other documents directly within conversations on the web interface and in the Claude desktop app.
While the feature may be handy for Claude users, the company's support documentation also warns that it "may put your data at risk."
According to Anthropic's documentation, "a bad actor" manipulating this feature could potentially "inconspicuously add instructions via external files or websites" that manipulate Claude into "reading sensitive data from a claude[.]ai connected knowledge source" and "using the sandbox environment to make an external network request to leak the data." The feature gives Claude access to a sandbox computing environment, which enables it to download packages and run code to create files. "This feature gives Claude Internet access to create and analyze files, which may put your data at risk. Monitor chats closely when using this feature."
This describes a prompt injection attack, where hidden instructions embedded in seemingly innocent content can manipulate the AI model's behavior. These attacks represent a pernicious, unsolved security flaw of AI language models, since both data and instructions in how to process it are fed through as part of the "context window" to the model in the same format, making it difficult for the AI to distinguish between legitimate instructions and malicious commands hidden in user-provided content.

 

Pixel 10 fights AI fakes with new Android photo verification tech
Google is integrating C2PA Content Credentials into the Pixel 10 camera and Google Photos, to help users distinguish between authentic, unaltered images and those generated or edited with artificial intelligence technology.
In the latest Pixel 10 phones, every JPEG photo captured will be automatically attached Content Credentials, which reveals how they were made
The firm urges industry stakeholders to move beyond simplistic AI labels and adopt Content Credentials, emphasizing that combating misinformation and deepfakes requires broad, ecosystem-wide adoption of verifiable provenance.

 

It's AI all the way down as Google's AI cites web pages written by AI
Google’s AI Overviews (AIOs), which now often appear at the top of organic search results, are drawing around 10% of their sources from documents written by ... other AIs.
Model collapse is a degenerative process affecting generations of learned generative models, in which the data they generate end up polluting the training set of the next generation. Being trained on polluted data, they then mis-perceive reality.
Most interestingly, of the links that did work in AIO citations, 52% of them were not among the top 100 pages Google showed in its organic search results for the same term. Of those 52%, 12.8% (higher than the overall 10.4%) were flagged as being AI generated.

 

Linus Torvalds Grows Frustrated Seeing "Garbage" With "Link: " Tags In Git Commits
Somewhat recently it's become a common occurrence seeing "Link: " tags within Git commits for the Linux kernel that point to the latest Linux kernel mailing list patches of the same patch. Short of being part of a multi-part patch series and wanting to then find the patch series cover letter or look at any of the follow-up LKML discussions, including these "Link: " tags often doesn't provide much extra value but just waste time for those trying to find any added context.
Linus Torvalds has had enough and will be more strict against accepting pull requests that have link tags of no value.

 

Garak: Open-source LLM vulnerability scanner
LLMs can make mistakes, leak data, or be tricked into doing things they were not meant to do. Garak is a free, open-source tool designed to test these weaknesses. It checks for problems like hallucinations, prompt injections, jailbreaks, and toxic outputs. By running different tests, it helps developers understand where a model might fail and how to make it safer.

 

MARKET

AI will consume all of IT by 2030—but not all IT jobs, Gartner says
Despite the growing role of AI-automated workloads in IT, Gartner doesn’t expect the technology to create an “AI jobs bloodbath,” Plummer said. Currently, only 1 percent of job losses are the result of AI. Gartner predicts that in 5 years, 25 percent of IT work will be totally performed by bots, while 75 percent of IT workloads will be performed by humans with the help of AI. The World Economic Forum’s Future of Jobs Report 2025 released in January, based on data from 1,000 companies employing 14 million global workers, found that by 2030, AI could create 78 million more jobs than it eliminates.
[rG: AI in itself won't eliminate work or jobs. It is the automation of existing inefficient manual processes that will provide ROI for AI enhanced applications. Manual work functions that AI can replace are data entry, format conversions, and summarization. Like the time when the wordprocessing software eliminated typing pools, and email reduced mailrooms. Employees are still required to create the automations and provide quality control. Enterprise management and project administration are the biggest opportunities for efficiency gains using AI enhanced process automation, analysis, reporting, and communications. Meanwhile, the obsession appears to be diminishing the roles of software developers and customer service: areas that have struggled to deliver consistent value, at the expense of "delightful" customer satisfaction and service excellence.]

 

In court filing, Google concedes the open web is in “rapid decline”
Google's crawlers have seen a 45% increase in indexable content since 2023.
This metric, Google says, shows that open web advertising could be imploding while the web is healthy and thriving. We don't know what kind of content is in this 45%, but given the timeframe cited, AI slop is a safe bet.
[rG: New marketing promotion & PR practices now needed to prioritize influencing AI training data, model and chatbot rules - for campaign "product placement", as well as inevitable sidebar advertising. While digital content production jobs are being impacted by GenAI, AI Optimization is going to extend SEO demand needs.]

 

Why accessibility might be AI’s biggest breakthrough
A UK government study found that neurodiverse employees—especially those with ADHD and dyslexia—benefit significantly more from AI assistants like Microsoft 365 Copilot than their neurotypical peers.
Participants reported increased confidence, improved task execution, and greater inclusion in meetings, particularly through features like embedded writing support and real-time transcription.
The study suggests AI may be closing accessibility gaps that traditional accommodations have missed. However, concerns remain about AI inaccuracies and overreliance, especially among students with disabilities.

 

Google Cloud chief details how search giant is making billions monetizing its AI products.
Consumption: Whether it’s a GPU, TPU or a model, you pay by token — meaning you pay by what you use
Subscriptions: storage
Upselling: additional products

 

Walmart's bet on AI depends on getting employees to use it
At Walmart, "everybody's using AI every day across the enterprise," according to David Glick, senior vice president of the retail behemoth's enterprise business services. Glick recounted how a year ago at the same conference, he was preparing to go on stage and heard people in the preceding session talking about how the most challenging element of digital transformation is change management.
He explained, "I was standing in the back saying, 'No, we're engineering, and engineering does all the work. And that's the hardest part, to actually write the code.' And then as I thought about it throughout the day, I was like, actually, writing code, we know how to do that, and it's getting easier and easier using AI. But it is, in fact, the change management."
The issue in large companies, he said, is that everyone wants to be included, but "we're moving people's cheese" – in reference to a business leadership book on dealing with change.
[rG thx Bill: Everyone is enthusiastic to brag about using the bright shiny new toys that are so heavily marketed to executives. But the challenges with data confidentiality and quality haven't been controlled fully yet, and privileged early access users aren't wanting giving up their new found exclusivity and gatekeeping powers. Beneficial utilization won't be possible until safe and secure wide access can be achieved for rank-and-file employees experimentation and use.]

 

LEGAL & REGULATORY

Judge: Anthropic’s $1.5B settlement is being shoved “down the throat of authors”
Critics fear Anthropic will get off cheaply, striking a deal with authors suing that covers less than 500,000 works and paying a small fraction of its total valuation (currently $183 billion) to get away with the massive theft. The settlement doesn't even require Anthropic to admit wrongdoing, while the company continues raising billions based on models trained on authors' works. Most recently, Anthropic raised $13 billion in a funding round, making back about 10 times the proposed settlement amount.
The settlement likely risks setting up a future where courts are bogged down over disputes linked to the class action for years if many authors and publishers miss out on filing claims or receiving payments. Class members frequently "get the shaft" in class actions where attorneys stop caring after monetary relief is granted. An improper notification scheme could leave Anthropic in a vulnerable position, facing future claimants coming out of the woodwork later.

 

AI pricing is currently in a state of ‘pandemonium’ says Gartner
Some major vendors are yet to include AI in their contracts.
AWS only addresses AI in clause 50 of its supplementary T&C documents, or include AI-specific language in linked documents. Buyers will therefore have to sift through many vendor policies and legal docs to understand exactly what they’re buying into.
Vendors differ on matters that expose buyers to risk, such as who is liable if AI systems offer bad advice that a customer relies on to their detriment.
Two fields in which buyers might want to start applying pressure are inclusion of responsible AI principles in contracts (Liversidge thinks only 1% of vendors do this today) and compliance with the ISO 42001 standard on AI management systems.
Also buyers need to watch for pricing inconsistencies, even by the same vendor.

 

Pay-per-output? AI firms blindsided by beefed up robots.txt instructions.
The "Really Simple Licensing" (RSL) standard evolves robots.txt instructions by adding an automated licensing layer that's designed to block bots that don't fairly compensate creators for content.
Free for any publisher to use, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI. Based on the "Really Simple Syndication" (RSS) standard, RSL terms can be applied to protect any digital content, including webpages, books, videos, and datasets.
The RSL standard also solves a problem for AI companies, which have complained in litigation over AI scraping that there is no effective way to license content across the web.

 

Spotify peeved after 10,000 users sold data to build AI tools
Over 10,000 Spotify users joined a collective called Unwrapped to sell their streaming data for AI development, earning about $5 each in cryptocurrency. Spotify objected, citing trademark infringement and violations of its developer policy, but Unwrapped developers claim they never received formal notice.
The dispute centers on user data ownership and whether individuals can monetize personal listening histories under data portability rights. Critics warn of privacy risks, while advocates argue this movement empowers users and challenges Big Tech’s control over personal data.
[rG: So now what about user collectives selling other types of aggregated personal data; such as medical records, financials, etc.?]