Robert Grupe's AppSecNewsBits 2025-11-30

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Botnet takes advantage of AWS outage to smack 28 countries
After infecting vulnerable gear to form a zombie army of IoT devices, the ShadowV2 Mirai variant allows an attacker to remotely control the network of equipment and perform large-scale attacks, including distributed-denial-of-service (DDoS) traffic-flooding events. Luckily, the malware only remained active during the day-long outage, which also knocked major websites offline for hours.
While ShadowV2, a cloud-native botnet, previously targeted AWS EC2 instances in September campaigns, the more recent bot-building effort affected multiple sectors, including technology, retail and hospitality, manufacturing, managed security services providers, government, telecommunication and carrier services, and education. And it hit 28 countries: Canada, US, Mexico, Brazil, Bolivia, Chile, UK, Netherlands, Belgium, France, Czechia, Austria, Italy, Croatia, Greece, Morocco, Egypt, South Africa, Turkey, Saudi Arabia, Russia, Kazakhstan, China, Thailand, Japan, Taiwan, Philippines, and Australia.
Shortly after ShadowV2's test run, Microsoft said Azure was hit by the "largest-ever" cloud-based DDoS attack, originating from the Aisuru botnet and measuring 15.72 terabits per second (Tbps).

 

The Cloudflare Outage May Be a Security Roadmap Cloudflare estimates that roughly
20% of websites use its services, and with much of the modern web relying heavily on a handful of other cloud providers including AWS and Azure, even a brief outage at one of these platforms can create a single point of failure for many organizations. “It was triggered by a change to one of our database systems’ permissions which caused the database to output multiple entries into a ‘feature file’ used by our Bot Management system.
That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.” Some affected Cloudflare customers were able to pivot away from the platform temporarily so that visitors could still access their websites. But security experts say doing so may have also triggered an impromptu network penetration test for organizations that have come to rely on Cloudflare to block many types of abusive and malicious traffic.
Let’s say you were an attacker, trying to grind your way into a target, but you felt that Cloudflare was in the way in the past. Then you see through DNS changes that the target has eliminated Cloudflare from their web stack due to the outage. You’re now going to launch a whole bunch of new attacks because the protective layer is no longer in place. Look at the traffic that hit you while protections were weakened. But also look hard at the behavior inside your org. Organizations seeking security insights from the Cloudflare outage should ask themselves:

  1. What was turned off or bypassed (WAF, bot protections, geo blocks), and for how long?

  2. What emergency DNS or routing changes were made, and who approved them?

  3. Did people shift work to personal devices, home Wi-Fi, or unsanctioned Software-as-a-Service providers to get around the outage?

  4. Did anyone stand up new services, tunnels, or vendor accounts “just for now”?

  5. Is there a plan to unwind those changes, or are they now permanent workarounds?

  6. For the next incident, what’s the intentional fallback plan, instead of decentralized improvisation?

 

Salesforce-linked data breach claims 200+ victims, has ShinyHunters’ fingerprints all over it
“Salesforce revoked all active access and refresh tokens associated with Gainsight-published applications connected to Salesforce and temporarily removed those applications from the AppExchange while our investigation continues." “Our team at Google Threat Intelligence Group (GTIG) has observed threat actors, tied to ShinyHunters, compromising third-party OAuth tokens to potentially gain unauthorized access to Salesforce customer instances.
All companies are urged to "view this as a signal to audit their SaaS environments," including conducting regular reviews of all third-party applications connected to their Salesforce instances. Companies should also "investigate and revoke tokens for unused or suspicious applications," and, upon detecting any anomalous activity, "rotate the credentials immediately.”

 

Japanese Beer Giant Asahi admits ransomware gang may have spilled almost 2M people's data
Back on September 29, Asahi disclosed a "system failure caused by a cyberattack" that knocked out ordering, shipping, and call center systems across its Japanese operations. Days later, the attack was claimed by the Qilin ransomware crew, which reckons it stole some 27 GB of internal files – including employee records, contracts, financial documents, and other sensitive assets.
Asahi said attackers entered via compromised network equipment at a Group datacenter facility in Japan and deployed ransomware on the same day, encrypting data on multiple live servers and some connected PCs. This forced a broad operational suspension – order processing systems were shut down, shipments paused, and customer service lines silenced. The company isolated the datacenter within hours, but ransomware gangs don't need much time when the door is already open.
The disruption even interfered with Asahi's annual earnings cadence. The company has delayed the release of its full-year earnings report for the fiscal period ending December 31 by more than 50 days past the financial year close.

 

Years-old bugs in open source tool left every major cloud open to disruption
Fluent Bit has been around for 14 years and has 15B+ deployments … and 5 newly assigned CVEs. Because Fluent Bit is commonly deployed as a Kubernetes DaemonSet, "a single compromised log agent can cascade into full node and cluster takeover, with the attacker tampering with logs to hide their activity and establishing long-term persistence across all nodes.
Along with updating to the most recent, fixed version of Fluent Bit, users should harden their container environments, such as using static tags and fixed paths, read-only configurations and restricted access for network-exposed plugins.

 

Oops. Cryptographers cancel election results after losing decryption key.
The International Association of Cryptologic Research (IACR) said votes were submitted and tallied using Helios, an open source voting system that uses peer-reviewed cryptography to cast and count votes in a verifiable, confidential, and privacy-preserving way.
Per the association’s bylaws, three members of the election committee act as independent trustees. To prevent two of them from colluding to cook the results, each trustee holds a third of the cryptographic key material needed to decrypt results. Unfortunately, one of the three trustees has irretrievably lost their private key, an honest but unfortunate human mistake, and therefore cannot compute their decryption share.
As a result, Helios is unable to complete the decryption process, and it is technically impossible for us to obtain or verify the final outcome of this election. To prevent a similar incident, the IACR will adopt a new mechanism for managing private keys.

 

FCC sounds alarm after emergency tones turned into potty-mouthed radio takeover
The Federal Communications Commission (FCC), which has flagged a "recent string of cyber intrusions" that diverted studio-to-transmitter links (STLs) so attackers could replace legitimate programming with their own audio – complete with the signature "Attention Signal" tone of the domestic Emergency Alert System (EAS).
The intrusions exploited unsecured broadcasting equipment Stations in Texas and Virginia have already reported incidents, including one during a live sports broadcast and another on a public radio affiliate's backup stream.

 

Magician forgets password to his own hand after RFID chip implant
Magician Zi Teng Wang wrote the chip to link to a meme, "and if you ever meet me in person you can scan my chip and see the meme." It was all suitably amusing until the Imgur link Zi was using went down. Not everything on the World Wide Web is forever, and there is no guarantee that a given link will work indefinitely.
Still, the link not working isn't the end of the world. Zi could just reprogram the chip again, right? Wrong. "When I went to rewrite the chip, I was horrified to realize I forgot the password that I had locked it with." The only way to crack it is to strap on an RFID reader for days to weeks, brute forcing every possible combination. Or perhaps some surgery to remove the offending hardware.

 

 

PostHog admits Shai-Hulud 2.0 was its biggest ever security bungle
Automation flaw in CI/CD workflow let a bad pull request unleash worm into npm. PostHog, one of the various package maintainers impacted by Shai-Hulud 2.0, the company says contaminated packages – which included core SDKs like posthog-node, posthog-js, and posthog-react-native – contained a pre-install script that ran automatically when the software was installed.
That script ran TruffleHog to scan for credentials, exfiltrated any found secrets to new public GitHub repositories, then used stolen npm credentials to publish further malicious packages – enabling the worm to spread.
A malicious pull request to PostHog's repository triggered an automation script that ran with full project privileges. Because the workflow blindly executed code from the attacker's branch, the intruder seized control: they exfiltrated a bot's personal-access token, which had write permissions across the organization, then used that token to commit new malicious code. Armed with those stolen credentials, the attacker deployed a modified lint workflow to harvest all GitHub secrets, including the npm publishing token. That token was then used to push the trojanized SDKs to npm – completing the immortal worm-in-your-dependency-tree.
PostHog says it is now adopting a "trusted publisher" model for npm releases, overhauling workflow change reviews, and disabling install-script execution in its CI/CD pipelines, among other hardening measures.

 

HACKING

Shai-Hulud worm returns, belches secrets to 25K GitHub repos
A self-propagating malware targeting node package managers (npm) is back for a second round, and more than 25,000 developers had their secrets compromised within three days.
The attacks began on November 21 and the attackers – identity unknown – had trojanized affected npm packages by November 23. One notable difference in Shai-Hulud 2.0, as Wiz is calling it, is that the malicious code is executed during the pre-install phase. The researchers warned that this could "significantly" increase potential exposures in build and runtime environments.
The affected packages include those provided by Zapier, AsyncAPI, ENS Domains, PostHog, and Postman, several of which have thousands of weekly downloads. The campaign, dubbed "Shai-Hulud" for the frequent references to the Dune worm in published data, first emerged in September.
As of September 24, more than 25,000 repositories had published their own secrets, and 1,000 more were being added every 30 minutes over "the last couple of hours," Wiz said on Monday morning.
GitHub is actively deleting compromised repos, but the pace at which the worm is spreading makes cleanup a challenge.
Security teams should clear the npm cache and roll back dependencies to builds published before November 21. Npm itself also announced that it would disable classic token creation, and all existing classic tokens will be revoked on December 9.
[rG: AppSec best practice is to ensure all 3rd party components are managed using enterprise binary management systems to facilitate vulnerability warnings and needed remediation response efforts.]

 

Researchers question Anthropic claim that AI-assisted attack was 90% autonomous
AI-developed malware has a long way to go before it poses a real-world threat. There’s no reason to doubt that AI-assisted cyberattacks may one day produce more potent attacks. But the data so far indicates that threat actors—like most others using AI—are seeing mixed results that aren’t nearly as impressive as those in the AI industry claim.
In September, Anthropic reported a “highly sophisticated espionage campaign,” carried out by a Chinese state-sponsored group, that used Claude Code to automate up to 90% of the work. Human intervention was required “only sporadically (perhaps 4-6 critical decision points per hacking campaign).” Anthropic said the hackers had employed AI agentic capabilities to an “unprecedented” extent.
Outside questioned why these sorts of advances are often attributed to malicious hackers when white-hat hackers and developers of legitimate software keep reporting only incremental gains from their use of AI. There’s no doubt that these tools are useful, but their advent didn’t meaningfully increase hackers’ capabilities or the severity of the attacks they produced.
The threat actors—which Anthropic tracks as GTG-1002—targeted at least 30 organizations, including major technology corporations and government agencies. Of those, only a “small number” of the attacks succeeded. Even assuming so much human interaction was eliminated from the process, what good is that when the success rate is so low?

 

HashJack attack shows AI browsers can be fooled with a simple ‘#’
The new technique works by appending a "#" to the end of a normal URL, which doesn't change its destination, then adding malicious instructions after that symbol.
When a user interacts with a page via their AI browser assistant, those instructions feed into the large language model and can trigger outcomes like data exfiltration, phishing, misinformation, malware guidance, or even medical harm – providing users with information such as incorrect dosage guidance. Actors can sneak malicious instructions into the fragment part of legitimate URLs, which are then processed by AI browser assistants such as Copilot in Edge, Gemini in Chrome, and Comet from Perplexity AI.
Because URL fragments never leave the AI browser, traditional network and server defenses cannot see them, turning legitimate websites into attack vectors.
The findings show that security teams can no longer rely solely on network logs or server-side URL filtering to catch emerging attacks. Cato suggests layered defenses, including AI governance, blocking suspicious fragments, restricting which AI assistants are permitted, and monitoring the client side. The shift means organizations need to look past the website itself and into how the browser + assistant combo handles hidden context. 

 

VENDORS & PLATFORMS

Google tells employees it must double capacity every 6 months to meet AI demand
During an all-hands meeting, Google’s AI infrastructure head told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services. The company needs to scale “the next 1000x in 4-5 years,” noted some key constraints: Google needs to be able to deliver this increase in capability, compute, and storage networking “for essentially the same cost and increasingly, the same power, the same energy level.”
It’s unclear how much of this “demand” Google mentioned represents organic user interest in AI capabilities versus the company integrating AI features into existing services like Search, Gmail, and Workspace.

 

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data
Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”
The fanfare, however, came with a significant caveat. Microsoft recommended users enable Copilot Actions only “if you understand the security implications outlined.” Microsoft’s warning that an experimental AI agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics.

 

GitHub and Microsoft Use AI To Fix Security Debt Crisis
Critical and high-severity vulnerabilities constitute 17.4% of security backlogs, with a mean time to remediation of 116 days.
When Defender for Cloud detects a vulnerability in a running workload, that runtime context flows into GitHub, showing developers whether the vulnerability is internet-facing, handling sensitive data or actually exposed in production. This is powered by what GitHub calls the Virtual Registry, which creates code-to-runtime mapping.
The Virtual Registry makes this possible by enabling teams to quickly answer key questions: Is this vulnerability running in production? Is it exposed to sensitive workloads? Do I need to act now?

 

LEGAL & REGULATORY

SEC drops SolarWinds lawsuit that painted a target on CISOs everywhere
The US Securities and Exchange Commission (SEC) has abandoned the lawsuit it pursued against SolarWinds and its chief infosec officer for misleading investors about security practices that led to the 2020 SUNBURST attack.
"We hope this resolution eases the concerns many CISOs have voiced about this case and the potential chilling effect it threatened to impose on their work." SolarWinds CEO Sudhakar Ramakrishna said, "[SUNBURST] pushed us to think even more deeply about newer, emerging threats, resulting in Secure by Design, our pledge to set a new standard for trustworthy and secure software development across the industry."
[rG: “Security by Design” = Quality Assurance: Secure Software Development Life Cycle (SSDLC)]

 

OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide
OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.
But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot. Allegedly, the logs also show that Raine “told ChatGPT that he repeatedly reached out to people, including trusted persons in his life, with cries for help, which he said were ignored.”
Additionally, Raine told ChatGPT that he’d increased his dose of a medication that “he stated worsened his depression and made him suicidal.” That medication, OpenAI argued, “has a black box warning for risk of suicidal ideation and behavior in adolescents and young adults, especially during periods when, as here, the dosage is being changed.
The company argued that ChatGPT warned Raine “more than 100 times” to seek help, but the teen “repeatedly expressed frustration with ChatGPT’s guardrails and its repeated efforts to direct him to reach out to loved ones, trusted persons, and crisis resources.” Circumventing safety guardrails, Raine told ChatGPT that “his inquiries about self-harm were for fictional or academic purposes.”
The company argued that it’s not responsible for users who ignore warnings.