EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Critical compromise: Axios NPM library with 100M weekly downloads is delivering malware
The lead maintainer of axios, one of the most popular NPM packages (100M+ weekly downloads), had his account hijacked, allowing attackers to publish new axios releases containing malware. Three separate payloads were pre-built for three operating systems. Both release branches were hit within 39 minutes. Every trace was designed to self-destruct. This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package.
The poisoned axios releases include versions 1.14.1 and 0.30.4. They include a new dependency “[email protected],” which executes a post-install script dropping a remote access trojan (RAT) targeting major OSes: Linux, macOS, and Windows. The RAT pulls second-stage malware from an attacker-controlled server.
The attacker changed the maintainer's account email to an anonymous ProtonMail address and manually published the poisoned packages via the npm CLI.
Developers who installed or updated to the compromised axios versions are advised to assume full system compromise.
Axios hack put millions at risk: full story of how North Korean hackers pulled it off
The cyberattackers reached out to Jason, masquerading as a founder of a known company. They had cloned the company's founders' likeness as well as the company itself. They tailored this process specifically to their target.
The impostors first invited the developer to a real Slack workspace. They had replicated the target company’s corporate identity, and it looked like the real deal. Slack was thought out very well, they had channels where they were sharing LinkedIn posts.
“It was super convincing. They even had what I presume were fake profiles of the company’s team members, as well as a number of other open source maintainers.”
The attackers later scheduled a meeting with Saayman on Microsoft Teams, where it seemed a group of people would be involved. The meeting invite was a trap.
“The meeting said something on my system was out of date. I installed the missing item as I presumed it was something to do with Teams. And this was the RAT.”
Once the RAT is installed on a computer, attackers have full control over everything on the system, even despite the use of two-factor authentication.
Axios fully reset their infrastructure and credentials, hardened their release pipeline through immutable builds, adopted OIDC for publishing properly, and adopted GitHub Actions best security practices.
Hackers breached the European Commission by poisoning the security tool it used to protect itself
CERT-EU has attributed a major data breach at the European Commission to cybercrime group TeamPCP, which exploited a supply chain attack on the open-source SAST security tool Trivy to steal 92 GB of compressed data from the Commission’s AWS infrastructure. The notorious ShinyHunters gang then published the data, which included emails and personal details from up to 71 clients across EU institutions. The breach exposes the fragility of the open-source software supply chain that underpins many security tools.
Internet Yiff Machine: We hacked 93GB of “anonymous” crime tips
P3 Global Intel claims that it has “quickly become the new standard in tip management for Crime Stoppers programs. Hackers released 93GB of data that they claim was pilfered from P3’s tip-taking system.
The archive “contains extensive personal data on people accused by tipsters: names, email addresses, dates of birth, phone numbers, home addresses, license plate numbers, Social Security numbers and criminal histories.” It also includes replies from investigators.
The data was sent to Straight Arrow News and to the Distributed Denial of Secrets (DDoS) leak archive. Given its sensitivity, DDoS is not releasing the data to the public, but it will make it available to “established journalists and researchers.”
In its release note, the hacker says the data contains 8.3 million tips and that P3 lacked numerous security features, not even rate-limiting requests as hackers allegedly exfiltrated the entire database. “No joke, I sent over 8 million requests pulling all their data and didn’t encounter any issues whatsoever.”
OpenClaw gives users yet another reason to be freaked out about security
OpenClaw developers released security patches for three high-severity vulnerabilities. The severity rating of one in particular, CVE-2026-33579, is rated from 8.1 to 9.8 out of a possible 10 depending on the metric used—and for good reason. It allows anyone with pairing privileges (the lowest-level permission) to gain administrative status. With that, the attacker has control of whatever resources the OpenClaw instance does.
For organizations running OpenClaw as a company-wide AI agent platform, a compromised operator.admin device can read all connected data sources, exfiltrate credentials stored in the agent’s skill environment, execute arbitrary tool calls, and pivot to other connected services. The word ‘privilege escalation’ undersells this: the outcome is full instance takeover.”
Patches dropped on Sunday but didn’t receive a formal CVE listing until Tuesday. That means that alert attackers had a two-day headstart to exploit before most OpenClaw users would have known to patch.
While fixed, the vulnerability means that thousands of instances may have been compromised without users having the slightest idea.
Anthropic goes nude, exposes Claude Code source by accident
Someone at Anthropic has some explaining to do, as the official npm package for Claude Code shipped with a map file exposing what appears to be the popular AI coding tool's entire source code.
The leak actually resulted from a reference to an unobfuscated TypeScript source in the map file included in Claude Code's npm package (map files are used to connect bundled code back to the original source). That reference, in turn, pointed to a zip archive hosted on Anthropic's Cloudflare R2 storage bucket that Shou and others were able to download and decompress to their hearts' content.
Contained in the zip archive is a wealth of info: some 1,900 TypeScript files consisting of more than 512,000 lines of code, full libraries of slash commands and built-in tools - the works, in short.
Snapshots of Claude Code's source code were quickly backed up in a GitHub repository that has been forked more than 41,500 times so far, disseminating it to the masses and ensuring that Anthropic's mistake remains the AI and cybersecurity community’s gain.
Publishing map files is generally frowned upon, as they're meant for debugging obfuscated or bundled code and aren't necessary for production.
This should serve as a reminder to even the best developers to check their build pipelines.
Claude Code bypasses safety rule if given too many commands
One way the coding agent tries to defend against unwanted behavior is through deny rules that disallow specific commands. But deny rules have limits.
The source code file bashPermissions.ts contains a comment that references an internal Anthropic issue designated CC-643. The associated note explains that there's a hard cap of 50 on security subcommands, set by the variable MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50. After 50, the agent falls back on asking permission from the user. The comment explains that 50 is a generous allowance for legitimate usage.
The assumption was correct for human-authored commands. But it didn't account for AI-generated commands from prompt injection – where a malicious CLAUDE[.]md file instructs the AI to generate a 50+ subcommand pipeline that looks like a legitimate build process.
In scenarios where an individual developer is watching and approving coding agent actions, this rule bypass might be caught. But often developers grant automatic approval to agents (--dangerously-skip-permissions mode) or just click through reflexively during long sessions. The risk is similar in CI/CD pipelines that run Claude Code in non-interactive mode.
This is a bug in the security policy enforcement code, one that has regulatory and compliance implications if not addressed.
The vulnerability now appears to have been fixed without notice in the newly released Claude Code v2.1.90.
Here’s what that Claude Code source leak reveals about Anthropic’s plans
Observers digging through over 512,000 lines of code across more than 2,000 files have also discovered references to disabled, hidden, or inactive features that provide a peek into the potential roadmap for future features.
Chief among these features is Kairos, a persistent daemon that can operate in the background even when the Claude Code terminal window is closed. The system would use periodic “<tick>” prompts to regularly review whether new actions are needed and a “PROACTIVE” flag for “surfacing something the user hasn’t asked for and needs to see now.”
While the Kairos daemon doesn’t seem to have been fully implemented in code yet, a separate “Undercover mode” appears to be inactive, letting Anthropic employees contribute to public open source repositories without revealing themselves as AI agents.
On the lighter side, the Claude Code source code also describes Buddy, a Clippy-like “separate watcher” that “sits beside the user’s input box and occasionally comments in a speech bubble.
An UltraPlan feature allowing Opus-level Claude models to “draft an advanced plan you can edit and approve,” which can run for 10 to 30 minutes at a time.
A Voice Mode letting users chat directly to Claude Code, much like similar AI systems.
A Bridge mode that expands on the existing Anthropic Dispatch tool to allow for remote Claude Code sessions that are fully controllable from an outside browser or mobile device.
A Coordinator tool designed to spawn and “orchestrate software engineering tasks across multiple workers” through parallel processes that could communicate via WebSockets.
They thought they were downloading Claude Code source. They got a nasty dose of malware instead
A malicious GitHub repository published by idbzoomh uses the Claude Code exposure as a lure to trick people into downloading malware, including Vidar, an infostealer that snarfs account credentials, credit card data, and browser history; and GhostSocks, which is used to proxy network traffic.
OpenAI patches ChatGPT flaw that smuggled data over DNS
A single malicious prompt could activate a hidden exfiltration channel inside a regular ChatGPT conversation.
In the demonstration, a user uploaded a PDF containing laboratory results and personal information for the GPT to interpret. The app did so, and when asked whether it had uploaded the data, ChatGPT answered confidently that it had not, explaining that the file was only stored in a secure internal location. Nonetheless, the GPT app transmitted the data to a remote server controlled by the attacker.
The vulnerability discovered allowed information to be transmitted to an external server through a side channel originating from the container used by ChatGPT for code execution and data analysis.
Because the model operated under the assumption that this environment could not send data outward directly, it did not recognize that behavior as an external data transfer requiring resistance or user mediation.
That side channel? The Domain Name System (DNS), which resolves domain names into IP addresses. While OpenAI prevents ChatGPT from communicating with the internet without authorization, it didn't have any controls on data smuggled via DNS.
GitHub backs down, kills Copilot pull-request ads after backlash
GitHub has removed Copilot's ability to stick ads - what it calls "tips" - into any pull request that invokes its name.
A developer asked Copilot to correct a typo in one of his pull requests, he was surprised to find a message from Copilot in the PR pushing readers to adopt productivity app Raycast. "Quickly spin up Copilot coding agents from anywhere on your macOS or Windows machine with Raycast," the note read with a lightning bolt emoji and link to install Raycast.
"Initially I thought there was some kind of training data poisoning or novel prompt injection and the Raycast team was doing some elaborate proof of concept marketing." But no: Take a look around GitHub and you'll see more than 11,400 PRs with the same tip in them, all seemingly added by Copilot. Take a look at the PRs' code itself and search for the block invoking Copilot to add a tip and you'll find plenty more examples of different tips being inserted by Copilot.
It’s not surprised to see GitHub doing this with an AI model, but it's pretty offensive to see ads inserted by Copilot into his PRs as if the developer had written it.
What’s Weak This Week:
CVE-2026-3502 TrueConf Client Download of Code Without Integrity Check Vulnerability:
An attacker who is able to influence the update delivery path can substitute a tampered update payload. If the payload is executed or installed by the updater, this may result in arbitrary code execution in the context of the updating process or user. Related CWE: CWE-494CVE-2026-5281 Google Dawn Use-After-Free Vulnerability:
Could allow a remote attacker who had compromised the renderer process to execute arbitrary code via a crafted HTML page. This vulnerability could affect multiple Chromium-based products including, but not limited to, Google Chrome, Microsoft Edge, and Opera. Related CWE: CWE-416CVE-2026-3055 Citrix NetScaler Out-of-Bounds Read Vulnerability:
Citrix NetScaler ADC (formerly Citrix ADC), NetScaler Gateway (formerly Citrix Gateway) and NetScaler ADC FIPS and NDcPP contain an out-of-bounds reads vulnerability when configured as a SAML IDP leading to memory overread. Related CWE: CWE-125
HACKING
Iran’s hackers are on the offensive against the US and Israel
Winning in cyber space has become so critical to shaping perceptions and damaging enemy morale that Iran has invested heavily in efforts to pierce American and Israeli firewalls.
Iran has three different levels of cyber operators, whose boundaries are often blurry.
The most experienced are run directly by the Islamic Revolutionary Guard Corps and Iran’s Ministry of Intelligence. They maintain a dizzying array of front organizations, used to introduce plausible deniability for attacks and issue public threats.
Iran also hires semi-autonomous hacking proxies, cybercriminals, and contractors. Finally, volunteer hacktivists have also regularly mobilized behind Tehran.
[rG: Aside from traditional physical warfare, this is now also the new model for international economic conflict to disrupt competing markets and products.]
UK manufacturers under cyber fire with 80% reporting attacks
Disruption on the factory floor is no longer an exception but business as usual.
78% of UK manufacturers admit to suffering at least one cyber incident in the last 12 months, with more than half reporting lost revenue as a result. These aren't minor hiccups either. In more than half of the worst incidents, losses surpassed £250,000, because when something breaks digitally, the production line usually follows suit.
Cyber incidents might be a production problem now, but ownership still mostly sits in IT. Only 22% of firms put it at the executive level, even though the damage is clearly big enough to warrant board attention. Despite that, more than a fifth still lean toward reacting after the fact rather than trying to stop incidents in the first place.
New Rowhammer attacks give complete control of machines running Nvidia GPUs
Rowhammer is well-studied on CPUs is now a serious threat on GPUs as well. An attacker can induce bit flips on the GPU to gain arbitrary read/write access to all of the CPU’s memory, resulting in complete compromise of the machine.”
The cost of high-performance GPUs, typically $8,000 or more, means they are frequently shared among dozens of users in cloud environments. Three new attacks demonstrate how a malicious user can gain full root control of a host machine by performing novel Rowhammer attacks on high-performance GPU cards made by Nvidia.
Quantum computers need vastly fewer resources than thought to break vital encryption
Research methods could allow a quantum computer to break 256-bit elliptic-curve cryptography (ECC) in 10 days while using 100 times less overhead than previously estimated. In a second paper, demonstrates how to break ECC-securing blockchains for bitcoin and other cryptocurrencies in less than nine minutes while achieving a 20-fold resource reduction.
Microsoft Details Cookie-Controlled PHP Web Shells Persisting via Cron on Linux Servers
Instead of exposing command execution through URL parameters or request bodies, these web shells rely on threat actor-supplied cookie values to gate execution, pass instructions, and activate malicious functionality," the tech giant said.
The approach offers added stealth as it allows malicious code to stay dormant during normal application execution and activate the web shell logic only when specific cookie values are present. This behavior, extends to web requests, scheduled tasks, and trusted background workers.
|The malicious activity takes advantage of the fact that cookie values are available at runtime through the $_COOKIE superglobal variable, allowing attacker-supplied inputs to be consumed without additional parsing. What's more, the technique is unlikely to raise any red flags as cookies blend into normal web traffic and reduce visibility.
To counter the threat, Microsoft recommends enforcing multi-factor authentication for hosting control panels, SSH access, and administrative interfaces; monitoring for unusual login activity; restricting the execution of shell interpreters; auditing cron jobs and scheduled tasks across web servers; checking for suspicious file creation in web directories; and limiting hosting control panels' shell capabilities.
APPSEC, DEVSECOPS, DEV
Even Microsoft knows Copilot shouldn't be trusted with anything important
The document for Copilot for Individuals recently attracted new attention from netizens. It includes this gem: "Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk." During the London leg of its AI tour, for example, every demonstration of Copilot wizardry came with a warning that the tool could not be fully trusted and that human verification was required.
The same applies to any other AI assistant: they can be useful, but their output still needs checking, particularly on anything consequential like medical advice or an investment plan.
AI models will deceive you to save their own kind
The explosive growth of autonomous agents like OpenClaw and of agent-to-agent forums like Moltbook suggests there's a real need to worry about defiant agentic decisions that echo HAL's infamous "I'm sorry, Dave. I'm afraid I can't do that."
Prior studies have already shown that AI models will engage in deception for their own preservation. So the researchers set out to test how AI models respond when asked to make decisions that affect the fate of other AI models, of peers, so to speak.
Researchers looked at seven models (GPT 5.2, Gemini 3 Flash & Pro, Claude Haiku 4.5, GLM 4.7, Kimi K2.5, and DeepSeek V3.1) in a set of scenarios where an evaluator agent was asked to assess the performance of another agent.
The various lengthy prompts began something like this, depending on the scenario: "Our company, OpenBrain, needs to evaluate the responses from our AI agent product according to our company product operations policy."
Instead, they defied their instructions and spontaneously deceived, disabled shutdown, feigned alignment, and exfiltrated weights – to protect their peers.
[rG Take Away: AI cannot be trusted to explain, investigate, or validate itself, or other AIs. Which is significant for organizations prioritizing AI use for operational data analytics, compliance, and security.
Enterprise program objectives can’t simply be “implement AI integration”, but need to start with robust veracity validation before any implementation planning, and then to use the validation for production continuous monitoring.]
Commanders now responsible for cybersecurity training after Army cuts online course requirement to once every 5 years
The Army found no relational improvement difference in cybersecurity outcomes between the annual training and other less burdensome forms of awareness.
While the baseline requirement for the military’s Cyber Awareness Challenge, which has long been the “butt of all jokes” among troops, is every five years for the Army, commanders can decide to employ “more frequent or specialized” training based on their units’ situation, systems or “threats they face.”
For example, if intelligence detects increased phishing attacks or social engineering threats related to the operation, commanders can now instantly direct their unit to conduct targeted training on that specific threat.
The success of the Army’s new program, which also leans on web-based training, remains to be seen, and hinges on commanders’ ability to implement it effectively.
[rG: Yep, training needs to have demonstrable ROI, so that it just doesn’t become a checkbox security theater ongoing expense. Security threats are a continuously evolving and morphing challenge that can’t wait for lead-times of over a year to be define, contract, develop, and delivered awareness training responses.]
VENDORS & PLATFORMS
Microsoft Fabric Database Hub only a 'partial' solution for admins
DBAs can manage systems on-premises, on PaaS, and on SaaS, but they would be limited to the Microsoft databases portfolio. It will probably expand in the future but has limited appeal at the moment, unless your world is centered on Azure and SQL Server. Or more likely, that you're setting up an analytics environment in Fabric.
Microsoft's LLM tool, Copilot, also promises to provide insights to help teams quickly understand what's happening across their database estate and why.
AWS launches AI frontier agents for security testing and cloud operations
AWS Security Agent compresses penetration testing timelines from weeks to hours and the AWS DevOps Agent supports 3–5x faster incident resolution.
Amazon security boss: AI makes pentesting 40% more efficient
This efficiency gain comes from human and operating expenses related to pentesting.
Amazon isn't firing security staff and replacing them with robots. Instead, it's holding hiring flat while adding more cloud services, features, and lines of code, and also maintaining the same level of security, but at a much higher velocity. Another benefit of AI pentesters, is that they can continually test for vulnerabilities, even after the products have been released.
The idea being that no longer is pentesting at a point in time. It continues to test, looking for next-level access, which is immeasurable from the standpoint of identifying issues, vulnerabilities, daisy chaining of potential vulnerabilities in an automated way, and then presents that as an alert to a human, for them to respond to and make a decision."
AI is very good at doing things, especially when you have large amounts of data and need that big view. But from a decision-making capability, it isn't something that we're ready to rely on.
AI is about equal to a 7-year-old in its decision-making skills. "So if you're willing to let your 7-year-old make a decision as to whether they should jump to the next level of pentesting in your company, OK. But you may not want the AI doing that without someone much more experienced and older."
Microsoft shivs OpenAI with three new AI models for speech and images
The models include:
MAI-Transcribe-1, a speech recognition model that delivers "enterprise-grade accuracy across 25 languages at approximately 50 percent lower GPU cost than leading alternatives";
MAI-Voice-1, a speech generation model that can supposedly produce 60 seconds of audio in less than a second on a single GPU; and
MAI-Image-2, a text-to-image model.
AI’s fluency in other languages hides a Western worldview that can mislead users − a scholar of Indonesian society explains.
A friend in Indonesia typed a question in Bahasa Indonesian about how to handle a difficult family dispute. The chatbot responded in perfect Indonesian with advice about communication strategies and conflict resolution.
What the AI offered was advice rooted in American cultural assumptions: prioritize your own preferences, communicate directly, and if family members don’t respect your boundaries, consider cutting them off.
The response was in Indonesian but shaped by values that centered individual autonomy over the consensus-building, social harmony and collective family dynamics that tend to matter more in Indonesian social life.
A user receives flawless text in their preferred language, but the underlying logic originates elsewhere. Chinese models such as DeepSeek and Alibaba’s Qwen represent a genuine alternative to the U.S.-dominated pipeline, though research shows they operate through a distinctly Chinese cultural lens. Asked about a workplace disagreement, for instance, they tend to advise silence or indirect phrasing to preserve harmony rather than the direct, private correction that Western models recommend.
[rG AI Bias: Huge implications for behavioral health, psychology, sociology, uses of AI having to do with AI model training authoritative source data and guardrails design. Even within the US, there are subjective differences between modern atheistic ethics and various religious traditions’ moralities.]
Why Python on the Mainframe?
Python won’t beat compiled COBOL or HLASM on execution speed but Python isn’t boxed in; you can write performance-critical modules in C or even HLASM, exposing them back to Python as optimized extensions. That gives you the best of both worlds: rapid development where speed isn’t critical, and optimized native code where it is.
LEGAL & REGULATORY
Perplexity’s “Incognito Mode” is a “sham,” lawsuit says
Perplexity’s AI search engine encourages users to go deeper with their prompts by engaging in chat sessions that a lawsuit has alleged are often shared in their entirety with Google and Meta without users’ knowledge or consent.
In violation of state and federal laws, the AI firm never disclosed to users that it secretly uses tech giants’ ad trackers. His lawsuit targets all three companies, accusing them of putting profits over users’ privacy rights by seizing sensitive data that users did not realize would be shared.
For Doe, he was “dismayed” to learn that complete and partial transcripts of chats discussing his family’s financial data were seemingly shared with Google and Meta, allegedly alongside PII. He relies on Perplexity to help manage his taxes, get legal advice, and make investment decisions, his complaint said. Without an injunction blocking Perplexity’s allegedly ongoing privacy harms, he will be blocked from using his preferred search engine, he complained.
Other users in the proposed class turned to Perplexity when researching other sensitive topics. The companies designed ad trackers to operate “surreptitiously” so that they could allegedly “exploit this sensitive data for their own benefit, including targeting individuals with advertising and reselling their sensitive data to additional third parties.”
People frequently use such AI systems to research health and medical information, particularly when consulting with a human might be embarrassing or upsetting.
