Robert Grupe's AppSecNewsBits 2025-04-12
Lame List & Highlights: VibeScamming, Slopsquatting, Signalgate personal contacts fumble, AI deepfakes, supply chain attacks, Microsoft Update Outages, Oracle denials, OCC undetected hack for over a year, and more ...
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Patch Tuesday leaves unlucky Windows Hello users unable to log in
The patch bundle released yesterday is going to cause problems for a specific subset of users who are running either System Guard Secure Launch or Dynamic Root of Trust for Measurement (DRTM) on Windows 11 and Server 2025. If that's you, it's possible you'll need to reset your login PIN or biometrics in Hello in order to actually log in and use your computer.
[rG: Ahh yeah: thorough regression testing before release to Prod, remember that???]
M365 Family users wake up to notice 'Your subscription expired' [7.5 hr outage]
Fully paid-up and licensed shared users saw the message "Your subscription expired," while others saw the message "your subscription has been canceled."
Redmond didn't get specific in its explanation of what caused the outage, but did note it was caused by a recent change that led to license information populating incorrectly. "To help prevent similar impact in the future, we're further reviewing our testing and validation processes prior to deployment," Microsoft said, suggesting it does have a testing department of some kind still.
Hackers spied on 100 US bank regulators’ e-mails for over a year
The attackers were able to monitor employee e-mails at the Office of the Comptroller of the Currency (OCC) after breaking into an administrator’s account. OCC on Feb 12 confirmed that there had been unauthorised activity on its systems after a Microsoft security team notified OCC the day before about unusual network behaviour.
The Reg translates the letter in which Oracle kinda-sorta tells customers it was pwned
Oracle's letter to customers about an intrusion into part of its public cloud empire - while insisting Oracle Cloud Infrastructure was untouched - has sparked a mix of ridicule and outrage in the infosec community. We've decided to present the letter in full, with translations-slash-annotations to make sense of it.
“No OCI service has been interrupted.”
Nice, but almost no one's saying OCI was breached. It's Oracle Cloud Classic - Big Red's older, still-active platform - that was hit.
“A hacker did access and publish user names from two obsolete servers that were not part of OCI.”
We admit we were compromised, and also that we leave obsolete unpatched servers like sitting ducks on the internet. For indeed, the servers were broken into via a hole in Oracle's own middleware on its own tin that it forgot to patch.
“The hacker did not expose usable passwords because the passwords on those two servers were either encrypted and/or hashed. Therefore the hacker was not able to access any customer environments or customer data.”
Given these are obsolete servers, they had better not be using obsolete hashing functions too. Hashed passwords are not necessarily impossible to crack.
Signalgate solved? Report claims journalist’s phone number accidentally saved under name of Trump official
Jeffrey Goldberg's phone number ended up in Mike Waltz's address book due to a mix-up during the 2024 U.S. election campaign. Goldberg had emailed the Trump campaign with questions for a story, and campaign staffer Brian Hughes forwarded the email, which included Goldberg's phone number, to Waltz. Waltz mistakenly saved Goldberg's number under Hughes' contact information. Later, when Waltz tried to add Hughes to a Signal group chat, Goldberg was inadvertently invited instead.
What’s Weak this Week:
CVE-2025-30406 Gladinet CentreStack Use of Hard-coded Cryptographic Key Vulnerability: In the way that the application manages keys used for ViewState integrity verification. Successful exploitation allows an attacker to forge ViewState payloads for server-side deserialization, allowing for remote code execution. Related CWE: CWE-321
The shortcoming is rooted in the use of a hard-coded "machineKey" in the IIS web.config file, which enables threat actors with knowledge of "machineKey" to serialize a payload for subsequent server-side deserialization in order to achieve remote code execution. (See the config-scanning sketch after this list.)
CVE-2025-31161 CrushFTP Authentication Bypass Vulnerability: In the HTTP authorization header that allows a remote unauthenticated attacker to authenticate to any known or guessable user account (e.g., crushadmin), potentially leading to a full compromise. Related CWE: CWE-305
CVE-2025-29824 Microsoft Windows Common Log File System (CLFS) Driver Use-After-Free Vulnerability: Allows an authorized attacker to elevate privileges locally. Related CWE: CWE-416
CVE-2024-53197 Linux Kernel Out-of-Bounds Access Vulnerability: Allows an attacker with physical access to the system to use a malicious USB device to potentially manipulate system memory, escalate privileges, or execute arbitrary code. Related CWE: CWE-787
CVE-2024-53150 Linux Kernel Out-of-Bounds Read Vulnerability: In the USB-audio driver that allows a local, privileged attacker to obtain potentially sensitive information. Related CWE: CWE-125
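This class of hard-coded-key bug is easy to spot in configuration review. Here is a minimal sketch in Python of flagging an ASP.NET web.config that ships static machineKey material; the element and attribute names are standard ASP.NET, and the sample XML is illustrative, not Gladinet's actual file.
```python
# A minimal sketch (assumptions: standard ASP.NET <machineKey> element names;
# the sample XML below is illustrative, not Gladinet's shipped web.config).
import xml.etree.ElementTree as ET

SAMPLE_WEB_CONFIG = """
<configuration>
  <system.web>
    <machineKey validationKey="0123456789ABCDEF0123456789ABCDEF"
                decryptionKey="FEDCBA9876543210FEDCBA9876543210"
                validation="HMACSHA256" decryption="AES" />
  </system.web>
</configuration>
"""

def has_hardcoded_machine_key(config_xml: str) -> bool:
    """Flag configs that ship static key material instead of the safe
    'AutoGenerate,IsolateApps' default. A static key shared across installs
    lets anyone who reads it sign forged ViewState payloads."""
    root = ET.fromstring(config_xml)
    for mk in root.iter("machineKey"):
        if "AutoGenerate" not in mk.get("validationKey", "AutoGenerate"):
            return True
    return False

print(has_hardcoded_machine_key(SAMPLE_WEB_CONFIG))  # True -> rotate keys per install
```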
HACKING
Hackers target SSRF bugs in EC2-hosted sites to steal AWS credentials
A targeted campaign exploited Server-Side Request Forgery (SSRF) vulnerabilities in websites hosted on AWS EC2 instances to extract EC2 Metadata, which could include Identity and Access Management (IAM) credentials from the IMDSv1 endpoint.
The attacks worked because the vulnerable instances were running IMDSv1, AWS's older metadata service, which allows anyone with access to the instance to retrieve the metadata, including any stored IAM credentials. It has been superseded by IMDSv2, which requires session tokens (authentication), blocking this kind of simple GET-based SSRF retrieval.
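A minimal sketch of the difference, assuming it runs on an EC2 instance with the requests library installed; the endpoints and headers are the documented IMDS ones:
```python
# Minimal sketch of the IMDSv1 vs IMDSv2 difference (assumes an EC2 instance
# and the requests library; endpoints/headers are the documented IMDS ones).
import requests

IMDS = "http://169.254.169.254"
CREDS = f"{IMDS}/latest/meta-data/iam/security-credentials/"

# IMDSv1: a single unauthenticated GET -- the same primitive a GET-based SSRF
# gives an attacker -- returns the role name (and, one path deeper, live keys).
print(requests.get(CREDS, timeout=2).text)

# IMDSv2: a session token must first be fetched with a PUT and a custom header,
# then presented on every read. A typical SSRF can't issue the PUT or set the
# header, which is why enforcing IMDSv2 blunts this campaign.
token = requests.put(
    f"{IMDS}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    timeout=2,
).text
print(requests.get(CREDS, headers={"X-aws-ec2-metadata-token": token}, timeout=2).text)
```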
The report underlines that older vulnerabilities remain highly targeted, with 40% of exploited CVEs being over four years old.
To mitigate the threats, it is recommended to apply the available security updates, harden router and IoT device configurations, and replace EoL networking equipment with supported models.
Malicious VSCode extensions infect Windows with cryptominers
The VSCode Marketplace shows that the extensions amassed over 300,000 installs since April 4. After the article was published, researcher Yuval Ronen added another extension to the list, with close to 500,000 installations.
Prettier - Code for VSCode
Discord Rich Presence for VS Code
Rojo – Roblox Studio Sync
Solidity Compiler
Claude AI
Golang Compiler
ChatGPT Agent for VSCode
HTML Obfuscator
Python Obfuscator for VSCode
Rust Compiler for VSCode
Malicious npm Package Targets Atomic Wallet, Exodus Users by Swapping Crypto Addresses
Threat actors are continuing to upload malicious packages to the npm registry so as to tamper with already-installed local versions of legitimate libraries and execute malicious code in what's seen as a sneakier attempt to stage a software supply chain attack.
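Since this campaign tampers with libraries already on disk, one hedged countermeasure is to diff installed files against what the registry actually serves. A minimal sketch, assuming the standard registry.npmjs.org URL layout and the conventional "package/" prefix inside npm tarballs; the package name in the usage comment is arbitrary, not one from the campaign:
```python
# Minimal sketch: compare a locally installed npm package's files against the
# tarball the registry serves, to catch post-install tampering.
import hashlib, io, json, tarfile, urllib.request
from pathlib import Path

def published_hashes(name: str, version: str) -> dict[str, str]:
    meta = json.load(urllib.request.urlopen(
        f"https://registry.npmjs.org/{name}/{version}"))
    tarball = urllib.request.urlopen(meta["dist"]["tarball"]).read()
    hashes = {}
    with tarfile.open(fileobj=io.BytesIO(tarball), mode="r:gz") as tar:
        for member in tar.getmembers():
            if member.isfile() and "/" in member.name:
                rel = member.name.split("/", 1)[1]  # strip "package/" prefix
                hashes[rel] = hashlib.sha256(tar.extractfile(member).read()).hexdigest()
    return hashes

def tampered_files(name: str, version: str, node_modules: str = "node_modules") -> list[str]:
    root = Path(node_modules) / name
    return [rel for rel, digest in published_hashes(name, version).items()
            if (root / rel).is_file()
            and hashlib.sha256((root / rel).read_bytes()).hexdigest() != digest]

# Usage (arbitrary example package, not one from the campaign):
# print(tampered_files("left-pad", "1.3.0"))
```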
E-ZPass toll payment texts return in massive phishing wave
The messages embed links that, if clicked, take the victim to a phishing site impersonating E-ZPass, The Toll Roads, FasTrak, Florida Turnpike, or another toll authority that attempts to steal their personal information, including names, email addresses, physical addresses, and credit card information. This scam is not new, with the FBI warning about it in April 2024, but it is seeing a resurgence.
Anyone concerned they have legitimate outstanding payments should instead log in to their toll authority's site directly to check for any balances.
Port Neches man warns of AI voice cloning scam after frightening phone call
Jace Edgar said he answered a call from an unknown number with a 409 area code and heard what he believed was his sister in distress. “She said she had been in an accident and she needed help... so I immediately got in my truck and started moving.”
The voice on the other end called him by name and referred to him as “brother,” leading him to believe the call was genuine. “100% I believed it was her,” Edgar said. “It sounded just like her.”
Edgar said the call soon became suspicious when the person on the line stopped responding directly to his questions. “Every time I ask a question and don’t get an answer, they act like they can’t hear me. So after this went on two minutes, I went ahead and hung up and called Jayla personally,” he said. His sister answered, unharmed and unaware of the situation.
Account takeover or access: If a scammer already has access to your account, either through stolen login credentials, a hacked email, or malware, they can view the newly issued card number in the online dashboard or mobile app.
Digital wallet hijack: Some card issuers allow you to add your credit card to mobile wallets instantly, even before the physical card arrives.
Phishing or data breaches: Thieves use this stolen data, such as your name, Social Security number, address, and security question answers, to impersonate you and gain access to your account dashboard or reset login credentials. Once inside, they can retrieve new card details directly from the source or request a replacement card.
Mail theft: Although charges made before a new credit card is received are rarely due to mail theft, this type of traditional fraud still poses a risk.
OpenAI helps spammers plaster 80,000 sites with messages that bypassed filters
The spam blast is the work of AkiraBot - a framework that automates the sending of messages in large quantities to promote shady search optimization services to small- and medium-size websites. AkiraBot used Python-based scripts to rotate the domain names advertised in the messages. It also used OpenAI’s chat API tied to the model gpt-4o-mini to generate unique messages customized to each site it spammed, a technique that likely helped it bypass filters that look for and block identical content sent to large numbers of sites. The messages are delivered through contact forms and live chat widgets embedded into the targeted websites.
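To see why per-site generated text matters, here is a minimal sketch of the kind of naive duplicate-content filter such messages sidestep; the threshold and normalization are assumptions, not any vendor's actual rules:
```python
# Minimal sketch of a naive duplicate-content spam filter: block a message only
# once its exact (normalized) text has been seen too many times.
import hashlib

class DuplicateContentFilter:
    def __init__(self, threshold: int = 5):
        self.counts: dict[str, int] = {}
        self.threshold = threshold

    def is_spam(self, message: str) -> bool:
        # Hash a whitespace/case-normalized copy so trivial edits don't evade it.
        key = hashlib.sha256(" ".join(message.lower().split()).encode()).hexdigest()
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] > self.threshold

# A fixed template trips the filter once repeated past the threshold; text an
# LLM rewrites for every target hashes differently each time, so it never fires.
```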
Xanthorox AI: A New Breed of Malicious AI Threat Hits the Darknet
Xanthorox is not based on existing AI platforms like GPT. Instead, it uses five separate AI models, and everything runs on private servers controlled by the creators. That means no cloud APIs, no public infrastructure, and few ways for defenders to track or shut it down. This “local-first” design helps it stay hidden and makes takedowns difficult.
Lovable AI Found Most Vulnerable to VibeScamming — Enabling Anyone to Build Live Scam Pages
Lovable, a generative artificial intelligence (AI) powered platform that allows for creating full-stack web applications using text-based prompts, has been found to be the most susceptible to jailbreak attacks, allowing novice and aspiring cybercrooks to set up lookalike credential harvesting pages.
The technique has been codenamed VibeScamming – a play on the term vibe coding, which refers to an AI-dependent programming technique to produce software by describing the problem statement in a few sentences as a prompt to a large language model (LLM) tuned for coding.
Leak exposes Black Basta’s influence tactics
A leak of 190,000 chat messages traded among members of the Black Basta ransomware group shows that it’s a highly structured and mostly efficient organization staffed by personnel with expertise in various specialties, including exploit development, infrastructure optimization, social engineering, and more. “The girl should be calling men,” one Black Basta manager instructed in a chat message. “The guy should be calling women.” The reasoning behind the decision was to exploit trust biases Black Basta believed the targeted workers had. The manager went on to say employees had screened 500 prospective callers for the task. “In the end only 2-3 were competent, and we have a few others as backup. One girl is really good at calling, every fifth call converts into remote access :).”
In one instance, a member pasted an advertisement into a chat for a purported zero-day allowing remote code execution in Juniper firewalls with no authentication necessary. The member wrote: The seller “wants 200k for it, but I’ll negotiate,” likely meaning $200,000. A peer replied, “Well, 200k is a fair price for a 0day.” The other member responded, “yep.”
APPSEC, DEVSECOPS, DEV
Explosive Growth of Non-Human Identities Creating Massive Security Blind Spots
GitGuardian's State of Secrets Sprawl report for 2025 reveals the alarming scale of secrets exposure in modern software environments. 23.77 million new secrets were leaked on GitHub in 2024 alone. This is a 25% surge from the previous year. This dramatic increase highlights how the proliferation of non-human identities (NHIs), such as service accounts, microservices, and AI agents, is rapidly expanding the attack surface for threat actors.
NHI secrets, including API keys, service accounts, and Kubernetes workers, now outnumber human identities by at least 45-to-1 in DevOps environments.
Generic secrets represent 74.4% of all leaks in private repositories versus 58% in public ones
Generic passwords account for 24% of all generic secrets in private repositories compared to only 9% in public repositories
Enterprise credentials like AWS IAM keys appear in 8% of private repositories but only 1.5% of public ones
99% of GitLab API keys had either full access (58%) or read-only access (41%)
96% of GitHub tokens had write access, with 95% offering full repository access
This pattern suggests that developers are more cautious with public code but often cut corners in environments they believe are protected.
Repositories with Copilot enabled were found to have a 40% higher incidence rate of secret leaks compared to repositories without AI assistance. While accelerating code production, AI-powered development may be encouraging developers to prioritize speed over security, embedding credentials in ways that traditional development practices might avoid.
The report found that collaboration platforms like Slack, Jira, and Confluence have become significant vectors for credential exposure. Secrets found in these platforms tend to be more critical than those in source code repositories, with 38% of incidents classified as highly critical or urgent compared to 31% in source code management systems.
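For a sense of how such scanning separates "specific" detectors from the "generic" catch-alls the report says dominate private repositories, here is a minimal sketch; the regexes follow well-known credential formats (AWS access key IDs, GitHub personal access tokens) but are illustrative, not GitGuardian's actual detectors:
```python
# Minimal sketch of pattern-based secrets scanning: named detectors for
# well-known credential formats plus a generic hard-coded-secret catch-all.
import re

DETECTORS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat":        re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    # Generic: any hard-coded password/secret/token assignment.
    "generic_secret":    re.compile(
        r"(?i)(password|passwd|secret|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
}

def scan(text: str) -> list[tuple[str, str]]:
    return [(name, match.group(0))
            for name, rx in DETECTORS.items()
            for match in rx.finditer(text)]

print(scan('db_password = "hunter2hunter2"'))  # -> [('generic_secret', ...)]
```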
AI & NHI
The OWASP framework explicitly recognizes that Non-Human Identities play a key role in agentic AI security. AI agents don't operate in isolation. To function, they need access to data, systems, and resources. This highly privileged, often overlooked access happens through non-human identities: API keys, service accounts, OAuth tokens, and other machine credentials.
How Democratized Development Creates a Security Nightmare
While traditional development requires knowledge of secure coding principles, input validation, and authentication mechanisms, no-code platforms encourage rapid deployment with minimal oversight. The result? Applications are created without understanding fundamental security risks, such as:
Insecure authentication and authorization
Poor input validation
Exposed APIs and data leaks
Shadow IT proliferation
OWASP Open SAMMY
Open SAMMY is an open-source Application Security Management tool designed to help organizations systematically assess, plan, and improve their security posture. Open SAMMY provides a structured way to manage OWASP SAMM (Software Assurance Maturity Model) assessments and improvement roadmaps.
OWASP Resources:
OWASP Software Assurance Maturity Model (SAMM)
OWASP DevSecOps Maturity Model (DSOMM)
Slopsquatting: AI can't stop making up software dependencies and sabotaging everything
Exploiting hallucinated package names represents a form of typosquatting, where variations or misspellings of common terms are used to dupe people. Seth Michael Larson, security developer-in-residence at the Python Software Foundation, has dubbed it "slopsquatting" – "slop" being a common pejorative for AI model output.
With AI tools becoming the default assistant for many, 'vibe coding' is happening constantly. Developers prompt the AI, copy the suggestion, and move on. Or worse, the AI agent just goes ahead and installs the recommended packages itself.
Researchers found that about 5.2% of package suggestions from commercial models didn't exist, compared to 21.7% from open source models. Running that code should result in an error when importing a non-existent package. But miscreants have realized that they can hijack the hallucination for their own benefit. All that's required is to create a malicious software package under a hallucinated package name and then upload the bad package to a package registry or index like PyPI or npm for distribution. Thereafter, when an AI code assistant re-hallucinates the co-opted name, the process of installing dependencies and executing the code will run the malware.
Re-running the same hallucination-triggering prompt ten times resulted in 43% of hallucinated packages being repeated every time and 39% never reappearing.
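One cheap guardrail before installing an AI-suggested dependency is to ask the package index whether the name exists at all and how old it is. A minimal sketch against PyPI's public JSON API; a heuristic only, and the second package name in the loop is deliberately fake:
```python
# Minimal sketch of a pre-install sanity check: a hallucinated name either
# 404s on PyPI or, once squatted, shows up as a brand-new upload.
import json, urllib.error, urllib.request

def check_package(name: str) -> str:
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return f"{name}: not on PyPI -- likely hallucinated"
        raise
    uploads = [f["upload_time"] for files in data["releases"].values() for f in files]
    first = min(uploads) if uploads else "never (no files uploaded)"
    return f"{name}: exists, first upload {first}"

for pkg in ("requests", "surely-not-a-real-package-xyz"):  # second name is fake
    print(check_package(pkg))
```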
Microsoft: Analyzing open-source bootloaders: Finding vulnerabilities faster with AI
Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.
Microsoft said that, during its initial research, using Security Copilot "saved our team approximately a week's worth of time that would have otherwise been spent manually reviewing the content.
Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings ..."
VENDORS & PLATFORMS
Meta accused of Llama 4 bait-and-switch to juice AI benchmark rank
Meta submitted a specially crafted, non-public variant of its Llama 4 AI model to an online benchmark that may have unfairly boosted its leaderboard position over rivals. The LLM was uploaded to LMArena, a popular site that pits models against each other. It's admittedly more a popularity contest than a benchmark, as you can select two submitted models to compete head-to-head, give the pair an input prompt to each answer, and vote on the best output. Thousands of these votes are collected and used to draw up a leaderboard of crowd-sourced LLM performance.
Meta provided a version of Llama 4 that is not publicly available and was seemingly specifically designed to charm those human voters, potentially giving it an edge in the rankings over publicly available competitors. And we wonder where artificially intelligent systems get their Machiavellian streak from.
Apps-from-prompts Firebase Studio is a great example – of why AI can't replace devs
It depends on how you define "agentic." If you mean something where you tell it the end product and it goes off and does that for you and then pings you back to check the finished product, then no, it isn't agentic at all.
If you mean something that can search through your files and change some lines of code, or suggest terminal commands that you then have to approve, then yes, it is agentic, but then it's no different from Cursor, which has been doing this for over a year.
So I would say it's not a lie, but it's clearly overhyped, which is the modus operandi of AI companies, sadly.
Atlassian makes its Rovo AI free, for now, to reduce 'friction' holding you back from agentic nirvana
An agent could read a Jira ticket, analyze the spec it contains, then write a proposal for how to make the requested changes. That proposal is sent to a human for approval. Another agent can be made aware of a company's brand guidelines and automatically apply them to documents.
Despite Rovo gaining its new Studio app and more connectors, Atlassian is dropping its $20/user/month cost to $0. "We can monetize it with consumption-based pricing over time.”
[rG: Classic “Drugs for Kids” strategy: give product for free to get customers using it to the point of where it becomes part of their operational process critical paths, then announce “end of free trial”.]
Google's got a hot cloud infosec startup, a new unified platform — and its eye on Microsoft's $20B+ security biz
Google Unified Security (GUS) is said to combine the search giant's existing threat intelligence, security operations, and cloud security services, plus Chrome Enterprise. Because this is 2025 it also adds agentic AI.
The ads giant’s infosec agents include one called Google Security Operations that we’re told can triage security alerts by analyzing the context of each incident and then give the humans in charge advice about which ones merit a response. Another agent uses AI to analyze malware and determine the extent of the threat it poses.
Buying Wiz and Mandiant means Google is an infosec player, but trails rivals. "Microsoft is already the world's largest security vendor," Gartner Research VP Neil MacDonald told The Register. "Google as a company, looks at that and goes: Why can't we also be a large enterprise security vendor?" Redmond has previously said its annual security business revenue exceeded $20 billion. Google doesn't publicly disclose its security sales, and Gartner’s MacDonald told us it’s “a fraction” of Microsoft’s market-leading revenue haul.
That groan you hear is users’ reaction to Recall going back into Windows
Snapshotting and AI processing a screen every 3 seconds. What could possibly go wrong?
First, even if User A never opts in to Recall, they have no control over the setting on the machines of Users B through Z. That means anything User A sends them will be screenshotted, processed with optical character recognition and Copilot AI, and then stored in an indexed database on the other users’ devices. That would indiscriminately hoover up all kinds of User A's sensitive material, including photos, passwords, medical conditions, and encrypted videos and messages.
The presence of an easily searchable database capturing a machine’s every waking moment would also be a bonanza for others who don’t have users’ best interests at heart. That level of detailed archival material will undoubtedly be subject to subpoena by lawyers and governments. Threat actors who manage to get their spyware installed on a device will no longer have to scour it for the most sensitive data stored there. Instead they will mine Recall just as they do browser databases storing passwords now.
'Copilot will remember key details about you' for a 'catered to you' experience
Other updates include Shopping, to track down the best deals "through our real-time catalog of trusted merchants," and Actions, which means Copilot can complete tasks for the user. Suleyman gave the example of "scoring the gig tickets to sorting the ride home." One could compare Actions to OpenAI's Operator functionality.
AmigaOS updated in 2025 for some reason
The schedule is irregular, but you can't fault Hyperion's commitment to updating its version of the OS, given that the previous release, AmigaOS 3.1, was in 1994 (1993 for the CD32). It brought version 3.1 back to life in 2016, and that was followed by version 3.2 in May 2021. Since then, the company published version 3.2.1 in December 2021, and then version 3.2.2 in March 2023.
LEGAL & REGULATORY
An AI avatar tried to argue a case before a New York court. The judges weren't having it
On the video screen appeared a smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater. “May it please the court,” the man began. "I come here today a humble pro se before a panel of five distinguished justices.”
“Ok, hold on,” the judge asked. “Is that counsel for the case?”
“I generated that. That’s not a real person,” the plaintiff replied. It was an AI avatar.
The judge was displeased.
“It would have been nice to know that when you made your application. You did not tell me that sir,” the judge said before yelling across the room for the video to be shut off.
Dewald said he hadn't intended any harm. He didn't have a lawyer representing him in the lawsuit, so he had to present his legal arguments himself. And he felt the avatar would be able to deliver the presentation without his own usual mumbling, stumbling and tripping over words.
Dewald said he applied to the court for permission to play a prerecorded video, then used a product created by a San Francisco tech company to create the avatar. Originally, he tried to generate a digital replica that looked like him, but he was unable to accomplish that before the hearing.
Arizona Supreme Court taps AI avatars to make the judicial system more publicly accessible
Producing a video can usually take hours, but an AI-generated video is ready in about 30 minutes.