Robert Grupe's AppSecNewsBits 2025-07-19

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Poor Passwords Tattle on AI Hiring Bot Maker Paradox.ai
Personal information of millions of people who applied for jobs at McDonald’s was exposed after they guessed the password (“123456”) for the fast food chain’s account at Paradox[.]ai, a company that makes artificial intelligence based hiring chatbots used by many Fortune 500 firms. Paradox[.]ai said the security oversight was an isolated incident that did not affect its other customers However, a review of stolen password data gathered by multiple breach-tracking services shows that a Paradox[.]ai administrator in Vietnam suffered a malware compromise on their device that stole usernames and passwords for a variety of internal and third-party online services. The malware exposed hundreds of mostly poor and recycled passwords (using the same base password but slightly different characters at the end). Those purloined credentials show the developer in question at one point used the same u-digit password to log in to Paradox[.]ai accounts for a number of Fortune 500 firms listed as customers on the company’s website, including Aramark, Lockheed Martin, Lowes, and Pepsi. Password-cracking systems can work out a 7 number password instantly. Paradox maintains that few of the exposed passwords were still valid, and that a majority of them were present on the employee’s personal device only because he had migrated the contents of a password manager from an old computer. Also they have been requiring single sign-on (SSO) authentication since 2020 that enforces multi-factor authentication for its partners. Still, a review of the exposed passwords shows they included the administrator’s credentials to the company’s SSO platform — hxxps://paradoxai[.]okta[.]com. The password for that account ended in 202506 — possibly a reference to the month of June 2025 — and the digital cookie left behind after a successful Okta login with those credentials says it was valid until December 2025. Also exposed were the administrator’s credentials and authentication cookies for an account at Atlassian, a platform made for software development and project management. The expiration date for that authentication token likewise was December 2025. The stolen credential data includes Web browser logs that indicate the victims repeatedly downloaded pirated movies and television shows, which are often bundled with malware disguised as a video codec needed to view the pirated content.
[rG: Having strong policies is ineffective if implementation is kaput and unmonitored.]

 

DOGE Denizen Marko Elez Leaked API Key for xAI
Mr. Elez committed a code script to GitHub called “agent[.]py” that included a private application programming interface (API) key for xAI. The exposed API key allowed access to at least 52 different LLMs used by xAI.
The code repository containing the private xAI key was removed shortly after GitGuadian notified Elez via email. However, The exposed API key still works and has not yet been revoked. Elez is not the first DOGE worker to publish internal API keys for xAI: Another DOGE employee leaked a private xAI key on GitHub for two months, exposing LLMs that were custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X. “If a developer can’t keep an API key private, it raises questions about how they’re handling far more sensitive government information behind closed doors.”
[rG: Bad security coding practice to have secrets in code, but also having code untrusted 3rd-party accessible repositories. Code exposed to AI agents also vulnerable to leaks through attacks such as last week’s “I give up.”]

 

Security vulnerability on U.S. trains that let anyone activate the brakes on the rear car was known for 13 years
A software-defined radio can derail a US train by slamming the brakes on remotely.
All American trains were equipped with an End-of-Train (EoT) module attached to the last carriage, which reports telemetry data to the front of the train wirelessly. When it was first implemented in the late 1980s, it was illegal for anyone else to use the frequencies allocated for this system. So, the system only used the BCH checksum for packet creation. Unfortunately, anyone with an SDR could mimic these packets, allowing them to send false signals to the EoT module and its corresponding Head-of-Train (HoT) partner. However, the HoT can issue a brake command to the EoT through this system. Thus, anyone with the hardware (less than $500) and know-how can easily issue a brake command without the train driver’s knowledge. Because the AAR continued to ignore the warnings, the CISA published an advisory to warn the public about it. This got the AAR to announce an update. However, implementation in 2027 being the target as the earliest year of deployment.

 

Watch out, another max-severity, make-me-root Cisco bug on the loose
Cisco has issued a patch for a critical 10 out of 10 severity bug in its Identity Services Engine (ISE) and ISE Passive Identity Connector (ISE-PIC) that could allow an unauthenticated, remote attacker to run arbitrary code on the operating system with root-level privileges. Earlier this month, Cisco scored another perfect 10 for a different vulnerability, this one in its Unified Communications Manager and Session Management Edition products. The Engineering-Special (ES) builds of both have hardcoded credentials baked in, and would allow an unauthenticated, remote attacker root access.

 

Cloudflare says 1.1.1.1 outage not caused by attack or BGP hijack
On July 14 at 21:48 UTC, a new update added a test location to the inactive DLS service, refreshing the network configuration globally and applying the misconfiguration. This withdrew 1.1.1.1 Resolver prefixes from Cloudflare’s production data centers and routed them to a single offline location, making the service globally unreachable. Less than four minutes later, DNS traffic to the 1.1.1.1 Resolver began to drop. By 22:01 UTC, Cloudflare detected the incident and disclosed it to the public. The misconfiguration was reverted at 22:20 UTC, and Cloudflare began re-advertising the withdrawn BGP prefixes. Finally, full service restoration at all locations was achieved at 22:54 UTC. The misconfiguration could have been rejected if Cloudflare had used a system that performed progressive rollout, the internet giant admits, blaming the use of legacy systems for this failure. For this reason, it plans to deprecate legacy systems and accelerate migration to newer configuration systems that utilize abstract service topologies instead of static IP bindings, allowing for gradual deployment, health monitoring at each stage, and quick rollbacks in the event that issues arise. Cloudflare also points out that the misconfiguration had passed peer review and wasn’t caught due to insufficient internal documentation of service topologies and routing behavior, an area that the company also plans to improve.

 

Chinese hackers breached National Guard to steal network configurations
The Chinese state-sponsored hacking group known as Salt Typhoon breached and remained undetected in a U.S. Army National Guard network for nine months in 2024. During this time, the hackers stole network diagrams, configuration files, administrator credentials, and personal information of service members that could be used to breach National Guard and government networks in other states. Network configuration files contain the settings, security profiles, and credentials configured on networking devices, such as routers, firewalls, and VPN gateways. This information is valuable to an attacker, as it can be used to identify paths to and credentials for other sensitive networks that are typically not accessible via the Internet.

 

What’s Weak This Week:

  • CVE-2025-25257 Fortinet FortiWeb SQL Injection Vulnerability may allow an unauthenticated attacker to execute unauthorized SQL code or commands via crafted HTTP or HTTPs requests. Related CWE: CWE-89

  • CVE-2025-47812 Wing FTP Server Improper Neutralization of Null Byte or NUL Character Vulnerability can allow injection of arbitrary Lua code into user session files. This can be used to execute arbitrary system commands with the privileges of the FTP service (root or SYSTEM by default). Related CWE: CWE-158 

 

HACKING

US Data Breaches Head for Another Record Year After 11% Surge
The number of publicly reported data compromises increased around 11% annually to reach 1732 for the first half of 2025, putting it on track to be a record-breaking year, according to the Identity Theft Resource Center (ITRC).
165.7 million notices had been sent out in the first half of 2025 – just 12% of the figure at this point in 2024. That’s due mainly to the absence of mega breaches last year, such as those stemming from the attack on Snowflake customers.
Cyber-attacks were the main cause of data breaches in the first half of the year, accounting for 1348 incidents (78%) and over 114 million victim notices (69%).
Supply chains attacks were responsible for 79 breaches, impacting 690 entities and over 78 million downstream victims.
Interestingly, the ITRC has also recorded 34 physical attacks which led to data breach or compromise – that’s more than we saw in the entirety of 2024.
The number of data breach notices without information about the root cause of the attack jumped from 65% in H1 2024 to 69% in the first six months of 2025, a trend that has continued for the past five years.
Financial services was the worst hit sector, accounting for 387 compromises. Next came healthcare (283) professional services (221), manufacturing (158) and education (105).

 

Phishers have found a way to downgrade—not bypass—FIDO MFA
FIDO requires users to provide an additional factor of authentication in the form of a security key, which can be a passkey, or physical security key such as a smartphone or dedicated device such as a Yubikey.
One of the ways a user can provide this additional factor is by using a passkey on the device being used to log in, or a different device, such as a smartphone. In these cases, the site being logged into will display a QR code. The user then scans the QR code with the phone, and the FIDO MFA process proceeds.
Hackers found a clever sleight of hand to bypass this crucial step. As the user enters the username and password into the fake Okta site, it is relayed to the real Okta login page.
However, this only works if the administered organization’s Okta login page had deliberately been configured to allow fallback to a weaker form of MFA.
Relying solely on FIDO can be risky since, at this point in the FIDO evolution, it’s still impractical to manage and export passkeys in the same way as passwords and other forms of credentials.
End users should take pains to use only FIDO-compliant forms of authentication, although the distinction between the two in the attack described may not be easy for some.

 

GitHub abused to distribute payloads on behalf of malware-as-a-service
While organizations can block GitHub in their environment to curb the use of open-source offensive tooling and other malware, many organizations with software development teams require GitHub access in some capacity.
In these environments, a malicious GitHub download may be difficult to differentiate from regular web traffic.

 

Hackers exploit a blind spot by hiding malware inside DNS records
Domain name system (DNS) records that map domain names to their corresponding numerical IP addresses.
A strain of malware was converted from binary format into hexadecimal, which was then broken up into hundreds of chunks.
Each chunk was stashed inside the DNS record of a different sub-domain inside the TXT record which is often used to prove ownership of a site when setting up services like Google Workspace.
An attacker who managed to get a toehold into a protected network could then retrieve each chunk using an innocuous-looking series of DNS requests, reassembling them, and then converting them back into binary format.
The technique allows the malware to be retrieved through traffic that can be hard to closely monitor.
As encrypted forms of IP lookups—known as DOH (DNS over HTTPS) and DOT (DNS over TLS)—gain adoption, the difficulty will likely grow.
DNS records have also been used to store AI chatbot prompt injection attacks.

 

Google Gemini flaw hijacks email summaries for phishing
Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links.
Such an attack leverages indirect prompt injections that are hidden inside an email and obeyed by Gemini when generating the message summary.
The process involves creating an email with an invisible directive for Gemini. An attacker can hide the malicious instruction in the body text at the end of the message using HTML and CSS that sets the font size to zero and its color to white.
If the recipient opens the email and asks Gemini to generate a summary of the email, Google's AI tool will parse the invisible directive and obey it.
[rG: This vulnerability wouldn’t be specific to Google/Gemini only, but can apply to other GenAI enabled desktop applications.]

 

RenderShock 0-Click Vulnerability Executes Payloads via Background Process Without User Interaction
A sophisticated zero-click attack methodology called RenderShock that exploits passive file preview and indexing behaviors in modern operating systems to execute malicious payloads without requiring any user interaction.
The vulnerability affects Windows Explorer Preview Pane, macOS Quick Look, email client preview systems, and file indexing services, including Windows Search Indexer and Spotlight.
Security teams should implement comprehensive defenses, including disabling preview panes, blocking outbound SMB traffic (TCP 445) to untrusted networks, and enforcing macro blocking through Group Policy.

 

Malicious VSCode extension in Cursor IDE led to $500K crypto theft
Cursor AI IDE is an AI-powered development environment based on Microsoft's Visual Studio Code. It includes support for Open VSX, an alternative to the Visual Studio Marketplace, that allows you to install VSCode-compatible extensions to expand the software's functionality.
The extension was named "Solidity Language" and was published on the Open VSX registry, claiming to be a syntax highlighting tool for working with Ethereum smart contracts.
Although the plugin impersonated the legitimate Solidity syntax highlighting extension, it actually executed a PowerShell script from a remote host at angelic[.]su to download additional malicious payloads.

 

LameHug malware uses AI LLM to craft Windows data-theft commands in real-time
The malware is written in Python and relies on the Hugging Face API to interact with the Qwen 2.5-Coder-32B-Instruct LLM, which can generate commands according to the given prompts. These AI-generated commands were used by LameHug to collect system information and save it to a text file (info.txt), recursively search for documents on key Windows directories (Documents, Desktop, Downloads), and exfiltrate the data using SFTP or HTTP POST requests.

 

Arch Linux pulls AUR packages that installed Chaos RAT malware
Each package, "librewolf-fix-bin", "firefox-patch-bin", and "zen-browser-patched-bin," all contained a source entry in the PKGBUILD file called "patches" that pointed to a GitHub repository under the attacker's control.
When the BUILDPKG is processed, this repository is cloned and treated as part of the package's patching and building process. However, instead of being a legitimate patch, the GitHub repository contained malicious code that was executed during the build or installation phase.

 

Popular npm linter packages hijacked via phishing to drop malware
Popular JavaScript libraries were hijacked this week and turned into malware droppers, in a supply chain attack achieved via targeted phishing and credential theft.
The npm package eslint-config-prettier, downloaded over 30 million times weekly, was compromised after its maintainer fell victim to a phishing attack.
Another package eslint-plugin-prettier from the same maintainer was also targeted. The attacker(s) used stolen credentials to publish multiple unauthorized versions of the packages with malicious code to infect Windows machines. 

 

APPSEC, DEVSECOPS, DEV
Password Strength Guide
Brute force attack times for length and complexity

 

OWASP’s cure for a sick AI supply chain
Artificial intelligence isn’t built from scratch. It’s assembled.
Models, plug-ins, training data, and adapters. Developers stitch together these components from all over the internet, then ship them into production.
In its official guidance, OWASP outlines not just the threat, but the solution: a practical, operational checklist for securing every layer of the AI supply chain. These aren’t vague best practices. They’re concrete, field-tested actions that help organizations verify what goes into their models, defend against tampering, and rebuild trust in the pipeline.

 

Tiobe language popularity index (based on jobs and training)
Ada and Dephi/Object Pascal now #9 and 10.
PYPL PopularitY of Programming Language (based on Google searches)
GitHut 2.0 (based on GitHub pull requests)

 

Exhausted man defeats AI model in world coding championship
Programmer Przemysław Dębiak (known as "Psyho"), a former OpenAI employee, narrowly defeated the custom AI model in the AtCoder World Tour Finals 2025 Heuristic contest in Tokyo. The competition required contestants to solve a single complex optimization problem over 10 hours.
The final contest results showed Psyho finishing with a score of 1,812,272,558,909 points, while OpenAI's model (listed as "OpenAIAHC") scored 1,654,675,725,406 points— a margin of roughly 9.5%. 

 

VENDORS & PLATFORMS
AI bubble is worse than the dot-com crash that erased trillions, economist warns — overvaluations could lead to catastrophic consequences
The only real difference between AI businesses today and the dotcom companies of the late 90s and early 2000s is that AI businesses are even more overvalued. The major AI firms, that's Apple, Microsoft, OpenAI, Meta, Google/Alphabet, Amazon, and a range of other companies, have seen huge upticks in their valuations and stock prices in recent years, driven by investments in AI ventures.
This is completely out of whack with the earnings potential of these companies, and the majority of the gains made to the stock market during this AI boom have been because of the overperformance of these top stocks.

 

 

Study finds AI tools made open source software developers 19 percent slower
Before performing the study, the developers in question expected the AI tools would lead to a 24% reduction in the time needed for their assigned tasks.
Even after completing those tasks, the developers believed that the AI tools had made them 20% faster, on average.
In reality, though, the AI-aided tasks ended up being completed 19% slower than those completed without AI tools.

 

Quantum code breaking? You'd get further with an 8-bit computer, an abacus, and a dog
“If large-scale quantum computers are ever built, they will be able to break many of the public-key cryptosystems currently in use," NIST explains in its summary of Post-Quantum Cryptography (PQC).
Peter Gutmann, a professor of computer science at the University of Auckland New Zealand, thinks PQC "nonsense" for our American readers – and said as much in a 2024 presentation, "Why Quantum Cryptanalysis is Bollocks."
Gutmann's argument is simple: to this day, quantum computers – which he regards as "physics experiments" rather than pending products – haven't managed to factor any number greater than 21 without cheating.

 

Nvidia chips become the first GPUs to fall to Rowhammer bit-flip attacks
The researchers’ proof-of-concept exploit was able to tamper with deep neural network models used in machine learning for things like autonomous driving, healthcare applications, and medical imaging for analyzing MRI scans. GPUHammer flips a single bit in the exponent of a model weight—for example in y, where a floating point is represented as x times 2y.
The single bit flip can increase the exponent value by 16. The result is an altering of the model weight by a whopping 216, degrading model accuracy from 80% to 0.1%.

 

New Windows 11 build adds self-healing “quick machine recovery” feature
Quick machine recovery (QMR) enables Windows 11 PCs to boot into the Windows Recovery Environment (WinRE, also used by Windows install media and IT shops for various recovery and diagnostic purposes), connect to the Internet, and download Microsoft-provided fixes for "widespread boot issues" that could be keeping the PC from booting properly.

 

New emoji coming in Unicode 17.0.
Besides "Hairy Creature," (Bigfoot) here's some of the other new emoji getting added with Unicode 17.0: Trombone, Treasure Chest, Distorted Face, Fight Cloud, Apple Core, Orca, Ballet Dancers

 

Kubernetes 2.0 Might Kill YAML — Here’s the Private Beta That Changed Everything (2025)
Someone posted a blurry dashboard photo on r/kubernetes and deleted it 17 minutes later. But it was too late — DevOps Twitter had already grabbed it, zoomed in, and noticed something strange: No Helm charts, No YAML anywhere, Just one clean command.

 

LEGAL & REGULATORY

  • 39% expanded their disclosure of risks related to criminals or nefarious folk potentially using AI for threats such as digital impersonation, the creation and spread of disinformation, and to generate malicious code.

  • 11% have explicitly cautioned that they may never recoup their spending on AI, or actually realize the expected benefits. Quantifying tangible gains remains difficult at this stage, to the extent that continued investment at current levels may be unsustainable.

  • 19% expanded their mentions of data privacy and intellectual property risks associated with the use of AI technologies, although these risks are said to be "particularly acute" for companies relying on third-party AI vendors.

    • For example, GE Healthcare warns that it may have limited rights to access the intellectual property underpinning the generative AI model, which could impair its ability to "independently verify the explainability, transparency, and reliability" of the model itself.

    • Many companies are concerned about operational dependencies that could disrupt their operations if an outage should occur with their AI provider, while legal entanglements involving AI vendors are seen as a potential risk, alongside cybersecurity. In the latter case, the concentration of AI capabilities into the hands of a few providers is seen as a growing threat, as those entities become attractive targets for attackers.

 

As companies race to add AI, terms of service changes are going to freak a lot of people out
WeTransfer this week denied claims it uses files uploaded to its ubiquitous cloud storage service to train AI, and rolled back changes it had introduced to its Terms of Service after they deeply upset users.
“You hereby grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use your Content for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process.”
Coming in for special ire was the phrase: "You will not be entitled to compensation for any use of Content by us under these Terms."