Robert Grupe's AppSecNewsBits 2025-04-04
Highlights This Week: Oracle, Europcar, Samsung, Coinbase, AU Pension Funds attacks, Fast Flux, AI passes Turing Test?, and more ...
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Oracle privately confirms Cloud breach to customers
Oracle has finally acknowledged to some customers that attackers have stolen old client credentials after breaching a "legacy environment" last used in 2017. However, while Oracle told clients this is old legacy data that is not sensitive, the threat actor behind the attack has shared data from the end of 2024 and posted newer records from 2025 on a hacking forum.
Oracle faces Texas-sized lawsuit over alleged cloud snafu and radio silence
One of the primary claims made by the plaintiffs, among many others, is that Oracle violated Texas state data breach notification laws by not informing the alleged victims of a breach within 60 days of becoming aware of it.
The case specifically refers to an alleged breach of Oracle Cloud, and it also alludes to health information being affected in the alleged Oracle Health breach, although this doesn't appear to be the main focus of the case. Oracle's alleged security failings were blamed for the loss of personally identifiable information (PII) and a "wide variety" of personal health data, and Oracle's silence on the matter exacerbates these claims.
Lawyers cited various articles highlighting data breach victims' negative financial experiences, arguing that Toikach anticipates "spending considerable time and money on an ongoing basis to try to mitigate and address harms caused by the data breach. As a result of the data breach, plaintiff is at a present risk and will continue to be at increased risk of identity theft and fraud for years to come."
Hackers strike Australia's largest pension funds in coordinated attacks
AustralianSuper: Hackers accessed accounts and stole passwords from 600 members, draining a combined A$500,000.
Australian Retirement Trust: Detected unusual activity affecting hundreds of accounts but reported no financial losses.
Rest Super: 20,000 accounts were impacted, leading to the shutdown of its Member Access portal.
Hostplus and Insignia Financial also confirmed breaches but no financial impact was reported.
Australian Pension Funds Hacked – Members to LOSE Money from Their Accounts
The cyberattacks on Australian pension funds exploited several security vulnerabilities:
Credential Stuffing:
Attackers used stolen credentials from previous data breaches to gain unauthorized access to member accounts. This method relies on users reusing passwords across multiple platforms.
OAuth Token Manipulation:
Hackers exploited vulnerabilities in authentication frameworks, specifically targeting API weaknesses in member portals.
SQL Injection:
Some attacks utilized SQL injection techniques to exploit database vulnerabilities, bypassing standard web application firewall protections (a defensive sketch follows this list).
Session Hijacking:
The attackers executed their campaigns during early morning hours to avoid detection, using sophisticated tools to hijack user sessions.
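Since SQL injection is called out by name above, here is a minimal Python sketch (the schema and function names are illustrative, not from any affected fund) of the application-layer defense, parameterized queries, which neutralizes injection even when a WAF misses the payload:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (id INTEGER, email TEXT, balance REAL)")
conn.execute("INSERT INTO members VALUES (1, 'a@example.com', 1000.0)")

def find_member_unsafe(email: str):
    # VULNERABLE: user input is concatenated into the SQL string, so an
    # input like "' OR '1'='1" returns every member's row.
    query = f"SELECT id, balance FROM members WHERE email = '{email}'"
    return conn.execute(query).fetchall()

def find_member_safe(email: str):
    # SAFE: the placeholder binds the input as data, never as SQL,
    # no matter what a web application firewall catches or misses.
    return conn.execute(
        "SELECT id, balance FROM members WHERE email = ?", (email,)
    ).fetchall()

print(find_member_unsafe("' OR '1'='1"))  # [(1, 1000.0)] - injected
print(find_member_safe("' OR '1'='1"))    # [] - input treated as a literal
```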
Europcar GitLab breach exposes data of up to 200,000 customers
The actor tried to extort the company by threatening to publish 37GB of data that includes backups and details about the company’s cloud infrastructure and internal applications.
They claimed to have copied from the repositories more than 9,000 SQL files with backups that have personal data, and at least 269 .ENV files - used to store configuration settings for applications, environment variables, and sensitive information.
To prove that the breach is not a hoax, the threat actor published screenshots of credentials present in the source code they stole.
Last year, Europcar was the target of a fake breach, when someone claimed on a hacker forum to possess the personal info (names, addresses, birth dates, driver's license numbers) of nearly 50 million customers.
In 2022, a researcher discovered an admin token in the code of Europcar’s apps for mobile devices (Android and iOS), which could be used to access customers’ biometric details. The issue was due to a development error and affected multiple mobile applications from other service providers.
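On the committed .ENV files above: a minimal sketch of the kind of secret scan that would have flagged them before exposure. The regexes are illustrative only; production scanners such as gitleaks or GitHub secret scanning ship far larger rule sets.

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners maintain hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_secret": re.compile(r"(?i)(password|secret|api[_-]?key|token)\s*=\s*\S+"),
}

def scan_env_files(root: str) -> None:
    """Walk a repository tree and flag likely secrets in .env files."""
    for path in Path(root).rglob("*.env"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {name}")

scan_env_files(".")
```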
Hacker Leaks 270,000 Samsung Customer Records—Stolen Credentials Were Left Unchecked for Years
The hack was made possible by a set of credentials stolen in 2021 by the “Raccoon Infostealer” malware, which harvested them after infecting an employee of Spectos GmbH, a company that works with Samsung to monitor service quality.
Although Hudson Rock flagged the credentials years ago, Samsung reportedly failed to rotate or secure them, allowing the hacker to access the system years later, in 2025, and release the data. When companies fail to monitor or rotate credentials, it’s game over.
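A minimal sketch of the missing control, an automated staleness check against a credential inventory (the names and the 90-day policy are assumptions; in practice the entries would come from a secrets manager):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # assumed rotation policy

# Hypothetical inventory; real entries would come from a secrets manager.
credentials = [
    {"name": "vendor_portal_login", "last_rotated": datetime(2021, 3, 1, tzinfo=timezone.utc)},
    {"name": "build_deploy_token", "last_rotated": datetime(2025, 1, 10, tzinfo=timezone.utc)},
]

def stale_credentials(inventory):
    """Return the names of credentials overdue for rotation."""
    now = datetime.now(timezone.utc)
    return [c["name"] for c in inventory if now - c["last_rotated"] > MAX_AGE]

print(stale_credentials(credentials))  # flags anything older than the policy allows
```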
Cybersecurity experts warn that this data could be weaponized in several dangerous ways, including:
Hyper-targeted phishing scams: With names, emails, and order details, hackers can send highly convincing fake emails pretending to be Samsung customer support.
Warranty fraud: Criminals can use leaked order numbers to file fake warranty claims for product replacements.
Identity theft and account takeover: By impersonating customers using leaked support tickets, hackers can gain unauthorized access to accounts.
Physical theft (Porch piracy): Attackers could track high-value orders using leaked tracking numbers and intercept deliveries.
Kink and LGBT dating apps exposed 1.5m private user images online
Researchers have discovered nearly 1.5 million pictures from specialist dating apps – many of which are explicit – being stored online without password protection, leaving them vulnerable to hackers and extortionists. These services are used by an estimated 800,000 to 900,000 people.
M.A.D Mobile was first warned about the security flaw on 20 January but didn't take action until the BBC emailed on Friday. They have since fixed it but not said how it happened or why they failed to protect the sensitive images.
Generative AI app goes dark after child-like deepfakes found in open S3 bucket
Jeremiah Fowler uncovered a publicly accessible Amazon S3 bucket containing 93,485 AI-generated sexually explicit images, including child-like deepfakes and unauthorized face-swapped images of celebrities. The platform allowed users to generate "nudified" images, including those prohibited by its own guidelines, such as content involving children.
After Fowler reported the exposed database, AI-NOMIS and GenNomis promptly took down the S3 bucket and websites, but offered no explanation or acknowledgment.
This discovery illustrates how this technology could potentially be abused by users, and how developers must do more to protect themselves and others.
Coinbase to fix 2FA account activity entry freaking out users
Over the past couple of weeks, numerous people have voiced concern that Coinbase has a serious security issue. After receiving Coinbase phishing emails or texts, they logged into their accounts and checked the activity log, finding numerous entries stating "second_factor_failure" or "2-step verification failed" with login attempts from unusual locations.
However, it turns out that the "second_factor_failure" or "2-step verification failed" account activity messages are shown in two different scenarios: when a user enters the wrong 2FA code, or when someone tries to log into their account with the wrong password.
Coinbase has stated that it is looking into changing the error message shown when an incorrect password is entered, but there is no time frame for when that will happen.
Unfortunately, threat actors use these erroneous error messages as part of social engineering attacks that attempt to breach Coinbase accounts by making targets think their credentials are compromised.
SpotBugs Access Token Theft Identified as Root Cause of GitHub Supply Chain Attack
The cascading supply chain attack that initially targeted Coinbase before becoming more widespread to single out users of the "tj-actions/changed-files" GitHub Action has been traced further back to the theft of a personal access token (PAT) related to SpotBugs.
The attackers obtained initial access by taking advantage of the GitHub Actions workflow of SpotBugs, a popular open-source tool for static analysis of bugs in code. This enabled the attackers to move laterally between SpotBugs repositories until they obtained access to reviewdog.
There is evidence to suggest that the malicious activity began as far back as late November 2024, although the attack against Coinbase did not take place until March 2025.
What’s Weak This Week:
CVE-2025-22457 Ivanti Connect Secure, Policy Secure and ZTA Gateways Stack-Based Buffer Overflow Vulnerability:
Allows a remote unauthenticated attacker to achieve remote code execution. Related CWE: CWE-121
CVE-2025-24813 Apache Tomcat Path Equivalence Vulnerability:
Allows a remote attacker to execute code, disclose information, or inject malicious content via a partial PUT request. Related CWEs: CWE-44 | CWE-502
CVE-2024-20439 Cisco Smart Licensing Utility Static Credential Vulnerability:
Allows an unauthenticated, remote attacker to log in to an affected system and gain administrative credentials. Related CWE: CWE-912
HACKING
CISA Advisory: Fast Flux: A National Security Threat
Fast flux works by cycling through a range of IP addresses and domain names that botnets use to connect to the Internet. In some cases, IPs and domain names change every day or two; in other cases, they change almost hourly. The constant flux complicates the task of isolating the true origin of the infrastructure. It also provides redundancy: by the time defenders block one address or domain, new ones have already been assigned.
Detection and Mitigation Strategies:
DNS analysis and network monitoring (see the detection sketch after this list).
Blocking malicious domains and IPs.
Sharing threat intelligence across organizations.
Raising phishing awareness through employee training.
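As one concrete form the DNS-analysis item above can take, here is a sketch that flags fast-flux candidates from a passive-DNS log. The log format and thresholds are assumptions, and short TTLs with many IPs also occur legitimately (e.g., CDNs), so any hit needs corroboration:

```python
from collections import defaultdict

# Hypothetical passive-DNS records: (domain, resolved_ip, ttl_seconds)
dns_log = [
    ("suspect-domain.example", "203.0.113.10", 120),
    ("suspect-domain.example", "198.51.100.7", 120),
    ("suspect-domain.example", "192.0.2.55", 120),
    ("static-site.example", "198.51.100.20", 86400),
]

def fast_flux_candidates(log, min_ips=3, max_ttl=300):
    """Flag domains resolving to many IPs with very short TTLs,
    a common (though not conclusive) fast-flux signature."""
    ips = defaultdict(set)
    has_short_ttl = defaultdict(bool)
    for domain, ip, ttl in log:
        ips[domain].add(ip)
        if ttl <= max_ttl:
            has_short_ttl[domain] = True
    return [d for d in ips if len(ips[d]) >= min_ips and has_short_ttl[d]]

print(fast_flux_candidates(dns_log))  # ['suspect-domain.example']
```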
Cisco: Fine-tuned LLMs are now threat multipliers—22x more likely to go rogue
Fine-tuned LLMs, such as FraudGPT, GhostGPT, and DarkGPT, are being sold on the dark web for as little as $75/month. These LLMs are being packaged much like legitimate businesses package and sell SaaS apps, and are tailored for malicious activities like phishing, exploit generation, and credit card validation. Leasing a weaponized LLM often includes access to dashboards, APIs, regular updates and, for some, customer support.
Fine-tuning LLMs increases their vulnerability, making them 22 times more likely to produce harmful outputs compared to base models. This process weakens safety controls and opens the door to attacks like jailbreaks and data poisoning.
Attackers can manipulate open-source training datasets for as little as $60, influencing downstream LLMs and compromising AI supply chains.
These attacks exploit LLMs to reconstruct sensitive or copyrighted content, bypassing guardrails and creating compliance risks for industries like healthcare and finance.
Implications for Enterprises: LLMs are becoming a significant attack surface, requiring stronger security measures, real-time monitoring, and adversarial testing to mitigate risks.
Cyberattacks by AI agents are coming
Researchers at Palisade Research have detected two confirmed AI agent attacks on their honeypot servers. These agents originated from Hong Kong and Singapore and were tasked with probing vulnerabilities.
Experts anticipate that cybercriminals could soon use AI agents for large-scale ransomware attacks, intelligence gathering, and exploiting vulnerabilities. The timeline for widespread adoption is uncertain but could be rapid.
To counter these risks, researchers suggest real-time monitoring, agentic AI benchmarks, and the use of friendly agents to identify and fix vulnerabilities before malicious agents exploit them.
[rG: Complete SSDF/SSDL practices for all enterprise applications is becoming increasingly urgent, with the additional needs that organizations fully secure and continuously monitor their LLM/ML applications from abuse.]
Troy Hunt: How I was Phished [YouTube]
Cyber security celebrity Troy Hunt recounts a recent experience where he fell victim to a phishing attack targeting his MailChimp account. The incident received a significant public and media response.
The phishing email was sophisticated and circumvented two-factor authentication, enabling the attackers to access and export his mailing list. Troy reflects on the human and technical factors that contributed to the breach.
Security Lessons:
Emphasis on the vulnerabilities of one-time passwords (OTPs) in two-factor authentication.
Advocacy for the adoption of passkeys as a more secure alternative to traditional authentication methods.
Recommendations for improving security measures and awareness, especially to prevent phishing attacks.
APPSEC, DEVSECOPS, DEV
Multi-Perspective Issuance Corroboration (MPIC):
Certification Authorities (CAs) must validate domain control from multiple geographic locations or Internet Service Providers (ISPs) to prevent routing attacks like Border Gateway Protocol (BGP) hijacks.
This requirement enhances security by reducing the risk of fraudulently issued certificates due to single-location validation vulnerabilities.
Linting:
Automated analysis of X.509 certificates to detect errors, inconsistencies, and non-compliance with industry standards.
Linting ensures certificates are well-formatted, secure, and interoperable, reducing the risk of mis-issuance and improving overall reliability.
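To make the linting item concrete, a few illustrative checks in Python, assuming a recent pyca/cryptography release; real linters such as zlint or pkilint apply hundreds of Baseline Requirements rules:

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

def lint_certificate(pem_bytes: bytes) -> list[str]:
    """Run a handful of illustrative Baseline Requirements checks."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    findings = []
    # The BRs currently cap TLS certificate lifetime at 398 days.
    lifetime = cert.not_valid_after_utc - cert.not_valid_before_utc
    if lifetime.days > 398:
        findings.append(f"validity of {lifetime.days} days exceeds the 398-day limit")
    # RSA keys below 2048 bits are prohibited.
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey) and key.key_size < 2048:
        findings.append(f"RSA key too small: {key.key_size} bits")
    # Publicly trusted TLS certificates must carry a subjectAltName.
    try:
        cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    except x509.ExtensionNotFound:
        findings.append("missing subjectAltName extension")
    return findings
```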
CAB Latest Baseline Requirements
The CP for the Issuance and Management of Publicly-Trusted TLS Server Certificates describes a subset of the requirements that a Certification Authority must meet in order to issue Publicly Trusted TLS Server Certificates.
[rG: These requirements, as outlined by the CA/Browser Forum, are primarily directed at Certification Authorities (CAs) and organizations within the certificate issuance chain, including Root and Subordinate CAs. However, the security practices they promote can also inform developers of Internet-facing applications to ensure better compliance with secure web standards.]
AI programming copilots are worsening code security and leaking more secrets
GitHub Copilot-enabled software repositories are more likely to have exposed secrets than standard repos, with 6.4% of the sampled repositories containing API keys, passwords, or tokens at risk of theft, compared to 4.6% of all repos.
This translates to a roughly 40% higher rate of secret leakage (6.4 / 4.6 ≈ 1.4).
“The sooner everyone is comfortable treating their code-generating LLMs as they would interns or junior engineers pushing code, the better. The underlying models behind LLMs are inherently going to be just as flawed as the sum of the human corpus of code, with an extra serving of flaw sprinkled on top due to their tendency to hallucinate, tell lies, misunderstand queries, process flawed queries, etc.”
Risk Areas
Misuse:
Occurs when users intentionally direct AI systems to cause harm, such as aiding in cyberattacks. The risk arises from malicious actors exploiting the system’s capabilities.
Misalignment:
Involves AI systems knowingly acting against developer intentions, potentially causing harm. Examples include providing deceptive outputs or engaging in unintended behaviors.
Mistakes:
Happens when AI systems unintentionally cause harm due to a lack of understanding, such as mismanaging a power grid due to unawareness of maintenance needs.
Structural Risks:
Arise from interactions among multiple actors (people, organizations, or AI systems), creating dynamics that lead to harm beyond individual control.
Precautions [rG: SSDF/SSDLC best practices with AI continuous functional monitoring]
Mitigating Misuse:
Conduct dangerous capability evaluations to assess risks tied to AI functionalities.
Implement safety post-training to prevent harmful outputs and make models resistant to jailbreak attempts.
Restrict access to sensitive AI capabilities through monitoring and controlled user permissions.
Strengthen security measures to prevent model theft, including hardened access interfaces.
Addressing Misalignment:
Employ amplified oversight, where AI systems help identify flaws or issues in their reasoning.
Use robust training techniques to improve the alignment of models across diverse situations and inputs.
Adopt system-level defenses like access controls and anomaly detection to mitigate damage from potential misaligned actions.
General Strategies:
Invest in interpretability research to better understand AI reasoning and detect potential misalignments.
Incorporate uncertainty estimation to improve monitoring and training accuracy.
Leverage safer design patterns, such as requiring AI to consult users before irreversible actions and providing clear explanations for decisions.
Don’t believe reasoning models’ Chains of Thought
Models like Claude 3.7 Sonnet and DeepSeek-R1 were tested with hints. They admitted using these hints less than 20% of the time in most scenarios. When given incorrect or ethically dubious information, the models often failed to disclose this while explaining their reasoning.
Longer explanations from models were sometimes less reliable, with fabricated rationales masking misaligned behaviors.
Since early 2024, AI companies have dramatically increased automated scraping through direct crawling, APIs, and bulk downloads to feed their hungry AI models. This exponential growth in non-human traffic has imposed steep technical and financial costs, often without attribution.
VENDORS & PLATFORMS
GitHub expands security tools after 39 million secrets leaked in 2024
GitHub announced updates to its Advanced Security platform after it detected over 39 million leaked secrets in repositories during 2024, including API keys and credentials, exposing users and organizations to serious security risks.
In a new report by GitHub, the development company says the 39 million secrets were found through its secret scanning service, a security feature that detects API keys, passwords, tokens, and other secrets in repositories.
"Secret leaks remain one of the most common—and preventable—causes of security incidents.”
Gmail unveils end-to-end encrypted messages. Only thing is: It’s not true E2EE.
In true E2EE, only the sender and recipient have access to the keys, with no intermediaries. Google’s implementation doesn’t meet this standard since key management is handled centrally.
It’s not intended for individuals or users seeking absolute privacy, as the organization administering the key server can access messages. Admins with full access can snoop on the communications at any time. The new feature is of potential value to organizations that must comply with onerous regulations mandating end-to-end encryption.
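A deliberately simplified sketch of the distinction, using Python's cryptography package (the data structures are illustrative, not Google's actual key-management protocol): in true E2EE the key never leaves the endpoints, while a server-held key lets whoever runs the key service decrypt at will.

```python
from cryptography.fernet import Fernet

# True E2EE: the key is generated on the sender's device and shared only
# with the recipient out of band; no server ever sees it.
device_key = Fernet.generate_key()
ciphertext = Fernet(device_key).encrypt(b"meet at noon")

# Server-managed encryption (closer to Gmail's model): an organization-run
# key service issues and retains the key, so its administrators can
# decrypt any message encrypted under it.
org_key_service = {"user@corp.example": Fernet.generate_key()}
key = org_key_service["user@corp.example"]
ct = Fernet(key).encrypt(b"meet at noon")
admin_view = Fernet(key).decrypt(ct)  # readable by whoever controls the service
print(admin_view)  # b'meet at noon'
```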
An AI Model Has Officially Passed the Turing Test
In a new preprint study awaiting peer review, researchers report that in a three-party version of a Turing test, in which participants chat with a human and an AI at the same time and then evaluate which is which, OpenAI's GPT-4.5 model was deemed to be the human 73% of the time when it was instructed to adopt a persona. That's significantly higher than a random chance of 50%, suggesting that the Turing test has resoundingly been beaten.
Without persona prompting, GPT-4.5 achieved an overall win rate of merely 36%, significantly down from its Turing-trumping 73%.
As a baseline, GPT-4o, which powers the current version of ChatGPT and only received no-persona prompts, achieved an even less convincing 21%.
Somehow, the ancient ELIZA (developed 1964-1967) marginally surpassed OpenAI's flagship model with a 23% success rate.
MCP: The new “USB-C for AI” that’s bringing fierce rivals together
Anthropic released an open specification called Model Context Protocol (MCP) which establishes a royalty-free protocol that allows AI models to connect with outside data sources and services without requiring unique integrations for each service.
Microsoft has integrated MCP into its Azure OpenAI service, and Anthropic competitor OpenAI is on board.
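For a sense of what MCP integration looks like in practice, a minimal server sketch, assuming the official Python SDK's FastMCP interface (the tool itself is a hypothetical stub):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return the status of an order (stubbed for illustration)."""
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    # Serves the tool over stdio so any MCP-capable client can call it
    # without a bespoke integration.
    mcp.run()
```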
Google’s NotebookLM can now discover data sources for you
Previously, to create a new notebook, you had to feed the AI documents, web links, YouTube videos, or raw text. You can still do that, but you don't have to with the addition of Discover functionality. Simply click the new button and tell NotebookLM what you're interested in learning.
With new Gen-4 model, Runway claims to have finally achieved consistency in AI videos
The model purports to solve several key problems with AI video generation. Chief among those is the notion of consistent characters and objects across shots. If you've watched any short films made with AI, you've likely noticed that they're either dream-like sequences or thematically but not realistically connected images—mood pieces more than consistent narratives.
Yann LeCun, Pioneer of AI, Thinks Today's LLMs Are Nearly Obsolete
Yann LeCun, Meta's Chief AI Scientist and a pioneer in artificial intelligence, predicts that current large language models (LLMs) like OpenAI's ChatGPT and Meta's Llama will become obsolete within five years. LLMs operate reactively, producing token outputs based on statistical patterns. They lack deliberative reasoning (System 2 thinking) and cannot plan actions or solve novel problems effectively.
Copilot code review now generally available
To request a code review from Copilot, you can set up automatic reviews in a repo through repository rules. Or, you could ask Copilot to review a pull request on demand. Copilot code review is available to all paid Copilot subscribers.
GitHub Copilot is trained on all languages that appear in public repositories. For each language, the quality of suggestions you receive may depend on the volume and diversity of training data for that language. For example, JavaScript is well-represented in public repositories and is one of GitHub Copilot’s best supported languages.
Microsoft announces a revolutionary keyboard designed for vibe coding!
“We noticed that 97% of all keystrokes made by real vibe coders were just pressing Tab over and over.” The TabBoard is a masterpiece of engineering, encased in a stainless steel frame with a premium, touch-sensitive Tab key. This keyboard is designed for efficiency, style, and maximum vibes. Microsoft’s engineers have spent years perfecting the key’s tactile response, ensuring the ultimate tabbing experience while vibe coding with Copilot.
[rG: April Fools sarcasm; but then again there is that new Copilot key …]
LEGAL & REGULATORY
Phone inspections when crossing the U.S. border: What you need to know about your rights and security
U.S. citizens can decline device searches without being denied entry. However, lawful permanent residents or foreign visitors may face harsher scrutiny or even denial of entry for refusing a search.
Limit the amount of digital information carried across the border by leaving unnecessary devices at home.
Encrypt devices and turn off biometric passwords (e.g., Face ID) to strengthen security.
Delete sensitive data, including personal photos, licenses, or credit card details, and ensure deleted items are removed from "trash" folders.
Make social media accounts private and avoid traveling with devices containing critical information.
Trump’s tariffs killed his TikTok deal
TikTok was on the verge of avoiding a U.S. ban with a deal led by Oracle and ByteDance investors to create a U.S.-based entity. President Trump’s announcement of new tariffs derailed the progress, as the deal relied on approval from the Chinese government, which became less likely due to the trade war escalation.
Trump extended the deadline for TikTok’s ban by 75 days to allow more time for negotiations, but the situation remains uncertain.
And Now For Something Completely Different …
AI proves that human fingerprints are not unique, upending 100 years of law enforcement
Traditional methods rely on minutiae, which refer to branching patterns and endpoints in the ridges.
The AI instead used features related to the angles and curvatures of the swirls and loops in the center of the fingerprint.