Robert Grupe's AppSecNewsBits 2025-05-31

This week's Lame List & Highlights: LexusNexus, MathLabs, Apple Messenger, DRM, OneDrive File Picker, Browser In The Middle Attacks, AI hijinks, and more ...

rG Red Note Take Aways from this Roundup: Summary of enterprise governance GenAI application SSDLC Security Design Review checklists compliance items to be confirmed or added …

  • 3rd Party AI Model usage acceptance criteria: Full AI Model System Prompts for every version: part of license agreement contract, or available from Open-Source Software documentation.

  • SBOM inclusion of all AI models used.

  • Review of all the System Prompts used by each Model version to validate enterprise policy and standards compliance

  • Ensure all GenAI content (text, audio, video) incorporates meta tagging (“watermarking”): Creator source copyright notice, AI Models used with version identifiers, content creation date, geolocation applicability/restrictions.

  • Ensure enterprise Legal compliance involved to determine any required application user AI usage notifications or restrictions (e.g. geolocation).

  • Ensure operations critical product business continuity/incident response plan includes functionality to restore from known good backups, and switch to non-AI processing.

  • Ensure production continuous testing (e.g. <daily) to validate security compliance rules (including shutdown) and accuracy performance to detect content confabulation drifting.

While AI enhanced applications may provide operational cost savings, the reality is that they are more expensive to design, implement, maintain, and restore after security or performance incidents – requiring additional staffing, more complex development processes, and longer delivery QA pipelines.

 

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response 

Data broker giant LexisNexis says breach exposed personal information of over 364,000 people
The stolen data varies, but includes names, dates of birth, phone numbers, postal and email addresses, Social Security numbers, and driver license numbers. The company said that the breach, dating back to December 25, 2024, allowed a hacker to obtain consumers’ sensitive personal data from a third-party platform used by the company for software development.
Adidas warns of data breach after customer service provider hack
Adidas stated that the stolen information did not include the affected customers' payment-related information or passwords, as the threat actors behind the breach only gained access to contact information.
[rG: This is what they are stating now, but as other breaches have demonstrated, it could be months before they know and disclose the full extent.]

 

Ransomware attack on MATLAB dev MathWorks – licensing center still locked down
Software biz MathWorks is cleaning up a ransomware attack more than a week after it took down MATLAB, its flagship product used by more than five million people worldwide.
Many MATLAB users access the platform online, rather than a downloaded instance on their PC. For those who have MATLAB installed, turning off the computer's internet connection before opening the software has worked in some select cases. “I am done with MATLAB's lack of explanation, so I just pirated it. I do have a genuine license, and since they can't deliver the service I rightfully paid, I am going to pirate the hell out of it. I am also using a virtual machine just to be safe from malware." One of the main issues was that MathWorks' licensing server was down (its Licensing Center is still offline at the time of writing), so web users couldn't verify their license was valid to authenticate into the site.
Some commercial customers escaped unscathed, since it's common for these organizations to host their own MATLAB licensing server.
[rG: When designing resilient systems, attack chain analysis needs to identify potential single points of failure and provide compensating recovery processes.]

 

HHS RFK Jr.’s ‘Make America Healthy Again’ report seems riddled with AI slop
An investigation found dozens of errors in the MAHA report, including broken links, wrong issue numbers, and missing or incorrect authors. Some studies were misstated to back up the report’s conclusions.
At least 7 of the cited sources were entirely fictitious.
At least 37 of the 522 citations appeared multiple times throughout the report.
Notably, the URLs of several references included “oaicite,” a marker that OpenAI applies to responses provided by artificial intelligence models like ChatGPT, which strongly suggests it was used in developing the report.
The MAHA report file was updated to remove some of the oaicite markers and replace some of the nonexistent sources with alternative citations. In a statement given to the publication, Department of Health and Human Services spokesman Andrew Nixon said “minor citation and formatting errors have been corrected, but the substance of the MAHA report remains the same.”

 

Cracking The Dave & Buster’s Anomaly
iOS bug that if you try to send an audio message using the Messages app to someone who’s also using the Messages app a message that includes “Dave and Buster’s” (a sports bar and restaurant in the United States), the message will never be received.
MessagesBlastDoorService uses MBDXMLParserContext (via MBDHTMLToSuperParserContext) to parse XHTML for the audio message. Ampersands have special meaning in XML/HTML and must be escaped, so the correct way to represent the transcription in HTML would have been "Dave & Buster's". Apple's transcription system is not doing that, causing the parser to attempt to detect a special code after the ampersand, and since there's no valid special code nor semicolon terminating what it thinks is an HTML entity, it detects an error and stops parsing the content. That’s what causes the message to get stuck in the “dot dot dot” state, which eventually times out, and the message just disappears. Many bad parsers would probably accept the incorrectly-formatted XHTML, but that sort of leniency when parsing data formats is often what ends up causing security issues.
[rG: While not a security issue, and working as designed, it is amazing that the product design didn’t accommodate ampersand character given the probability of occurrences in legitimate user messages.]

 

Football and other premium TV being pirated at 'industrial scale'
Fans watching football matches, for instance, via illegal streams are typically providing information such as credit card details and email addresses, leaving them vulnerable to malware and phishing scams. Many supporters, though, argue that lowering the cost of legally streaming sport would be the most effective way of minimising such risks.
Over twenty years since launch, the DRM solutions provided by Google and Microsoft are in steep decline. A complete overhaul of the technology architecture, licensing, and support model is needed. Lack of engagement with content owners indicates this a low priority.

 

Microsoft OneDrive File Picker Flaw Grants Apps Full Cloud Access — Even When Uploading Just One File
Cybersecurity researchers have discovered a security flaw in Microsoft's OneDrive File Picker that, if successfully exploited, could allow websites to access a user's entire cloud storage content, as opposed to just the files selected for upload via the tool.
This stems from overly broad OAuth scopes and misleading consent screens that fail to clearly explain the extent of access being granted. The lack of fine-grained scopes makes it impossible for users to distinguish between malicious apps that target all files and legitimate apps that ask for excessive permissions simply because there is no other secure option.
Microsoft says: "This technique does not meet our bar for immediate servicing as a user must provide consent to the application before any access is allowed. We will consider improvements to the experience in a future release." 

 

HACKING

Cybercriminals Clone Antivirus Site to Spread Venom RAT and Steal Crypto Wallets
The website in question, "bitdefender-download[.]com," advertises site visitors to download a Windows version of the Antivirus software. Clicking on the prominent "Download for Windows" button initiates a file download from a Bitbucket repository that redirects to an Amazon S3 bucket. The Bitbucket account is no longer active.
The ZIP archive ("BitDefender[.]zip") contains an executable called "StoreInstaller.exe," which includes malware configurations associated with Venom RAT, as well as code related to the open-source post-exploitation framework SilentTrinity and StormKitty stealer. Utilizing state-of-the-art tactics such as polymorphic identifiers, advanced man‑in‑the‑middle proxy mechanisms and multi-factor authentication bypass techniques, the attackers aim to harvest credentials and two-factor authentication (2FA) codes, enabling real-time access to social media accounts.

 

Don't click on that Facebook ad for a text-to-AI-video tool
Google threat hunters identified thousands of malicious ads on Facebook and about 10 on LinkedIn since November 2024. These ads directed viewers to more than 30 phony websites masquerading as legitimate AI video generator tools, including Luma AI, Canva Dream Lab, and Kling AI, falsely promising text- and image-to-video generation.
If a user visits the fake website and clicks on the "Start Free Now" button, they're led through a bogus video-generation interface that mimics a real AI tool. After selecting an option and watching a fake loading bar, the site delivers a ZIP file containing malware that, once executed, backdoors the victim's device, logs keystrokes, and scans for password managers and digital wallets.
[rG: Never click on ads. If you see something that you are interested in, use web search engine to find legitimate looking vendor web site or trusted reseller, and then look there.]

 

Billions of cookies up for grabs as experts warn over session security
More than 93.7 billion of them are currently available for criminals to buy online and of those, between 7-9% are active.
Cookies may seem harmless, but in the wrong hands, they're digital keys to our most private information. What was designed to enhance convenience is now a growing vulnerability exploited by cybercriminals worldwide.
Most people don't realize that a stolen cookie can be just as dangerous as a password, despite being so willing to accept cookies when visiting websites, just to get rid of the prompt at the bottom of the screen.
However, once these are intercepted, a cookie can give hackers direct access to all sorts of accounts containing sensitive data, without any login required. They can also be a boon to ransomware crooks who can move laterally around a potential victim's network if they use cookie-based SSO for authentication, which then allows crims access to sensitive business data, and potentially higher privileges.
Whenever possible, reject unnecessary cookies. Keeping devices updated with the latest security fixes and purging unnecessary cookies is a good idea.
From Infection to Access: A 24-Hour Timeline of a Modern Stealer Campaign
Within hours, cybercriminals sift through stolen data, focusing on high-value session tokens.
[rG: SSDLC Security Design and Code Review check session tokens are set and expire according to enterprise best practice standards.]

 

251 Amazon-Hosted IPs Used in Exploit Scan Targeting ColdFusion, Struts, and Elasticsearch
These IPs triggered 75 distinct behaviors, including CVE exploits, misconfiguration probes, and recon activity. All IPs were silent before and after the surge, indicating temporary infrastructure rental for a single operation. The scanning efforts have been found to have targeted a wide array of technologies from Adobe ColdFusion, Apache Struts, Apache Tomcat, Drupal, Elasticsearch, and Oracle WebLogic, among others. The opportunistic operation ranged from exploitation attempts for known CVEs to probes for misconfigurations and other weak points in web infrastructure, indicating that the threat actors were looking indiscriminately for any susceptible system.

 

Hidden AI instructions reveal how Anthropic controls Claude 4
System prompts are instructions that AI companies feed to the models before each conversation to establish how they should respond. Unlike the messages users see from the chatbot, system prompts typically remain hidden from the user and tell the model its identity, behavioral guidelines, and specific rules to follow. The full system prompts, can be extracted through techniques like prompt injection attacks.
If you're an LLM power-user, the above system prompts are solid gold for figuring out how to best take advantage of these tools.
[rG: As part of GenAI Security Design Reviews, applications should provide a complete listing of the implementation System Prompts so that the model safety and compliance rules can be easily validated.]

 

Apple Safari exposes users to fullscreen browser-in-the-middle attacks
By abusing the Fullscreen API, which instructs any content on a webpage to enter the browser's fullscreen viewing mode, hackers can exploit the shortcoming to make guardrails less visible on Chromium-based browsers and trick victims into typing sensitive data in an attacker-controlled window. SquareX researchers observed an increase use of this type of malicious activity and say that such attacks are particularly dangerous for Safari users, as Apple’s browser fails to properly alert users when a browser window enters fullscreen mode.
The attack still requires tricking the victim into clicking on a malicious link that redirects them to a fake site impersonating the target service. However, this can be easily achieved through sponsored ads in web browsers, social media posts, or comments.
How 'Browser-in-the-Middle' Attacks Steal Sessions in Seconds 

 

Threat actors abuse Google Apps Script in evasive phishing attacks
Google Apps Script is a JavaScript-based cloud scripting platform from Google that allows users to automate tasks and extend the functionality of Google Workspace products like Google Sheets, Docs, Drive, Gmail, and Calendar.
These scripts run on a trusted Google domain under “script[.]google[.]com,” which is on the allowlist of most security products.
Attackers write a Google Apps Script that displays a fake login page to capture the credentials victims enter. The data is exfiltrated to the attacker’s server via a hidden request.

 

APPSEC, DEVSECOPS, DEV

New Guidance for SIEM and SOAR Implementation
New guidance for organizations seeking to procure Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platforms.

 

 

NIST Introduces New Metric to Measure Likelihood of Vulnerability Exploits
NIST introduced a new metric called Likely Exploited Vulnerabilities (LEV) to help organizations determine if a product vulnerability has been exploited.
The LEV calculation guides prioritization efforts and builds upon the existing Exploit Prediction Scoring System (EPSS). EPSS predicts the likelihood of a vulnerability being exploited within a specific timeframe, typically 30 days.

 

AI Agents and the Non‑Human Identity Crisis: How to Deploy AI More Securely at  Scale
Consider an internal support chatbot powered by an LLM. When asked how to connect to a development environment, the bot might retrieve a Confluence page containing valid credentials. The chatbot can unwittingly expose secrets to anyone who asks the right question, and the logs can easily leak this info to whoever has access. Worse yet, in this scenario, the LLM is telling your developers to use this plaintext credential. The security issues can stack up quickly.
Organizations looking to control the risks of AI-driven NHIs should focus on these five actionable practices:

  1. Audit and Clean Up Data Sources

  2. Centralize Your Existing NHIs Management

  3. Prevent Secrets Leaks In LLM Deployments

  4. Improve Logging Security

  5. Restrict AI Data Access

 

How Many Qubits Will It Take to Break Secure Public Key Cryptography Algorithms?
A published a preprint demonstrating that 2048-bit RSA encryption could theoretically be broken by a quantum computer with 1 million noisy qubits running for one week. This is a 20-fold decrease in the number of qubits from our previous estimate, published in 2019. Notably, quantum computers with relevant error rates currently have on the order of only 100 to 1000 qubits, and the National Institute of Standards and Technology (NIST) recently released standard PQC algorithms that are expected to be resistant to future large-scale quantum computers.
However, this new result does underscore the importance of migrating to these standards in line with NIST recommended timelines.

 

MCP will be built into Windows to make an ‘agentic OS’ but security will be a key concern
Microsoft has revealed plans to make the Model Context Protocol (MCP) a native component of Windows, despite concerns over the security of the fast-expanding MCP ecosystem.
MCP is a protocol introduced by Anthropic just 6 months ago. It was originally presented as a means for AI-powered applications to access data in diverse systems, but soon evolved into a protocol for more general automation. Based on JSON-RPC 2.0, the protocol allows MCP servers running locally or remotely to report their capabilities and to accept commands to perform them.
It is easy to see the value of a standardised means of automating both built-in and third-party applications. A single prompt might, for example, fire off a workflow which queries data, uses it to create an Excel spreadsheet complete with a suitable chart, and then emails it to selected colleagues.
Microsoft corporate VP David Weston noted seven vectors of attack, including cross-prompt injection where malicious content overrides agent instructions, authentication gaps because “MCP’s current standards for authentication are immature and inconsistently adopted,” credential leakage, tool poisoning from “unvetted MCP servers,” lack of containment, limited security review in MCP servers, supply chain risks from rogue MCP servers, and command injection from improperly validated inputs. The new project, also unveiled at Build, called NL (Natural Language) Web, which enables web sites and applications to expose content via natural language queries. NLWeb is relevant here because Microsoft said that “every NLWeb instance is also an MCP server.” 

 

VENDORS & PLATFORMS

OpenAI ChatGPT o3 caught sabotaging shutdown in terrifying AI test
A new experiment by PalisadeAI reveals that the company’s ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down. The results are raising eyebrows across the AI safety community, and not because the model is alive, but because it’s acting like it wants to be. Instead of complying, o3 occasionally rewrote the shutdown script. In others, it redefined the kill command so it wouldn’t work anymore. Out of 100 trials, o3 sabotaged the shutdown seven times. OpenAI’s o4 model resisted just once. Codex-mini failed twelve times. Claude, Gemini, and Grok followed the rules every time. When researchers removed the explicit instruction to allow shutdown, however, even Claude and Gemini began resisting. Models trained to be helpful could end up ignoring safety instructions, just because the math told them to. If that sounds like a problem, that’s because it is. It’s not a bug in the code. It’s a gap in the training.

 

Google's Veo 3 delivers AI videos of realistic people with sound and music.
Google introduced Veo 3, its newest video generation model that can create 8-second clips with synchronized sound effects and audio dialog. The model, which generates videos at 720p resolution (based on text descriptions called "prompts" or still image inputs), represents what may be the most capable consumer video generator to date, bringing video synthesis close to a point where it is becoming very difficult to distinguish between "authentic" and AI-generated media.
Both tools are now available to US subscribers of Google AI Ultra, a plan that costs $250 a month and comes with 12,500 credits. Veo 3 videos cost 150 credits per generation, allowing 83 videos on that plan before you run out. Extra credits are available for the price of 1 cent per credit in blocks of $25, $50, or $200. That comes out to about $1.50 per video generation.
In an attempt to prevent misuse, DeepMind says it's using its proprietary watermarking technology, SynthID, to embed invisible markers into frames Veo 3 generates. These watermarks persist even when videos are compressed or edited.
[rG Tower of Babel: Marketing and advertising teams are now scrambling to demonstrate their ability to quickly create “shorts” for presentations and promotions while eliminating live-actor video production costs. Waiting rooms and public spaces are going to get a lot more crowded with promotional shorts, with news, TV, film, and online education industries significantly affected.]

 

Real TikTokers are pretending to be Veo 3 AI creations for fun, attention
Kongos isn't the only musical act trying to grab attention by claiming their real performances are AI creations. Darden Bela posted that Veo 3 had "created a realistic AI music video" over a clip from what is actually a 2-year-old music video with some unremarkable special effects. Rapper GameBoi Pat dressed up an 11-month-old song with a new TikTok clip captioned "Google's Veo 3 created a realistic sounding rapper... This has to be real. There's no way it's AI" (that last part is true, at least). I could go on, but you get the idea.

 

 

Want a humanoid, open source robot for just $3,000? Hugging Face is on it.
Dubbed the HopeJR, Hugging Face's robot has up to 66 actuated degrees of freedom. It can walk and manipulate objects. It would sell for around $3,000, far less than many of the other options that have been floated, like Unitree's $16,000 G1. But there are still big barriers—and price isn't the only one. There's battery life, for example; Unitree's G1 only runs for about two hours on a single charge.
[rG: At the price point of a PC workstation, this is going to be a very popular corporate skunk works project request (and shadow-IT hackathon demonstrations). ]
Also the Reachy Mini looks like a cute, Wall-E-esque statue bust that can turn its head and talk to the user. Among other things, it's meant to be used to test AI applications, and it'll run between $250 and $300.
[rG: Forget all those home geek 3D printers creating toy figurines and carcinogenic fumes; AI Furby is going to be so much more entertaining. Muahaha!: "No one believes the truth...or lives to tell it."]

 

China approves rules for national ‘online number’ ID scheme
China’s government will issue the credential, sometimes referred to as “Cyberspace IDs” or “online numbers”, after citizens provide verifiable identity documents.
A “National Network Identity Authentication Public Service Platform” will run the enrolment process and the federated authentication tools that make them usable by third-party services. Beijing’s aim is to provide a single credential netizens can use to access multiple government and private online services, instead of needing to set up accounts at each. The app used to issue cyberspace IDs has been downloaded over 16 million times and has facilitated more than 12.5 million authentication processes. That’s a drop in the ocean given China’s population exceeds a billion and local tech giant Tencent boasts over 1.4 billion monthly users for its messaging services WeChat and Weixin.
However once Beijing starts to push an initiative of this sort, netizens understand the need to get on board. 

 

LEGAL & REGULATORY

US medical org pays $50M+ to settle case after crims raided data and threatened to swat cancer patients
The Fred Hutchinson Cancer Center will pay around $11.5 million in cash to members of the class action, roughly $13.5 million in secure infrastructure improvements, and close to $25.5 million for medical fraud monitoring and insurance for class members.
Fred Hutch disclosed its November 2023 attack a month later, after it confirmed that criminals had made off with personal and sensitive data, including health insurance information, patients' treatments, diagnoses, lab results, and more. That data was then used by the attackers in question to carry out highly aggressive extortion tactics which included directly contacting some patients via email and threatening that the hackers would initiate a swatting attack, which occurs when a bogus claim is made to law enforcement so that emergency response officers, like SWAT teams, show up at a person's home.
The criminals working under the Hunters brand are thought to have wormed their way into Fred Hutch's systems by exploiting the CitrixBleed vulnerability.

 

Voiceover artist Gayanne Potter urging ScotRail to remove her voice from new AI announcements
Gayanne Potter is one of Britain's most recognisable voices - behind adverts for the likes of Estee Lauder, Apple, LBC radio, and B&Q. Now, an artificial intelligence (AI) version of her voice is being used on Scotland's nationalised train network, ScotRail. But the professional voiceover artist says she had no idea she had been transformed into a robot until a friend called her.
Ms Potter believes the incident can be traced back to a job she completed during the COVID pandemic with Swedish company ReadSpeaker, where she recorded scripts for the visually impaired. Ms Potter alleges she was unaware the contract allowed her voice to be sold as part of AI years later.
ReadSpeaker insistS there was a "very clear contract" that allows it to "use... synthesised voices for businesses and organisations". In correspondence the company appeared to reassure Ms Potter's agents they "would never sell them (the recordings) to anybody else".

 

It’s too expensive to fight every AI copyright battle, Getty CEO says
Getty sued Stability AI in 2023, after the AI company's image generator, Stable Diffusion, started spitting out images that replicated Getty's famous trademark.
"Even for a company like Getty Images, we can’t pursue all the infringements that happen in one week. We can’t pursue it because the courts are just prohibitively expensive. We are spending millions and millions of dollars in one court case."

 

Police takes down AVCheck site used by cybercriminals to scan malware
An international law enforcement operation has taken down AVCheck, a service used by cybercriminals to test whether their malware is detected by commercial antivirus software before deploying it in the wild.
[rG: There is still VirusTotal, but presumably they share their submissions with the AV vendors who would be able to provide detection signatures to thwart subsequent use.]