Robert Grupe's AppSecNewsBits 2025-05-03

Highlights: "You can't lick a badger twice", xAI secrets exposure, MFA fails & protections, Microsoft allows sign-ins with revoked passwords, Oracle causes days-long outage, juice-jacking via device charging, AI-generated code vulnerabilities, AirPlay vulnerabilities, and much more ...

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Oracle engineers caused 5-day-long software outage at U.S. hospitals
Oracle acquired EHR vendor Cerner in 2022 for $28.3 billion, becoming the second-biggest player in the market, behind Epic Systems.
Oracle engineers mistakenly triggered a 5-day software outage at a number of Community Health Systems hospitals, causing the facilities to temporarily return to paper-based patient records.
The outage involving Oracle Health, the company’s electronic health record (EHR) system, affected 45 hospitals, after engineers conducting maintenance work mistakenly deleted critical storage connected to a key database.
[rG: As the UnitedHealth cyber attack and outage demonstrated, organizations need to ensure effective Business Continuity and Disaster Recovery (BCDR) plans are ready for any critical supplier dependency to minimize potential unexpected operational disruptions.]

 

Healthcare group Ascension discloses second cyberattack on patients' data
For the second time in the space of a year, Ascension has notified patients that their medical data was compromised during a major cyberattack. The private medical services provider said one of its former business partners, with which the company shared some patient medical data, was ransacked by criminals who exploited a vulnerability in some third-party software.
Ascension stayed coy about the details of the attack, but given the timelines involved, and how widespread the attack was at the time, a reasonable guess as to the source of the breach would be Cl0p's raid on Cleo customers. If Cleo was the source of the intrusion, Ascension would not be alone in disclosing the attack so late. Car hire giant Hertz, for example, confirmed in mid-April that its data was also compromised as part of the Cleo attack campaign, with Hertz, Dollar, and Thrifty brands all affected.

 

British govt agents step in as Harrods becomes third mega retailer under cyberattack
Harrods, a globally recognized purveyor of all things luxury, is the third major UK retailer to confirm an attempted cyberattack on its systems in under two weeks. It confirmed the incident in a statement, hinting that, like Co-op's case earlier in the week, the attack may not have been successful. None of the three UK retailers currently battling cybersecurity issues – M&S, Co-op, and now Harrods – have confirmed whether ransomware was involved, although the rumor mill is whirring with mutterings of Scattered Spider's involvement.

 

Disney Slack attack wasn't Russian protesters, just a Cali dude with malware
When someone stole more than a terabyte of data from Disney last year, it was believed to be the work of Russian hacktivists protesting for artist rights. We now know it was actually a 25-year-old California resident.
Last year, a person or group calling itself "Nullbulge" accessed Disney Slack channels, then stole and released 1.1 TB of internal Disney data online in a purported protest against artists not receiving fair compensation for their work. Nullbulge claimed to be a cyber-crime ring from Russia, and said they had intentionally targeted Disney due to how it handled artist contracts, approached the use of AI, and treated consumers.
The DoJ said Ryan Mitchell Kramer was the responsible party. He didn't even seem to be targeting Disney. Kramer published a program online that purported to be an AI art generation app, but actually contained malware that gave him remote access to the victim's computer. An employee of the House of Mouse downloaded the program, allowing Kramer to nab login credentials for various accounts in their name, including their Disney Slack account. From there, he sifted through "thousands" of Slack channels, according to the DoJ, and grabbed all kinds of confidential information, including messages, internal project information, and the personal details of employees.

 

Brazil’s AI-powered social security app is wrongly rejecting claims
When Josélia de Brito, a former sugarcane worker from a remote town in northeast Brazil, filed for her retirement benefits through the mandated government app, she expected her claim would be processed quickly. Instead, her request was instantly turned down because the system identified her as a man.
Brazil introduced AI tools to review welfare benefits in 2018. The government aims to have the algorithm review 55% of social security petitions by the end of 2025. The tool has cut bureaucracy in some cases, but has led to automatic denials for many. Users who are unhappy with the decisions can appeal through an internal board of legal resources, where the wait for a response is 278 days.
The rejected retirement claim she had filed in February was approved in March — but only because of a connection she had at the National Confederation of Workers in Agriculture. Her case went straight to INSS directors, who identified and corrected the mistake in the app.

 

From 112K to 4M folks' data – HR biz attack goes from bad to mega bad
VeriSource Services' long-running probe into a February 2024 digital break-in shows the data of 4 million people – not just a few hundred thousand as it first claimed – was accessed by an "unknown actor". The company provides employee benefits administration services.
The filing with the Maine AG's office - late last week - is the second disclosure released by the company. The earlier one was published in August 2024 with the US Health and Human Services Office for Civil Rights. According to that earlier filing, VeriSource thought at the time that only around 112,000 people were affected.

 

Back online after 'catastrophic' attack, 4chan says it's too broke for good IT
Two weeks after the incident, the website is back up and running, with the breached server replaced and critical software fully updated.
"On the afternoon of April 14th, a hacker using a UK IP address exploited an out-of-date software package on one of 4chan's servers, via a bogus PDF upload," it said. "With this entry point, they were eventually able to gain access to one of 4chan's servers, including database access and access to our own administrative dashboard."
The hacker spent several hours exfiltrating database tables and much of 4chan's source code. When they had finished downloading what they wanted, they began to vandalize 4chan, at which point moderators became aware and 4chan's servers were halted, preventing further access.
It blamed its failure to update its operating systems, code, and infrastructure on "having insufficient skilled man-hours" – a byproduct of "being starved of money for years by advertisers, payment providers, and service providers who had succumbed to external pressure campaigns."

 

SK Telecom cyberattack: Free SIM replacements for 25 million customers
South Korean mobile provider SK Telecom has announced free SIM card replacements for its 25 million mobile customers following a recent USIM data breach, but only 6 million cards are available through May.
The breach stemmed from malware running on its network that allowed threat actors to steal customers' Universal Subscriber Identity Module (USIM) data, typically including International Mobile Subscriber Identity (IMSI), Mobile Station ISDN Number (MSISDN), authentication keys, network usage data, and SMS or contacts if stored on the SIM. The main risk from this breach is the potential for threat actors to perform unauthorized number ports to cloned SIM cards, known as "SIM swapping."
Investigations into the exact causes and scope are still ongoing but have not yet confirmed "secondary damage or dark web leaks."

 

xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs
An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X.
"If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking," she said. "An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain." Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks.
The exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.
GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.
This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.
[rG: It is critical that AI development follow SSDLC processes to ensure full, documented security design reviews, and implementation compliance monitoring – not just in PROD, but also throughout the DevOps pipeline (e.g. no “secrets in code”).] 
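One lightweight way to enforce the "no secrets in code" rule is a pre-commit scan for credential-shaped strings, similar in spirit to what GitGuardian does at scale. A minimal sketch in Python – the patterns below are illustrative assumptions, not xAI's key format or GitGuardian's actual detection rules:

```python
import re

# Illustrative secret patterns (assumptions, not any vendor's real ruleset):
SECRET_PATTERNS = [
    re.compile(r"xai-[A-Za-z0-9]{20,}"),  # assumed xAI-style token shape
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
]

def find_secrets(text: str) -> list[str]:
    """Return every substring in `text` that matches a secret pattern."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

# A committed-by-mistake hardcoded key is flagged; ordinary code is not.
print(find_secrets('API_KEY = "abcd1234efgh5678ijkl"'))
```

Run against staged diffs in a pre-commit hook, a check like this fails the commit before a key ever reaches a remote repository, which is exactly the pipeline stage where the xAI exposure would have been cheapest to stop.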

 

Kali Linux warns of update failures after losing repo signing key
Offensive Security warned Kali Linux users to manually install a new Kali repository signing key to avoid experiencing update failures.
The announcement comes after OffSec lost the old repo signing key (ED444FF07D8D0BF6) and was forced to create a new one (ED65462EC8D5E4C5) signed by Kali Linux developers using signatures available on the Ubuntu OpenPGP key server. However, since the key was not compromised, the old one was not removed from the keyring.
When trying to get the list of latest software packages on systems still using the old key, users will see "Missing key 827C8569F2518CC677FECA1AED65462EC8D5E4C5, which is needed to verify signature" errors.
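The manual remedy is a one-line keyring refresh; a sketch, assuming a Kali system with root access (the wget line mirrors OffSec's published instruction). Note that the 40-hex-digit fingerprint in apt's error is the full fingerprint of the new key, ending in the new key ID:

```shell
# Refresh the Kali archive keyring (run as root on an affected Kali system):
#   wget https://archive.kali.org/archive-keyring.gpg \
#       -O /usr/share/keyrings/kali-archive-keyring.gpg

# Sanity check: the last 16 hex digits of the fingerprint apt complains about
# are the new repo signing key's ID from the OffSec announcement.
FPR="827C8569F2518CC677FECA1AED65462EC8D5E4C5"
NEW_KEY_ID="ED65462EC8D5E4C5"
if [ "$(printf %s "$FPR" | tail -c 16)" = "$NEW_KEY_ID" ]; then
    echo "apt's error names the new key: fetch the updated keyring"
fi
```

After the keyring file is replaced, `apt update` verifies repository signatures against the new key and the "Missing key" error disappears.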

 

Windows RDP lets you log in using revoked passwords. Microsoft is OK with that.
Even after users change their account password, it remains valid for RDP logins indefinitely. In some cases, multiple older passwords will work while newer ones won’t. The result: persistent RDP access that bypasses cloud verification, multifactor authentication, and Conditional Access policies.

  • Old credentials continue working for RDP—even from brand-new machines.

  • Defender, Entra ID, and Azure don’t raise any flags.

  • There is no clear way for end-users to detect or fix the issue.

  • No Microsoft documentation or guidance addresses this scenario directly.

  • Even newer passwords may be ignored while older ones continue to function.

The behavior could prove costly in scenarios where a Microsoft or Azure account has been compromised, for instance when the passwords for them have been publicly leaked. In such an event, the first course of action is to change the password to prevent an adversary from using it to access sensitive resources. While the password change prevents the adversary from logging in to the Microsoft or Azure account, the old password will give an adversary access to the user’s machine through RDP indefinitely.
Microsoft said the behavior is "a design decision to ensure that at least one user account always has the ability to log in no matter how long a system has been offline."

 

Photos reveal Trump cabinet member using less-secure Signal app knockoff
Photographs taken at Donald Trump’s cabinet meeting have revealed that top White House officials were communicating using a version of the Signal messaging app modified to retain messages and archive them, to comply with the legal requirement that presidential records be preserved.
The photograph does not show much of the content of the messages Waltz was sending, though one to “Rubio” – probably the secretary of state – could be seen to read “there is time” while a message from “Vance” – probably the vice-president – read, “I have confirmation from my counterpart it’s turned off. He is going to be here in …” There was also an indication that Waltz had used Signal to call Gabbard, and that the phone’s scheduling function included an 8am meeting for “PDB”, probably the president’s daily brief.
[rG Security Fails: Never expose confidential information (screens, keyboards, speakers) in a location where others can shoulder-surf or eavesdrop – not just in airplane/bus/train/automobile transportation and public venues, but even in trusted spaces where others have electronic devices that can record.]

 

What’s Weak This Week:

  • CVE-2024-58136  Yiiframework Yii Improper Protection of Alternate Path Vulnerability:
    may allow a remote attacker to execute arbitrary code. This vulnerability could affect other products that implement Yii, including—but not limited to—Craft CMS, as represented by CVE-2025-32432. Related CWE: CWE-424

  • CVE-2025-34028 Commvault Command Center Path Traversal Vulnerability:
    allows a remote, unauthenticated attacker to execute arbitrary code. Related CWE: CWE-22

  • CVE-2023-44221 SonicWall SMA100 Appliances OS Command Injection Vulnerability:
    in the SSL-VPN management interface that allows a remote, authenticated attacker with administrative privilege to inject arbitrary commands as a 'nobody' user. Related CWE: CWE-78

  • CVE-2024-38475 Apache HTTP Server Improper Escaping of Output Vulnerability:
    contains an improper escaping of output vulnerability in mod_rewrite that allows an attacker to map URLs to filesystem locations that are permitted to be served by the server but are not intentionally/directly reachable by any URL, resulting in code execution or source code disclosure. Related CWE: CWE-116

  • CVE-2025-31324 SAP NetWeaver Unrestricted File Upload Vulnerability:
    allows an unauthenticated agent to upload potentially malicious executable binaries. Related CWE: CWE-434

  • CVE-2025-3928 Commvault Web Server Unspecified Vulnerability:
    allows a remote, authenticated attacker to create and execute webshells.

  • CVE-2025-42599 Qualitia Active! Mail Stack-Based Buffer Overflow Vulnerability:
    allows a remote, unauthenticated attacker to execute arbitrary code or trigger a denial-of-service via a specially crafted request. Related CWE: CWE-121

  • CVE-2025-1976 Broadcom Brocade Fabric OS Code Injection Vulnerability:
    allows a local user with administrative privileges to execute arbitrary code with full root privileges. Related CWE: CWE-94 

 

HACKING

Governments are using zero-day hacks more than ever
The Google Threat Intelligence Group (GTIG) detected 75 zero-day exploits in 2024, which is a bit lower than the previous year. Unsurprisingly, a sizable chunk of them was the work of state-sponsored hackers.
Zero-day exploits are becoming increasingly easy for threat actors to develop and procure, which has led to more sophisticated attacks. While end-user devices are still regularly targeted, the trend over the past few years has been for these vulnerabilities to target enterprise systems and security infrastructure.
Google recommends enterprises continue scaling up efforts to detect and block malicious activities, while also designing systems with redundancy and stricter limits on access.
[rG: Intrusion and exploitation detection and reaction often comes too late to prevent sensitive data exploitation and malicious payload deployments. Robust SSDLC secure systems design, monitoring, and response are key to ensuring resilient operations.]

 

The one interview question that will protect you from North Korean fake workers
North Koreans will use generative AI to develop bulk batches of LinkedIn profiles and applications for remote work jobs that appeal to Western companies. During an interview, multiple teams will work on the technical challenges that are part of the interview while the "front man" handles the physical side of the interview, although sometimes rather ineptly.
"One of the things that we've noted is that you'll have a person in Poland applying with a very complicated name," he recounted, "and then when you get them on Zoom calls it's a military-age male Asian who can't pronounce it."
"My favorite interview question, because we've interviewed quite a few of these folks, is something to the effect of 'How fat is Kim Jong Un?' They terminate the call instantly, because it's not worth it to say something negative about that.”

 

Apple notifies new victims of spyware attacks across the world
Apple sent notifications this week to several people who the company believes were targeted with government spyware. “Today’s notification is being sent to affected users in 100 countries.” Other tech companies, like Google and WhatsApp, have in recent years also periodically sent such notifications to their users.

 

iOS and Android juice jacking defenses have been trivial to bypass for years
Juice jacking works by equipping a charger with hidden hardware that can access files and other internal resources of phones, in much the same way that a computer can when a user connects it to the phone. An attacker would then make the chargers available in airports, shopping malls, or other public venues for use by people looking to recharge depleted batteries.
Starting in 2012, both Apple and Google tried to mitigate the threat by requiring users to click a confirmation button on their phones before a computer—or a computer masquerading as a charger—could access files or execute code on the phone.
Researchers said that the fixes provided by Apple and Google successfully blunt ChoiceJacking attacks in iPhones, iPads, and Pixel devices. Many Android devices made by other manufacturers, however, remain vulnerable because they have yet to update their devices to Android 15. Other Android devices—most notably those from Samsung running the One UI 7 software interface—don’t implement the new authentication requirement, even when running on Android 15.
[rG: When charging from public/unknown USB connectors, always use a USB adapter that only allows power, not data – not only for phones/tablets/laptops, but also smartwatches, health/fitness monitors, e-readers, etc. And don’t trust rental cars either.]

 

Millions of Apple Airplay-enabled devices can be hacked via Wi-Fi
Given how rarely some smart-home devices are patched, it’s likely that these wirelessly enabled footholds for malware, across many of the hundreds of models of AirPlay-enabled devices, will persist for years to come.
AirBorne is a collection of vulnerabilities affecting AirPlay, Apple’s proprietary radio-based protocol for local wireless communication. Bugs in Apple’s AirPlay software development kit (SDK) for third-party devices would allow hackers to hijack gadgets like speakers, receivers, set-top boxes, or smart TVs if they’re on the same Wi-Fi network as the hacker’s machine. Another set of AirBorne vulnerabilities would have allowed hackers to exploit AirPlay-enabled Apple devices, too. Though these bugs have been patched in updates over the last several months, they could only be exploited when users changed default AirPlay settings.

 

Malicious PyPI packages abuse Gmail, websockets to hijack systems
The 'Coffin' packages appear to be impersonating the legitimate Coffin package that serves as a lightweight adapter for integrating Jinja2 templates into Django projects.
As Gmail is a trusted service, firewalls and EDRs are unlikely to flag this activity as suspicious.
After the email signaling stage, the implant connects to a remote server using WebSocket over SSL, receiving tunnel configuration instructions to establish a persistent, encrypted, bidirectional tunnel from the host to the attacker.
Using a 'Client' class, the malware forwards traffic from the remote host to the local system through the tunnel, allowing internal admin panel and API access, file transfer, email exfiltration, shell command execution, credentials harvesting, and lateral movement.
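Beyond repository takedowns, teams can sweep their own environments for rogue installs. A minimal defensive sketch in Python – the denylist names below are illustrative placeholders, not the confirmed 'Coffin' package names from the advisory:

```python
import importlib.metadata

# Hypothetical denylist of typosquats on the legitimate "coffin" package.
# Replace with names from the actual PyPI/security advisory before use.
SUSPECT_NAMES = {"coffin-codes-pro", "coffin-codes-net", "coffin-grave"}

def find_suspect_installs() -> list[str]:
    """Return any installed distributions whose name is on the denylist."""
    installed = {
        (dist.metadata["Name"] or "").lower()
        for dist in importlib.metadata.distributions()
    }
    return sorted(installed & SUSPECT_NAMES)

print(find_suspect_installs())  # a clean environment prints []
```

Running such a check in CI, alongside a tool like pip-audit, catches a malicious dependency before it reaches the email-signaling and tunneling stages described above.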

 

Open source text editor poisoned with malware to target Uyghur users
The version of UyghurEditPP linked to in the phishing mails was altered to include malware and contained a backdoor that would allow the operator to gather information about the device, upload information to a command and control server, and download additional files, including other malware. The malware also makes it possible to download files from the target device and install malware plugins.

 

Ex-NSA cyber-boss: AI will soon be a great exploit coder
In a Hack The Box capture-the-flag contest earlier this month, AI-powered entrants performed at about the same speed as pure-human teams and nearly matched humans in tests of problem-solving ability. By the end of the contest, the top AI team had captured 19 of 20 flags, placing 20th out of 403 teams with 15,900 points; in fact, most of the AI teams captured 19 flags.
AI can also help defenders. A human staff engineer reverse-engineered a piece of eBPF code – a job that took about half a day. The AI system took about 30 seconds.

 

AI

Former TV meteorologist fights deepfakes after her image was doctored in sextortion scams
According to the FBI's Internet Crime Complaint Center there were more than 54,000 sextortion victims last year. Bree Smith was a familiar face in Nashville. But, in January, the mom and former TV meteorologist stopped appearing on the local CBS-affiliated station after months of fighting against deepfakes. An image of her was doctored to create explicit pictures and videos, with her face edited onto different, partly nude bodies, and then used to try to extort money from others.
"I cry myself to sleep most nights, mostly because I don't want my kids to see me," said Smith.
The deepfakes quickly multiplied, accompanying offers for private dinners and intimate acts in exchange for hundreds of dollars, and targeting anyone who might recognize Smith.
Last week, a bill Smith backed passed in the Tennessee Senate. The bill, called the "Preventing Deepfake Images Act," was sent to Tennessee Gov. Bill Lee, who is expected to sign it into law. It provides a path for people targeted by sextortion scams to sue if images of them are shared without their consent. 

 

Louisville mom warns of AI scam call using daughter's cloned voice
The Louisville mom said it was the scariest call she had ever received. It was her 10-year-old daughter's voice on the other end of the line, saying she had been in an accident and needed help. The mom immediately called her daughter's school, where her daughter was safe, but she wants to warn others in case this scammer goes after other parents, too.

 

How AI is being used to create sophisticated scams that leave even experts second-guessing
It has become increasingly difficult for the general public to verify whether something they see online is real or AI-generated.
Leon's caller was professional, patient and didn't pressure him to provide sensitive financial information over the phone — a tactic often reported by scam victims. But when he was sent links to the glossy websites of the investment firm being promoted, and a mining company the firm would help him buy shares in, Leon quickly realised this was not any "old school scam".
The elaborate sites featured a business registration number, detailed weekly blog posts, a list of board members with their pictures and the names of their graduating universities, the locations of multiple mining sites, and even a range of publicly accessible documents detailing company policies. Leon said the level of detail was convincing.
One notable feature of the fake operation was that it had an active business registration number; it was suspicious, however, because a required business license was missing and not listed on the registrar’s site.
[rG: See article for example details about detecting false images, identities, and locations.]

 

Google search’s made-up AI explanations for sayings no one ever said, explained
The phrase "You can't lick a badger twice" unexpectedly went viral on social media. The nonsense sentence—which was likely never uttered by a human before last week—had become the poster child for the newly discovered way Google search's AI Overviews makes up plausible-sounding explanations for made-up idioms.
Google users quickly discovered that typing any concocted phrase into the search bar with the word "meaning" attached at the end would generate an AI Overview with a purported explanation of its idiomatic meaning. Even the most nonsensical attempts at new proverbs resulted in a confident explanation from Google's AI Overview, created right there on the spot.
Contrary to the computer science truism of "garbage in, garbage out," Google here is taking in some garbage and spitting out... well, a workable interpretation of garbage, at the very least. A lot of the problem has to do with the LLM's unearned confident tone, which pretends that any made-up idiom is a common saying with a well-established and authoritative meaning, rather than framing its responses as a "best guess" at an unknown phrase.
[rG: I nearly injured myself ROFLing; reminiscent of my own self-indulgent fatherly responses to my children’s queries.]

 

Time saved by AI offset by new work created, study suggests
In "Large Language Models, Small Labor Market Effects," economists Anders Humlum and Emilie Vestergaard focused specifically on the impact of AI chatbots across 11 occupations often considered vulnerable to automation, including accountants, software developers, and customer support specialists. Their analysis covered data from 25,000 workers and 7,000 workplaces in Denmark.
Despite finding widespread and often employer-encouraged adoption of these tools, the study concluded that "AI chatbots have had no significant impact on earnings or recorded hours in any occupation" during the period studied. The confidence intervals in their statistical analysis ruled out average effects larger than 1 percent.
While corporate investment boosted AI tool adoption—saving time for 64 to 90% of users across studied occupations—the actual benefits were less substantial than expected. AI chatbots actually created new job tasks for 8.4 percent of workers, including some who did not use the tools themselves, offsetting potential time savings. For example, many teachers now spend time detecting whether students use ChatGPT for homework, while other workers review AI output quality or attempt to craft effective prompts.

 

Duolingo jumps aboard the 'AI-first' train, will phase out contractors
Duolingo has become the latest tech outfit to attempt to declare itself 'AI-first,' with CEO Luis von Ahn telling staff the biz hopes to gradually phase out contractors for work neural networks can take over.
Duolingo said it's also going to begin evaluating AI use for hiring and employee performance reviews. The letter also explained that new roles would only be approved if a team can prove that the work couldn't be automated, in a nod to a similar policy announced by Shopify's CEO, and that initiatives would be forthcoming to "fundamentally change" how "most functions" at the company work.

 

Chinese carmaker Chery using DeepSeek-driven humanoid robots as showroom sales staff
AIMOGA can walk, understand human speech (thanks to integration with DeepSeek’s AI models) and respond in 10 languages. Chery says the bot possesses a “human-like motion library” that “enhances interactions, offering users a more natural and engaging experience.” Chery’s sent AIMOGA to work at its flagship “4S” showrooms in Malaysia and has hinted at doing the same in other countries.

 

30 percent of some Microsoft code now written by AI - especially the new stuff
“We used to always think about why Word, Excel, PowerPoint isn't it one thing, and we've tried multiple attempts of it. But now you can conceive of it … you can start in Word and you can sort of visualize things like Excel and present it, and they can all be persisted as one data structure or what have you. So to me that malleability that was not as robust before is now there.”
Which sounds like the OpenDoc vs. OLE wars of the 1990s – during which Microsoft and Apple fought over how to share data across apps – brought into the AI age.
There was no comment on whether the code their companies generate without human input has proven problematic.

 

Brewhaha: Turns out machines can't replace people, Starbucks finds
For tasks that involve contact with customers, people are so far proving to be preferable. McDonald's found that out when it tested and then abandoned an AI ordering system last year.
Starbucks' CEO said, "We're finding through our work that investments in labor, rather than equipment, are more effective at improving throughput and driving transaction growth.”

 

Google is quietly testing ads in AI chatbots
Google is now allowing more chatbot makers to sign up for AdSense. "AdSense for Search is available for websites that want to show relevant ads in their conversational AI experiences.”
If people continue shifting to using AI chatbots to find information, this expansion of AdSense could help prop up profits. There's no hint of advertising in Google's own Gemini chatbot or AI Mode search, but the day may be coming when you won't get the clean, ad-free experience at no cost.

 

OpenAI rolls back update that made ChatGPT a sycophantic mess
OpenAI, along with competitors like Google and Anthropic, is trying to build chatbots that people want to chat with. So, designing the model's apparent personality to be positive and supportive makes sense—people are less likely to use an AI that comes off as harsh or dismissive. For lack of a better word, it's increasingly about vibemarking.
OpenAI gathers data on the responses people like more. Then, engineers revise the production model using a technique called reinforcement learning from human feedback (RLHF).
Recently, however, that reinforcement learning went off the rails. The AI went from generally positive to the world's biggest suck-up. Users could present ChatGPT with completely terrible ideas or misguided claims, and it might respond, "Wow, you're a genius," and "This is on a whole different level."
OpenAI seems to realize it missed the mark with its latest update, so it's undoing the damage.

 

The end of an AI that shocked the world: OpenAI retires GPT-4
On April 30, 2025, GPT-4 was retired from ChatGPT and replaced by GPT-4o. The retirement marks the end of an era that began on March 14, 2023, when GPT-4 demonstrated capabilities that shocked some observers: reportedly scoring at the 90th percentile on the Uniform Bar Exam, acing AP tests, and solving complex reasoning problems that stumped previous models.
The model reportedly cost more than $100 million to train, and may have involved over 20,000 high-end GPUs working in concert - an expense few organizations besides OpenAI and its primary backer, Microsoft, could afford.
In February 2023, Microsoft integrated its own early version of the GPT-4 model into its Bing search engine, creating a chatbot that sparked controversy when it tried to convince Kevin Roose of The New York Times to leave his wife and when it "lost its mind" in response to an Ars Technica article.

 

New study accuses LM Arena of gaming its popular AI benchmark
LM Arena was created in 2023 as a research project at the University of California, Berkeley. The pitch is simple—users feed a prompt into two unidentified AI models in the "Chatbot Arena" and evaluate the outputs to vote on the one they like more. This data is aggregated in the LM Arena leaderboard that shows which models people like the most, which can help track improvements in AI models.
The authors say LM Arena allows developers of proprietary large language models (LLMs) to test multiple versions of their AI on the platform. However, only the highest performing one is added to the public leaderboard.
The researchers point out that certain models appear in arena faceoffs much more often, with Google and OpenAI together accounting for over 34 percent of collected model data. Firms like xAI, Meta, and Amazon are also disproportionately represented in the arena. Therefore, those firms get more vibemarking data compared to the makers of open models.

 

Claude’s AI research mode now runs for up to 45 minutes before delivering reports
We asked Anthropic's Research a simple question: "Who Invented Video Games?" After 13 minutes and 2 seconds of research, it constructed a fairly comprehensive and nuanced report, complete with sources, that provides a largely accurate historical overview that exceeds the quality of most video game history books in print today.
Still, the report contained a direct quote statement from William Higinbotham that appears to combine quotes from two sources not cited in the source list. (One must always be careful with confabulated quotes in AI because even outside of this Research mode, Claude 3.7 Sonnet tends to invent plausible ones to fit a narrative.) We recently covered a study that showed AI search services confabulate sources frequently, and in this case, it appears that the sources Claude Research surfaced, while real, did not always match what is stated in the report.
Overall, Claude Research did a relatively good job crafting a report on this particular topic. Still, you'd want to dig more deeply into each source and confirm everything if you used it as the basis for serious research.

 

APPSEC, DEVSECOPS, DEV

AI-generated code could be a disaster for the software supply chain. Here’s why.
A new study that used 16 of the most widely used large language models to generate 576,000 code samples found that 440,000 of the package dependencies they contained were “hallucinated,” meaning they were non-existent. Open source models hallucinated the most, with 21% of their dependencies linking to non-existent libraries.
AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions. These non-existent dependencies represent a threat to the software supply chain by exacerbating so-called dependency confusion attacks.
[rG: SSDLC protection by ensuring that all 3rd party components are managed through an enterprise binary management system (e.g. Artifactory) that has daily updated SCA vulnerability scanning and alerting, along with release SAST scans and manual expert security code reviews to flag any unnecessary/suspicious libraries.]
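One practical gate along the lines of the note above is to reject any declared dependency that is not in the organization's vetted mirror before code is merged. The sketch below is illustrative only: the package names and the allowlist are hypothetical, and a real pipeline would query its binary manager (e.g. Artifactory) rather than a hard-coded set.

```python
# Hypothetical pre-merge check: flag declared dependencies that are absent
# from a vetted internal registry mirror. All names here are illustrative.

def parse_requirements(text: str) -> list[str]:
    """Extract bare package names from a requirements.txt-style string."""
    names = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Take the name before any version specifier.
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            line = line.split(sep)[0]
        names.append(line.strip().lower())
    return names

def unvetted(requirements: str, allowlist: set[str]) -> list[str]:
    """Return declared packages not present in the vetted mirror."""
    return [n for n in parse_requirements(requirements) if n not in allowlist]

VETTED = {"requests", "flask", "numpy"}  # stand-in for the enterprise mirror
reqs = """
requests==2.31.0
flask>=2.0
totally-real-helper  # plausible-sounding, but nobody ever published it
"""
print(unvetted(reqs, VETTED))  # flags the hallucinated package
```

A check like this turns a hallucinated (or typosquatted) package name into a hard build failure instead of a silent install from a public registry.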

 

How to survive as a CISO aka 'chief scapegoat officer'
Chief security officers should negotiate personal liability insurance and a golden parachute when they start a new job – in case things go sideways and management tries to scapegoat them for a network breach. And if they blow the whistle, it's best not to sue their employer as well, lest they get blacklisted. Those were among the nuggets of advice given at an RSA Conference panel on CISO whistleblowing.

 

 

Linus Torvalds Expresses His Hatred For Case-Insensitive File-Systems
Case-insensitive names are horribly wrong, and you shouldn't have done them at all. The problem wasn't the lack of testing, the problem was implementing it in the first place.
They didn't actually test for all the really interesting cases - the ones that cause security issues in user land. Security issues like "user space checked that the filename didn't match some security-sensitive pattern". And then the shit-for-brains filesystem ends up matching that pattern anyway, because the people who do case insensitivity INVARIABLY do things like ignore non-printing characters, so now "case insensitive" also means "insensitive to other things too".
❤ and ❤️ are two unicode characters that differ only in ignorable code points. And guess what? The cray-cray incompetent people who want those two to compare the same will then also have other random - and perhaps security-sensitive - files compare the same, just because they have ignorable code points in them. So now every single user mode program that checks that they don't touch special paths is basically open to being fooled into doing things they explicitly checked they shouldn't be doing. And no, that isn't something unusual or odd. Lots of programs do exactly that. Dammit. Case insensitivity is a BUG. The fact that filesystem people still think it's a feature, I cannot understand. It's like they revere the old FAT filesystem so much that they have to recreate it - badly.
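The bypass Torvalds describes can be sketched in a few lines. The comparison below is a deliberately simplified stand-in for what an over-eager case-folding filesystem might do (casefold plus dropping "ignorable" format code points); it is not any real filesystem's algorithm, and the file names are made up.

```python
# Illustration: a name comparison that ignores case AND ignorable code
# points collapses names that a userspace security check treats as distinct.
import unicodedata

def sloppy_key(name: str) -> str:
    """Casefold, then drop format/ignorable code points (category Cf)."""
    return "".join(
        ch for ch in name.casefold()
        if unicodedata.category(ch) != "Cf"
    )

protected = "secrets"
# U+200D ZERO WIDTH JOINER is a format (Cf) code point.
sneaky = "se\u200dcrets"

# Userspace check: the literal strings differ, so the path looks safe...
print(sneaky != protected)                          # True
# ...but the "insensitive" filesystem maps both names to the same file:
print(sloppy_key(sneaky) == sloppy_key(protected))  # True
```

Any program that blocklists `secrets` by string comparison, then hands the sneaky name to such a filesystem, ends up touching exactly the file it checked it would not.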

 

MFA

Alleged ‘Scattered Spider’ Member Extradited to U.S.
A 23-year-old Scottish man thought to be a member of the prolific Scattered Spider cybercrime group was extradited last week from Spain to the United States, where he is facing charges of wire fraud, conspiracy and identity theft. U.S. prosecutors allege Tyler Robert Buchanan and co-conspirators hacked into dozens of companies in the United States and abroad, and that he personally controlled more than $26 million stolen from victims. Buchanan and his co-conspirators targeted at least 45 companies in the United States and abroad, including Canada, India, and the United Kingdom.
The complaint against Buchanan says the FBI tied him to the 2022 SMS phishing attacks after discovering the same username and email address was used to register numerous Okta-themed phishing domains seen in the campaign. Authorities seized at least 20 digital devices when they raided Buchanan’s residence, and on one of those devices they found usernames and passwords for employees of three different companies targeted in the phishing campaign.

 

Why MFA is getting easier to bypass and what to do about it
An entire ecosystem has cropped up to help criminals defeat these forms of MFA. They employ an attack technique known as an adversary in the middle. The tools provide phishing-as-a-service toolkits that are marketed in online crime forums under names like Tycoon 2FA, Rockstar 2FA, Evilproxy, Greatness, and Mamba 2FA. In 2022, for instance, a single group used one such toolkit in a series of attacks that stole more than 10,000 credentials from 137 organizations and led to the network compromise of authentication provider Twilio, among others.
The problem with these forms of MFA is that the codes themselves are phishable, since they come in the form of numbers, and occasionally other characters, that are just as easy for the attacker to copy and enter into the site as passwords are.
Services that use WebAuthn are highly resistant to adversary-in-the-middle attacks, if not absolutely immune. There are two reasons for this. First, WebAuthn credentials are cryptographically bound to the URL they authenticate. In the above example, the credentials would work only on hxxps://accounts[.]google[.]com. If a victim tried to use the credential to log in to hxxps://accounts.google[.]com[.]evilproxy[.]com, the login would fail each time. Additionally, WebAuthn-based authentication must happen on or in proximity to the device the victim is using to log in to the account. This occurs because the credential is also cryptographically bound to a victim device. Because the authentication can only happen on the victim device, it’s impossible for an adversary in the middle to actually use it in a phishing attack on their own device.
WebAuthn-based MFA comes in multiple forms; a key, known as a passkey, stored on a phone, computer, Yubikey, or similar dongle is the most common example.
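The origin binding described above comes down to a check the relying party performs on the clientDataJSON that the browser fills in and the authenticator signs over. The sketch below shows only that one check, with illustrative domain names; a real WebAuthn verification also validates the challenge, RP ID hash, and signature.

```python
# Sketch of the WebAuthn origin check a relying party performs on the
# browser-supplied clientDataJSON. Domains are illustrative examples.
import json

EXPECTED_ORIGIN = "https://accounts.google.com"

def origin_ok(client_data_json: bytes) -> bool:
    """The browser records the real origin; the server rejects mismatches."""
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.get"
            and data.get("origin") == EXPECTED_ORIGIN)

legit = json.dumps({"type": "webauthn.get",
                    "origin": "https://accounts.google.com",
                    "challenge": "..."}).encode()
phished = json.dumps({"type": "webauthn.get",
                      "origin": "https://accounts.google.com.evilproxy.com",
                      "challenge": "..."}).encode()

print(origin_ok(legit))    # True
print(origin_ok(phished))  # False: the proxy's origin can never match
```

Because the browser, not the attacker, writes the origin field, an adversary-in-the-middle proxy always presents its own domain and the assertion is rejected, which is why these credentials do not phish the way one-time codes do.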

 

Microsoft’s new “passwordless by default” is great but comes at a cost
A key part of the “passwordless by default” initiative Microsoft announced is encouraging the use of passkeys—the new alternative to passwords that Microsoft, Google, Apple, and a large roster of other companies are developing under the coordination of the FIDO Alliance.
Microsoft will make passkeys the default means for new users to sign in. Existing users who have yet to enroll a passkey will be presented with a prompt to do so the next time they log in. Left out of Microsoft’s announcement is that even after users create a passkey, they can’t go passwordless until they install the Microsoft Authenticator app on their phone. Microsoft has made Authy, Google Authenticator, and similar apps incompatible, a choice that needlessly inconveniences users.
Using Microsoft Authenticator isn’t a requirement for using a passkey, but account holders who don’t have it will be unable to ditch their login passwords. With a password still associated with the account, many of the security benefits of passkeys are undermined.
[rG: Personal users need to carefully consider where they have their authenticators, what happens when they lose their mobile phone, and how to recover access to their accounts.]

 

VENDORS & PLATFORMS

Ask AI CVE Insights Cards
CVE Insights Cards offer a comprehensive view of every CVE, continuously updated with real-time information: timelines, exploit and patch data, associated threat actors, malware, and more. They’re a powerful resource for understanding a vulnerability and digging into key details like affected versions or mitigation techniques. But even with all that data, sometimes you need more: answers to specific questions, support for a custom scoring model, or data formatted to fit your workflow. That’s where Ask AI comes in. Now integrated directly into CVE Insights Cards, Ask AI helps you extract the exact insights you need—faster and with less effort.

 

LG will shut down update servers for its Android smartphones on June 30
LG announced just over four years ago that it would depart the smartphone business, and now the clock is running out on any remaining updates for the company’s Android phones. When LG called it quits for Android smartphones, the company also committed to a few more updates. That included an Android 12 update for select devices, the last major update the company would put out, as well as security updates for at least three years after each device had been released. That three-year cutoff has long since passed for all LG devices, but any devices still floating around out there will soon no longer be able to pull updates.
[rG: Reminder that network connected devices aren’t supported indefinitely.] 

 

LEGAL & REGULATORY

TikTok fined $600 million for China data transfers that broke EU privacy rules
Ireland’s Data Protection Commission also sanctioned TikTok for not being transparent with users about where their personal data was being sent and ordered the company to comply with the rules within six months. The Irish national watchdog serves as TikTok’s lead data privacy regulator in the 27-nation EU because the company’s European headquarters is based in Dublin.
“TikTok failed to verify, guarantee and demonstrate that the personal data of (European) users, remotely accessed by staff in China, was afforded a level of protection essentially equivalent to that guaranteed within the EU.”
Under the EU rules, known as the General Data Protection Regulation, European user data can only be transferred outside of the bloc if there are safeguards in place to ensure the same level of protection.
TikTok said that the decision focuses on a “select period” ending in May 2023, before it embarked on a data localization project called Project Clover that involved building three data centers in Europe. “The facts are that Project Clover has some of the most stringent data protections anywhere in the industry, including unprecedented independent oversight by NCC Group, a leading European cybersecurity firm. The decision fails to fully consider these considerable data security measures.”
TikTok says it is being “singled out” even though it uses the “same legal mechanisms” that thousands of other companies in Europe do and its approach is “in line” with EU rules.

 

Ex-Disney employee gets 3 years in the clink for goofy attacks on mousey menus
Former Disney employee Michael Scheuer was sentenced to 36 months in prison and fined almost $688,000 for screwing up a software application the entertainment giant used to cook up its restaurant menus.
Scheuer served as the Menu Production Manager for Disney prior to being fired on June 13, 2024, for misconduct. In July, Scheuer retaliated against the media powerhouse by making unauthorized changes to Disney restaurant menus through its Menu Creator application, hosted by an unidentified third party. The changes included the replacement of fonts specified by the Menu Creator configuration file with Wingdings. These font changes propagated throughout the database, resulting in every menu displaying the same generic font rather than the themed fonts applied to each menu. Further, this caused the Menu Creator system to become inoperable while the font changes were pushed to all of the menus. Scheuer also made changes to menu images and background files, such that they loaded as blank white pages.
Among the changes made by Scheuer to the menus were changes to allergen information and pricing. Scheuer added notations to menu items indicating they were safe for people with specific allergies, which could have had fatal consequences depending on the type and severity of a customer's allergy. Other alterations included changing the wine regions to areas associated with mass shootings and the addition of graphics including a swastika.
A subsequent round of attacks on a different SFTP server involved altering QR codes on menus to load a website promoting a boycott of Israel.
The app was down for one to two weeks for repairs and Disney no longer uses it.
Scheuer was able to attack the app three ways: one, through an administrative account, accessed over a commercial VPN called Mullvad; two, through a URL-based access mechanism that was made available to contractors; and three, through the separate SFTP server.

 

23andMe requiring potential bidders to affirm they will uphold data privacy
Bidders will need to submit documentation of their intended use of any data, describe the privacy programs and security controls they have in place or would implement, and say whether they would ask for current privacy policies to be amended.