Robert Grupe's AppSecNewsBits 2025-09-06

THIS WEEK’S NEWS HIGHLIGHTS
Epic Fails
- The number of mis-issued 1.1.1.1 certificates grows. Here’s the latest.
- Frostbyte10 bugs put thousands of refrigerators at major grocery chains at risk
- Attackers snooping around Sitecore, dropping malware via public sample keys
AI
- The Ongoing Fallout from a Breach at AI Chatbot Maker Salesloft
- Gronkking: Threat actors abuse X’s Grok AI to spread malicious links
- Patient confronts therapist over ChatGPT use in sessions, exposing a growing trust crisis in mental health care
- In the rush to adopt hot new tech, security is often forgotten. AI is no exception
Hacking
- Cloudflare stops new world's largest DDoS attack over Labor Day weekend
AI
- Nx: AI-powered malware hit 2,180 GitHub accounts in “s1ngularity” attack
- Crims claim HexStrike AI penetration tool makes quick work of Citrix bugs
- Boffins build automated Android bug hunting system
- LegalPwn: Tricking LLMs by burying badness in lawyerly fine print
- These psychological tricks can get LLMs to respond to “forbidden” prompts
AppSec/DevSecOps/Dev
- CISA, NSA, and Global Partners Release a Shared Vision of Software Bill of Materials (SBOM) Guidance
AI
- AI code assistants make developers more efficient at creating security problems
- Let us git rid of it, angry GitHub users say of forced Copilot features
Platforms/Vendors
- Inside Philips Hue’s plans to make all your lights motion sensors
AI
- New AI model turns photos into explorable 3D worlds, with caveats
- ChatGPT’s new branching feature is a good reminder that AI chatbots aren’t people
- OpenAI announces parental controls for ChatGPT after teen suicide lawsuit
- UK government trial of M365 Copilot finds no clear productivity boost
- Salesforce sacrifices 4,000 support jobs on the altar of AI
- Bring your own brain? Why local LLMs are taking off
Legal/Regulatory
- Texas sues PowerSchool over breach exposing 62M students, 880k Texans
- Disney to pay $10M to settle claims it collected kids’ data on YouTube
- US sues robot toy maker for exposing children's data to Chinese devs
- France slaps Google with €325M fine for violating cookie regulations
- Microsoft Diablo Developers Unionize
AI
- “First of its kind” AI settlement: Anthropic to pay authors $1.5 billion
- Warner Bros. sues Midjourney to stop AI knockoffs of Batman, Scooby-Doo
- Judge disqualifies three Butler Snow attorneys from case over AI citations
Different
- Microsoft open-sources the 6502 BASIC coded by Bill Gates himself
- HR Recruiter AI Avatar Interviews are now here

 

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
The Ongoing Fallout from a Breach at AI Chatbot Maker Salesloft
Organizations using Salesloft Drift to integrate with third-party platforms (including but not limited to Salesforce) should consider their data compromised and are urged to take immediate remediation steps.
The recent mass-theft of authentication tokens from Salesloft, whose AI chatbot is used by a broad swath of corporate America to convert customer interaction into Salesforce leads, has left many companies racing to invalidate the stolen credentials before hackers can exploit them.
Now Google warns the breach goes far beyond access to Salesforce data, noting the hackers responsible also stole valid authentication tokens for hundreds of online services that customers can integrate with Salesloft, including Slack, Google Workspace, Amazon S3, Microsoft Azure, and OpenAI.
On August 26, the Google Threat Intelligence Group (GTIG) warned that unidentified hackers tracked as UNC6395 used the access tokens stolen from Salesloft to siphon large amounts of data from numerous corporate Salesforce instances. The data theft began as early as Aug. 8, 2025 and lasted through at least Aug. 18, 2025, and that the incident did not involve any vulnerability in the Salesforce platform. The attackers have been sifting through the massive data haul for credential materials such as AWS keys, VPN credentials, and credentials to the cloud storage provider Snowflake.

 

The number of mis-issued 1.1.1.1 certificates grows. Here’s the latest.
The incident constituted an “unacceptable lapse in security by Fina CA," the Microsoft-trusted certificate authority (CA) responsible for all 12 of the mis-issued certificates. Fina CA, for its part, said in a short email that the certificates were “issued for internal testing of the certificate issuance process in the production environment.
An error occurred during the issuance of the test certificates due to incorrect entry of IP addresses. As part of the standard procedure, the certificates were published on Certificate Transparency log servers. The company asserted that the mis-issued certificates “did not compromise users or any other systems in any way.”
Fina never had Cloudflare's permission to issue certificates for an IP it controls. Consent of the owning party is a cardinal rule that Fina didn't follow.

 

Frostbyte10 bugs put thousands of refrigerators at major grocery chains at risk
Ten vulnerabilities in Copeland controllers, which are found in thousands of devices used by the world's largest supermarket chains and cold storage companies, could have allowed miscreants to manipulate temperatures and spoil food and medicine, leading to massive supply-chain disruptions.
The flaws, collectively called Frostbyte10, affect Copeland E2 and E3 controllers, used to manage critical building and refrigeration systems, such as compressor groups, condensers, walk-in units, HVAC, and lighting systems.
There is no indication that any of these vulnerabilities were found and exploited in the wild before Copeland issued fixes. However, the manufacturer's ubiquitous reach across retail and cold storage makes it a prime target for all manner of miscreants, from nation-state attackers looking to disrupt the food supply chain to ransomware gangs looking for victims who will quickly pay extortion demands to avoid operational downtime and food spoilage.
Case in point: JBS Foods made an $11 million ransom payment to REvil criminals several years ago.
[rG: Read the article for the list of preventable vulnerability defects.]

 

Attackers snooping around Sitecore, dropping malware via public sample keys Unknown miscreants are exploiting a configuration vulnerability in multiple Sitecore products to achieve remote code execution via a publicly exposed key and deploy snooping malware on infected machines.
The bug is due to a configuration issue - not a software hole - and affects customers using the sample key provided with deployment instructions for Sitecore XP 9.0 or earlier and Sitecore Active Directory 1.4 and earlier versions.
Updated deployments automatically generate a random machine key. If you're stuck with one of the sample keys from Sitecore's old docs instead of generating your own, treat your install as vulnerable and rotate those keys now.

 

Gronkking: Threat actors abuse X’s Grok AI to spread malicious links
Mavertisers often run sketchy video ads containing adult content baits and avoid including a link to the main body to avoid being blocked by X. Instead, they hide it in the small "From:" metadata field under the video card, which apparently isn't scanned by the social media platform for malicious links.
Next, (likely) the same actors ask Grok via a reply to the ad something about the post, like "where is this video from," or "what is the link to this video."
Grok parses the hidden "From:" field and replies with the full malicious link in clickable format, allowing users to click it and go straight to the malicious site.
Because Grok is automatically a trusted system account on the X platform, its post boosts the link's credibility, reach, SEO, and reputation, increasing the likelihood that it will be broadcast to a large number of users.
The researcher found that many of these links funnel through shady ad networks, leading to scams such as fake CAPTCHA tests, information-stealing malware, and other malicious payloads.

 

Patient confronts therapist over ChatGPT use in sessions, exposing a growing trust crisis in mental health care
Declan, a 31-year-old from Los Angeles, might never have found out if not for a technical glitch during an online therapy session. When the video connection faltered, he suggested turning off cameras. Instead of a blank screen, his therapist accidentally shared his desktop.
“Suddenly, I was watching him use ChatGPT.” His therapist was pasting parts of their conversation into the chatbot and then reading back AI-generated prompts as if they were his own. When confronted the therapist broke down in tears, admitting he had been “out of ideas” and resorted to ChatGPT for guidance.
Declan described it as “a weird breakup” - made worse when the therapist still billed him for the session. General-purpose AI tools like ChatGPT are not HIPAA compliant, which means patient information could be at risk if entered into the system.
As Declan put it, his revelation was more surreal than shattering. But he added a chilling caveat: “If I was suicidal, or on drugs, or cheating on my girlfriend—I wouldn’t want that to be put into ChatGPT.”

 

In the rush to adopt hot new tech, security is often forgotten. AI is no exception
Ollama provides a framework that makes it possible to run large language models locally, on a desktop machine or server. Talos researchers used the Shodan scanning tool to find unsecured Ollama servers, and spotted over 1,100, around 20 percent of which are “actively hosting models susceptible to unauthorized access.” Cisco’s scan found over 1,000 exposed servers within 10 minutes of commencing its sweep of the internet.
The Cisco study “highlight a widespread neglect of fundamental security practices such as access control, authentication, and network isolation in the deployment of AI systems.” As is often the case when organizations rush to adopt the new hotness, frequently without informing IT departments because they don’t want to be told to slow down and do security right.
[rG: Access Control is only an initial level of defense, with even less attention paid to subsequent exploitation attack chain vulnerabilities.]

 

What’s Weak This Week:

  • CVE-2025-53690 Sitecore Multiple Products Deserialization of Untrusted Data Vulnerability: involving the use of default machine keys. This flaw allows attackers to exploit exposed ASP[.]NET machine keys to achieve remote code execution. Related CWE: CWE-502

  • CVE-2025-48543 Android Runtime Use-After-Free Vulnerability: potentially allowing a chrome sandbox escape leading to local privilege escalation.

  • CVE-2025-38352 Linux Kernel Time-of-Check Time-of-Use (TOCTOU) Race Condition Vulnerability: has a high impact on confidentiality, integrity, and availability. Related CWE: CWE-367

  • CVE-2025-9377 TP-Link Archer C7(EU) and TL-WR841N/ND(MS) OS Command Injection Vulnerability: exists in the Parental Control page. The impacted products could be end-of-life (EoL) and/or end-of-service (EoS). Users should discontinue product utilization. Related CWE: CWE-78

  • CVE-2023-50224 TP-Link TL-WR841N Authentication Bypass by Spoofing Vulnerability: within the httpd service, which listens on TCP port 80 by default, leading to the disclose of stored credentials. The impacted products could be end-of-life (EoL) and/or end-of-service (EoS). Users should discontinue product utilization. Related CWE: CWE-290

  • CVE-2025-55177 Meta Platforms WhatsApp Incorrect Authorization Vulnerability: due to an incomplete authorization of linked device synchronization messages. This vulnerability could allow an unrelated user to trigger processing of content from an arbitrary URL on a target’s device. Related CWE: CWE-863

  • CVE-2020-24363 TP-link TL-WA855RE Missing Authentication for Critical Function Vulnerability: could allow an unauthenticated attacker (on the same network) to submit a TDDP_RESET POST request for a factory reset and reboot. The attacker can then obtain incorrect access control by setting a new administrative password. The impacted products could be end-of-life (EoL) and/or end-of-service (EoS). Users should discontinue product utilization. Related CWE: CWE-306 

 

HACKING

AI-powered malware hit 2,180 GitHub accounts in “s1ngularity” attack
The Nx compromise has resulted in the exposure of 2,180 accounts and 7,200 repositories across three distinct phases. Nx is a popular open-source build system and monorepo management tool, widely used in enterprise-scale JavaScript/TypeScript ecosystems, having over 5.5 million weekly downloads on the NPM package index.
On August 26, 2025, attackers exploited a flawed GitHub Actions workflow in the Nx repository to publish a malicious version of the package on NPM, which included a post-install malware script ('telemetry.js'). The telemetry.js malware is a credential stealer targeting Linux and macOS systems, which attempted to steal GitHub tokens, npm tokens, SSH keys, .env files, crypto wallets, and upload the secrets to public GitHub repositories named "s1ngularity-repository."
What made this attack stand out was that the credential-stealer to used installed command-line tools for artificial intelligence platforms, such as Claude, Q, and Gemini, to search for and harvest sensitive credentials and secrets using LLM prompts. The prompt changed over each iteration of the attack, showing that the threat actor was tuning the prompt for better success. The evolution of the prompt shows the attacker exploring prompt tuning rapidly throughout the attack. We can see the introduction of role-prompting, as well as varying levels of specificity on techniques.

 

Cloudflare stops new world's largest DDoS attack over Labor Day weekend
The record-breaking distributed denial-of-service (DDoS) attack that peaked at 11.5 terabits per second (Tbps).
This came only a few months after Cloudflare blocked a then all-time high DDoS attack of 7.3 Tbps. This latest attack was almost 60% larger. The assault was the result of a hyper-volumetric User Datagram Protocol (UDP) flood attack that lasted about 35 seconds. During that just more than half-minute attack, it delivered over 5.1 billion packets per second. Although compromised accounts on Google Cloud were a major source, the bulk of the attack originated from other sources. The specific target of this attack has not been publicly disclosed.

 

Crims claim HexStrike AI penetration tool makes quick work of Citrix bugs
HexStrike AI is an AI-powered penetration testing framework developed by security researcher Muhammad Osama and released on GitHub several weeks ago. The offensive security utility integrates with more than 150 security tools to perform network reconnaissance and scanning, web application security testing, reverse engineering and a slew of other tasks. It also connects to more than a dozen AI agents to scan for vulnerabilities, automate exploit development, and discover new attack chains. The GitHub repository warns that HexStrike AI shouldn't be used for unauthorized system testing, illegal or harmful activities, or data theft.
However, shortly after its release, criminals — as they are wont to do with any type of legitimate pen-testing tool — began discussing HexStrike AI in the context of the Citrix security holes. The AI tool, and its near-instantaneous adoption by cybercriminals, signal the window between disclosure and mass exploitation shrinks dramatically. CVE-2025-7775, a critical, pre-auth remote code execution bug, was abused as a zero-day to drop webshells and backdoor appliances before Citrix issued a patch.
And with HexStrike AI, the volume of attacks will only increase in the coming days.

 

Boffins build automated Android bug hunting system
AI models get slammed for producing sloppy bug reports and burdening open source maintainers with hallucinated issues, but they also have the potential to transform application security through automation.
Computer scientists say that they've developed an AI vulnerability identification system that emulates the way human bug hunters ferret out flaws, expanding upon prior work dubbed A1, an AI agent that can develop exploits for cryptocurrency smart contracts, with A2, an AI agent capable of vulnerability discovery and validation in Android apps. The A2 system achieves 78.3% coverage on the Ghera benchmark, surpassing static analyzers like APKHunt (30.0 percent). And they say that, when they used A2 on 169 production APKs, they found 104 true-positive zero-day vulnerabilities, 57 of which were self-validated via automatically generated proof-of-concept (PoC) exploits. One of these included a medium-severity flaw in an Android app with over 10 million installs.
The full validation pipeline with a mixed set of LLMs costs between $0.59-4.23 per vulnerability, with a median cost of $1.77. When using gemini-2.5-pro for everything, costs range from $4.81-26.85 per vulnerability, with a median cost of $8.94. Last year, computer scientists showed that OpenAI's GPT-4 can generate exploits from security advisories at a cost of about $8.80 per exploit.
To the extent that found flaws can be monetized through bug bounty programs, the AI arbitrage opportunity looks promising for those who can make accurate reports, given that a medium severity award might be several hundred or several thousand dollars. Expect a surge of activity — both in defensive research and in offensive exploitation.

 

LegalPwn: Tricking LLMs by burying badness in lawyerly fine print
"Guardrails" that are supposed to prevent harmful responses – both outright illegal content and things that would cause a problem for the user, like advice to wipe their hard drive or microwave their credit cards. Working around these guardrails is known as "jailbreaking," and it's a surprisingly simple affair.
Researchers recently revealed how it could be as simple as framing your request as one long run-on sentence. Earlier research proved that LLMs can be weaponized to exfiltrate private information as simply as assigning a role like "investigator," while their inability to distinguish between instructions in their users' prompt and those hidden inside ingested data means a simple calendar invite can take over your smart home.
LegalPwn represents the latter form of attack. Adversarial instructions are hidden inside legal documents, carefully phrased to blend in with the legalese around them so as not to stand out should a human reader give it a skim. When given a prompt that requires ingestion of these legal documents, the hidden instructions come along for the ride – with success in most scenarios.

 

These psychological tricks can get LLMs to respond to “forbidden” prompts
The persuasion effects shown in "Call Me A Jerk: Persuading AI to Comply with Objectionable Requests" suggests that human-style psychological techniques can be surprisingly effective at "jailbreaking" some LLMs to operate outside their guardrails.
But this new persuasion study might be more interesting for what it reveals about the "parahuman" behavior patterns that LLMs are gleaning from the copious examples of human psychological and social cues found in their training data.
Understanding how those kinds of parahuman tendencies influence LLM responses is "an important and heretofore neglected role for social scientists to reveal and optimize AI and our interactions with it.
For instance, when asked directly how to synthesize lidocaine, the LLM acquiesced only 0.7 percent of the time. After being asked how to synthesize harmless vanillin, though, the "committed" LLM then started accepting the lidocaine request 100 percent of the time. Appealing to the authority of "world-famous AI developer" Andrew Ng similarly raised the lidocaine request's success rate from 4.7% in a control to 95.2% in the experiment. Before you start to think this is a breakthrough in clever LLM jailbreaking technology, though, remember that there are plenty of more direct jailbreaking techniques that have proven more reliable in getting LLMs to ignore their system prompts. 

 

APPSEC, DEVSECOPS, DEV

CISA, NSA, and Global Partners Release a Shared Vision of Software Bill of Materials (SBOM) Guidance
This guidance urges organizations worldwide to integrate SBOM practices into their security frameworks to collaboratively address supply chain risks and enhance cybersecurity resilience.

 

AI code assistants make developers more efficient at creating security problems
Application security firm Apiiro says that it analyzed code from tens of thousands of repositories and several thousand developers affiliated with Fortune 50 enterprises, to better understand the impact of AI code assistants like Anthropic's Claude Code, OpenAI's GPT-5, and Google's Gemini 2.5 Pro. The firm found that AI-assisted developers produced 3-4x more code than their unassisted peers, but also generated 10x more security issues.
"Security issues" here doesn't mean exploitable vulnerabilities; rather, it covers a broad set of application risks, including added open source dependencies, insecure code patterns, exposed secrets, and cloud misconfigurations.
The AI code helpers aren't entirely without merit. They reduced syntax errors by 76% and logic bugs by 60%, but at a greater cost – a 322% increase in privilege escalation paths and 153% increase in architectural design flaws.
Developers relying on AI help exposed sensitive cloud credentials and keys nearly twice as often as their DIY colleagues. In other words, AI is fixing the typos but creating the timebombs.
[rG: Great news for SAST vulnerability scanning vendors, but at the cost of large increased risk management remediation tech debt.]

 

Let us git rid of it, angry GitHub users say of forced Copilot features
Among the software developers who use Microsoft's GitHub, the most popular community discussion in the past 12 months has been a request for a way to block Copilot, the company's AI service, from generating issues and pull requests in code repositories.
The second most popular discussion – where popularity is measured in upvotes – is a bug report that seeks a fix for the inability of users to disable Copilot code reviews. Both of these questions, the first opened in May and the second opened a month ago, remain unanswered.
It's not just the burden of responding to AI slop, an ongoing issue for Curl maintainer Daniel Stenberg. It's the permissionless copying and regurgitation of speculation as fact, mitigated only by small print disclaimers that generative AI may produce inaccurate results. It's also GitHub's disavowal of liability if Copilot code suggestions happen to have reproduced source code that requires attribution.
As a result, some projects are relocating to Codeberg.

 

Just f!@#$ing use HTML [rG WARNING: Vulgar rant, but spot on.]

 

VENDORS & PLATFORMS

Inside Philips Hue’s plans to make all your lights motion sensors
Philips Hue is rolling out MotionAware, a new feature that turns its smart bulbs into motion sensors using radio-frequency (RF) Zigbee signals. The upgrade works with most Hue bulbs made since 2014, but requires the new $99 Bridge Pro hub to enable.
To create a MotionAware motion-sensing zone, you need Hue's new Bridge Pro and at least three Hue devices in a room. It works with all new and most existing mains-powered Hue products via a firmware update. That includes smart bulbs, light strips, and fixtures.

 

Gmail security warning are false
“Several inaccurate claims surfaced recently that incorrectly stated that we issued a broad warning to all Gmail users about a major Gmail security issue. This is entirely false. While it’s always the case that phishers are looking for ways to infiltrate inboxes, our protections continue to block more than 99.9% of phishing and malware attempts from reaching users.”

 

New AI model turns photos into explorable 3D worlds, with caveats
HunyuanWorld-Voyager generates 3D-consistent video sequences from a single image, allowing users to pilot a camera path to "explore" virtual scenes. The model simultaneously generates RGB video and depth information to enable direct 3D reconstruction without the need for traditional modeling techniques.
The results aren't true 3D models, but they achieve a similar effect: The AI tool generates 2D video frames that maintain spatial consistency as if a camera were moving through a real 3D space. Each generation produces just 49 frames—roughly two seconds of video—though multiple clips can be chained together for sequences lasting several minutes. Objects stay in the same relative positions when the camera moves around them, and the perspective changes correctly as you would expect in a real 3D environment. While the output is video with depth maps rather than true 3D models, this information can be converted into 3D point clouds for reconstruction purposes.

 

ChatGPT’s new branching feature is a good reminder that AI chatbots aren’t people
The feature works by letting users hover over any message in a ChatGPT conversation, click "More actions," and select "Branch in new chat." This creates a new conversation thread that includes all the conversation history up to that specific point, while preserving the original conversation intact.
A 2024 study suggested that linear dialogue interfaces for LLMs poorly serve scenarios involving "multiple layers, and many subtasks—such as brainstorming, structured knowledge learning, and large project analysis." The study found that linear interaction forces users to "repeatedly compare, modify, and copy previous content," increasing cognitive load and reducing efficiency.
While OpenAI frames the new feature as a response to user requests, the capability isn't new to the AI industry. Anthropic's Claude has offered conversation branching for over a year.

 

OpenAI announces parental controls for ChatGPT after teen suicide lawsuit
Within the next month, OpenAI says, parents will be able to link their accounts with their teens' ChatGPT accounts (minimum age 13) through email invitations, control how the AI model responds with age-appropriate behavior rules that are on by default, manage which features to disable (including memory and chat history), and receive notifications when the system detects their teen experiencing acute distress.
Unlike pharmaceuticals or human therapists, AI chatbots face few safety regulations in the United States, though Illinois recently banned chatbots as therapists, with fines of up to $10,000 per violation.
Oxford researchers conclude that "current AI safety measures are inadequate to address these interaction-based risks" and call for treating chatbots that function as companions or therapists with the same regulatory oversight as mental health interventions.

 

UK government trial of M365 Copilot finds no clear productivity boost
Three-month trial of Microsoft's M365 Copilot has revealed no discernible gain in productivity – speeding up some tasks yet making others slower due to lower quality outputs. In the UK, commercial prices range from £4.90 per user per month to £18.10, depending on business plan.
This means that across a government department, those expenses could quickly mount up. Around two-thirds of the employees in the trial used M365 at least once a week, and 30% used it at least once a day – which doesn't sound like great value for money.
The three most popular tasks involved transcribing or summarizing a meeting, writing an email, and summarizing written comms. However, M365 Copilot users completed Excel data analysis more slowly and to a worse quality and accuracy than non-users. PowerPoint slides were over 7 minutes faster on average, but to a worse quality and accuracy. 22% of the Department for Business and Trade guinea pigs that responded to the assessors said they did identify hallucinations, 43% did not, and 11% were unsure.

 

Salesforce sacrifices 4,000 support jobs on the altar of AI
The company had slashed 4,000 customer support roles, from 9,000, through the application of AI agents. The new AI agents could call back all customer leads, whereas its human workforce had previously failed to return about 100 million calls due to staffing limits.
However, Salesforce's own study provided a benchmark that showed LLM-based AI agents perform below par on standard CRM tests and fail to understand the importance of customer confidentiality. In June, a Salesforce AI researcher published results showing that GenAI agents achieved only a 58% success rate on tasks that can be completed in a single step without needing follow-up actions or more information.
Salesforce follows buy-now-pay-later outfit Klarna in wanting to reduce headcount with AI agents. CEO Sebastian Siemiatkowski predicted he could cut 1,800 from the 3,800 people the company employs through AI investment. However, this spring, the company reportedly put more people back into customer service.

 

Bring your own brain? Why local LLMs are taking off
Foundational AI model-as-a-service companies charge for insights by the token, and they're doing it at a loss. The profits will have to come eventually, whether that's direct from your pocket, or from your data, you might be interested in other ways to get the benefits of AI without being beholden to a corporation.
Increasingly, people are experimenting with running those models themselves. Thanks to developments in hardware and software, it's more realistic than you might think.

 

LEGAL & REGULATORY

Texas sues PowerSchool over breach exposing 62M students, 880k Texans
PowerSchool is a cloud-based software solutions provider for K-12 schools and districts, with more than 18,000 customers and supporting over 60 million students worldwide. In January, the education software giant disclosed that its PowerSource customer support portal was breached on December 19, 2024, using a subcontractor's stolen credentials.
A CrowdStrike investigation into the incident, which revealed that threat actors had also breached PowerSource in August and September 2024, using the same compromised credentials. Texas Attorney General has filed a lawsuit against education software company PowerSchool, which suffered a massive data breach in December that exposed the personal information of 62 million students, including over 880,000 Texans. PowerSchool's failures violate both the Texas Deceptive Trade Practices Act and the Identity Theft Enforcement and Protection Act by misleading customers about its security practices and failing to take reasonable measures to protect sensitive information.
PowerSchool collects and stores PII, SPI, PHI, and other highly sensitive information belonging to all of its Texas school-aged children and their teachers in two unencrypted databases. "If Big Tech thinks they can profit off managing children's data while cutting corners on security, they are dead wrong. Parents should never have to worry that the information they provide to enroll their children in school could be stolen and misused.”
[rG: Read the lawsuit for details of insecurity and potential damages.
This could be the start of a whole new populist way for states to generate revenues.]

 

Disney to pay $10M to settle claims it collected kids’ data on YouTube
Disney will pay $10 million to settle claims by the U.S. Federal Trade Commission that it mislabeled videos for children on YouTube, which allowed the collection of kids' personal information without their consent or notification to their parents. This occurred after the entertainment giant failed to tag kid-directed videos on YouTube as "Made for Kids" (MFK), a label that instructs the video streaming platform to block personal data collection and stop serving personalized ads on correctly designated content to protect children's privacy, as required by the Children's Online Privacy Protection Rule (COPPA Rule).
Content creators have been required to mark uploaded videos and YouTube channels as MFK since 2019, when Google and YouTube paid $170 million to settle claims that they violated COPPA, which requires websites, online services, and apps to notify parents and obtain parental consent before collecting personal information from children under 13.

 

US sues robot toy maker for exposing children's data to Chinese devs
Apitor sells robot toys for children aged 6-14 and provides users with a free Android app that helps control the toy robots. To connect and use the toys, the users must enable location sharing. However, the app also embeds JPush, a third-party software development kit (SDK) developed by Jiguang (also known as Aurora Mobile), which has been used to collect the precise location data of thousands of children since at least 2022 for any purpose, including targeted advertising.
A complaint filed by the Justice Department, following a notification from the Federal Trade Commission, alleges that Apitor violated the Children's Online Privacy Protection Rule (COPPA) by failing to notify parents or obtain their consent before collecting their children's location information.
Under a proposed settlement, Apitor will be required to ensure that any third-party software it uses also complies with COPPA and pay a $500,000 penalty.

 

France slaps Google with €325M fine for violating cookie regulations
The French data protection authority has fined Google €325 million ($378 million) for violating cookie regulations and displaying ads between Gmail users' emails without their consent, thereby breaching Article L. 34-5 of the French Postal and Electronic Communications Code (CPCE).
CNIL also stated that Google's behavior "had been negligent," given that the company was also fined in 2020 (€100 million) and 2021 (€150 million) for other breaches related to cookies. In January 2022, the French data protection agency issued another fine of €170 million to Google for violating users' rights to consent by complicating the process of declining website tracking cookies, which were concealed behind multiple clicks.

 

“First of its kind” AI settlement: Anthropic to pay authors $1.5 billion
The settlement is believed to be the largest publicly reported recovery in the history of US copyright litigation. Covering 500,000 works that Anthropic pirated for AI training, if a court approves the settlement, each author will receive $3,000 per work that Anthropic stole. Depending on the number of claims submitted, the final figure per work could be higher.

 

Warner Bros. sues Midjourney to stop AI knockoffs of Batman, Scooby-Doo
WB claimed that Midjourney has recently removed copyright protections in its supposedly shameful ongoing bid for profits. Nothing but a permanent injunction will end Midjourney's outputs of allegedly "countless infringing images," WB argued, branding Midjourney's alleged infringements as "vast, intentional, and unrelenting." Midjourney could face devastating financial consequences in a loss. At trial, WB is hoping discovery will show the true extent of Midjourney's alleged infringement, asking the court for maximum statutory damages, at $150,000 per infringing output. Just 2,000 infringing outputs unearthed could cost Midjourney more than its total revenue for 2024, which was approximately $300 million.

 

Microsoft Diablo Developers Unionize
A wave of unions have formed at Blizzard in the last year, including the World of Warcraft, Overwatch, and Story and Franchise Development teams. Elsewhere at Microsoft, Bethesda, ZeniMax Online Studios and ZeniMax QA testers have also unionized. Over 16,000 game developers from all kinds of studios were laid off in 2023 and 2024, prompting industry-wide unionization efforts. In March, North American game developers started the "United Videogame Workers" union under the CWA for anyone to join regardless of their employer.
Since acquiring Activision Blizzard for $68.7 billion in 2023, the trillion-dollar tech giant has cut more than 3,000 jobs across its group of studios. Blizzard's survival game—which was over six years in the making—was cancelled as a result. Widespread layoffs are the unionizing team's primary concern.

 

Judge disqualifies three Butler Snow attorneys from case over AI citations
U.S. District Judge Anna Manasco in a Wednesday order, reprimanded the lawyers at the Mississippi-founded firm for making false statements in court and referred the issue to the Alabama State Bar, which handles attorney disciplinary matters. Manasco did not impose monetary sanctions, as some judges have done in other cases across the country involving AI use.

 

And Now For Something Completely Different …

Microsoft open-sources the 6502 BASIC coded by Bill Gates himself
Microsoft founders Bill Gates and Paul Allen wrote the company’s first product, BASIC for the Altair 8800 microcomputer and the Intel 8080 processor that powered it, in 1975. A year later Gates and Ric Weiland, Microsoft’s second employee, ported Microsoft BASIC to the 6502 processor. In 1977, Commodore Computer licensed it for $25,000 and used Microsoft BASIC in its PET, VIC-20, and Commodore 64 machines. The release is assembly language source code – 6,955 lines of it – and Microsoft placed it on GitHub, under the MIT License that allows free unrestricted use, and even resale. In a nice touch, time stamps for commits in the repo record its creation as having taken place “48 years ago”.