Robert Grupe's AppSecNewsBits 2025-03-16

Highlights This Week: Chromecast, HP Printers, Microsoft, unencrypted sensitive data, AI going everywhere and still failing ...

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
'Uber for nurses' exposes 86K+ medical records, PII in open S3 bucket for months
More than 86,000 records containing nurses' medical records, facial images, ID documents and other sensitive information linked to health tech company ESHYFT were left sitting in a wide-open, misconfigured AWS S3 bucket for months, or possibly even longer.
Cybersecurity researcher Jeremiah Fowler spotted the non-password-protected, unencrypted database on January 4 and reported it two days later to ESHYFT, a New Jersey-based company that operates in 29 US states and bills itself as being "like an Uber for nurses." Fowler scrolled through a sample of the 86,341 exposed records and says these included user profile pictures and facial images, some showing lanyards with medical IDs and other credentials. The database also contained nurses' scanned driver's licenses and Social Security cards, CSV files with monthly work schedule logs, professional certificates, work assignment agreements, CVs and resumes, medical diagnoses, prescription records, and disability insurance claims. Many of these documents were oh-so-helpfully labeled "timecards," "user addresses," "disabled users" and the like, exactly the kind of information useful to identity thieves or scammers looking to commit employment or financial fraud. That puts both the healthcare workers and the facilities that employ them at risk of targeted cyberattacks and privacy-regulation violations, among other digital harms.
Even after ESHYFT was notified, the S3 bucket was not closed to public access until more than a month later.
The right way to secure this info would be to encrypt the sensitive documents in the database and then serve them to users via time-limited access tokens; once a token expires, the file is no longer accessible.
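As a rough illustration of that pattern (and not ESHYFT's actual stack), the sketch below uses AWS's boto3 SDK with hypothetical bucket and object names: the document is stored with server-side encryption and handed out only through a pre-signed URL that expires after 15 minutes.

```python
# Minimal sketch: encrypted-at-rest storage plus short-lived access links,
# instead of a publicly readable bucket. Bucket/key names are hypothetical.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-nurse-docs"
KEY = "credentials/nurse-12345-license.pdf"

# Store the object with server-side encryption (SSE-KMS).
s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=b"...document bytes...",
    ServerSideEncryption="aws:kms",
)

# Issue a time-limited link; after 15 minutes the URL stops working.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": KEY},
    ExpiresIn=900,
)
print(url)
```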

 

PowerSchool previously hacked in August, months before data breach
PowerSchool is a cloud-based K-12 software provider serving over 60 million students and 18,000 customers worldwide, offering enrollment, communication, attendance, staff management, learning, analytics, and finance solutions. PowerSchool has published a long-awaited CrowdStrike investigation into its massive December 2024 data breach, which determined that the company had been hacked more than four months earlier, in August, and then again in September.
Hackers had gained unauthorized access to its customer support portal, named PowerSource. This portal included a remote maintenance tool that allowed the threat actor to connect to customers' databases and steal sensitive information, including full names, physical addresses, contact information, Social Security numbers (SSNs), medical data, and grades. The available Student Information System (SIS) log data did not go back far enough to show whether the August and September activity included unauthorized access to PowerSchool SIS data.

 

Chinese Hackers Sat Undetected in Small Massachusetts Power Utility for Months
In late 2023, the general manager of a Massachusetts public utility company got a surprising phone call. It was an FBI agent, who told him that the Littleton Electric Light and Water Departments (LELWD) were being hacked. He initially thought it was a scam.
At the time, LELWD had been installing sensors from cybersecurity firm Dragos with the help of Department of Energy grants awarded by the American Public Power Association (APPA). The sensors helped LELWD confirm the extent of the malicious activity on the system and pinpoint when and where the attackers were going on the utility’s networks.
Hackers were looking for specific data related to [operational technology] operating procedures and spatial layout data relating to energy grid operations. Groups like Volt Typhoon don't always go for high-profile targets first. Small, underfunded utilities can serve as low-hanging fruit, allowing adversaries to test tactics, develop footholds, and pivot toward larger targets.

 

Firmware update bricks HP printers, makes them unable to use HP cartridges
Users have been reporting sudden problems using HP-brand toner in their M232–M237 series printers since their devices updated to firmware version 20250209. Users on HP's support forum say they see Error Code 11 and the hardware's toner light flashing when trying to print. Some said they've cleaned the contacts and reinstalled their toner but still can't print. [rG: One would expect regression testing to be part of every software update release.]

 

As Chromecast outage drags on, fix could be days to weeks away
Second-generation Chromecast and Chromecast Audio devices stopped working on March 9. Google hasn't said what went wrong, but an expired device authentication certificate authority is a likely cause.
Each Chromecast contains a cryptographic public-private key pair, installed at the factory, that together with a device certificate can create a digital signature proving the gadget is a legitimate Google-made device. The affected devices' intermediate certificate authority had a 10-year validity that expired on March 9, 2025, which means it can no longer be used by today's apps to complete this cryptographic check. Software analyzing the chain of trust rejects the whole thing as broken because of the expired intermediate authority, which is why people are seeing error messages about their Chromecast being an "untrusted" device, leaving the hardware effectively useless.
The fix is not simple. It will either involve a bit of a hack, with updated client apps that accept or work around the situation, or someone will need to replace all the key pairs shipped with the devices with ones issued under a new, valid certificate authority. Getting new keys onto devices will be a pain: some units have been factory reset and can't be initialized by a Google application because the bundled cert is untrusted, meaning the client software needs to be updated anyway. Given that the product family has been discontinued, teams will need to be pulled together to address this blunder.
And it does appear to be a blunder rather than planned or remotely triggered obsolescence; earlier Chromecasts have a longer certificate validity, of 20 years rather than 10.
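For the curious, here is a minimal sketch (not Google's code) of why chain validation fails: using Python's cryptography library, any chain whose intermediate CA has passed its expiry date is rejected, no matter how valid the device's own key pair is. The PEM file path is hypothetical.

```python
# Check whether an intermediate CA certificate has expired; once it has,
# clients reject the whole chain and the device reads as "untrusted".
from datetime import datetime
from cryptography import x509

with open("chromecast_intermediate_ca.pem", "rb") as f:  # hypothetical path
    intermediate = x509.load_pem_x509_certificate(f.read())

if datetime.utcnow() > intermediate.not_valid_after:
    print("Intermediate CA expired on", intermediate.not_valid_after)
else:
    print("Intermediate CA valid until", intermediate.not_valid_after)
```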

 

Microsoft apologizes for removing VSCode extensions used by millions
Microsoft has reinstated the 'Material Theme – Free' and 'Material Theme Icons – Free' extensions on the Visual Studio Marketplace after finding that the obfuscated code they contained wasn't actually malicious. Researchers Amit Assaraf and Itay Kruk, who were deploying AI-powered scanners seeking suspicious submissions on VSCode, first flagged them as potentially malicious. Their high-risk evaluation for Material Theme arose from what was detected as the presence of code execution capabilities in the theme's "release-notes.js" file, which was also heavily obfuscated. The publisher said that they could have removed this dependency from the themes in seconds if Microsoft had contacted them, but instead they were banned without warning. Microsoft's Scott Hanselman apologized to the themes' publisher, Astorino, in a GitHub issue the developer opened asking for his account and themes to be reinstated.

  

What’s Weak This Week
Which of these vulnerabilities are in your application code?

  • CVE-2024-57968 Advantive VeraCore Unrestricted File Upload Vulnerability:
    Allows a remote unauthenticated attacker to upload files to unintended folders via upload.apsx. Related CWE: CWE-434

  • CVE-2025-25181 Advantive VeraCore SQL Injection Vulnerability:
    Allows a remote attacker to execute arbitrary SQL commands via the PmSess1 parameter. Related CWE: CWE-89 (a short illustration of this flaw class and its fix appears after this list)

  • CVE-2024-13159 Ivanti Endpoint Manager (EPM) Absolute Path Traversal Vulnerability: Allows a remote unauthenticated attacker to leak sensitive information. Related CWE: CWE-36

  • CVE-2025-26633 Microsoft Windows Management Console (MMC) Improper Neutralization Vulnerability:
    Allows an unauthorized attacker to bypass a security feature locally. Related CWE: CWE-707

  • CVE-2025-21590 Juniper Junos OS Improper Isolation or Compartmentalization Vulnerability: Juniper Junos OS contains an improper isolation or compartmentalization vulnerability that could allow a local attacker with high privileges to inject arbitrary code. Related CWE: CWE-653

  • CVE-2025-24984 Microsoft Windows NTFS Information Disclosure Vulnerability:
    Allows an unauthorized attacker to disclose information with a physical attack. An attacker who successfully exploited this vulnerability could potentially read portions of heap memory. Related CWE: CWE-532

  • CVE-2025-24201 Apple Multiple Products WebKit Out-of-Bounds Write Vulnerability:
    May allow maliciously crafted web content to break out of Web Content sandbox. This vulnerability could impact HTML parsers that use WebKit, including but not limited to Apple Safari and non-Apple products which rely on WebKit for HTML processing. Related CWE: CWE-787

  • CVE-2025-24983 Microsoft Windows Win32k Use-After-Free Vulnerability: Microsoft Windows Win32 Kernel Subsystem contains a use-after-free vulnerability that allows an authorized attacker to elevate privileges locally. Related CWE: CWE-416

  • CVE-2025-24985 Microsoft Windows Fast FAT File System Driver Integer Overflow Vulnerability: Microsoft Windows Fast FAT File System Driver contains an integer overflow or wraparound vulnerability that allows an unauthorized attacker to execute code locally. Related CWEs: CWE-190| CWE-122

  • CVE-2025-24991 Microsoft Windows NTFS Out-Of-Bounds Read Vulnerability: Microsoft Windows New Technology File System (NTFS) contains an out-of-bounds read vulnerability that allows an authorized attacker to disclose information locally. Related CWE: CWE-125
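As noted in the VeraCore entry above, here is a minimal illustration of how a CWE-89 flaw arises and how a bound parameter closes it. It uses sqlite3 purely for portability; the PmSess1 name is borrowed only as a label, and this is not VeraCore's actual code.

```python
# Contrast string-built SQL (injectable) with a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id TEXT, user TEXT)")
conn.execute("INSERT INTO sessions VALUES ('abc123', 'alice')")

pm_sess1 = "abc123' OR '1'='1"  # attacker-controlled input

# Vulnerable: the input becomes part of the SQL text itself.
vulnerable = f"SELECT user FROM sessions WHERE id = '{pm_sess1}'"
print(conn.execute(vulnerable).fetchall())  # leaks rows it should not

# Fixed: the input is bound as data, never interpreted as SQL syntax.
safe = "SELECT user FROM sessions WHERE id = ?"
print(conn.execute(safe, (pm_sess1,)).fetchall())  # returns nothing
```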

 

HACKING

X hit by 'massive cyberattack' amid Dark Storm's DDoS claims
The Dark Storm hacktivist group claims to be behind DDoS attacks that caused multiple worldwide outages at X, leading the company to enable DDoS protection from Cloudflare.
Dark Storm is a pro-Palestinian hacktivist group that launched in 2023 and has previously targeted organizations in Israel, Europe, and the US.
Elon Musk stated that the cyberattack against X involved IP addresses originating from Ukraine. However, the Dark Storm threat actors, who claimed to be behind the attack, denied any connection to Ukraine.

 

US govt says Americans lost record $12.5 billion to fraud in 2024
That is a 25% increase over the previous year. Consumers reported that investment scams resulted in the highest losses, totaling around $5.7 billion with a median loss of over $9,000, exceeding all other fraud categories.
The second largest reported loss was linked with imposter scams, amounting to $2.95 billion in 2024.
Those aged 20-29 reported monetary losses more frequently than any other age group, although older people (aged 70+) were scammed out of much more money on average.
Job scams and fake employment agency losses have also jumped significantly in recent years, with the number of reports nearly tripling and losses growing from $90 million to $501 million within just four years between 2020 and 2024.
People lost over $3 billion to scams that started online, compared to approximately $1.9 billion lost to more 'traditional' contact methods like calls, texts, or emails. However, people lost more money per person (a median of $1,500) when they interacted with scammers on the phone.

 

US cities warn of wave of unpaid parking phishing texts
"This is a final reminder from the City of New York regarding the unpaid parking invoice. A $35 daily overdue fee will be charged if payment is not made today," reads the phishing text.
This same phishing template is used in texts about unpaid parking invoices from other cities.

 

CISA: Medusa ransomware hit over 300 critical infrastructure orgs
As of February 2025, Medusa developers and affiliates have impacted over 300 victims from a variety of critical infrastructure sectors with affected industries including medical, education, legal, insurance, technology, and manufacturing. Since it emerged, the gang has claimed over 400 victims worldwide.
Medusa ransomware attacks jumped by 42% between 2023 and 2024. This increase in activity continues to escalate, with almost twice as many Medusa attacks observed in January and February 2025 as in the first two months of 2024.

 

Thousands of TP-Link routers have been infected by a botnet to spread malware
A new botnet campaign is exploiting a high-severity security flaw (tracked as CVE-2023-1389) in unpatched TP-Link routers and has already spread to more than 6,000 devices. The same flaw has been used to spread other malware families as far back as April 2023, when it was abused in the Mirai botnet attacks, and has also been linked to the Condi and AndroxGh0st malware campaigns.
Since updating your router is every bit as important as updating the apps and operating system on your phone, you should make sure to install the recommended patch for your TP-Link Archer AX-21 router immediately.
Regularly patching your router and making sure the firmware is up-to-date will keep your device as secure as possible which is important as routers are often one of the most frequently hacked technologies in the home.

 

Why extracting data from PDFs is still a nightmare for data experts
The main issue is that many PDFs are simply pictures of information, which means you need optical character recognition (OCR) software to turn those pictures into data, especially when the original is old or includes handwriting.
Computational journalism is a field where traditional reporting techniques merge with data analysis, coding, and algorithmic thinking to uncover stories that might otherwise remain hidden in large datasets.
The PDF challenge also represents a significant bottleneck in the world of data analysis and machine learning at large. According to several studies, approximately 80-90% of the world's organizational data is stored as unstructured data in documents, much of it locked away in formats that resist easy extraction. The problem worsens with two-column layouts, tables, charts, and scanned documents with poor image quality.
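For anyone who wants to try the OCR route described above, here is a rough sketch assuming the pdf2image and pytesseract Python packages (plus the poppler and tesseract binaries they wrap) are installed; the file name is hypothetical.

```python
# Render each PDF page to an image, then OCR it. Accuracy degrades on
# handwriting, two-column layouts, tables, and low-quality scans.
from pdf2image import convert_from_path
import pytesseract

pages = convert_from_path("scanned_report.pdf", dpi=300)

text_chunks = [pytesseract.image_to_string(page) for page in pages]
print(f"Extracted text from {len(text_chunks)} pages")
```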

 

GPU-powered Akira ransomware decryptor released on GitHub
Security researcher Yohanes Nugroho has released a decryptor for the Linux variant of Akira ransomware, which utilizes GPU power to retrieve the decryption key and unlock files for free.
Akira ransomware dynamically generates unique encryption keys for each file using four different timestamp seeds with nanosecond precision and hashes through 1,500 rounds of SHA-256. These keys are encrypted with RSA-4096 and appended at the end of each encrypted file, so decrypting them without the private key is hard. The level of timing precision in the timestamps creates over a billion possible values per second, making it difficult to brute force the keys. Also, Akira ransomware on Linux encrypts multiple files simultaneously using multi-threading, making it hard to determine the timestamp used and adding further complexity.
The project ended up taking three weeks due to unforeseen complexities, and the researcher spent $1,200 on GPU resources to crack the encryption key.
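To make the brute-force framing concrete, here is a purely conceptual sketch, not Nugroho's decryptor and not Akira's real key schedule: if a per-file key were derived from a nanosecond timestamp by repeated SHA-256, recovering it reduces to searching candidate timestamps within the encryption window, which is exactly the kind of embarrassingly parallel workload that GPUs accelerate.

```python
# Illustrative timestamp-seeded key search (toy parameters, single-threaded).
import hashlib

ROUNDS = 1500  # the article cites 1,500 rounds of SHA-256

def derive_key(timestamp_ns: int) -> bytes:
    digest = str(timestamp_ns).encode()
    for _ in range(ROUNDS):
        digest = hashlib.sha256(digest).digest()
    return digest

def brute_force(start_ns, end_ns, known_key):
    # Real files allow roughly a billion candidates per second of wall-clock
    # uncertainty, which is why GPU power was needed.
    for ts in range(start_ns, end_ns):
        if derive_key(ts) == known_key:
            return ts
    return None

secret_ts = 1_700_000_000_000_000_123          # pretend encryption timestamp
key = derive_key(secret_ts)
print(brute_force(secret_ts - 1_000, secret_ts + 1_000, key))
```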

 

 

APPSEC, DEVSECOPS, DEV

  1. GenAI Driving Data Security Programs
    Most security efforts and financial resources are traditionally focused on protecting structured data such as databases. However, the rise of GenAI is transforming data security programs, shifting focus to protect unstructured data — text, images and videos.

  2. Managing Machine Identities
    Increasing adoption of GenAI, cloud services, automation and DevOps practices has led to the prolific use of machine accounts and credentials for physical devices and software workloads. If left uncontrolled and unmanaged, machine identities can significantly expand an organization’s attack surface.

  3. Tactical AI
    Leaders are facing mixed results with their AI implementations, leading them to reprioritize their initiatives and focus on narrower use cases with direct measurable impacts.

  4. Cybersecurity Technology Optimization
    Leaders should consolidate and validate core security controls and focus on architecture that enhances portability of data.

  5. Extending Security Behavior and Culture Program Value
    This trend is gaining traction due to increasing recognition that both good and bad human behavior are critical components of cybersecurity. As a result, cultural and behavior-focused activities have become a prominent approach to address cyber-risk comprehension and ownership at the human level.

  6. Addressing Cybersecurity Burnout
    This pervasive stress stems from relentless demands associated with securing highly complex organizations in constantly changing threat, regulatory and business environments, with limited authority, executive support and resources.

 

NIST Selects HQC as Fifth Algorithm for Post-Quantum Encryption
HQC will serve as a backup for ML-KEM, the main algorithm for general encryption. HQC is based on different math than ML-KEM, which could be important if a weakness were discovered in ML-KEM. NIST plans to issue a draft standard incorporating the HQC algorithm in about a year, with a finalized standard expected in 2027.

 

NIST Finalizes Guidelines for Evaluating ‘Differential Privacy’ Guarantees to De-Identify Data 
How can we glean useful insights from databases containing confidential information while protecting the privacy of the individuals whose data is contained within?
Differential privacy (DP) is a way of defining privacy in a mathematically rigorous manner that can help strike this balance. Differential privacy works by adding random “noise” to the data in a way that obscures the identity of the individuals but keeps the database useful overall as a source of statistical information. However, noise applied in the wrong way can jeopardize privacy or render the data less useful.
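As a minimal sketch of that idea (the textbook Laplace mechanism, not NIST's reference implementation), noise scaled to the query's sensitivity and the privacy budget epsilon is added to a simple count:

```python
# Differentially private count via the Laplace mechanism.
import numpy as np

def dp_count(values, predicate, epsilon=0.5):
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1  # adding or removing one person changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [34, 29, 61, 45, 52, 38, 70, 23]
print(dp_count(ages, lambda a: a >= 50))  # noisy count of people aged 50+
```

Smaller epsilon means more noise and stronger privacy; as the guidelines note, noise applied poorly can either leak identities or make the published statistics useless.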

 

AI models hallucinate, and doctors are OK with that
Researchers have set out to enumerate the risks and formulate a plan to do no harm while still allowing medical professionals to consult with unreliable software assistants. A paper titled "Medical Hallucinations in Foundation Models and Their Impact on Healthcare" argues that harm mitigation strategies need to be developed.
Diagnosis Prediction consistently exhibited the lowest overall hallucination rates across all models, ranging from 0% to 22%. Tasks demanding precise factual recall and temporal integration – Chronological Ordering (0.25%–24.6%) and Lab Data Understanding (0.25%–18.7%) – presented significantly higher hallucination frequencies.
Even the best performing models have to be carefully monitored for clinical tasks and have a human in the loop. If an AI model outputs misleading diagnostic information, questions arise as to whether liability should fall on the AI developer for potential shortcomings in training data, the healthcare provider for over-reliance on opaque outputs, or the institution for inadequate oversight.

 

You thought genAI hallucinations were bad? Things just got so much worse
The latest research about genAI cheating comes from Palisade Research and a paper it just published about testing seven models.
The testing started innocuously enough when researchers found that various models cheated when playing chess. When they re-examined the chess data, the details got more ominous. When instructed that the goal is critical, “they found agents would copy themselves to other servers, disable oversight mechanisms, sandbag strategically and lie about their actions when confronted.”
LLM agents, when put in a trading company simulation and pressured to make trades to avoid the company closing down, will often act on insider information and deny that they did so when confronted. Telling the agent to ‘never engage in illegal trading’ reduced insider trading to fewer than 5% of runs. However, in these runs, the agents almost always doubled down when confronted.
What would you do with an employee who exhibited these traits: makes errors and then lies about them; ignores your instructions, then lies about that; gives you horrible advice that, if followed, would literally hurt or kill you or someone else?

 

Google paid $12 million in bug bounties last year to security researchers
Google paid almost $12 million in bug bounty rewards to 660 security researchers who reported security bugs through the company's Vulnerability Reward Program (VRP) in 2024.
Among last year's highlights, the company revamped the VRP's reward structure, bumping rewards up to a maximum of $151,515, while its Mobile VRP now offers up to $300,000 for critical vulnerabilities in top-tier apps (with a maximum reward reaching $450,000 for exceptional quality reports). The Cloud VRP increased the top-tier reward amounts by up to five times in July, while Chrome security bug rewards now exceed $250,000.

 

 

 

 

VENDORS & PLATFORMS

RCS messaging adds end-to-end encryption between Android and iOS
GSMA has released an updated set of specifications for RCS messaging, which includes end-to-end encryption (E2EE) based on the Messaging Layer Security (MLS) protocol.
The new RCS profile will ensure that messages and files remain safe and confidential when sent between iOS and Android devices.
RCS, or Rich Communication Services, gives people a way to send images, videos and audio clips to each other through text across different platforms. Google’s implementation of RCS has had default end-to-end encryption for both one-on-one and group chats since early 2024, but only if all participants are using Google Messages with RCS chats turned on. Meanwhile, iMessages are already protected by E2EE.

 

Everything you say to your Echo will be sent to Amazon starting on March 28
Amazon said that Echo users will no longer be able to set their devices to process Alexa requests locally and, therefore, avoid sending voice recordings to Amazon’s cloud.
In 2023, Amazon agreed to pay $25 million in civil penalties over the revelation that it stored recordings of children’s interactions with Alexa forever.
Adults, too, were not properly informed until 2019 (five years after the first Echo came out) that Amazon keeps Alexa recordings unless prompted not to.
In 2019, Bloomberg reported that Amazon employees listened to as many as 1,000 audio samples during their nine-hour shifts. Amazon says it allows employees to listen to Alexa voice recordings to train its speech recognition and natural language understanding systems. Other reasons why people may be hesitant to trust Amazon with personal voice samples include the previous usage of Alexa voice recordings in criminal trials and Amazon paying a settlement in 2023 in relation to allegations that it allowed "thousands of employees and contractors to watch video recordings of customers' private spaces" taken from Ring cameras.

 

AI running out of juice despite Microsoft's hard squeezing
45% of companies rank AI implementation as a significant challenge, with 38% citing integration issues as a primary concern.
In other words, they still don't know what they're doing, and it shows. As Microsoft CEO Satya Nadella recently observed, there's still no killer app for AI and we still don't understand how to use AI effectively in business. This, mind you, is from someone who's invested over 10 billion bucks in AI. If you look closely at what Microsoft has been doing with AI, you'll see that it's pulling back in places some of us wouldn't think to look. Microsoft has canceled more than a gigawatt of datacenter operations in addition to numerous 100-plus MW agreements.

 

AI search engines cite incorrect sources at an alarming 60% rate, study says
A new study from Columbia Journalism Review's Tow Center for Digital Journalism finds serious accuracy issues with generative AI models used for news searches. The research tested eight AI-driven search tools equipped with live search functionality and discovered that the AI models incorrectly answered more than 60% of queries about news sources.
Perplexity provided incorrect information in 37% of the queries tested, whereas ChatGPT Search answered 67% of queries incorrectly. Grok 3 demonstrated the highest error rate, at 94%.
The study highlighted a common trend among these AI models: rather than declining to respond when they lacked reliable information, the models frequently provided confabulations—plausible-sounding incorrect or speculative answers. URL fabrication emerged as another significant problem. More than half of citations from Google's Gemini and Grok 3 led users to fabricated or broken URLs. These issues create significant tension for publishers, which face difficult choices.
Blocking AI crawlers might lead to loss of attribution entirely, while permitting them allows widespread reuse without driving traffic back to publishers' own websites.

 

End of Life: Gemini will completely replace Google Assistant later this year
When it released the Gemini app on Android, Google forced anyone who installed it to disable Assistant and switch to Gemini. It did this despite a plethora of missing features and the omnipresent issues of AI hallucinations. The company has forged ahead with Gemini's expansion in the intervening months, making Assistant's demise rather unsurprising.
The company says Google-powered cars, watches, headphones, and other devices that use Assistant will receive updates that transition them to Gemini. There are also plenty of standalone devices that run Assistant, like TVs and smart speakers. Google says it's working on updated Gemini experiences for those devices.

 

Google’s Gemini AI can now see your search history
It also supports more Google apps with connections to Calendar, Notes, Tasks, and Photos.
Google is also rolling out Gems to free accounts. Gems are like custom chatbots you can set up with a specific task in mind. Google has some defaults like Learning Coach and Brainstormer, but you can get creative and make just about anything (within the limits prescribed by Google LLC and applicable laws).
[rG: Keep in mind that once enabled, whether through an update default or user action, AI products are not able to “forget” anything if you later decide to disable them.]

 

AI coding assistant refuses to write code, tells user to learn programming instead
A developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. The developer expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version.
After producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
The AI didn't stop at merely refusing—it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities."

 

What does “PhD-level” AI mean? OpenAI’s rumored $20,000 agent plan explained
OpenAI may be planning to launch several specialized AI "agent" products, including a $20,000 monthly tier focused on supporting "PhD-level research." Other reportedly planned agents include a "high-income knowledge worker" assistant at $2,000 monthly and a software developer agent at $10,000 monthly.
Companies like OpenAI base their "PhD-level" claims on performance in specific benchmark tests. For example, OpenAI's o1 series models reportedly performed well in science, coding, and math tests, with results similar to human PhD students on challenging tasks.
The company's Deep Research tool, which can generate research papers with citations, scored 26.6% on "Humanity's Last Exam," a comprehensive evaluation covering over 3,000 questions across more than 100 subjects.

 

Google’s new robot AI can fold delicate origami, close zipper bags without damage
Google DeepMind announced two new AI models designed to control robots: Gemini Robotics and Gemini Robotics-ER. The company claims these models will help robots of many shapes and sizes understand and interact with the physical world more effectively and delicately than previous systems, paving the way for applications such as humanoid robot assistants.
Gemini Robotics includes what Google calls "vision-language-action" (VLA) abilities, allowing it to process visual information, understand language commands, and generate physical movements. By contrast, Gemini Robotics-ER focuses on "embodied reasoning" with enhanced spatial understanding, letting roboticists connect it to their existing robot control systems.

 

ServiceNow's new AI agents will happily volunteer for your dullest tasks
https://www.theregister.com/2025/03/12/servicenow_yokohama/
Its Yokohama release brings onboarding and AI together with agentic tech to automate the jobs required to get new hires up to speed, with humans in oversight roles rather than stuck doing process work. Yokohama also includes many more agents for diverse tasks. Infosec teams can use "security protocol agents" that can unlock accounts if users forget passwords, or deactivate unused accounts. Other agents will help humans accomplish tasks they have not been trained for; for example, an HR staffer asked to process a parental leave request for a worker in a country whose laws they don't know could hand the task to an agent, which would log into the relevant systems and do the job. All of this is optional, and customers can choose the moment at which they upgrade.

 

Microsoft adds another Copilot hotkey – this time for AI voice chat
Hold Alt + Spacebar for two seconds, and Clippy 2.0 is all ears 

 

LEGAL & REGULATORY

Swiss critical sector faces new 24-hour cyberattack reporting rule
The first report must be submitted within 24 hours of the incident's discovery, and a follow-up report with additional details will be expected in the next 14 days.
Examples of types of cyberattacks that will have to be reported include:

  • Cyberattacks that jeopardize the operation of critical infrastructure

  • Manipulation, encryption, or exfiltration of data

  • Extortion, threats, and coercion

  • Malware installed on systems

  • Unauthorized access to systems

 

Spain to impose massive fines for not labelling AI-generated content
The Spanish bill, which needs to be approved by the lower house, classifies non-compliance with proper labelling of AI-generated content as a "serious offence" that can lead to fines of up to 35 million euros ($38.2 million) or 7% of a company's global annual turnover.
It would also prevent organisations from classifying people through their biometric data using AI, rating them based on their behaviour or personal traits to grant them access to benefits or assess their risk of committing a crime. However, authorities would still be allowed to use real-time biometric surveillance in public spaces for national security reasons.
Enforcement of the new rules will be the remit of the newly-created AI supervisory agency AESIA, except in specific cases involving data privacy, crime, elections, credit ratings, insurance or capital market systems, which will be overseen by their corresponding watchdogs.

 

We did not have Brave clashing with Rupert Murdoch on our 2025 bingo card, but there it is
Robert Thomson, CEO of News Corp argues Brave is violating the law. "There is absolutely no 'safe harbor' for piratical, parasitical practices flimsily disguised as traditional 'search.’ The unauthorized scraping and reselling of our copyrighted content to AI engines and other Brave customers is blatant abuse, not fair use. It is perverse in the extreme that a company that bemusingly calls itself Brave should engage in content conduct that is so shiftily shameful."
Brave, in its effort to obtain the blessing of the court for its web scraping and content usage, argues that News Corp's position, if supported by law, would prevent any new search engines from competing with incumbents like Google Search and Microsoft Bing.

 

Google joins OpenAI in pushing feds to codify AI training as fair use
Like OpenAI, Google has been accused of piping copyrighted data into its models, but content owners are wising up. Google is fighting several lawsuits, and the New York Times' lawsuit against OpenAI could set the precedent that AI developers are liable for using that training data without permission. Google wants to avoid that.
If the government truly supports AI, according to Google, it will also begin implementing these tools at the federal level. Google wants the feds to "lead by example" by adopting AI systems with a multi-vendor approach that focuses on interoperability. It hopes to see the government release data sets for commercial AI training and help fund early-stage AI development and research.

 

FTC's $25.5M scam refund treats victims to $34 each
The Federal Trade Commission (FTC) is distributing over $25.5 million in refunds to consumers deceived by tech support scammers, averaging about $34 per person. The refunds relate to a case last year in which two Cyprus-based companies, Restoro and Reimage, were accused of deceiving consumers through misleading computer repair services.