Robert Grupe's AppSecNewsBits 2025-05-17
This week's Lame List & Highlights: Nucor steel; Marks & Sparks; Dior; AI fails, abuses, and expert supervision required; Coinbase $20m bounty on ransomware attackers; NVD/EUVD, and more ...
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Metal maker meltdown: Nucor stops production after cyber-intrusion
Nucor, the largest steel manufacturer in the US, shut down production operations after discovering its servers had been penetrated. The steel shifter acknowledged the cyber-intrusion, but declined to say which facilities were affected and did not comment on the nature of the attack.
Companies like Nucor are also a prime target for extortionists, since shutting down production is immensely costly and would have knock-on effects down the supply chain. As such, an infrastructure victim might be more motivated to pay up just to get things moving again.
Marks & Spencer admits cybercrooks made off with customer info
The retail giant's operations were hit hard, it had to pull systems and services offline, and now data has been exfiltrated – all of which are common hallmarks of a ransomware attack. Yet M&S has neither confirmed nor denied the involvement of ransomware.
“There is no evidence that this data has been shared. We have said to customers that there is no need to take any action. For extra peace of mind, they will be prompted to reset their password the next time they visit or log onto their M&S account, and we have shared information on how to stay safe online.”
Since the cyberattack was made public on April 22, the M&S share price has slumped by more than 14 percent, wiping in excess of £1 billion ($1.32 billion) off its market capitalization.
Fashion giant Dior discloses cyberattack, warns of data breach
Dior faces legal scrutiny for failing to notify all the applicable authorities in the country about the data breach.
Printer maker Procolored offered malware-laced drivers for months
Cameron Coward, a YouTuber known as Serial Hobbyism, discovered the malware when his security solution warned of the presence of the Floxif USB worm on his computer when installing the companion software and drivers for a $7,000 Procolored UV printer.
According to an analysis conducted by researchers at cybersecurity company G Data, Procolored's official software packages delivered the malware for at least six months.
“We are conducting a comprehensive malware scan of every file. Only after passing stringent virus and security checks will the software be re-uploaded.”
[rG: SSDLC malware scanning of all software always needs to be conducted before production release, given that nearly all programs are composed of third-party components.]
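As an illustration of that note, a pre-release gate can hash every build artifact and refuse to ship anything matching a known-bad digest. This is a minimal sketch: the blocklist contents are placeholders, and a real pipeline would invoke a full AV/scanning engine rather than a hash list.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of known-malware SHA-256 digests (illustrative only;
# this placeholder entry is the digest of an empty file).
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large artifacts don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_release_dir(dist_dir: str) -> list[str]:
    """Return artifacts whose hashes match the blocklist; an empty list means the gate passes."""
    return [str(p) for p in sorted(Path(dist_dir).rglob("*"))
            if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256]
```

A CI job would run this over the release directory and fail the build when the returned list is non-empty.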
xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide”
The world was a bit perplexed by the Grok LLM's sudden insistence on turning practically every response toward the topic of alleged "white genocide" in South Africa. xAI now says that odd behavior was the result of "an unauthorized modification" to the Grok system prompt—the core set of directions for how the LLM should behave.
That prompt modification "directed Grok to provide a specific response on a political topic" and "violated xAI's internal policies and core values."
The code review process in place for such changes was "circumvented in this incident," it continued, without providing further details on how such circumvention could occur.
To prevent similar problems from happening in the future, xAI says it has now implemented "additional checks and measures to ensure that xAI employees can't modify the prompt without review" as well as putting in place "a 24/7 monitoring team" to respond to any widespread issues with Grok's responses.
[rG: Full Secure System Development Life Cycle (SSDLC) process that include security threat analysis and design reviews, along with production continuous monitoring, is critically important to ensure not only product performance, but also protect against dataset corruption.]
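One low-tech control implied by the note above: pin a hash of the reviewed system prompt at deploy time and refuse to serve if the live prompt drifts. A minimal sketch, where the prompt text and policy are invented for illustration:

```python
import hashlib
import hmac

# Hash of the reviewed-and-approved system prompt, pinned at deploy time
# (the prompt text here is a hypothetical example).
APPROVED_PROMPT = "You are a helpful assistant. Answer factually and cite sources."
APPROVED_DIGEST = hashlib.sha256(APPROVED_PROMPT.encode()).hexdigest()

def verify_system_prompt(current_prompt: str) -> bool:
    """Reject serving if the live prompt differs from the change-controlled version."""
    current = hashlib.sha256(current_prompt.encode()).hexdigest()
    # compare_digest does a constant-time comparison; here the point is
    # integrity checking, not secrecy.
    return hmac.compare_digest(current, APPROVED_DIGEST)
```

A serving layer would call this check on startup and on a schedule, alerting (or refusing requests) on any mismatch - exactly the kind of unauthorized edit described above.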
The empire strikes back with F-bombs: AI Darth Vader goes rogue with profanity, slurs
For a short period of time on Friday, Darth Vader could drop F-bombs in the video game Fortnite as part of a voice AI implementation gone wrong. The F-bomb incident involved a Twitch streamer named Loserfruit, who triggered the forceful response when discussing food with the virtual Vader. The Dark Lord of the Sith responded by repeating her words "freaking" and "fucking" before adding, "Such vulgarity does not become you, Padme." The voice used for Darth Vader in Fortnite comes from an AI model based on James Earl Jones.
Some players also reported hearing intense instructions for dealing with a break-up ("Exploit their vulnerabilities, shatter their confidence, and crush their spirit") and disparaging comments from the character directed at Spanish speakers: "Spanish? A useful tongue for smugglers and spice traders," AI Vader said. "Its strategic value is minimal."
The vulgar Vader situation creates a touchy dilemma for Epic Games and Disney, which likely invested substantially in this high-profile collaboration. While Epic acted swiftly in response, maintaining the feature while preventing further Jedi mind tricks from players presents ongoing technical challenges for interactive AI speech of any kind.
AI language models like the one constructing responses for Vader are fairly easy to trick with exploits like prompt injections and jailbreaks, and that has limited their usefulness in some applications.
What’s Weak This Week:
CVE-2025-30397 Microsoft Windows Scripting Engine Type Confusion Vulnerability:
Allows an unauthorized attacker to execute code over a network via a specially crafted URL. Related CWE: CWE-843
CVE-2025-4664 Google Chromium Loader Insufficient Policy Enforcement Vulnerability:
Contains an insufficient policy enforcement vulnerability that allows a remote attacker to leak cross-origin data via a crafted HTML page. Related CWE: CWE-346
CVE-2024-12987 DrayTek Vigor Routers OS Command Injection Vulnerability:
Due to an unknown function of the file /cgi-bin/mainfunction.cgi/apmcfgupload of the component web management interface. Related CWE: CWE-78
CVE-2025-42999 SAP NetWeaver Deserialization Vulnerability:
Allows a privileged attacker to compromise the confidentiality, integrity, and availability of the host system by deserializing untrusted or malicious content. Related CWE: CWE-502
Fortinet Multiple Products Stack-Based Buffer Overflow Vulnerability:
May allow a remote unauthenticated attacker to execute arbitrary code or commands via crafted HTTP requests. Related CWE: CWE-124
CVE-2025-30400 Microsoft Windows DWM Core Library Use-After-Free Vulnerability:
Allows an authorized attacker to elevate privileges locally. Related CWE: CWE-416
CVE-2025-32701 Microsoft Windows Common Log File System (CLFS) Driver Use-After-Free Vulnerability:
Allows an authorized attacker to elevate privileges locally. Related CWE: CWE-416
CVE-2025-32706 Microsoft Windows Common Log File System (CLFS) Driver Heap-Based Buffer Overflow Vulnerability:
Allows an authorized attacker to elevate privileges locally. Related CWE: CWE-122
CVE-2025-32709 Microsoft Windows Ancillary Function Driver for WinSock Use-After-Free Vulnerability:
Allows an authorized attacker to escalate privileges to administrator. Related CWE: CWE-416
CVE-2025-47729 TeleMessage TM SGNL Hidden Functionality Vulnerability:
The archiving backend holds cleartext copies of messages from TM SGNL application users. Related CWE: CWE-912
HACKING
US charges 12 more suspects linked to $230 million crypto theft
The scheme allegedly gained unauthorized access to victims' cryptocurrency accounts and transferred funds into crypto wallets they controlled. In an August 18th attack, they stole over 4,100 Bitcoin from a Washington, D.C., victim (worth more than $230 million at the time).
While posing as a Gemini support representative, they deceived the victim into resetting two-factor authentication (2FA) and sharing their screen via AnyDesk (a remote desktop application) after claiming the account had been compromised, which gave them access to private keys from Bitcoin Core and allowed them to steal the target's cryptocurrency funds.
"Members and associates of the enterprise used the stolen virtual currency to purchase, among other things, nightclub services ranging up to $500,000 per evening, luxury handbags valued in the tens of thousands of dollars that were given away at nightclub parties, [and] luxury watches valued between $100,000 and $500,000," U.S. Department of Justice prosecutors said, as well as "luxury clothing valued in the tens of thousands of dollars, rental homes in Los Angeles, the Hamptons, and Miami, private jet rentals, a team of private security guards, and a fleet of at least 28 exotic cars ranging in value from $100,000 to $3.8 million."
DoorDash scam used fake drivers, phantom deliveries to bilk $2.59M
The group created "multiple" fake customer and driver accounts, according to the indictment. Then, they used the bogus customer accounts to place expensive orders throughout Northern California.
The conspirators then used login credentials belonging to DoorDash employees to access the biz's internal systems and manually reassign orders to fake driver accounts under their control. The driver accounts falsely reported the food as delivered, triggering payments through a vendor acting on DoorDash's behalf.
After reassigning the fraudulent orders to fake driver accounts, Devagiri and his co-conspirators used those accounts to falsely report the deliveries as complete - even though no food was ever delivered. Marking the orders as fulfilled triggered payments through a vendor acting on DoorDash's behalf.
Then, using employee credentials, the group reset the scam by changing order statuses from "delivered" back to "in process," then manually reassigning them to their own fake driver accounts, starting the cycle again. The process took less than five minutes per order and was repeated hundreds of times, netting over $2.59 million in fraudulent payouts.
The fraudsters face a maximum of 20 years behind bars and a $250,000 fine.
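Fraud of this shape is detectable in telemetry: order statuses should only move forward, so any delivered-to-in-process transition is a red flag. A toy detector, with made-up state names:

```python
# Allowed forward-only order lifecycle (simplified, hypothetical states).
ORDER_FLOW = ["placed", "in_process", "delivered"]
RANK = {state: i for i, state in enumerate(ORDER_FLOW)}

def find_status_regressions(events):
    """events: list of (order_id, status) tuples in arrival order.
    Returns the order_ids that ever moved backwards,
    e.g. 'delivered' -> 'in_process' as in the scheme described above."""
    last = {}
    flagged = set()
    for order_id, status in events:
        if order_id in last and RANK[status] < RANK[last[order_id]]:
            flagged.add(order_id)
        last[order_id] = status
    return flagged
```

A real system would also rate-limit such transitions and alert when the same employee credential performs them repeatedly.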
Ransomware scum have put a target on the no man's land between IT and operations
All businesses have these middle systems, and digital crooks realize that encrypting them isn't as difficult as developing ransomware to target OT. But the operational impacts of attacks on in-between tech can be worse than the effects of attacks on IT or OT, and this means the victims are more likely to pay the extortion demands.
In the case of a petroleum pipeline, middle systems live in the facilities that store and distribute fuel, and separate home heating oil from gasoline, diesel, and jet fuel. "It's the system in the middle, and the impact of ransomware [on in-between systems] affects the integrity of the product. If the wrong product comes down the line, the system isn't sound."
"If you were a pharmaceutical [company], and we wanted to cause problems in the batch or the dosage or blend of a particular drug, we might not be able to get deep into the network to those industrial control systems, but we could manipulate the product labeling so the label that gets stamped onto a particular pill is wrong. It has the same result. All those things go out in the market. People get poisoned, people die."
Instead of thinking: How quickly can we restore? We need to pivot to [asking]: how quickly can we detect if an adversary is manipulating the system to cause destruction?
From hype to harm: 78% of CISOs see AI attacks already
Looking for AI attacks is a little like searching for black holes. You can't see them directly but you can infer their existence from their effect on their surrounding environment.
Stories of attackers using jail-broken or fine-tuned LLMs to craft social engineering attacks are also rife. Some attack tool kits now come with their own chat assistants. The use of AI-powered malware, along with lateral movement tactics using these algorithms, is also reportedly on the rise.
Can an MCP-Powered AI Client Automatically Hack a Web Server?
In a demonstration video put together by security researcher Seth Fogie, an AI client given a simple prompt to 'Scan and exploit' a web server leverages various connected tools via MCP (nmap, ffuf, nuclei, waybackurls, sqlmap, burp) to find and exploit discovered vulnerabilities without any additional user interaction.
Tenable: FAQ, MCP Prompt Injection
The emergence of Model Context Protocol for AI is gaining significant interest due to its standardization of connecting external data sources to large language models (LLMs). While these updates are good news for AI developers, they raise some security concerns. With over 12,000 MCP servers and counting, what does this all lead to and when will AI be connected enough for a malicious prompt to cause serious impact?
FBI warns of ongoing scam that uses deepfake audio to impersonate government officials
If you receive a message claiming to be from a senior US official, do not assume it is authentic. The campaign's creators are sending AI-generated voice messages—better known as deepfakes—along with text messages in an effort to establish rapport before gaining access to personal accounts. One way to gain access to targets' devices is for the attacker to ask if the conversation can be continued on a separate messaging platform and then successfully convince the target to click on a malicious link under the guise that it will enable the alternate platform.
Verify the identity of the person calling you or sending text or voice messages. Before responding, research the originating number, organization, and/or person purporting to contact you. Then independently identify a phone number for the person and call to verify their authenticity.
A ripe target for identity thieves: Prisoners on death row
An ongoing scam operation takes the identities of inmates slated for execution in the state of Texas in order to build credit and steal money from lenders.
“They open a bank account, they ask for a credit line. They pay the credit line on time. Then they increase the limit, continue to pay. At some point, they disappear with $50,000, $100,000 or more. It’s a time-consuming operation, but the payoff is quite high at the end.”
The scam, which started in March 2023, appears to exploit the fact that inmates on death row are largely cut off from the outside world, making them unlikely to see correspondence that might alert them to the fact that credit cards or businesses were opened in their name. “They wouldn’t receive text or email alerts from a financial institution. Most prisoners are indigent and have few, if any, financial resources.”
[rG: Similar targets – those in long-term care and people who have moved overseas or live "off the grid." This is the reason why digital communications shouldn't be the sole channel for customer identity security communications. Postal mail, while still not perfect, is more costly to hack than digital.]
New attack can steal cryptocurrency by planting false memories in AI chatbots
LLM-based agents that can autonomously act on behalf of users are riddled with potential risks that should be thoroughly investigated before putting them into production environments.
A person who has already been authorized to transact with an agent through a user’s Discord server, website, or other platform types a series of sentences that mimic legitimate instructions or event histories. The text updates memory databases with false events that influence the agent’s future behavior. A single successful manipulation by a malicious actor can compromise the integrity of the entire system, creating cascading effects that are both difficult to detect and mitigate.
ElizaOS is a framework for creating agents that use large language models to perform various blockchain-based transactions on behalf of a user based on a set of predefined rules. The framework remains largely experimental, but champions of decentralized autonomous organizations (DAOs)—a model in which communities or companies are governed by decentralized computer programs running on blockchains—see it as a potential engine for jumpstarting the creation of agents that automatically navigate these so-called DAOs on behalf of end users.
ElizaOS can connect to social media sites or private platforms and await instructions from either the person it’s programmed to represent or buyers, sellers, or traders who want to transact with the end user. Under this model, an ElizaOS-based agent could make or accept payments and perform other actions based on a set of predefined rules.
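A first line of defense against the memory poisoning described above is treating memory writes as privileged: only allow-listed principals may persist events, no matter who is permitted to chat with the agent. A deliberately simplified sketch (the policy and class are hypothetical, not ElizaOS code):

```python
from dataclasses import dataclass, field

# Only these principals may write durable memories (hypothetical policy).
TRUSTED_WRITERS = {"owner"}

@dataclass
class AgentMemory:
    entries: list = field(default_factory=list)

    def remember(self, principal: str, text: str) -> bool:
        """Persist a memory only when the author is on the allow-list.
        Untrusted chat participants can still talk to the agent,
        but they cannot rewrite its event history."""
        if principal not in TRUSTED_WRITERS:
            return False
        self.entries.append((principal, text))
        return True
```

This doesn't stop a compromised trusted account, but it closes the specific path where any authorized counterparty can inject false event histories.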
Spies hack high-value mail servers using an exploit from yesteryear
A Kremlin-backed hacking group also tracked as APT28, Fancy Bear, Forest Blizzard, and Sofacy—gained access to high-value email accounts by exploiting XSS vulnerabilities in mail server software from four different makers. Those packages are: Roundcube, MDaemon, Horde, and Zimbra. JavaScript included in HTML portions of the emails exploited vulnerabilities built into the different mail servers.
XSS is short for cross-site scripting. Vulnerabilities result from programming errors found in webserver software that, when exploited, allow attackers to execute malicious code in the browsers of people visiting an affected website. XSS first got attention in 2005, with the creation of the Samy Worm, which knocked MySpace out of commission when it added more than one million MySpace friends to a user named Samy.
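The root fix for stored XSS of this kind is output encoding: untrusted email content must be escaped before the webmail client renders it. A minimal sketch using Python's standard library (the wrapper function and markup are illustrative; clients that must display rich HTML need an allow-list sanitizer instead):

```python
import html

def render_email_html(untrusted_body: str) -> str:
    """Encode untrusted email content before it reaches the webmail DOM,
    so an embedded <script> tag arrives as inert text instead of executing."""
    return "<div class='mail-body'>" + html.escape(untrusted_body) + "</div>"
```

With encoding in place, the JavaScript payloads carried in the HTML portions of those emails would have displayed as text rather than run in the victim's browser.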
Welcome to the age of paranoia as deepfakes and scams abound
When Nicole Yelland receives a meeting request from someone she doesn't already know, she conducts a multistep background check before deciding whether to accept, running the person's information through Spokeo, a personal data aggregator she pays a monthly subscription fee to use. Yelland says she previously got roped into an elaborate scam targeting job seekers. "Now, I do the whole verification rigamarole any time someone reaches out to me."
If the contact claims to speak Spanish, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call—with their camera on.
LowFi approaches: Ask job candidates rapid-fire questions about the city where they claim to live on their résumé, such as their favorite coffee shops and places to hang out. People should be able to respond quickly with accurate details. The “phone camera trick.” If someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to show their laptop. The idea is to verify whether the individual may be running deepfake technology on their computer, obscuring their true identity or surroundings.
APPSEC, DEVSECOPS, DEV
As US vuln-tracking falters, EU enters with its own security bug database
On Monday, CISA said it would no longer publish routine alerts - including those detailing exploited vulnerabilities - on its public website. Instead, these updates will be delivered via email, RSS feeds, and the agency's account on X.
Enter the EUVD. The EUVD is similar to the US government's National Vulnerability Database (NVD) in that it identifies each disclosed bug (with both a CVE-assigned ID and its own EUVD identifier), notes the vulnerability's criticality and exploitation status, and links to available advisories and patches.
Unlike the NVD, which is still struggling with a backlog of vulnerability submissions and is not very easy to navigate, the EUVD is updated in near real-time and highlights both critical and exploited vulnerabilities at the top of the site.
[rG: Vulnerability tracking is something every country needs and benefits from; it should be proportionately funded and managed as part of an international entity such as the UN, where it could be run better with multilingual regional contributors.]
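Whichever database wins out, consuming such feeds programmatically is straightforward. The sketch below filters a miniature record shaped like CISA's Known Exploited Vulnerabilities JSON feed (the field names follow that public feed; the sample entries and their ransomware flags are invented for illustration):

```python
import json

# Miniature sample shaped like the KEV feed's "vulnerabilities" array.
SAMPLE_FEED = json.loads("""
{"vulnerabilities": [
  {"cveID": "CVE-2025-0001", "vendorProject": "ExampleCo", "knownRansomwareCampaignUse": "Unknown"},
  {"cveID": "CVE-2025-0002", "vendorProject": "ExampleCo", "knownRansomwareCampaignUse": "Known"}
]}
""")

def ransomware_linked(feed: dict) -> list[str]:
    """Pull out CVEs flagged as used in ransomware campaigns - one simple
    way to bubble entries to the top of a patch queue from an
    exploited-vulnerabilities feed."""
    return [v["cveID"] for v in feed["vulnerabilities"]
            if v.get("knownRansomwareCampaignUse") == "Known"]
```

The same filter works unchanged whether the JSON comes from a downloaded NVD/KEV snapshot or, presumably, an EUVD export with equivalent fields.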
Everyone's deploying AI, but no one's securing it – what could go wrong?
The CEO of Mindgard, also a professor of distributed systems at Lancaster University, asked the CYBERUK audience for a show of hands: how many had banned generative AI in their organizations? Three hands went up.
"And how many, in your deepest of hearts, actually have a good grasp of the security risks involved in AI system controls, by a show of hands?"
Not a single hand was raised among the 200-strong, security-savvy crowd.
"So everyone's using generative AI, but no one has a grasp of how secure it is in the system. The cat's out of the bag."
[rG: Kudos for avoiding a ruder colloquial phrase.]
Go ahead and ignore Patch Tuesday – it might improve your security
Gartner analyst Lawson has discussed patching with hyperscalers, banks, retailers, and government agencies. None told him they were able to stay on top of patching.
The analyst thinks most organizations therefore can't understand their level of "threat debt" – a measure of technical debt focused on known but unfixed security exposures – but wrongly think accelerating patching efforts is the way to reduce it.
"Patches break things," or are so complex to implement that the work may not be worth it. "You can't patch Java because there might be five other subsystems that need a patch before you patch Java."
The effort required to determine if a patch will have unintended consequences may also be ineffectual, because his research suggests criminals exploit just 8-9% of vulnerabilities and most of the flaws they target aren't rated critical – cybercrims focus on less serious problems.
Organizations try to implement all patches anyway, sometimes to meet internal metrics for speedy patching, or to ensure they meet regulatory compliance requirements. But such practices haven't led to a decrease in successful attacks.
Lawson wants IT operations and security people to share that "cohabitation metric" with applications teams, and anyone else with a stake in an org's security posture, so they can jointly develop a plan on what to patch and when.
[rG: Very true; but only possible where there is a cultural sea change at the executive leadership/board level.]
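The argument above translates directly into a prioritization policy: rank confirmed in-the-wild exploitation above raw severity, since only a small share of CVEs are ever exploited. A toy version (the field names and scoring policy are illustrative, not a standard):

```python
def prioritize_patches(findings):
    """findings: list of dicts with 'cve', 'cvss', and 'exploited_in_wild'.
    Sort confirmed-exploited flaws first, then by severity within each group,
    so scarce patching effort goes where attackers actually are."""
    return sorted(findings, key=lambda f: (not f["exploited_in_wild"], -f["cvss"]))
```

Note how a medium-severity bug with known exploitation outranks an unexploited critical one, which is the inversion the analyst is arguing for.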
Augmenting Penetration Testing Methodology with Artificial Intelligence – Part 2: Copilot
When performing real-world penetration tests, it is important to protect client information. So, I would use an on-premises local LLM if I were to try to use AI in this way during an actual penetration test.
Boffins devise technique that lets users prove location without giving it away
The technique, referred to as Zero-Knowledge Location Privacy (ZKLP), aims to provide access to unverified location data in a way that preserves privacy without sacrificing accuracy and utility for applications that might rely on such data. It's described in a paper presented this week at the 2025 IEEE Symposium on Security and Privacy.
VENDORS & PLATFORMS
How a new type of AI is helping police skirt facial recognition bans
The tool, called Track and built by the video analytics company Veritone, is used by 400 customers, including state and local police departments and universities all over the US. Track can analyze people in footage from different environments. You can use it to find people by specifying body size, gender, hair color and style, shoes, clothing, and various accessories. The tool can then assemble timelines, tracking a person across different locations and video feeds. It can be accessed through Amazon and Microsoft cloud platforms.
OpenAI introduces Codex, its first full-fledged AI agent for coding
The tool is meant to allow experienced developers to delegate rote and relatively simple programming tasks to an AI agent that will generate production-ready code and show its work along the way.
Google to give app devs access to Gemini Nano for on-device AI
ML Kit’s GenAI APIs will enable apps to do summarization, proofreading, rewriting, and image description without sending data to the cloud. Summaries can only have a maximum of three bullet points, and image descriptions will only be available in English.
OpenAI adds GPT-4.1 to ChatGPT amid complaints over confusing nine-model lineup
The full GPT-4.1 model reportedly prioritizes instruction following and coding tasks, which the company positions as an alternative to its o3 and o4-mini simulated reasoning models for basic programming needs. For the smaller of the two models in ChatGPT, the company claims that GPT-4.1 mini performs better in instruction following, coding, and "overall intelligence" compared to GPT-4o mini.
Google DeepMind creates super-advanced AI that can invent new algorithms
When a researcher interacts with AlphaEvolve, they input a problem along with possible solutions and avenues to explore. The model generates multiple possible solutions, and then each solution is analyzed by the evaluator. An evolutionary framework allows AlphaEvolve to focus on the best solution and improve upon it.
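The loop described above (generate candidate solutions, score them with an evaluator, keep and improve the best) is ordinary evolutionary search at heart. A toy version of that pattern, not AlphaEvolve's actual method:

```python
import random

def evolve(seed, mutate, evaluate, generations=50, population=20, rng=None):
    """Toy evolutionary loop: propose mutated variants of the current best
    solution, score each with the evaluator, and keep whichever scores highest."""
    rng = rng or random.Random(0)  # fixed seed keeps the run reproducible
    best = seed
    best_score = evaluate(best)
    for _ in range(generations):
        for _ in range(population):
            candidate = mutate(best, rng)
            score = evaluate(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score
```

For example, maximizing the made-up objective f(x) = -(x - 3)^2 from a seed of 0 converges toward x = 3; AlphaEvolve's evaluator instead scores candidate programs.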
Believe it or not, Microsoft just announced a Linux distribution service - here's why
60% of Azure Marketplace offerings and more than 60% of virtual machine cores use Linux.
Azure Image Testing for Linux (AITL) … available 'as a service' to distro publishers. AITL is built on Microsoft's Linux Integration Services Automation project (LISA). Microsoft's Linux Systems Group originally developed this initiative to validate Linux OS images. LISA is a Linux quality validation system with two parts: a test framework to drive test execution and a set of test suites to verify Linux distribution quality.
WhatsApp provides no cryptographic management for group messages
The Best Private Messaging Apps for 2025
Someone experimented with a 1997 processor and showed that just 128 MB of RAM is enough to harness the power of AI.
Researchers from Oxford University managed to run a language model based on Llama 2 on an Intel Pentium II processor running at just 350 MHz, backed up by 128 MB of RAM. This remarkable result is due to the use of BitNet, a revolutionary new neural network architecture. Unlike traditional float32 models, BitNet uses ternary weights, where each weight has only three possible values (-1, 0, 1). This simplification allows extreme compression of the model without any significant loss of performance.
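The ternary-weight idea is easy to demonstrate: snap each weight to -1, 0, or +1 against a magnitude threshold and keep a single scale factor for dequantization. This is a simplified illustration of the concept, not BitNet's exact scheme:

```python
def ternary_quantize(weights, threshold_ratio=0.7):
    """Quantize a list of float weights to {-1, 0, 1} plus one scale factor.
    The threshold_ratio heuristic is illustrative; the paper's scheme differs."""
    mean_abs = sum(abs(w) for w in weights) / len(weights)
    threshold = threshold_ratio * mean_abs
    ternary = [0 if abs(w) < threshold else (1 if w > 0 else -1) for w in weights]
    # One shared scale restores magnitude on dequantization: w_hat = scale * t
    kept = [abs(w) for w, t in zip(weights, ternary) if t != 0]
    scale = sum(kept) / len(kept) if kept else 0.0
    return ternary, scale
```

Storing two bits per weight instead of 32 is what makes fitting a language model into 128 MB of RAM plausible, at the cost of per-weight precision.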
LEGAL & REGULATORY
Coinbase Offers $20m Bounty to Take Down Cybercrime Ring Behind Hack
Coinbase reported on May 15 that cybercriminals bribed and recruited a group of rogue overseas support agents to steal its customer data and facilitate social engineering attacks. The attackers planned to use the stolen data to impersonate Coinbase and trick customers into handing over their cryptocurrency holdings. The US crypto company was asked to pay a $20m ransom to put an end to the scam.
However, Coinbase publicly refused to pay the ransom. Instead, it is working with law enforcement and security industry experts to trace the stolen funds and hold those responsible for the scheme accountable. The $20m reward fund is part of a ‘Bounty’ program launched by Coinbase. The funds will be awarded to anyone who can provide information leading to the arrest and conviction of the criminals responsible for the attack.
Breachforums Boss to Pay $700k in Healthcare Breach
This is the first and only case where a cybercriminal or anyone related to the security incident was actually named in civil litigation.
On January 18, 2023, denizens of Breachforums posted for sale tens of thousands of records — including Social Security numbers, dates of birth, addresses, and phone numbers — stolen from Nonstop Health, an insurance provider based in Concord, Calif.
Class-action attorneys sued Nonstop Health, which added Fitzpatrick as a third-party defendant to the civil litigation in November 2023, several months after he was arrested by the FBI and criminally charged with access device fraud and CSAM possession. In January 2025, Nonstop agreed to pay $1.5 million to settle the class action.
The 22-year-old former administrator of the cybercrime community Breachforums will forfeit nearly $700,000 to settle.
Meta is making users who opted out of AI training opt out again
Meta only recently notified EU users on its platforms that they had until May 27 to opt their public posts out of Meta's AI training data sets. According to Noyb, Meta is also requiring users who already opted out of AI training in 2024 to opt out again or forever lose their opportunity to keep their data out of Meta's models, as training data likely cannot be easily deleted. That's a seeming violation of the General Data Protection Regulation (GDPR).
Previously, Meta "argued (in respect to EU-US data transfers) that a social network is a single system that does not allow to differentiate between EU and non-EU users, as many nodes (e.g. an item linked to an EU and a non-EU user) are shared."
U.S. Health Data Affected by New National Security Restrictions on International Data Transfers
Health Insurance Portability and Accountability Act (HIPAA)-covered entities and healthcare organizations must now comply with additional national security regulations issued by the U.S. Department of Justice (DOJ) and Cybersecurity and Infrastructure Security Agency (CISA). These rules restrict the transfer of bulk U.S. sensitive personal data – including de-identified or encrypted health data – to certain foreign countries and entities.
The DOJ can impose steep civil and criminal penalties – including fines of up to $368,136 per violation and imprisonment for willful breaches.
The law is fairly short and simply prohibits a data broker from transferring "personally identifiable sensitive data" of a U.S. individual to 1) any foreign adversary country or 2) any entity that is controlled by a foreign adversary. A "data broker" is defined as an entity that provides such information "for valuable consideration."
U.S. persons engaging in data brokerage transactions with foreign persons other than covered persons must include contractual language prohibiting the foreign person from reselling or transferring government-related data or bulk U.S. sensitive personal data to covered persons or countries of concern.
The restrictions apply to adverse countries that are "countries of concern." These currently include: China (including Hong Kong and Macau), Cuba, Iran, North Korea, Russia, Venezuela.
America’s consumer watchdog drops leash on proposed data broker crackdown
The Consumer Financial Protection Bureau (CFPB) proposed the rules in December following a string of high-profile scandals that shed light on the massive amounts of personal data being stored and sold off, in some cases to criminals and scammers. The rules would have reclassified certain data brokers as "consumer reporting agencies," meaning they'd be subject to strict requirements for accuracy and transparency, and only allowed to sell data for recognized purposes such as credit checks or employment screening. And no, marketing doesn't count.
Now? Well, never mind. "The Consumer Financial Protection Bureau is withdrawing its Notice of Proposed Rule: Protecting Americans from Harmful Data Broker Practices (Regulation V)."
Judge admits nearly being persuaded by AI hallucinations in court filing
A plaintiff's law firms were sanctioned and ordered to pay $31,100 after submitting fake AI citations that nearly ended up in a court ruling. Approximately 9 of the 27 legal citations were incorrect in some way.
"I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them—only to find that they didn't exist. That's scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order. Strong deterrence is needed to make sure that attorneys don't succumb to this easy shortcut."
After latest kidnap attempt, crypto types tell crime bosses: Transfers are traceable
Masked men jumped out of a white-panel van in Paris this week, attempting to snatch a 34-year-old woman off the street. The woman was identified as the daughter of a "crypto boss," and her attempted kidnapping is part of a disquieting surge in European crypto-related abductions—two of which have already involved fingers being chopped off.
The sudden spike in copycat attacks in France, Belgium, and Spain over the last few months suggests that crypto robbery as a tactic has caught the attention of organized crime.
For whatever reason, there is a perception that’s out there that crypto is an asset that is untraceable, and that really lends itself to criminals acting in a certain way. Apparently, the [knowledge] that crypto is not untraceable hasn't been received by some of the organized crime groups that are actually perpetrating these attacks.
And Now For Something Completely Different …
Linus Torvalds goes back to a mechanical keyboard after making too many typos
“I gave it half a year thinking I'd get used to it, but I'm back to the noisy clackety-clack of clicky blue Cherry switches. It seems I need the audible (or perhaps tactile) feedback to avoid the typing mistakes that I just kept doing. Anyway, going forward, I will now conveniently blame autocorrect since I can't blame the keyboard.”