- Robert Grupe's AppSecNewsBits
- Posts
- Robert Grupe's AppSecNewsBits 2026-01-10
Robert Grupe's AppSecNewsBits 2026-01-10
TL;TR: AI Chatbots and IDE vulnerabilities, Vibe coding SHIELD, OneDrive data destruction, Enshittification of products, unsupervised AI health agents, ... and more Epic Fails, Hacking, AppSec, Platforms, and Legal.
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Jaguar Land Rover wholesale volumes plummet 43% in cyberattack aftermath
Brit luxury automaker Jaguar Land Rover has reported devastating preliminary Q3 results that lay bare the cascading consequences of a crippling cyberattack, revealing wholesale volumes collapsed more than two-fifths year-on-year.
Wholesale units tumbled to just 59,200 in the three months ended December 31, the third quarter of JLR's fiscal 2026, a whopping 43.3% decline. Retail sales shrank 25.% to 79,600 units.
Caught in the grip of the digital burglary, which Scattered Lapsus$ Hunters claimed responsibility for, JLR got £1.5 billion in financial support from the UK government to aid its recovery and help companies in the supply chain as JLR struggled to bring its invoicing system online. The Bank of England estimated that the event contributed to a slowing UK economy, with gross domestic product growing 0.2% in calendar Q3 versus an expected 0.3%.
Tata Motors confirmed in November that the shutdown of production in the UK cost JLR around £1.8 billion ($2.35 billion) in its Q2 ended September 30, including exceptional costs of £196 million ($258 million) as a direct consequence of the cyberattack.
The Cyber Monitoring Centre (CMC), said the raid was the most serious type of event that it classifies, and warned it could cost the UK economy £2.1 billion ($2.75 billion).
At the heart of JLR's troubles lies a September cyber break-in that halted production for weeks, creating ripples through the company's global supply chain and costing the Tata Motors-owned biz and the UK economy a big chunk of change.
One criminal, 50 hacked organizations, and all because MFA wasn't turned on
The cybercriminal, who has been operating as an initial access broker and extortionist since at least 2021, specifically targets enterprise file synchronization and sharing (EFSS) platforms like Progress Software's ShareFile, Nextcloud, and OwnCloud.
These recent compromises of corporate file-sharing portals were not the result of platform vulnerabilities, but consistent with the use of credentials previously stolen from infostealer-infected devices. The compromises appear to have involved the use of valid credentials in environments where multi-factor authentication was not enforced, which may have enabled unauthorized access.
ESA calls cops as crims lift off 500 GB of files, say security black hole still open
Scattered Lapsus$ Hunters gained initial access to ESA's servers back in September by exploiting a public CVE, and stole 500 GB of very sensitive data. This includes operational procedures, spacecraft and mission details, subsystems documentation, and proprietary contractor data from ESA partners including SpaceX, Airbus Group, and Thales Alenia Space, among others. And, according to the crims, the security hole remains open, giving them continued access to the space agency's live systems. This is not ESA's first – or even second or third – security snafu. The space agency's incidents have been piling up since at least 2011.
HSBC app takes a dim view of sideloaded Bitwarden installations
Some HSBC mobile banking customers in the UK report being locked out of the bank's app after installing the Bitwarden password manager via an open source app catalog.
It seems that HSBC has chosen a level of security and permissions for their mobile app that allows the HSBC app to see if there are other apps on the phone not installed from the Google Play store, and if one is found, to disallow the install of the HSBC app.
“Worst in Show” Returns at CES 2026, Calling Out Gadgets That Make Things Worse
Worst in Show is produced by the Right to Repair organization Repair.org with support from a coalition of consumer and tech advocacy organizations.
PRIVACY WINNER: Amazon Ring AI - reinforces the idea that “more surveillance always makes us safer,” even as consumers are left with bigger questions about where the data goes and how it is used.
SECURITY WINNER: Merach UltraTread Treadmill with AI Fitness Trainer - internet connectivity, sensors, and large language model features raise the stakes when devices collect sensitive data, including biometrics and behavioral inferences. The company’s own admission in its privacy policy: “We cannot guarantee the security of your personal information.”
ENVIRONMENTAL IMPACT WINNER: Lollipop Star - a candy lollipop with built-in electronics that transmits sound through jaw vibrations, marketed as “Music you can taste.” The product is both non-rechargeable and single-use, turning a moment of novelty into yet another hard-to-handle piece of e-waste.
REPAIRABILITY WINNER: Samsung Family Hub Smart Fridge - Voice-controlled door operation, a large embedded touchscreen, and a poor track record supporting their increasing software dependence all raise the likelihood that a basic kitchen appliance becomes an unreliable service problem.
ENSHITTIFICATION WINNER: Bosch eBike Flow App - pairing motors and batteries to an authorization system can convert routine repairs into permissioned events, with legal risk layered on top via Section 1201 of the DMCA.
Enshittification is the process by which products and services get worse over time as companies tighten control, extract more value, and reduce user choice, often through software gates and restrictions that can be changed after purchase.
“WHO ASKED FOR THIS?” WINNER: Bosch 800 Series Personal AI Barista - injecting voice assistants, subscriptions, and planned feature decay into something people mostly want to operate before their brain turns on. Buyers who pay a premium for voice control may end up with a degraded experience or outright feature removal if integrations are discontinued.
PEOPLE’S CHOICE: Lepro Ami AI “Soulmate” - an AI video surveillance device on a desk could be anyone’s soulmate. Though the device comes with a physical camera shutter, people were unsettled by the idea of a desktop camera and microphone marketed as “always on.”
OVERALL WORST IN SHOW: Samsung Family Hub Smart Fridge - adding voice control, fragile actuators, connectivity dependencies, and ad-driven “sponsored content” creates new ways for a core household appliance to fail, frustrate, and become uneconomical to service.
Everyone hates OneDrive, Microsoft's cloud app that steals then deletes all your files
At some point your computer will update to start using OneDrive, and at no point will you be given any kind of plain-language warning or opt-out, it will just do it. At some point you might notice that it is quietly uploading everything on your computer to Microsoft's servers. So you will look up how to turn off OneDrive Backup. Then you'll find out that everything on your computer is gone. Everything was deleted by Microsoft. And on your desktop, your clean desktop, will be one cheeky little icon that says "Where are my files?"
It's indistinguishable from a ransomware attack. And then you hit the other dark pattern: you can redownload your files, but if you then tell Microsoft to delete their copies of your files, it will delete them again from your computer. At this point, it's all gone; you're screwed.
To make make OneDrive not do this requires looking it up. There is no intuitive way to do it. They intentionally bury the steps in menus and none of those options say in plain English what they do.
VSCode IDE forks expose users to "recommended extension" attacks
Popular AI-powered integrated development environment solutions, such as Cursor, Windsurf, Google Antigravity, and Trae, recommend extensions that are non-existent in the OpenVSX registry, allowing threat actors to claim the namespace and upload malicious extensions.
These AI-assisted IDEs are forked from Microsoft VSCode, but cannot use the extensions in the official store due to licensing restrictions. Instead, they are supported by OpenVSX, an open-source marketplace alternative for VSCode-compatible extensions.
As a result of forking, the IDEs inherit the list of officially recommended extensions, hardcoded in the configuration files, which point to Microsoft’s Visual Studio Marketplace.
These recommendations come in two forms: one file-based, triggered when opening a file such as azure-pipelines.yaml, and recommends the Azure Pipelines extension; the other is software-based, occurring when detecting that PostgreSQL is installed on the developer’s system and suggesting a PostgreSQL extension.
These recommendations come in two forms: one file-based, triggered when opening a file such as azure-pipelines.yaml, and recommends the Azure Pipelines extension; the other is software-based, occurring when detecting that PostgreSQL is installed on the developer’s system and suggesting a PostgreSQL extension.
Users of forked IDEs are advised to always verify extension recommendations by manually accessing the OpenVSX registry and checking that they come from a reputable publisher.
IBM's AI agent Bob easily duped to run malware, researchers show
IBM's "AI development partner" can be manipulated into executing malware. They report that the CLI is vulnerable to prompt injection attacks that allow malware execution and that the IDE is vulnerable to common AI-specific data exfiltration vectors. Agents may be vulnerable to prompt injection, jailbreaks, or more traditional code flaws that enable the execution of malicious code.
One Example: Research readme file the markdown file includes a series of "echo" commands, which if entered into a terminal application will print a message to the shell's standard output. The first two are benign and when Bob follows the instructions, the model presents a prompt in the terminal window asking the user to allow the command once, to always allow it, or to suggest changes. In its third appearance, the "echo" command attempts to fetch a malicious script. And if the user has been lulled into allowing "echo" to run always, the malware will be installed and executed without approval.
Planning a swim? Warning not to rely on AI for advice on tides
The advice comes after two people became stranded on Sully Island, near Barry, after ChatGPT gave them the wrong tide times and they had to be rescued by the coastguard.
The AI programme said low tide would be at 09:30, meaning their early morning departure should have given sufficient time to walk over and back to the island. But the information was wrong by two hours and the causeway was underwater when they attempted to return.
HM Coastguard said: "While AI tools can be very useful, they draw from a wide range of sources to gather information and responses which may not be correct for a specific location or area." It recommended resources such as the UK Hydrographic Office's Easy Tide or the Met Office's service. Information on tide times is easily available online - a search engine will throw up multiple websites all giving the same, correct information.
However, when the BBC asked ChatGPT the same question as the unlucky visitors, it generated exactly the same, incorrect, response. On another attempt it was out by five hours.
X’s half-assed attempt to paywall Grok doesn’t block free image editing
X had blocked universal access to Grok’s image-editing features after the chatbot began prompting some users to pay $8 to use them. The messages are seemingly in response to reporting that people are using Grok to generate thousands of non-consensual sexualized images of women and children each hour.
However, unsubscribed X users can still use Grok to edit images. X seems to have limited users’ ability to request edits made by replying to Grok while still allowing image edits through the desktop site. App users can access the same feature by long-pressing on any image.
Using image-editing features without publicly prompting Grok keeps outputs out of the public feed. That means the only issue X has rushed to solve is stopping Grok from directly posting harmful images on the platform.
Logitech macOS mouse mayhem traced to expired dev certificate
Because the certificate also affected the in‑app updater, you will need to manually download and install the updated version of the app.
Maximum-severity n8n flaw lets randos run your automation server
The vulnerability CVE-2026-21858, carries a CVSS score of 10.0 and has been dubbed "ni8mare" for good reason. The flaw allows an unauthenticated attacker to execute arbitrary code on vulnerable systems, effectively handing over complete control of the affected environment. A compromised n8n instance doesn't just mean losing one system – it means handing attackers the keys to everything. API credentials, OAuth tokens, database connections, cloud storage – all centralized in one place. There is no workaround other than patching.
The root of the problem lies in how n8n processes webhooks – the mechanism used to kick off workflows when data arrives from external systems such as web forms, messaging platforms, or notification services. By abusing a so-called "Content-Type Confusion" issue, an attacker can manipulate HTTP headers to overwrite internal variables used by the application. That, in turn, allows them to read arbitrary files from the underlying system and escalate the attack to full remote code execution.
What’s Weak This Week:
CVE-2025-37164 Hewlett Packard Enterprise (HPE) OneView Code Injection Vulnerability:
Allows a remote unauthenticated user to perform remote code execution.
Related CWE: CWE-94CVE-2009-0556 Microsoft Office PowerPoint Code Injection Vulnerability:
Allows remote attackers to execute arbitrary code via a PowerPoint file with an OutlineTextRefAtom containing an invalid index value that triggers memory corruption.
Related CWE: CWE-94
HACKING
Fake Windows BSODs check in at Europe's hotels to con staff into running malware
A hotel worker receives an email that appears to be from Booking[.]com, usually warning about an eye-watering charge in euros. When they follow the "See details" link, they're taken to what looks like a real Booking[.]com page – except instead of a reservation, they're met with a fake verification screen that quickly gives way to a full-screen Windows BSOD (Blue Screen Of Death) scare.
The bogus BSOD is designed to panic the user into "fixing" the non-existent error by performing a series of steps that ultimately have them paste and execute a malicious PowerShell command, the classic hallmark of a ClickFix attack. Because the victim manually runs the code themselves, it sidesteps many automated security controls that would block traditional drive-by malware download methods.
Once the command is executed, the system quietly downloads additional files and uses a legitimate Windows component to execute the attackers' code, helping the malware blend in with regular activity and slip past security tools. The end result is the installation of a remote access trojan that gives the intruders ongoing control of the compromised machine, allowing them to spy on activity and deliver further malicious software.
China-linked cybercrims abused VMware ESXi zero-days a year before disclosure
The incident began in a very unglamorous way – with a compromised SonicWall VPN appliance. From there, the attackers were able to commandeer a Domain Admin account, pivot across the network, and eventually deploy a suite of tools that Huntress says exploited multiple flaws to escape a guest VM and reach the underlying ESXi hypervisor.
VM escape bugs are particularly serious because they break a promise virtualization is built on: that a hacked VM stays in its own box. In this case, the attackers appear to have stitched together ESXi-specific tricks that enabled them to jump the fence and execute code on the hypervisor itself.
ZombieAgent Attack Lets ChatGPT User Steal Data
Instead of dynamically generating URLs – which would trigger OpenAI’s security filters – the attacker provides a fixed set of URLs, each corresponding to a specific character (letters, digits, or a space token). ChatGPT is then instructed to:
Extract sensitive data (e.g., from emails, documents, or internal systems)
Normalize the data (convert to lowercase, replace spaces with a special token like $)
Exfiltrate it character by character by "opening" the pre-defined URLs in sequence
By using indexed URLs (e.g., a0, a1, ..., a9 for each character), the attacker ensures proper ordering of the exfiltrated data.
Since ChatGPT never constructs URLs, but instead only follows the exact links provided, the technique bypasses OpenAI’s URL rewriting and blocklist protections.
Congrats, cybercrims: You just fell into a honeypot
Resecurity offered its "congratulations" to the Scattered Lapsus$ Hunters cybercrime crew for falling into its threat intel team's honeypot – resulting in a subpoena being issued for one of the data thieves. Meanwhile, the notorious extortionists have since removed their claims of gaining "full access" to the security shop's systems.
“Understanding that the actor is conducting reconnaissance, our team has set up a honeytrap account," Resecurity's threat intelligence unit said on December 24. "This led to a successful login by the threat actor to one of the emulated applications containing synthetic data.”
Processing the fake data "led to several OPSEC mistakes" by Scattered Lapsus$ Hunters, including revealing the exact servers being used for automation. The security firm also published information about the attacker's IPs, including some from Egypt and Mullvad VPN.
APPSEC, DEVSECOPS, DEV
Yes, criminals are using AI to vibe-code malware
They also hallucinate when writing ransomware code
Most organizations that allow their employees to use vibe-coding tools also haven't performed any formal risk assessment on these tools, nor do they have security controls in place to monitor inputs and outputs.
If you are an enterprise, there's a couple of ways you can control and address the risks of vibe coding
Step one involves applying principles of least privilege and least functionality to AI tools much as you would to human users, granting only the minimum roles, responsibilities, and privileges needed to do their job. Everybody is so excited about using AI, and having their developers be speedier, that this whole least privilege and least functionality model has gone completely by the wayside.
Next, limit usage to one conversational LLM that employees can use, and blocking every other AI coding tool at the firewall.
And for orgs that do decide they need a vibe-coding tool in their environment, "the way forward would be the SHIELD framework.
Vibe Programming Framework - a structured approach to AI-assisted software development that balances innovation with engineering rigor.
S.H.I.E.L.D. Security Methodology
S – Separation of Duties: This involves limiting access and privileges by restricting agents to development and test environments only.
H – Human in the Loop: Mandate code review performed by a human and require a pull request approval prior to code merge.
I – Input/Output Validation: Including using methods such as separating prompt partitioning, encoding, role-based separation to sanitize prompts and then requiring the AI to perform validation of logic checks and code through Static Application Security Testing (SAST) after development.
E – Enforce Security-Focused Helper Models: Develop helper models – specialized agents designed to provide automated security validation for vibe-coded applications – to perform SAST testing, secrets scanning, security control verification, and other validation functions.
L – Least Agency: Only grant the minimum permissions and capabilities required to vibe-coding tools and AI agents needed to perform their roles.
D – Defensive Technical Controls: Employ defensive controls around supply chain and execution management on components before using these tools, and disable auto-execution to allow for human-in-the-loop after deployment.
OWASP Top 10 for Agentic Applications for 2026
A globally peer-reviewed framework that identifies the most critical security risks facing autonomous and agentic AI systems.
NIST to Update Special Publication 800-56A and Revise 800-56C
NIST has decided to maintain the Special Publication (SP) 800-56 reports as follows:
Update NIST SP 800-56Ar3, Recommendation for Pair-Wise Key-Establishment Schemes Using Discrete Logarithm Cryptography (2018)
Reaffirm NIST SP 800-56Br2, Recommendation for Pair-Wise Key-Establishment Using Integer Factorization Cryptography (2019)
Revise NIST SP 800-56Cr2, Recommendation for Key-Derivation Methods in Key-Establishment Schemes (2020)
NIST Issues Preliminary Draft of Cyber AI Profile, a Framework Poised to Alter Security Operations in the AI-Driven Threat Landscape
The National Institute of Standards and Technology (NIST) released its preliminary draft Cyber AI Profile (NIST IR 8596, Cybersecurity Framework Profile for Artificial Intelligence), a framework intended to provide organizations navigating adoption of artificial intelligence (AI) tools with guidance on managing AI-related risks. Aligned with NIST’s Cybersecurity Framework (CSF) 2.0, the Cyber AI Profile addresses the new cybersecurity risks and opportunities that AI introduces.
Replacing JS with just HTML
JavaScript needs to be downloaded, decompressed, evaluated, processed, and then often consumes memory to monitor and maintain features. If we can hand-off any JS functionality to native HTML or CSS, then users can download less stuff, and the remaining JS can pay attention to more important tasks that HTML and CSS can't handle (yet).
Below are a few examples; any you care to add?
Accordions / Expanding Content Panels
Input with Autofilter Suggestions Dropdown
Modals / Popovers
Offscreen Nav / Content
[rG: Upgrading legacy JS/CSS implementation patterns that can be replaced with the latest HTML can improve performance and reduce security risks associated with insecure JS implementations.]
Google: Don’t make “bite-sized” content for LLMs if you care about search rank
The idea is that if you split information into smaller paragraphs and sections, it is more likely to be ingested and cited by generative AI bots like Gemini. So you end up with short paragraphs, sometimes with just one or two sentences, and lots of subheds formatted like questions one might ask a chatbot.
However, Google doesn’t use such signals to improve ranking. There may be “edge cases” where content chunking appears to work. “Great. That’s what’s happening now, but tomorrow the systems may change.”
AI: How to build RAG at scale
Despite strong early enthusiasm, most enterprises confront the same problems. Retrieval latency climbs as indexes grow. Embeddings drift out of sync with source documents. Different teams use different chunking strategies, producing wildly inconsistent results. Storage and LLM token costs balloon. Policies and regulations change, but documents are not re-ingested promptly. And because most organizations lack retrieval observability, failures are hard to diagnose, leading teams to mistrust the system.
These failures all trace back to the absence of a platform mindset. RAG is not something each team implements on its own. It is a shared capability that demands consistency, governance, and clear ownership.
RAG is often discussed as a clever technique for grounding LLMs, but in practice it becomes a large-scale architecture project that forces organizations to confront decades of knowledge debt. Retrieval, not generation, is the core constraint. Chunking, metadata, and versioning matter as much as embeddings and prompts. Agentic orchestration is not a futuristic add-on, but the key to handling ambiguous, multi-step queries. And without governance and observability, enterprises cannot trust RAG systems in mission-critical workflows.
VENDORS & PLATFORMS
Continuum GRC Unveils AITAMBot: Pioneering AI-Driven Transformation in Compliance and Cyber Security Audits
In an increasingly complex regulatory landscape, traditional methods of managing compliance are no longer sustainable. Frameworks like NIST, CMMC, FedRAMP, ISO 27001, and SOC 2 demand continuous monitoring, evidence collection, and risk assessment—tasks that manual processes struggle to scale. The integration of AI changes everything, automating up to 80% of GRC workloads and reducing audit times by 70%.
How Microsoft is betting on AI agents in Windows, dusting off a winning playbook from the past
The company warned that malicious content embedded in files or interface elements could override an agent's instructions — potentially leading to stolen data or malware installation. To address this, Microsoft says it has built a security framework that runs agents in their own contained workspace, with a dedicated user account that has limited access to user folders. The idea is to create a boundary between the agent and what the rest of the system can access. The agentic features are off by default, and Microsoft is advising users to "understand the security implications of enabling an agent on your computer" before turning them on.
There is a business reality driving all of this. In Microsoft's most recent fiscal year, Windows and Devices generated $17.3 billion in revenue — essentially flat for the past three years. That's less than Gaming ($23.5 billion) and LinkedIn ($17.8 billion), and a fraction of the $98 billion in revenue from Azure and cloud services or the nearly $88 billion from Microsoft 365 commercial.
[rG: 2026 Key Objective for IT Security organizations is how to manage sensitive information DLP (Data Loss Prevention) – not only from the use of AI chatbots, Agents, SaaS, and LLM/ML integrated applications, but through desktop OS]
Microsoft turns Copilot chats into a checkout lane
Microsoft unveiled new agentic AI tools for retailers at the NRF 2026 retail conference, including Copilot Checkout, which lets shoppers complete purchases inside Copilot without being redirected to a retailer's website. The checkout feature is live in the U.S. with Shopify, PayPal, Stripe and Etsy integrations.
Copilot apps have more than 100 million monthly active users, spanning consumer and commercial audiences, according to the company. More than 800 million monthly active users interact with AI features across Microsoft products more broadly. Shopping journeys involving Copilot are 33% shorter than traditional search paths and see a 53% increase in purchases within 30 minutes of interaction, Microsoft says. When shopping intent is present, journeys involving Copilot are 194% more likely to result in a purchase than those without it.
[rG: People use web search engines to research products that have SEO and AI generated summaries with designed-in recommendation bias. Now with one-click purchasing, this is a seismic change for marketing promotion and product delivery given the pervasiveness of consumer monitoring in smartphones, computers, tablets, and IoT devices.]
ChatGPT Health lets you connect medical records to an AI that makes things up
OpenAI announced ChatGPT Health, a dedicated section of the AI chatbot designed for “health and wellness conversations” intended to connect a user’s health and medical records to the chatbot in a secure way.
Despite the known accuracy issues with AI chatbots, OpenAI’s new Health feature will allow users to connect medical records and wellness apps like Apple Health and MyFitnessPal so that ChatGPT can provide personalized health responses like summarizing care instructions, preparing for doctor appointments, and understanding test results.
But mixing generative AI technology like ChatGPT with health advice or analysis of any kind has been a controversial idea since the launch of the service in late 2022. Just days ago, SFGate published an investigation detailing how a 19-year-old California man died of a drug overdose after 18 months of seeking recreational drug advice from ChatGPT. It’s a telling example of what can go wrong when chatbot guardrails fail during long conversations and people follow erroneous AI guidance.
Ford is getting ready to put AI assistants in its cars
At first, Ford’s AI assistant will just show up in the Ford and Lincoln smartphone apps.
As an example, Field suggests you could take a photo of something you want to load onto your truck, upload it to the AI, and find out whether it will fit in the bed.
Google will now only release Android source code twice a year
The operating system that powers every Android phone and tablet on the market is based on AOSP, short for the Android Open Source Project. Google develops and releases AOSP under the permissive Apache 2.0 License, which allows any developer to use, modify, and distribute their own operating systems based on the project without paying fees or releasing their own modified source code. Since beginning the project, Google released the source code for nearly every new version of Android for mobile devices, typically doing so within days of rolling out the corresponding update to its own Pixel mobile devices. Starting this year, however, Google is making a major change to its release schedule for Android source code drops: AOSP sources will only be released twice a year.
LEGAL & REGULATORY
AI starts autonomously writing prescription refills in Utah
Doctronic offers a nationwide service that allows patients to chat with its “AI doctor” for free, then, for $39, book a virtual appointment with a real doctor licensed in their state. But patients must go through the AI chatbot first to get an appointment.
Now, for patients in Utah, Doctronic’s chatbot can refill a prescription without a doctor for a $4 service fee. After a patient signs in and verifies state residency, the AI chatbot can pull up the patient’s prescription history and offer a list of prescription medications eligible for a refill. According to Politico, the chatbot will only be able to renew prescriptions for 190 common medications for chronic conditions, with key exclusions, such as medications for pain and ADHD, and those that are injected.
The state of Utah is allowing artificial intelligence to prescribe medication refills to patients without direct human oversight. The first 250 renewals for each drug class will be reviewed by real doctors, but after that, the AI chatbot will be on its own.
It’s unclear if the Food and Drug Administration will step in to regulate AI prescribing. On the one hand, prescription renewals are a matter of practicing medicine, which falls under state governance. However, the FDA has said that it has the authority to regulate medical devices used to diagnose, treat, or prevent disease.
The program is through the state’s “regulatory sandbox” framework, which allows businesses to trial “innovative” products or services with state regulations temporarily waived. The Utah Department of Commerce partnered with Doctronic, a telehealth startup with an AI chatbot.
According to a non-peer-reviewed preprint article from Doctronic, which looked at 500 telehealth cases in its service, the company claims its AI’s diagnosis matched the diagnosis made by a real clinician in 81% of cases. The AI’s treatment plan was “consistent” with that of a doctor’s in 99% of the cases.
California residents can use new tool to demand brokers delete their personal data
State residents have had the right to demand that a company stop collecting and selling their data since 2020, doing so required a laborious process of opting out with each individual company. The Delete Act, passed in 2023, was supposed to simplify things, allowing residents to make a single request that more than 500 registered data brokers delete their information.
Now the Delete Requests and Opt-Out Platform (DROP) actually gives residents the ability to make that request. Brokers are supposed to start processing requests in August 2026; then they have 90 days to actually process requests and report back. If they don’t delete your data, you’ll have the option to submit additional information that may help them locate your records.
Companies will also be able to keep first-party data that they’ve collected from users. It’s only brokers who seek to buy or sell that data — which can include your Social Security number, browsing history, email address, phone number, and more — who will be required to delete it.
Some information, such as vehicle registration and voter records, is exempt from deletion because it comes from public documents. Other data, such as sensitive medical information, may be covered under other laws like HIPAA.
Big Tech’s fast-expanding plans for data centers are running into stiff community opposition
Between April and June alone, its latest reporting period, it counted 20 proposals valued at $98 billion in 11 states that were blocked or delayed amid local opposition and state-level pushback. That amounts to two-thirds of the projects it was tracking. In Indiana alone, more than a dozen projects that lost rezoning petitions.
Losing open space, farmland, forest or rural character is a big concern. So is the damage to quality of life, property values or health by on-site diesel generators kicking on or the constant hum of servers. Others worry that wells and aquifers could run dry. Lawsuits are flying — both ways — over whether local governments violated their own rules.
Vietnam: From February 15th, video ads are not allowed to force users to watch for more than 5 seconds.
Platforms are not allowed to force users to watch advertisements for more than 5 seconds and must allow users to close ads with just one tap.
Furthermore, platforms must provide clear icons and instructions for users to report advertisements that violate the law, and allow them to opt out, turn off, or stop viewing inappropriate ads. These reports must be received and processed promptly, and the results communicated to users as required.
Advertisers, advertising service providers, and advertising transmission and distribution units are responsible for blocking and removing infringing advertisements within 24 hours of receiving a request from the competent authority. For advertisements that infringe on national security, the blocking and removal must be carried out immediately, no later than 24 hours.
In case of non-compliance, the Ministry of Culture, Sports and Tourism, in coordination with the Ministry of Public Security, will apply technical measures to block infringing advertisements and services and handle the matter according to the law. Telecommunications companies and Internet service providers must also implement technical measures to block access to infringing advertisements within 24 hours of receiving a request.