Robert Grupe's AppSecNewsBits 2025-08-23
Epic Fails, Hacking, AppSec/AISec, Platforms/Vendors, and Legal: Scamlexity, Impersonation as a Service, TeaOnHer, McDonald's, Car Dealer portals, Copilot, Amazon Q, AI/MCP expanding vuln risks, Phishing training theater, AI bubble, ... and more.
LEGAL & REGULATORY
U.K. Government Drops Apple Encryption Backdoor Order After U.S. Civil Liberties Pushback
U.S. Director of National Intelligence (DNI) Tulsi Gabbard, in a statement posted on X, said the U.S. government had been working with its partners in the U.K. over the past few months to ensure that Americans' civil liberties are protected. "As a result, the U.K. has agreed to drop its mandate for Apple to provide a 'backdoor' that would have enabled access to the protected encrypted data of American citizens and encroached on our civil liberties." The development comes after Apple switched off its Advanced Data Protection (ADP) feature for iCloud in the U.K. this past February, following government demands for backdoor access to encrypted user data.
Bank forced to rehire workers after lying about chatbot productivity
At the time, CBA claimed that launching the chatbot had "led to a reduction in call volumes" of 2,000 a week. To uncover the truth, FSU escalated the dispute to a fair work tribunal, where the union accused CBA of failing to explain how workers' roles had been deemed redundant. The union also alleged that CBA was hiring for similar roles in India, which made it appear that CBA had perhaps used the chatbot to cover up a shady pivot to outsourcing jobs. Now, CBA has apologized to the fired workers. They can choose to come back to their prior roles, seek another position, or leave the firm with an exit payment.
Microsoft's Nuance coughs up $8.5M to rid itself of MOVEit breach suit
Nuance, best known for its medical transcription and speech recognition systems, was one of hundreds of organizations caught in the blast radius of the Clop ransomware gang's 2023 mass exploitation of MOVEit Transfer. Court filings state that roughly 1.225 million people had their data siphoned from Nuance's MOVEit environment. The $8.5 million settlement is modest by MOVEit class-action standards, where payouts can stretch into the high single-digit millions or even tens of millions of dollars. What really sets Nuance apart is the context: it operates firmly in the healthcare space, where exposed patient data draws extra scrutiny from regulators and the media.
Dev gets 4 years for creating kill switch on ex-employer's systems
After a corporate restructuring and subsequent demotion in 2018, the DOJ says that Davis Lu retaliated by embedding malicious code throughout the company's Windows production environment. The malicious code included an infinite Java thread loop designed to overwhelm servers and crash production systems. Lu also created a kill switch named "IsDLEnabledinAD" ("Is Davis Lu enabled in Active Directory") that would automatically lock all users out of their accounts if his account was disabled in Active Directory. When his employment was terminated on September 9, 2019, and his account disabled, the kill switch activated, causing thousands of users to be locked out of their systems. When he was instructed to return his laptop, Lu reportedly deleted encrypted data from his device. Investigators later discovered search queries on the device researching how to elevate privileges, hide processes, and quickly delete files. Lu was found guilty earlier this year of intentionally causing damage to protected computers, and will serve three years of supervised release following his four-year prison term.
SIM-Swapper, Scattered Spider Hacker Gets 10 Years
Noah Michael Urban of Palm Coast, Fla. pleaded guilty in April 2025 to charges of wire fraud and conspiracy. Florida prosecutors alleged Urban conspired with others to steal at least $800,000 from five victims via SIM-swapping attacks that diverted their mobile phone calls and text messages to devices controlled by Urban and his co-conspirators. Although prosecutors had asked for Urban to serve eight years, the federal judge in the case opted to sentence Urban to 120 months (10 years) in federal prison, ordering him to pay $13 million in restitution and undergo three years of supervised release after his sentence is completed. Urban said of the outcome: "The judge purposefully ignored my age as a factor because of the fact another Scattered Spider member hacked him personally during the course of my case."
Oregon Man Charged in ‘Rapper Bot’ DDoS Service
The government states that just prior to Foltz’s arrest, Rapper Bot had enslaved an estimated 65,000 devices globally. Rapper Bot was reportedly responsible for the March 10, 2025 attack that caused intermittent outages on Twitter/X. The government says Rapper Bot’s most lucrative and frequent customers were involved in extorting online businesses — including numerous gambling operations based in China. Foltz faces one count of aiding and abetting computer intrusions. If convicted, he faces a maximum penalty of 10 years in prison, although a federal judge is unlikely to impose anywhere near that kind of sentence for a first-time conviction.
In Otter news, transcription app accused of illegally recording users’ voices
Plaintiff Justin Brewer points out that the company offers a service called the "Otter Notetaker" that records participants in Google Meet, Zoom, and Microsoft Teams to transcribe whatever is spoken and produce meeting summaries. The suit points out that Otter's privacy policy states that the company uses meeting participants' voices to train its speech recognition AI, as does the company's Privacy & Security FAQ. Otter accountholders should therefore know that their every utterance in an online meeting recorded by Notetaker helps to improve the company's AI. But the suit notes Otter records all utterances in a meeting – those made by accountholders and those made by meeting participants who do not have an Otter account. Otter's services never ask those guests for consent to have their voices recorded or fed into a machine learning model, the complaint claims.
Google yet to take down 'screenshot-grabbing' Chrome VPN extension
A popular Chrome VPN extension, FreeVPN[.]One, recently appears to have begun snaffling screenshots of users' page activity and transmitting them to a remote server without their knowledge – and Google has yet to take it down. The extension, which had more than 100,000 verified installations, silently captures a screenshot a little over a second after each page load before transmitting it to a remote server – initially in the clear, then, in a later update, obfuscated with encryption. The behavior, the researchers claim, was introduced in July – after laying the groundwork with smaller updates that requested additional permissions to access all sites and inject custom scripts. FreeVPN[.]One insists that the Chrome extension "is fully compliant with Chrome Web Store policies, and any screenshot functionality is disclosed in our privacy policy."
Congressman proposes bringing back letters of marque for cyber privateers
Arizona Republican David Schweikert introduced the Scam Farms Marque and Reprisal Authorization Act of 2025 in the House of Representatives last week. If signed into law, it would give the US President a lot of leeway in issuing letters of marque to create an armada of internet privateers – commissioning white hat hackers to go after foreign threats and seize assets on the online seas. The Congressman's communications director did tell us that the idea isn't that novel, with organizations paying similar bounties to hackers "to unwind threats" instead of "paying or bending to the hackers."
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Microsoft stays mum about M365 Copilot on-demand security bypass
The issue allowed M365 Copilot to access the content of enterprise files without leaving a trace in corporate audit logs. To do this, a malicious insider just had to ask M365 Copilot to summarize a company file without providing a link to it. And while Microsoft did fix the issue, classifying it as an 'important' vulnerability, it also decided not to notify customers or publicize that this happened. What that means is that your audit log is wrong, and Microsoft doesn’t plan on telling you that. Microsoft only last year started reporting Cloud Service CVEs when patching is not required. But the company said it would only issue CVEs for vulnerabilities deemed "critical," a policy Google Cloud also adopted last year. As this flaw was merely “important”, the Windows biz fixed it a few days ago without informing customers. According to Korman, the researcher who reported the flaw, another person had already informed Microsoft about the vulnerability: Michael Bargury, CTO at Zenity. Bargury discussed the issue at the Black Hat security conference in August 2024, where he demonstrated how M365 Copilot security controls could be bypassed using a jailbreak technique that involves appending caret characters to the model's prompt. But according to Korman, Microsoft didn't bother with a fix until he reported the problem last month. He argues that the issue was so trivial to exploit that Microsoft needs to disclose it.
Amazon quietly fixed Q Developer flaws that made AI agent vulnerable to prompt injection, RCE
In a series of technical writeups this week, Johann Rehberger described how Amazon Q Developer is vulnerable to prompt injection, which can lead to data theft from a developer's machine and remote code execution (RCE). And if you're reading this, thinking you must have missed Amazon's customer advisory about the flaws and subsequent fixes, you didn't miss anything. An AWS spokesperson told The Register that the cloud giant, which is also a CVE Numbering Authority (CNA), is not issuing a CVE tied to the prompt injection or RCE vulnerabilities.
McDonald's not lovin' it when hacker exposes nuggets of rotten security
Burger slinger gets a McRibbing, reacts by firing staffer who helped.
A white-hat hacker has discovered a series of critical flaws in McDonald's staff and partner portals that allowed anyone to order free food online, get admin rights to the burger slinger's marketing materials, and could allow an attacker to get a corporate email account with which to conduct a little filet-o-phishing.
She found the McDonald's online delivery app only ran client-side security checks when looking up an account’s credit points, with no server-side checking, allowing a Hamburglar to order food for free. Bafflingly, McDonald's did not have a valid security.txt file – a document that defines the process an org suggests security researchers use to share news of vulnerabilities. Bobdahacker eventually got through to a security McEngineer who said that they were "too busy" to fix the flaw, until the hacker pointed out that anyone could get free food. That got the burger barn’s attention, and the issue was quickly wrapped up.
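For reference, a security.txt file (RFC 9116) is just a small plain-text file served at /.well-known/security.txt; a minimal sketch, with placeholder addresses and URLs rather than anything McDonald's actually publishes:

    # Served at https://example.com/.well-known/security.txt (RFC 9116)
    Contact: mailto:security@example.com
    Expires: 2026-08-23T00:00:00.000Z
    Preferred-Languages: en
    Policy: https://example.com/vulnerability-disclosure
    Acknowledgments: https://example.com/security/hall-of-fame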
Intrigued, she decided to dig a little deeper and looked at the corporation's Feel-Good Design Hub. While the company did set up proper logins, a little bit of URL customization – in this case changing "login" to "register" – allowed anyone to set up an account, and the system then emailed the new user a password in plaintext. When she alerted the company, it took three months to fix the issue. An examination of the JavaScript in the Hub also showed that the MagicBell API key and secret used for authentication were viewable.
McDonald's has staff portals that employees can sign into, but Bobdahacker found that lowly crew members could access the executive portals thanks to a faulty OAuth implementation. The system also exposed supposedly secret corporate documents.
She found that this would allow anyone to search for any employee, from the CEO down to individual store managers, and get their email addresses. A friend working at McDonald's helped with the research, but was fired over "security concerns from corporate" after Bobdahacker informed McDonald's about the flaws. She has no idea how the fast food giant found her friend's identity.
McDonald's is primarily a franchise operation, and a portal called Global Restaurant Standards contains material that defines rules for franchisees to follow. However the portal was missing one crucial security feature – admin authorization. In practice this meant that anyone could change material hosted on the site. Only last month, researchers found that the AI chatbot McDonald's used to screen job applicants, dubbed Olivia, was pitifully easy to hack. Getting admin access to the bot, built by Paradox[.]ai, required a password – which turned out to be 123456.
Security flaws in a carmaker’s web portal let one hacker remotely unlock cars from anywhere
The takeaway is that just two simple API vulnerabilities blasted the doors open, and it always comes back to authentication: if you get that wrong, everything else falls down.
Buggy code loaded in the user’s browser when opening the portal’s login page, allowing the user to modify the code to bypass the login security checks and create a new “national admin” account. Once logged in, the account granted access to more than 1,000 of the carmaker’s dealers across the United States.
The dealership portal included a national consumer lookup tool that allowed logged-in portal users to look up the carmaker's vehicle and driver data. Zveare, the researcher, took a vehicle’s unique identification number from the windshield of a car in a public parking lot and used the number to identify the car’s owner. The tool could also be used to look up someone using only a customer’s first and last name.
With access to the portal, it was also possible to pair any vehicle with a mobile account, which allows customers to remotely control some of their cars’ functions from an app, such as unlocking their cars. To transfer ownership to an account controlled by Zveare, the portal required only an attestation — effectively a pinky promise — that the user performing the account transfer is legitimate.
Another key problem with access to this carmaker’s portal was that it was possible to access other dealers’ systems linked to the same portal through single sign-on, a feature that allows users to log in to multiple systems or applications with just one set of login credentials. The carmaker’s systems for dealers are all interconnected, so it’s easy to jump from one system to another. On top of this, the portal had a feature that allowed admins, such as the user account he created, to “impersonate” other users, effectively allowing access to other dealer systems as if they were that user without needing their logins.
In the portal Zveare found personally identifiable customer data, some financial information, and telematics systems that allowed the real-time location tracking of rental or courtesy cars, as well as cars being shipped across the country, and the option to cancel them.
How we found TeaOnHer spilling users’ driver’s licenses in less than 10 minutes
To find the domain name, we first looked at the app’s listing on the Apple App Store to find the app’s website.
We looked at TeaOnHer's public internet records, which had no meaningful information other than a single subdomain, appserver[.]teaonher[.]com. When we opened this page in our browser, what loaded was the landing page for TeaOnHer’s API. It was on this landing page that we found the exposed email address and plaintext password (which wasn’t far off “password”) for Lampkin’s account to access the TeaOnHer “admin panel.”
This API landing page included an endpoint called /docs, which contained the API’s auto-generated documentation (powered by a product called Swagger UI) listing the full set of commands that can be performed against the API. This documentation page was effectively a master sheet of all the actions you can perform on the TeaOnHer API as a regular app user and, more importantly, as the app’s administrator, such as creating new users, verifying users’ identity documents, moderating comments, and more. While it’s not uncommon for developers to publish their API documentation, the problem here was that some API requests could be made without any authentication — no passwords or credentials were needed to return information from the TeaOnHer database.
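An auto-generated /docs page like this usually points to a framework in the FastAPI family; TeaOnHer's actual stack and endpoint names aren't public, so the sketch below is purely illustrative of the missing control – an explicit authentication dependency on every admin route, rather than endpoints left open alongside the Swagger UI docs:

    # Minimal sketch, not TeaOnHer's code: FastAPI auto-generates Swagger UI at
    # /docs, but each sensitive route still needs an explicit auth dependency.
    import secrets
    from fastapi import Depends, FastAPI, HTTPException, status
    from fastapi.security import HTTPBasic, HTTPBasicCredentials

    app = FastAPI(docs_url="/docs")      # docs exist either way; auth is separate
    security = HTTPBasic()

    def require_admin(creds: HTTPBasicCredentials = Depends(security)) -> str:
        # Hypothetical check; a real service would verify a hashed secret or token.
        ok_user = secrets.compare_digest(creds.username, "admin@example.com")
        ok_pass = secrets.compare_digest(creds.password, "not-password")
        if not (ok_user and ok_pass):
            raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
                                detail="Authentication required")
        return creds.username

    @app.get("/admin/verifications")     # hypothetical endpoint name
    def list_pending_verifications(admin: str = Depends(require_admin)):
        # Without Depends(require_admin) above, this would be exactly the kind of
        # unauthenticated admin route the researchers describe.
        return {"pending": []}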
Black Hat: Phishing training is pretty pointless, researchers find
In a scientific study involving thousands of test subjects, eight months, and four different kinds of phishing training, the average improvement in the rate of falling for phishing scams was a whopping 1.7%. Is all of this focus on training worth the outcome?
They enrolled more than 19,000 employees of the UCSD Health system and randomly split them into five groups, each member of which would see something different when they failed a phishing test randomly sent once a month to their workplace email accounts. Most subjects saw right through a phishing email that urged the recipients to change their Outlook account passwords, resulting in failure rates between 1% and 4%.
But about 30% of users clicked on a link promising information about a change in the organization's vacation policy. Almost as many fell for one about a change in workplace dress code. Over the eight months of the experiment, just over 50% failed at least once. "We need to stop punishing people who fail phishing tests. You'd end up punishing half the company."
[rG: A very important metric is the blocking effectiveness of the email DLP/hygiene tools used by the organization. Maybe that is a future aspiration for email inbox agents, but I’m not going to hold my breath.]
What’s Weak This Week:
CVE-2025-43300 Apple iOS, iPadOS, and macOS Out-of-Bounds Write Vulnerability:
In the Image I/O framework. Related CWE: CWE-787
CVE-2025-54948 Trend Micro Apex One OS Command Injection Vulnerability:
Could allow a pre-authenticated remote attacker to upload malicious code and execute commands on affected installations. Related CWE: CWE-78
HACKING
'Impersonation as a service' the next big thing in cybercrime
English-language social engineering is among the most in-demand skill sets on underground forums, with the number of job advertisements mentioning this particular talent more than doubling between 2024 and 2025. The security shop tracked four such job listings last year, compared with 10 as of July 2025.
As a bad actor you can subscribe to get tools, training, coaching, scripts, exploits, everything in a box to go out and conduct your infiltration operation that often combine[s] these social engineering attacks with targeted ransomware, almost always with a financial motive.
ShinyHunters, best known for last year's high-profile attacks on Snowflake customers' databases, Ticketmaster, and AT&T, has been on a digital break-in spree since June, when it began compromising dozens of companies' Salesforce instances. These intrusions used social engineering to gain access to the organizations' Salesforce credentials — typically a voice-phishing call intended to trick an employee into providing access — and suspected victims include fashion houses Dior and Chanel, jewelry retailer Pandora, insurance company Allianz, Google, and most recently Workday.
Fake CAPTCHA tests trick users into running malware
Microsoft Threat Intelligence and Microsoft Defender Experts have observed the ClickFix social engineering technique growing in popularity, with campaigns targeting thousands of enterprise and end-user devices globally every day.
ClickFix pretends to be a standard CAPTCHA challenge. But, instead of clicking squares with motorbikes in them, sliding a puzzle-piece into place, or rotating increasingly-bizarre objects to particular orientations, it demands that users do something . . . else.
The fake CAPTCHA tells them to hit the Windows/Super key and R, then Control and V followed by Enter – a combination which, any reader who's used a computer for more than a week or so will likely recognize, opens up the Windows Run prompt, pastes whatever the attacker placed in the clipboard, and executes it. Imagine users smiling to themselves as they do this, thinking of how helpful they are being while crooks are helping themselves to unauthorized access.
As for how users can protect themselves, Microsoft's advice is primarily education-based, along with the use of email filtering to reduce the number of phishing attempts that make it into users' inboxes. The company's report also advises that users should "block web pages from automatically running [Adobe] Flash plugins," an unexpected piece of not-exactly-timely advice given that Adobe killed Flash Player more than four years ago. Microsoft also recommends using PowerShell script block logging and execution policies, turning on optional Windows Terminal warnings that appear when pasting multiple lines, enabling app control policies which prevent the execution of native binaries from the Run command, and even deploying a group policy to remove the Run command from the Start Menu altogether. The report also includes a selection of indicators of compromise, for those who would like to incorporate them into their security scanning systems.
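Two of those mitigations come down to registry-backed policies. A minimal sketch of the relevant keys follows (standard, documented policy locations; enterprise deployments would normally set these via Group Policy rather than a .reg file, and removing Run also breaks legitimate workflows, so test first):

    Windows Registry Editor Version 5.00

    ; Remove the Run command (and Win+R) from the Start Menu for the current user
    [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
    "NoRun"=dword:00000001

    ; Enable PowerShell script block logging machine-wide (logged as Event ID 4104)
    [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging]
    "EnableScriptBlockLogging"=dword:00000001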
Scamlexity: Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts
Researchers have detailed a new prompt injection technique called PromptFix that tricks a generative artificial intelligence (GenAI) model into carrying out an attacker's intended actions by embedding the malicious instruction inside a fake CAPTCHA check on a web page.
This leads to a new reality that the company calls Scamlexity, a portmanteau of the terms "scam" and "complexity," where agentic AI – systems that can autonomously pursue goals, make decisions, and take actions with minimal human supervision – takes scams to a whole new level.
Honey, I shrunk the image and now I'm pwned
The Trail of Bits researchers advise against using image downscaling in agentic AI systems. And if it is necessary, they argue that the user should always be presented with a preview of what the model actually sees, even for CLI and API tools. But really, they say, AI systems need systematic defenses that mitigate the risk of prompt injection.
The attack scenario: a victim uploads a maliciously prepared image to a vulnerable AI service and the underlying AI model acts upon the hidden instructions in the image to steal data. The technique involves embedding prompts into an image that tell the AI to act against its guidelines, then manipulating the image to hide the prompt from human eyes. It requires the image to be prepared in a way that the malicious prompt encoding interacts with whichever image scaling algorithm is employed by the model.
The researchers say they have devised successful image scaling attacks against Vertex AI with a Gemini back end, Gemini's web interface, Gemini's API via the llm CLI, Google Assistant on an Android phone, and the Genspark agentic browser.
Google noted that the attack only works with a non-standard configuration. In order for the attack to be possible, a user would first need to set MCP tool calls to be confirmed automatically, overriding the default setting, and then ingest the malicious file.
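Trail of Bits' "show the user what the model sees" advice is straightforward to approximate before an image ever reaches an agent. A minimal sketch using Pillow; the target size and resampling filters are assumptions, since every service's ingestion pipeline differs, and the attack only works when the attacker matches that pipeline:

    # Minimal sketch: preview what a downscaled image will look like to the model.
    # Target size and resampling filters are assumptions; real pipelines vary.
    from PIL import Image

    def preview_downscaled(path: str, target: tuple[int, int] = (512, 512)) -> None:
        original = Image.open(path).convert("RGB")
        for name, resample in [("nearest", Image.Resampling.NEAREST),
                               ("bilinear", Image.Resampling.BILINEAR),
                               ("bicubic", Image.Resampling.BICUBIC)]:
            small = original.resize(target, resample=resample)
            out = f"preview_{name}.png"
            small.save(out)
            print(f"wrote {out} - inspect for text that is invisible at full size")

    if __name__ == "__main__":
        preview_downscaled("upload.png")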
Android’s pKVM Becomes First Globally Certified Software to Achieve Prestigious SESIP Level 5 Security Certification
protected KVM (pKVM), the hypervisor that powers the Android Virtualization Framework, has officially achieved SESIP Level 5 certification. This certification required a hands-on evaluation by Dekra, a globally recognized cybersecurity certification lab, which conducted an evaluation against the TrustCB SESIP scheme, compliant to EN-17927. Achieving Security Evaluation Standard for IoT Platforms (SESIP) Level 5 is a landmark because it incorporates AVA_VAN.5, the highest level of vulnerability analysis and penetration testing under the ISO 15408 (Common Criteria) standard. A system certified to this level has been evaluated to be resistant to highly skilled, knowledgeable, well-motivated, and well-funded attackers who may have insider knowledge and access.
Seagate spins up a raid on a counterfeit hard drive workshop — authorities read criminals' writes while they spill the beans
Just like something out of an action movie, security teams from Seagate's Singapore and Malaysian offices, in conjunction with local Malaysian authorities, conducted a raid on a warehouse that was engaged in cooking up counterfeit Seagate hard drives.
During the raid, authorities reportedly uncovered approximately 700 counterfeit Seagate hard drives, with SMART values that had been reset to facilitate their sale as new. The confiscated batch included several models from Seagate's extensive hard drive range, with capacities reaching up to 18TB. Drives from Kioxia and Western Digital were also discovered. Seagate suspects that the used hard drives originated from China during the Chia boom. Following the cryptocurrency's downfall, numerous miners sold these used drives to workshops where many were illicitly repurposed to appear new. This bust may represent only the tip of the iceberg, as Heise estimates that at least one million of these Chia drives are circulating.
AI crawlers and fetchers are blowing up websites, with Meta and OpenAI the worst offenders
While AI fetchers make up a minority of AI bot requests – only about 20% – they can be responsible for huge bursts of traffic, with one fetcher generating over 39,000 requests per minute during the testing period. In the face of bots riding roughshod over polite opt-outs like robots.txt directives, webmasters are increasingly turning to active countermeasures like the proof-of-work Anubis or the gibberish-feeding tarpit Nepenthes, while Fastly rival Cloudflare has been testing a pay-per-crawl approach to put a financial burden on the bot operators.
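For sites that still want to try the polite route before reaching for Anubis or Nepenthes, the opt-out is a few robots.txt lines. A sketch covering some of the documented AI crawler tokens – GPTBot (OpenAI), ClaudeBot (Anthropic), Google-Extended, meta-externalagent (Meta), CCBot (Common Crawl) – with the caveat that lists like this go stale quickly and on-demand fetchers acting for a user often ignore robots.txt entirely:

    # Disallow some widely documented AI training crawlers (non-exhaustive)
    User-agent: GPTBot
    User-agent: ClaudeBot
    User-agent: Google-Extended
    User-agent: meta-externalagent
    User-agent: CCBot
    Disallow: /

    # Everyone else: normal crawling
    User-agent: *
    Disallow: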
APPSEC, DEVSECOPS, DEV
New NIST guide explains how to detect morphed images
Face morphing software can blend two people’s photos into one image, making it possible for someone to fool identity checks at buildings, airports, borders, and other secure places.
Single-image detection can be very accurate, sometimes catching nearly all morphs, if the detector has been trained on the same type of morphing software. But accuracy drops sharply, even below 40%, when facing unfamiliar tools. Differential detectors are more reliable overall, with accuracy ranging from 72% to 90% across different morphing software, but they require a second genuine photo for comparison. Most of the guidance focuses on how to configure detection systems and what to do after a possible morph is identified. Recommendations include a mix of automated tools, human review, and clear procedures for investigating flagged images.
NISTIR 8584 Face Analysis Technology Evaluation (FATE) MORPH 4B: Considerations for Implementing Morph Detection in Operations
[rG: Important considerations for any Fraud, Waste, and Abuse or any biometric Identity Authentication applications which process images/digital files.]
Microsoft makes MCP in Visual Studio GA but researchers warn of risks
MCP servers extend the capabilities of agentic AI, enabling developers to sit back and watch tasks being done on their behalf. Barrie references the list of MCP servers on GitHub, which includes MCP SDKs, nearly 400 official servers, and nearly 750 community-contributed servers for which there is a warning from Anthropic, the inventor of the protocol, that "community servers are untested and should be used at your own risk."
Based on the research, only 9 percent of MCPs are fully exploitable, combining sensitive capabilities with acceptance of untrusted input, but the risk compounds when multiple MCPs are used: with three servers, the chance of a high-risk vulnerability rises to 52 percent.
Visual Studio can now connect to local or remote MCP servers, configured using a file called .mcp.json which can be in a user profile, for global use, or in an individual solution. Developers can add MCP servers either by editing this file directly, or using settings in the GitHub Copilot chat window. There is also provision for one-click installation from the web. OAuth authentication is supported, for example to allow the MCP tools to have GitHub access. Organizations that are wary of MCP usage can control access to MCP functionality via GitHub policies.
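A hedged example of what such a .mcp.json might look like, based on the published examples for this format (server names and the remote URL below are placeholders, @modelcontextprotocol/server-filesystem is a commonly cited reference server, and the exact schema may shift between releases; community servers configured this way should be treated as untrusted code):

    {
      "servers": {
        "remote-example": {
          "type": "http",
          "url": "https://mcp.example.com/"
        },
        "filesystem": {
          "type": "stdio",
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "C:\\src\\myproject"]
        }
      }
    }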
OWASP AI Security Solutions Landscape for Agentic AI Q3 2025
The landscape highlights open-source and commercial solutions by stage, identifying their coverage of Agentic SecOps duties and threat mitigation, and leverages industry and community input as a peer-reviewed resource for navigating agentic AI’s shifting security challenges.
AI's Hidden Security Debt
Nearly half of the code snippets generated by five AI models contained bugs. Developers using AI assistance not only wrote significantly less secure code than those who worked unaided, but they also believed their insecure code was safe, a clear sign of automation bias.
[rG: Too much to summarize – read it.]
VENDORS & PLATFORMS
China cut itself off from the global internet for an hour on Wednesday
The Great Firewall of China (GFW) exhibited anomalous behavior by unconditionally injecting forged TCP RST+ACK packets to disrupt all connections on TCP port 443.
That disruption meant Chinese netizens couldn’t reach most websites hosted outside China, which is inconvenient. The incident also blocked other services that rely on port 443, which could be more problematic because many services need to communicate with servers or sources of information outside China for operational reasons. For example, Apple and Tesla use the port to connect to offshore servers that power some of their basic services.
[rG: Underscoring the importance of knowing off-shored dependencies, to determine potential business service disruptions that may require resilient failover designs.]
Is the AI bubble about to pop? Sam Altman is prepared either way.
"Someone" will lose a "phenomenal amount of money," he said. The statement came as his company negotiates a secondary share sale at a $500 billion valuation—up from $300 billion just months earlier. "Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes."
Palantir trades at 280 times forward earnings. During the dot-com peak, ratios of 30 to 40 times earnings marked bubble territory. While warning about a bubble, he's simultaneously seeking a valuation that would make OpenAI worth more than Walmart or ExxonMobil—companies with actual profits. OpenAI hit $1 billion in monthly revenue in July but is reportedly heading toward a $5 billion annual loss.
MIT report: 95% of generative AI pilots at companies are failing
5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.
People like ChatGPT for basic tasks and hate complicated enterprise systems, and companies that try to build their own AI usually fail. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows.
GPT-5′s rollout fell flat for consumers, but the AI model is gaining where it matters most
Last week’s rollout of GPT-5, OpenAI’s newest artificial intelligence model, was rocky. Critics bashed its less-intuitive feel, ultimately leading the company to restore its legacy GPT-4o model for paying chatbot customers.
But GPT-5 isn’t about the consumer. It’s OpenAI’s effort to crack the enterprise market (where the real money is), where rival Anthropic has enjoyed a head start.
Platforms including Cursor, Vercel, JetBrains, Factory, Qodo and GitHub Copilot are rolling GPT-5 into certain default artificial intelligence workflows or public previews.
One week in, and startups like Cursor, Vercel, and Factory say they’ve already made GPT-5 the default model in certain key products and tools, touting its faster setup, better results on complex tasks, and a lower price.
Some companies said GPT-5 now matches or beats Claude on code and interface design, a space Anthropic once dominated.
Harvard dropouts to launch ‘always on’ AI smart glasses that listen and record every conversation
Two former Harvard students are launching a pair of “always-on” AI-powered smart glasses that listen to, record, and transcribe every conversation and then display relevant information to the wearer in real time. “Our goal is to make glasses that make you super intelligent the moment you put them on. The AI listens to every conversation you have and uses that knowledge to tell you what to say … kinda like IRL Cluely.”
While Meta’s glasses have an indicator light when their cameras and microphones are watching and listening as a mechanism to warn others that they are being recorded, the Halo X glasses do not have an external indicator.
Several U.S. states make it illegal to covertly record conversations without the other person’s consent. The founders say they are aware of this, but that it is up to their customers to obtain consent before using the glasses.
The glasses will be priced at $249.
[rG: They don’t provide any details of how the data will be secure, but state that they aim for SOC 2 certification for security – LOL that is the wrong answer.]
In Xcode 26, Apple shows first signs of offering ChatGPT alternatives
The latest Xcode beta contains clear signs that Apple plans to bring Anthropic's Claude and Opus large language models into the integrated development environment (IDE).
This news is also relevant for the wider Apple ecosystem, not just developers, as it's the first clear example of Apple working on support for a third-party model besides those offered by OpenAI. The company's executives have often said they planned to do that in the future in Xcode and in Siri.
Google says it dropped the energy cost of AI queries by 33x in one year
The Google team describes a number of optimizations the company has made that contribute to this. One is an approach termed Mixture-of-Experts, which involves figuring out how to only activate the portion of an AI model needed to handle specific requests, which can drop computational needs by a factor of 10 to 100. They've developed a number of compact versions of their main model, which also reduce the computational load. Data center management also plays a role, as the company can make sure that any active hardware is fully utilized, while allowing the rest to stay in a low-power state.
The other thing is that Google designs its own custom AI accelerators, and it architects the software that runs on them, allowing it to optimize both sides of the hardware/software divide to operate well with each other. That's especially critical given that activity on the AI accelerators accounts for over half of the total energy use of a query. Google also has lots of experience running efficient data centers that carries over to the experience with AI.
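For readers unfamiliar with the Mixture-of-Experts idea mentioned above, the core of it fits in a few lines: a small router scores the experts for each input, and only the one or two highest-scoring experts actually run, so most of the model's parameters sit idle for any given request. A toy NumPy sketch (real implementations add load balancing and batching, and run on accelerators):

    # Toy Mixture-of-Experts routing sketch: only the top-k experts run per input.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 16, 8, 2

    # One tiny linear "expert" each; a real model uses full feed-forward blocks.
    experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
    router = rng.standard_normal((d_model, n_experts))   # scores experts per token

    def moe_forward(x: np.ndarray) -> np.ndarray:
        scores = x @ router                               # one score per expert
        chosen = np.argsort(scores)[-top_k:]              # indices of top-k experts
        weights = np.exp(scores[chosen])
        weights /= weights.sum()                          # softmax over chosen only
        # Only the chosen experts are evaluated; the other 6 of 8 do no work.
        return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

    token = rng.standard_normal(d_model)
    print(moe_forward(token).shape)                       # (16,)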
And Now For Something Completely Different …
College student’s “time travel” AI experiment accidentally outputs real 1834 history
Grigorian has been developing what he calls TimeCapsuleLLM, a small AI language model (like a pint-sized distant cousin to ChatGPT) which has been trained entirely on texts from 1800–1875 London.
Grigorian wants to capture an authentic Victorian voice in the AI model's outputs. As a result, the AI model ends up spitting out text that's heavy with biblical references and period-appropriate rhetorical excess. Grigorian's project joins a growing field of researchers exploring what some call "Historical Large Language Models" (HLLMs), though those typically feature larger base models than the small one Grigorian is using. Similar projects include MonadGPT, which was trained on 11,000 texts from 1400 to 1700 CE and can discuss topics using 17th-century knowledge frameworks, and XunziALLM, which generates classical Chinese poetry following ancient formal rules. These models offer researchers a chance to interact with the linguistic patterns of past eras.