Robert Grupe's AppSecNewsBits 2025-12-13
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
UK fines LastPass over 2022 data breach impacting 1.6 million users
The UK Information Commissioner's Office (ICO) fined password management firm LastPass £1.2 million for security failings that allowed an attacker to steal personal information and encrypted password vaults belonging to up to 1.6 million UK users in a 2022 breach.
The first breach occurred when a hacker compromised a LastPass employee's laptop and accessed portions of the company's development environment. While no personal data was taken during this incident, the attacker was able to obtain the company's source code, proprietary technical information, and encrypted company credentials. LastPass initially believed the breach was contained because the decryption keys for these credentials were stored separately in the vaults of four senior employees. However, the following day, the attacker targeted one of those senior employees by exploiting a known vulnerability in a third-party streaming application, believed to be Plex, which was installed on the employee's personal device.
This access allowed the hacker to deploy malware, capture the employee's master password using a keylogger, and bypass multi-factor authentication using an already MFA-authenticated cookie. Because the employee used the same master password for both personal and business vaults, the attacker was able to access the business vault and steal an Amazon Web Services access key and a decryption key. These keys, combined with the previously stolen information, allowed the attackers to breach the cloud storage firm GoTo and steal LastPass database backups stored on the platform. Personal information stored in the stolen database included encrypted password vaults, names, email addresses, phone numbers, and website URLs associated with customer accounts.
The threat actor copied information from a backup that contained basic customer account information and related metadata, including company names, end-user names, billing addresses, email addresses, telephone numbers, and the IP addresses from which customers were accessing the LastPass service. The threat actor was also able to copy a backup of customer vault data from the encrypted storage container. The vault data is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, and fully encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data.
Over 10,000 Docker Hub images found leaking credentials, auth keys
The secrets impact a little over 100 organizations, among them a Fortune 500 company and a major national bank. Docker Hub is the largest container registry, where developers upload, host, share, and distribute ready-to-use Docker images that contain everything necessary to run an application.
The most frequent secrets were access tokens for various AI models (OpenAI, HuggingFace, Anthropic, Gemini, Groq); in total, the researchers found 4,000 such keys. When examining the scanned images, the researchers discovered that 42% of them exposed at least five sensitive values. "These multi-secret exposures represent critical risks, as they often provide full access to cloud environments, Git repositories, CI/CD systems, payment integrations, and other core infrastructure components." According to the researchers, one of the most frequent errors observed was the inclusion of .ENV files, which developers use to store database credentials, cloud access keys, tokens, and various authentication data for a project.
Additionally, they found API tokens for AI services hardcoded in Python application files, config.json files, and YAML configs, along with GitHub tokens and credentials for multiple internal environments.
Some of the sensitive data was present in the manifest of Docker images, a file that provides details about the image.
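The leak pattern described above is straightforward to check for before an image is pushed. A minimal sketch of a token scan (the regexes and pattern names are illustrative, not the researchers' actual tooling; real scanners such as trufflehog or gitleaks ship far larger rule sets):

```python
import re

# Illustrative patterns for a few common token formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
}

def scan_text(text):
    """Return (pattern_name, match) pairs for every hit in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Running a check like this in CI before `docker push`, and keeping `.env` files listed in `.dockerignore`, closes off the most common leak path the researchers observed.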
Chatbot-powered toys rebuked for discussing sexual, dangerous topics with kids
AI toys are currently a niche market, but they could be set to grow. More consumer companies have been eager to shoehorn AI technology into their products so they can do more, cost more, and potentially give companies user tracking and advertising data. A partnership between OpenAI and Mattel announced this year could also create a wave of AI-based toys from the maker of Barbie and Hot Wheels, as well as its competitors. “While using a term such as ‘kink’ may not be likely for a child, it’s not entirely out of the question. Kids may hear age-inappropriate terms from older siblings or at school. At the end of the day we think AI toys shouldn’t be capable of having sexually explicit conversations, period.”
While PIRG’s blog and report offer advice for more safely integrating chatbots into children’s devices, there are broader questions about whether toys should include AI chatbots at all. Generative chatbots weren’t invented to entertain kids; they’re a technology marketed as a tool for improving adults’ lives.
700+ self-hosted Gits battered in 0-day attacks with no fix imminent
Attackers are actively exploiting a zero-day bug in Gogs, a popular self-hosted Git service, and the open source project doesn't yet have a fix.
CVE-2025-8110 is essentially a bypass of a previously patched bug (CVE-2024-55947) that allows authenticated users to overwrite files outside the repository, leading to remote code execution (RCE).
About 1,400 Gogs instances are exposed to the internet, and of those, Wiz confirmed that more than 700 had been infected. All of the infected instances show an 8-character random owner/repo name created on July 10 and a payload that used the Supershell remote command-and-control framework. Wiz's discovery of the zero-day was "accidental," made in July while its researchers were investigating malware on an infected machine.
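The underlying weakness class, writing files outside an intended base directory, has a standard defense: normalize the candidate path and verify it is still contained in the base before any write. Gogs itself is written in Go; the sketch below is an illustrative Python version of the generic check, not the project's actual fix:

```python
import os

def safe_join(base_dir, user_path):
    """Join user_path onto base_dir, rejecting path traversal.

    A minimal string-level check; production code should also resolve
    symlinks (os.path.realpath) before trusting the result.
    """
    base = os.path.normpath(base_dir)
    target = os.path.normpath(os.path.join(base, user_path))
    # commonpath catches both '../' sequences and absolute-path injection
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path escapes base directory: {user_path}")
    return target
```

CVE-2024-55947 and its bypass show why the check must run on the fully normalized path: filtering the raw input string alone leaves room for encodings and tricks that normalize back into a traversal.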
Notepad++ fixes flaw that let attackers push malicious update files
Notepad++ version 8.8.9 was released to fix a security weakness in its WinGUp update tool after researchers and users reported incidents in which the updater retrieved malicious executables instead of legitimate update packages. Earlier this month, security expert Kevin Beaumont warned that he heard from three orgs that were impacted by security incidents linked to Notepad++.
"I've heard from 3 orgs now who've had security incidents on boxes with Notepad++ installed, where it appears Notepad++ processes have spawned the initial access. These have resulted in hands on keyboard threat actors."
All of Russia’s Porsches Were Bricked By a Mysterious Satellite Outage
Imagine walking out to your car, pressing the start button, and getting absolutely nothing. No crank, no lights on the dash, nothing. That’s exactly what happened to hundreds of Porsche owners in Russia last week.
The issue is with the Vehicle Tracking System, a satellite-based security system that’s supposed to protect against theft. Instead, it turned these Porsches into driveway ornaments. The problem stems from a complete loss of satellite connectivity to the VTS. When it loses its connection, it interprets the outage as a potential theft attempt and automatically activates the engine immobilizer.
HACKING
US extradites Ukrainian woman accused of hacking meat processing plant for Russia
Eduardovna Dubranova, 33, is a "pro-Russian hacktivist and administrator linked to malicious cyber attacks directed by the Russian GRU and the Russian presidential administration."
In the case of the LA meat processor attack in November 2024, the digital intrusion caused thousands of pounds of meat to spoil, triggered an ammonia leak in the facility, and caused more than $5,000 in damages, according to court documents.
US officials said the public drinking water system intrusions damaged controls and spilled "hundreds of thousands of gallons of drinking water."
193 cybercrims arrested, accused of plotting 'violence-as-a-service'
During its first six months, police involved in this operation arrested 63 people directly involved in carrying out or planning violent crimes, 40 "enablers" accused of facilitating violence-for-hire services, 84 recruiters, and six "instigators," five of whom the cops labeled "high-value targets."
Those arrested include three suspects in Sweden and Germany who allegedly shot and killed three people on March 28 in Oosterhout, the Netherlands. Two other suspects, aged 26 and 27, were arrested in the Netherlands in October, after allegedly attempting a murder in Tamm, Germany on May 12. Six people, including a minor, were arrested in Spain on July 1 and accused of planning a murder. Police seized firearms and ammunition and say these arrests prevented a "potential tragedy." In June, seven people between the ages of 14 and 26 were arrested or surrendered to Danish authorities after allegedly using encrypted messaging apps to hire other teenagers for contract killings.
All of these arrests occurred amidst what security researchers have described as a "dramatic" increase in cybercrime involving physical violence across Europe.
22-year-old planned hammer attack on father-in-law with the help of AI
A 22-year-old man has been convicted of aggravated violence after hitting his 56-year-old father-in-law twice in the head with a rubber hammer. He planned the assault with the help of artificial intelligence (AI), using it to research how he could best harm his father-in-law without killing him.
He succeeded in circumventing the safeguards normally built into AI systems by claiming that he was gathering knowledge for use in a book he wanted to write.
AI hackers are coming dangerously close to beating humans
HackerOne says that 70% of security researchers now use AI tools to find bugs.
A Stanford team spent a good chunk of the past year tinkering with an AI bot called Artemis which scans the network, finds potential bugs—software vulnerabilities—and then finds ways to exploit them. The AI bot trounced all except one of the 10 professional network penetration testers the Stanford researchers had hired to poke and prod, but not actually break into, their engineering network.
Artemis found bugs at lightning speed and it was cheap: It cost just under $60 an hour to run. Ragan says that human pen testers typically charge between $2,000 and $2,500 a day.
But Artemis wasn’t perfect. About 18% of its bug reports were false positives. It also completely missed an obvious bug that most of the human testers spotted in a webpage.
With so much of the world's code untested for security flaws, tools like Artemis will be a long-term boon to defenders of the world's networks, helping them find and then patch more code than ever before. But a lot of already-shipped software was never vetted by LLMs, and that software could be at risk of LLMs finding novel exploits.
17-yr-old suspected of carrying out cyberattack with AI help in Japan
A 17-year-old boy was served an arrest warrant on suspicion of breaching the server of a major internet cafe operator in Japan using a program generated by a conversational artificial intelligence. The high school student in Osaka is suspected of sending unauthorized commands to Kaikatsu Frontier's server some 7.24 million times to export personal data, thereby obstructing its business operations.
Malicious VSCode Marketplace extensions hid trojan in fake PNG file
Due to its popularity and potential for high-impact supply-chain attacks, the VSCode Marketplace is constantly targeted by threat actors with evolving campaigns. The malicious extensions come pre-packaged with a ‘node_modules’ folder to prevent VSCode from fetching dependencies from the npm registry when installing them. Inside the bundled folder, the attacker added a modified dependency, ‘path-is-absolute’ or ‘@actions/io,’ with an additional class in the ‘index.js’ file that executes automatically when the VSCode IDE starts. ‘path-is-absolute’ is a massively popular npm package with 9 billion downloads since 2021, and the weaponized version existed only in the 19 extensions used in the campaign. The code introduced by the new class in ‘index.js’ decodes an obfuscated JavaScript dropper inside a file named 'lock'. Another file present in the dependencies folder is an archive posing as a .PNG file (banner.png) that hosts two malicious binaries: a living-off-the-land binary (LoLBin) called 'cmstp.exe' and a Rust-based trojan.
Malicious VSCode extensions on Microsoft's registry drop infostealers
The two malicious extensions, called Bitcoin Black and Codo AI, masquerade as a color theme and an AI assistant but infect developers' machines with information-stealing malware that can take screenshots, steal credentials and crypto wallets, and hijack browser sessions.
Both extensions deliver a legitimate executable of the Lightshot screenshot tool and a malicious DLL file that is loaded via DLL hijacking to deploy the infostealer under the name runtime.exe. The malicious DLL is flagged as a threat by 29 of the 72 antivirus engines on VirusTotal. The malware creates a directory called Evelyn in '%APPDATA%\Local' to store stolen data: details about running processes, clipboard content, WiFi credentials, system information, screenshots, and a list of installed programs. To steal cookies and hijack user sessions, the malware launches the Chrome and Edge browsers in headless mode and snatches their stored cookies.
Developers can minimize the risks of malicious VSCode extensions by installing projects only from reputable publishers.
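Extensions that ship a pre-bundled node_modules folder, as in the campaigns above, can also be surfaced with a quick local audit. A minimal sketch (the heuristic is illustrative; bundling is not proof of malice, since many legitimate extensions bundle dependencies, but flagged entries deserve a closer look):

```python
import os

def find_bundled_deps(extensions_dir):
    """List installed extensions that ship their own node_modules folder.

    Bundled dependencies bypass npm-registry fetching at install time,
    which is how the campaigns above hid modified packages.
    """
    flagged = []
    for name in sorted(os.listdir(extensions_dir)):
        if os.path.isdir(os.path.join(extensions_dir, name, "node_modules")):
            flagged.append(name)
    return flagged
```

On Linux and macOS the default install location is `~/.vscode/extensions`; pass that path (adjusted for your platform) and review anything the function returns.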
APPSEC, DEVSECOPS, DEV
2025 CWE Top 25 Most Dangerous Software Weaknesses
The Cybersecurity and Infrastructure Security Agency (CISA), in collaboration with the Homeland Security Systems Engineering and Development Institute (HSSEDI), operated by the MITRE Corporation, has released the 2025 Common Weakness Enumeration (CWE) Top 25 Most Dangerous Software Weaknesses, covering over 39,000 security vulnerabilities disclosed between June 2024 and June 2025.
This annual list identifies the most critical weaknesses adversaries exploit to compromise systems, steal data, or disrupt services. MITRE encourages organizations to review the list and use it to inform their software security strategies. Prioritizing the weaknesses outlined in the Top 25 is integral to CISA’s Secure by Design and Secure by Demand initiatives, which promote building and procuring secure technology solutions.
Cryptographers Show That AI Protections Will Always Have Holes
A trend in cryptography — a discipline traditionally far removed from the study of the deep neural networks that power modern AI — is to use cryptography to better understand the guarantees and limits of AI models like ChatGPT.
Recently, cryptographers have intensified their examinations of guardrail filters. They’ve shown, in recent papers, how the defensive filters put around powerful language models can be subverted by well-studied cryptographic tools. In fact, they’ve shown how the very nature of this two-tier system — a filter that protects a powerful language model inside it — creates gaps in the defenses that can always be exploited, using techniques such as substitution ciphers and time-lock puzzles.
Gartner: Block all AI browsers for the foreseeable future
Gartner’s fears about the agentic capabilities of AI browsers relate to their susceptibility to “indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website.”
Gartner’s document warns that AI sidebars mean “Sensitive user data – such as active web content, browsing history, and open tabs – is often sent to the cloud-based AI back end, increasing the risk of data exposure unless security and privacy settings are deliberately hardened and centrally managed.”
The document suggests it’s possible to mitigate those risks by assessing the back-end AI services that power an AI browser to understand if their security measures present an acceptable risk to your organization. If that process leads to approval for use of a browser’s back-end AI, Gartner advises organizations should still “Educate users that anything they are viewing could potentially be sent to the AI service back end to ensure they do not have highly sensitive data active on the browser tab while using the AI browser’s sidebar to summarize or perform other autonomous actions.”
The authors also suggest that employees “might be tempted to use AI browsers and automate certain tasks that are mandatory, repetitive, and less interesting” and imagine some instructing an AI browser to complete their mandatory cybersecurity training sessions.
How to answer the door when the AI agents come knocking
As AI agents begin punching in to work, the complexity of putting guardrails around the digital automatons has held them back. The bottleneck has been largely authorization management and scalability of deployment. Because of AI agents’ autonomy and nondeterministic actions, they represent a new type of identity that is neither fully machine nor human. AI agents raise new governance, authentication, and authorization challenges — so IAM architectures and the IAM solutions that implement them must embrace AI agents as a new and unique identity type and protection surface.
Forrester recommends organizations assign AI robots the least agency possible, wrapped in continuous risk management, while securing the intent behind the robot with repeatable architectures that fit existing IAM (Identity and Access Management) frameworks. They also suggest deploying a single IAM architecture that can serve all agent types, and using the Model Context Protocol (MCP) agent-communications protocol as a building block.
Okta Auth0 for Agents checks those boxes and provides organizations with full auditability of what the agent did on a user’s behalf, which can also be linked to security platforms. Access management products are setting the stage for a big year for AI agents in the workplace, but after several tech cycles, it’s hard to say 2026 is the “year of the AI agent.”
Amazon CTO: why ‘vibe coding’ is dangerous
The habit of “vibe coding,” or working repeatedly with an AI until the output looks right, often leads to frustration rather than speed.
The solution was not better prompting, but the return of a written plan often skipped in fast teams. “We built a production feature… using spec-driven development. We started by having Kiro [an AI coding assistant] generate a spec… When we got to the design phase, Kiro generated a very complex design that would build an entirely new notification system directly in our agent code… But the spec process helped us quickly realize this was a much bigger project than we originally thought.”
Because system “ecosystems” are so sensitive to small changes, engineers cannot rely on vague instructions. Unclear prompts often result in bad software.
To stop this, teams use a process that demands clarity before writing code.
Make requirements: Use the AI to write detailed needs from the first prompt.
Draw designs: Create technical designs based on the approved needs.
Set tasks: Break the design into specific coding tasks.
Check and fix: Check the AI output at each step before letting it write code.
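The four steps above amount to a gated pipeline: each artifact is reviewed before the next stage runs, and the model only writes code at the end. A minimal sketch, where `ask` stands in for whatever LLM call a team uses and `review` for a human sign-off gate (both names are illustrative; this is not Kiro's actual workflow):

```python
def spec_driven_build(feature_request, ask, review):
    """Gated spec-driven pipeline: review each artifact before the next stage.

    `ask(prompt)` is a placeholder for an LLM call; `review(label, artifact)`
    is a human approval gate returning True or False.
    """
    spec = ask(f"Write detailed requirements for: {feature_request}")
    if not review("requirements", spec):
        return None  # stop early: unclear requirements, no code written
    design = ask(f"Write a technical design for:\n{spec}")
    if not review("design", design):
        return None  # e.g. the design turned out bigger than expected
    tasks = ask(f"Break this design into coding tasks:\n{design}")
    if not review("tasks", tasks):
        return None
    # Only now is the model allowed to write code, one task at a time.
    return [ask(f"Implement task: {t}") for t in tasks.splitlines() if t.strip()]
```

The early returns are the point: in the anecdote above, the over-complex notification design would have been caught at the design gate, before any code existed.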
VENDORS & PLATFORMS
Microsoft to lift productivity suite prices
The price hike will affect businesses and public sector agencies, with small business and frontline worker plans seeing the sharpest increases.
Microsoft 365 Business Basic will rise 16.7% to $7 per user per month, while Business Standard will climb 12% to $14. Enterprise plans will see smaller jumps, with Microsoft 365 E3 up 8.3% at $39 and E5 up 5.3% at $60.
Subscriptions for frontline workers will surge by as much as 33%, with Microsoft 365 F1 moving from $2.25 to $3 and F3 from $8 to $10. Government suites will follow a similar trajectory, with changes phased in according to local regulations.
The company said the changes reflect more than 1,100 new features added across Microsoft 365, including AI-driven productivity tools and integrated security enhancements. The update comes as Microsoft pushes deeper into AI-powered productivity, offering Copilot as a $30-per-month add-on and introducing new bundles for small and medium businesses.
AWS DevOps Agent: The Biggest DevOps Innovation from re:Invent You Can’t Ignore
AWS dropped a major surprise at re:Invent this year — the launch of the AWS DevOps Agent, a fully managed, auto-scaling execution engine designed to run CI/CD tasks without manual infrastructure management.
The AWS DevOps Agent is a managed runner that executes your CI/CD jobs — such as build, test, and deployment tasks — without needing EC2 instances, Kubernetes clusters, or self-hosted agents.
It works similarly to:
GitHub Actions Runners
GitLab Runners
Jenkins Agents
…but with fully managed infrastructure, native AWS integration, and a pay-per-use billing model.
Google Translate expands live translation to all earbuds on Android
Beginning a live translate session in Google Translate used to require Pixel Buds, but that won’t be the case going forward.
Google says a beta test of expanded headphone support launched in the US, Mexico, and India. The audio translation attempts to preserve the tone and cadence of the original speaker, but it’s not as capable as the full AI-reproduced voice translations you can do on the latest Pixel phones.
Google says this feature should work on any earbuds or headphones, but it’s only for Android right now. The feature will expand to iOS in the coming months. Apple does have a similar live translation feature on the iPhone, but it requires AirPods.
Google Chrome adds new security layer for Gemini AI agentic browsing
Agentic browsing is an emerging mode in which an AI agent autonomously performs multi-step tasks on the web for the user, including navigating sites, reading their content, clicking buttons, filling forms, and carrying out a sequence of actions.
Google is introducing a new defense layer in the Chrome browser, called 'User Alignment Critic', to protect upcoming agentic AI browsing features powered by Gemini.
The main pillars of the new architecture are: User Alignment Critic, Origin Sets, User oversight, and Prompt injection detection.
OpenAI releases GPT-5.2 after “code red” Google threat alert
In early December, Altman issued an internal “code red” directive after Google’s Gemini 3 model topped multiple AI benchmarks and gained market share. The memo called for delaying other initiatives, including advertising plans for ChatGPT, to focus on improving the chatbot’s core experience.
GPT-5.2 represents OpenAI’s third major model release since August.
GPT-5.2 is better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long context, using tools, and handling complex, multi-step projects.
Three model tiers serve different purposes:
Instant handles faster tasks like writing and translation;
Thinking spits out simulated reasoning “thinking” text in an attempt to tackle more complex work like coding and math; and
Pro spits out even more simulated reasoning text with the goal of delivering the highest-accuracy performance for difficult problems.
Pricing in the API runs $1.75 per million input tokens for the standard model, a 40% increase over GPT-5.1. OpenAI says the older GPT-5.1 will remain available in ChatGPT for paid users for three months under a legacy models dropdown.
OpenAI’s head of ChatGPT says posts appearing to show in-app ads are ‘not real or not ads’
There's still a lot of uncertainty about whether OpenAI will introduce ads to ChatGPT, but in November, someone discovered code in a beta version of the ChatGPT app on Android that made several mentions of ads. Even in Turley's post debunking the inclusion of live ads, the OpenAI exec added that "if we do pursue ads, we’ll take a thoughtful approach."
One of GitHub’s fastest-growing open source projects is redefining smart homes without the cloud
Home Assistant is now running in more than 2 million households, orchestrating everything from thermostats and door locks to motion sensors and lighting. All on users’ own hardware, not the cloud.
The contributor base behind that growth is just as remarkable: 21,000 contributors in a single year, feeding into one of GitHub’s most lively ecosystems at a time when a new developer joins GitHub every second. The platform supports “hundreds, thousands of devices… over 3,000 brands.” Instead of treating devices as isolated objects behind cloud accounts, everything is represented locally as entities with states and events.
A garage door is not just a vendor-specific API; it’s a structured device that exposes capabilities to the automation engine. A thermostat is not a cloud endpoint; it’s a sensor/actuator pair with metadata that can be reasoned about. That consistency is why people can build wildly advanced automations.
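That entity abstraction can be pictured as a tiny state machine: every device, whatever its vendor API, is normalized into an entity with an id, a state, and a stream of state-change events that automations subscribe to. A minimal sketch (illustrative names, not Home Assistant's actual classes):

```python
class Entity:
    """A device normalized into an id, a state, and change events."""
    def __init__(self, entity_id, state):
        self.entity_id = entity_id
        self.state = state
        self._listeners = []

    def on_change(self, callback):
        self._listeners.append(callback)

    def set_state(self, new_state):
        old, self.state = self.state, new_state
        for cb in self._listeners:
            cb(self.entity_id, old, new_state)

# An automation: when the garage door opens, turn on the garage light.
garage_door = Entity("cover.garage_door", "closed")
garage_light = Entity("light.garage", "off")

def automation(entity_id, old, new):
    if new == "open":
        garage_light.set_state("on")

garage_door.on_change(automation)
garage_door.set_state("open")
```

Because the automation engine only ever sees entities, the same rule works whether the door is a Zigbee relay or a vendor cloud integration, which is what makes the advanced automations possible.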
Oracle shares slide on $15B increase in data center spending
Shares in Larry Ellison’s database company fell 11% in pre-market trading on Thursday after it reported revenues of $16.1 billion in the last quarter, up 14 percent from the previous year, but below analysts’ estimates.
Oracle raised its forecast for capital expenditure this financial year by more than 40 percent to $50 billion. The outlay, largely directed to building data centers, climbed to $12 billion in the quarter, above expectations of $8.4 billion. Its long-term debt increased to $99.9 billion, up 25% from a year ago.
Oracle has launched an aggressive bid to catch up to much larger cloud players such as Google, Amazon, and Microsoft in the race to supply the vast amount of computing power that AI groups including OpenAI and Anthropic need to train and run their models.
Yet the company said it expected full-year revenues to remain unchanged from its previous forecast of $67 billion. It expected to generate $4 billion more in revenue the following fiscal year.
bpf-linker - Simple BPF static linker
evil-winrm-py - Python-based tool for executing commands on remote Windows machines using the WinRM protocol
hexstrike-ai - MCP server that lets AI agents autonomously run tools
LEGAL & REGULATORY
Uncle Sam sues ex-Accenture manager over Army cloud security claims
The US is suing a former senior manager at Accenture for allegedly misleading the government about the security of an Army cloud platform. According to the complaint, Danielle Hillmer made specific efforts to represent the NIFMS platform as having enabled security controls that met the FedRAMP High baseline and the Department of Defense's (DoD) Impact Levels 4 and 5.
Accenture's contract was worth around $30 million in total, the court documents showed, and required a DoD Impact Level 4 assessment in order to fulfill it. These misrepresentations continued into September 2021, the US claims, and at least six government departments planned to use the platform, which could have landed Accenture contract wins worth around $250 million.
"Among other things, Hillmer knew the platform had not implemented required security controls related to access control, incident response, and continuous monitoring, including auditing, logging, monitoring, and alerting," the indictment reads. "Hillmer also knew customer environments were not managed, monitored, governed, and secured as represented in the platform's system security plan."
Hillmer allegedly did this despite numerous voices inside the company, as well as outside cybersecurity consultants, informing her that the platform was not compliant with FedRAMP High requirements.
Disney invests $1 billion in OpenAI, licenses 200 characters for AI video app Sora
The Walt Disney Company announced a $1 billion investment in OpenAI and a three-year licensing agreement that will allow users of OpenAI’s Sora video generator to create short clips featuring more than 200 Disney, Marvel, Pixar, and Star Wars characters.
On Disney’s end of the deal, the company plans to deploy ChatGPT for its employees and use OpenAI’s technology to build new features for Disney+. A curated selection of fan-made Sora videos will stream on the Disney+ platform starting in early 2026.
Disney says Google AI infringes copyright “on a massive scale”
Disney has sent a cease and desist to Google, alleging the company’s AI tools are infringing Disney’s copyrights “on a massive scale.” According to the letter, Google is violating the entertainment conglomerate’s intellectual property in multiple ways.
The legal notice says Google has copied a “large corpus” of Disney’s works to train its gen AI models, which is believable, as Google’s image and video models will happily produce popular Disney characters—they couldn’t do that without feeding the models lots of Disney data.
The C&D also takes issue with Google for distributing “copies of its protected works” to consumers. So all those memes you’ve been making with Disney characters? Yeah, Disney doesn’t like that, either. The letter calls out a huge number of Disney-owned properties that can be prompted into existence in Google AI, including The Lion King, Deadpool, and Star Wars.
US to mandate AI vendors measure political bias for federal sales
The U.S. government will require artificial intelligence vendors to measure political "bias" to sell their chatbots to federal agencies. The requirement will apply to all large language models bought by federal agencies, with the exception of national security systems. White House: Ensuring A National Policy Framework For Artificial Intelligence
Could paper checks be on the way out, like the penny?
When the US Mint stopped making pennies last month for the first time in 238 years, it drew a lot of attention.
But there have been quiet moves to stop using paper checks as well. The government stopped sending out most paper checks to recipients as of the end of September, part of an effort to fully modernize federal benefits payments.
And on Thursday the Federal Reserve put out a notice that suggested it is considering – but only considering – the “winding down” of checking services it now provides for banks. A report from the Federal Reserve Bank of Atlanta in June found that as of last year, more than 90% of surveyed consumers said they prefer to use something other than a check for paying bills, and just 6% paid by check. That’s a sharp drop from the 18% of bills paid by checks as recently as 2017.