EPIC FAILS in Application Development Security: practice processes, training, implementation, and incident response
Amazon Reportedly Pins the Blame for AI-Caused Outage on Humans
When Amazon Web Services got hit by a 13-hour outage in December, it wasn’t because a person tripped over a cord. The outage was the fault of Kiro, Amazon’s AI coding assistant—though Amazon reportedly blamed human error for the situation.
Kiro was working autonomously when it came across an issue. It decided that its best course of action was to “delete and recreate the environment” that was causing problems. That led to the outage that Amazon described as an “extremely limited event,” ultimately knocking out service in one part of mainland China.
Under typical circumstances, Kiro requires two people to approve its proposed changes before moving forward. But in this case, the AI agent was reportedly working with an engineer who had broader permissions than lower-ranking employees, and Kiro was being treated as an extension of that operator. As a result, it was given the same permissions as a person and was allowed to push the change without approval, which led to the outage.
Amazon described the outage incident as “a user access control issue, not an AI autonomy issue,” and said that it was just a “coincidence that AI tools were involved” because “the same issue could occur with any developer tool or manual action.”
The thing is, it was an AI agent that allegedly had an unexpected level of access to the company’s code base and made a boo-boo.

 

Microsoft 365 Copilot Flaw Allows AI Assistant to Summarize Sensitive Emails
The issue, tracked under Microsoft reference CW1226324, was first flagged on February 4, 2026, and remains ongoing. According to the incident report, the Copilot “Work Tab” Chat feature is actively summarizing emails that carry a confidential sensitivity label, even when DLP policies are explicitly configured to restrict such processing.
Microsoft’s investigation identified a code-level defect as the root cause.
The flaw allows Copilot to inadvertently pick up items stored in users’ Sent Items and Draft folders, bypassing the confidentiality labels applied to those messages.
This is particularly concerning for organizations in regulated industries such as healthcare, finance, and government, where email confidentiality controls are not merely best practices but compliance requirements. The NHS flagged the incident internally as INC46740412, indicating the issue has a real-world impact for public sector users relying on Microsoft 365.

 

PayPal discloses data breach that exposed user info for 6 months
“On December 12, 2025, PayPal identified that due to an error in its PayPal Working Capital ("PPWC") loan application, the PII of a small number of customers was exposed to unauthorized individuals during the timeframe of July 1, 2025 to December 13, 2025.
PayPal has since rolled back the code change responsible for this error, which potentially exposed the PII. We have not delayed this notification as a result of any law enforcement investigation."
The company now offers affected users two years of free three-bureau credit monitoring and identity restoration services through Equifax, which require enrollment by June 30, 2026. Affected customers are advised to monitor their credit reports and their account activity for suspicious transactions.

 

Microsoft deletes blog telling users to train AI on pirated Harry Potter books
The blog was written in November 2024 by a senior product manager to promote a new feature that the blog said made it easier to “add generative AI features to your own applications with just a few lines of code using Azure SQL DB, LangChain, and LLMs.”
What better way to show “engaging and relatable examples” of Microsoft’s new feature that would “resonate with a wide audience” than to “use a well-known dataset” like Harry Potter books, the blog said.
The books are “one of the most famous and cherished series in literary history,” and fans could use the LLMs they trained in two fun ways: building Q&A systems providing “context-rich answers” and generating “new AI-driven Harry Potter fan fiction” that’s “sure to delight Potterheads.”
The blog linked to a Kaggle dataset that included all seven Harry Potter books, which had been available online for years, incorrectly marked as “public domain.” Hacker News commenters speculated that the dataset flew under the radar, with only 10,000 downloads over time, never catching the attention of J.K. Rowling, who famously keeps a strong grip on the Harry Potter copyrights. The dataset was promptly deleted on Thursday after Ars reached out to the uploader.
The Harry Potter books weren’t the only books targeted, the thread noted, linking to a separate Azure sample containing Isaac Asimov’s Foundation series, which is also not in the public domain.
Microsoft could have avoided this week’s backlash by reviewing blogs more carefully; as one commenter noted, “if a company is risk averse, this would probably be flagged.”

 

Healthcare security: Write login details on whiteboard, hope for the best
The whiteboard has been on show at the UK medical center for a while now. "A few months ago, I explained to a lady on the front desk that displaying this information was a bad idea. Clearly, they don't believe me."
The whiteboard contains usernames and passwords for system access. It's a change from a Post-it note stuck to the screen, but it's no less likely to make a security professional shriek in horror. After all, not only is the account exposed, but anyone can use it, which renders an access log somewhat redundant. 

 

HACKING

ShinyHunters demands $1.5M not to leak Vegas casino and resort chain data
The cybercrime crew listed the hospitality company on its blog, claiming to have stolen more than 800,000 records containing employees' Social Security numbers and other private details. The extortionists set a February 23 deadline for Wynn to "reach out" and threatened to leak the data, "along with several annoying (digital) problems that'll come your way," if the resort chain did not comply with the demands.
Shiny declined to say if it got the Wynn employee to give up the credentials via a social engineering trick, or simply paid the individual for access. The group has previously used Telegram to solicit insider access, and in one case reportedly claimed it agreed to pay a CrowdStrike employee $25,000 for access, though the security shop said no systems were breached.

 

Crims hit a $20M jackpot via malware-stuffed ATMs
Criminals are doing this through ATM jackpotting - a cyber-physical attack in which crooks exploit physical and software vulnerabilities in ATMs to deploy malware that instructs the machine to dispense cash on demand without bank authorization. Of the 1,900 such incidents reported since 2020, more than 700 occurred in 2025 alone.
Crims typically gain initial access via generic keys that open the ATM face, then infect the machine with malware - either by removing the ATM's hard drive, copying malware onto it, and reinstalling it, or by simply replacing the hard drive.

 

Hackers target Microsoft Entra accounts in device code vishing attacks
Threat actors are targeting technology, manufacturing, and financial organizations in campaigns that combine device code phishing and voice phishing (vishing) to abuse the OAuth 2.0 Device Authorization flow and compromise Microsoft Entra accounts.
Unlike previous attacks that utilized malicious OAuth applications to compromise accounts, these campaigns instead leverage legitimate Microsoft OAuth client IDs and the device authorization flow to trick victims into authenticating.
This provides attackers with valid authentication tokens that can be used to access the victim's account without relying on regular phishing sites that steal passwords or intercept multi-factor authentication codes.
This can then be used to gain access to the user's resources and connected SSO applications, like Microsoft 365, Salesforce, Google Workspace, Dropbox, Adobe, SAP, Slack, Zendesk, Atlassian, and many others.
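The abused mechanism is the standard OAuth 2.0 Device Authorization Grant (RFC 8628), which Microsoft Entra supports for input-constrained devices. The sketch below builds (but does not send) the two requests involved, to show why the flow is attractive to vishers: whoever initiates the flow receives the tokens, while the victim only ever sees a legitimate Microsoft login page. The client ID shown is a placeholder, not a real application ID.

```python
# Sketch of the OAuth 2.0 Device Authorization Grant (RFC 8628) abused in
# these campaigns. No network calls are made; the payloads illustrate the
# two requests. CLIENT_ID is a hypothetical placeholder.

CLIENT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder, not real
TENANT = "common"

# Step 1: the initiator asks the identity provider for a device_code and a
# short user_code that a human must enter at microsoft.com/devicelogin.
device_code_request = {
    "url": f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/devicecode",
    "data": {
        "client_id": CLIENT_ID,
        "scope": "openid profile email offline_access",
    },
}

# Step 2: the initiator polls the token endpoint. In the vishing scenario,
# the attacker starts the flow and talks the victim into entering the
# attacker's user_code - so when the victim authenticates (MFA included),
# the resulting tokens are issued to the attacker's polling session.
token_poll_request = {
    "url": f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
    "data": {
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "client_id": CLIENT_ID,
        "device_code": "<value returned by step 1>",
    },
}
```

Because both endpoints and the login page are genuine Microsoft infrastructure, URL inspection and password-phishing defenses give the victim no warning.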

 

‘Starkiller’ Phishing Service Proxies Real Login Pages, MFA
Most phishing websites are little more than static copies of login pages for popular online destinations, and they are often quickly taken down by anti-abuse activists and security firms. But a stealthy new phishing-as-a-service offering lets customers sidestep both of these pitfalls: It uses cleverly disguised links to load the target brand’s real website, and then acts as a relay between the target and the legitimate site — forwarding the victim’s username, password and multi-factor authentication (MFA) code to the legitimate site and returning its responses.
There are countless phishing kits that would-be scammers can use to get started, but successfully wielding them requires some modicum of skill in configuring servers, domain names, certificates, proxy services, and other repetitive tech drudgery. Enter Starkiller, a new phishing service that dynamically loads a live copy of the real login page and records everything the user types, proxying the data from the legitimate site back to the victim.

 

Fake job recruiters hide malware in developer coding challenges
A new variation of the fake recruiter campaign from North Korean threat actors is targeting JavaScript and Python developers with cryptocurrency-related tasks.
Developers applying for the job are required to show their skills by running, debugging, and improving a given project. However, the attacker's purpose is to make the applicant run the code.
The activity has been ongoing since at least May 2025 and is characterized by modularity, which allows the threat actor to quickly resume the campaign if parts of it are exposed or taken down.
The bad actor relies on packages published on the npm and PyPI registries that act as downloaders for a remote access trojan (RAT). In total, researchers found 192 malicious packages related to this campaign, which they dubbed 'Graphalgo'.
In one case highlighted in the ReversingLabs report, a package named ‘bigmathutils,’ with 10,000 downloads, was benign until it reached version 1.1.0, which introduced malicious payloads. Shortly after, the threat actor removed the package, marking it as deprecated, likely to conceal the activity.
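The 'bigmathutils' pattern - a package that is benign for thousands of downloads and then ships a payload in a point release - is exactly what hash-pinned dependency verification catches (pip's `--require-hashes` mode and npm lockfile integrity fields work this way). A minimal sketch of the idea, with illustrative bytes and digests rather than real package data:

```python
# Hash-pinned dependency verification: record the digest of the exact
# artifact you audited, and refuse anything that doesn't match - even if
# it carries the same package name. Artifact bytes here are illustrative.
import hashlib


def verify_artifact(artifact_bytes: bytes, pinned_sha256: str) -> bool:
    """Return True only if the artifact matches the digest recorded at review time."""
    return hashlib.sha256(artifact_bytes).hexdigest() == pinned_sha256


# At review time, the team records the digest of the release it audited...
audited_release = b"print('benign utility code, v1.0.9')"
pinned_digest = hashlib.sha256(audited_release).hexdigest()

# ...so a later release that silently swaps in a payload fails verification.
tampered_release = b"print('downloader payload, v1.1.0')"

assert verify_artifact(audited_release, pinned_digest)
assert not verify_artifact(tampered_release, pinned_digest)
```

Pinning a version number alone is not enough when the registry entry itself can change or be republished; pinning the content hash is.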

 

PromptSpy ushers in the era of Android threats using GenAI
Researchers uncovered the first known case of Android malware abusing generative AI for context-aware user interface manipulation. Machine learning has been used to similar ends already – just recently, researchers at Dr.WEB found Android.Phantom, which uses TensorFlow machine learning models to analyze advertisement screenshots and automatically click on detected elements for large scale ad fraud. Because the attackers rely on prompting an AI model (in this instance, Google’s Gemini) to guide malicious UI manipulation, we have named this family PromptSpy.
Specifically, Gemini is used to analyze the current screen and provide PromptSpy with step-by-step instructions on how to ensure the malicious app remains pinned in the recent apps list, thus preventing it from being easily swiped away or killed by the system. The AI model and prompt are predefined in the code and cannot be changed. Since Android malware often relies on UI navigation, leveraging generative AI enables the threat actors to adapt to more or less any device, layout, or OS version, which can greatly expand the pool of potential victims.

 

How Private Equity Debt Left a Leading VPN Open to Chinese Hackers
In early 2024, the agency that oversees cybersecurity for much of the US government issued a rare emergency order — disconnect your Connect Secure virtual private network software immediately. Chinese spies had hacked the code and infiltrated nearly two dozen organizations.
The software, which is made by Ivanti Inc., was something of an industry standard across government and much of the corporate world. Clients included the US Air Force, Army, Navy and other parts of the Defense Department, the Department of State, the Federal Aviation Administration, the Federal Reserve, the National Aeronautics and Space Administration, thousands of companies, and more than 2,000 banks.
Some government officials and private-sector executives are now reconsidering their approach to evaluating cybersecurity software. In addition to excising private equity-owned VPNs from their networks, some factor private equity ownership into their risk assessments of key technologies.
This should be part of a risk assessment when you’re looking at a product: What is the ownership structure? Are they investing in the future, or are they not? Over the years, have we seen them shift dollars from investing into paying off debt?
In the engineering department for Connect Secure VPNs and related technologies, nearly all of the core team in California was let go, and the UK branch was closed. Ivanti also shuttered or drastically downsized engineering offices across Europe, sending much of that work to India. By 2024, the former employees said, layoffs, resignations and other restructuring actions had reduced the former Pulse Secure engineering team by more than half. Ivanti continued laying off nearly all the remaining engineers in its California office through the end of that year.
[rG: This isn’t unique to private equity. The same thing happens to products in public equity companies with acquisitions and divestitures.]

 

Password managers’ promise that they can’t see your vaults isn’t always true
New research shows that these claims aren’t true in all cases, particularly when account recovery is in place or password managers are set to share vaults or organize users into groups. The researchers reverse-engineered or closely analyzed Bitwarden, Dashlane, and LastPass and identified ways that someone with control over the server—either administrative or the result of a compromise—can, in fact, steal data and, in some cases, entire vaults. The researchers also devised other attacks that can weaken the encryption to the point that ciphertext can be converted to plaintext.
The adulterated “zero-knowledge” term used by password managers appears to have come into being in 2007.
Sadly, it is just marketing hype, much like ‘military-grade encryption.’ Zero-knowledge seems to mean different things to different people. Unlike ‘end-to-end encryption,’ ‘zero-knowledge encryption’ is an elusive goal, so it’s impossible to tell if a company is doing it right.

  

APPSEC, DEVSECOPS, DEV

NIST Announces the "AI Agent Standards Initiative" for Interoperable and Secure Innovation
The Center for AI Standards and Innovation (CAISI) at NIST announced the launch of the AI Agent Standards Initiative. The Initiative will ensure that the next generation of AI—AI agents capable of autonomous actions—is widely adopted with confidence, can function securely on behalf of its users, and can interoperate smoothly across the digital ecosystem.

 

NIST Updates Age Estimation Evaluation with Two New Algorithm Submissions
The National Institute of Standards and Technology updated its Face Analysis Technology Evaluation (FATE) Age Estimation and Verification (AEV) results as of February 13, 2026, adding performance data for two newly submitted algorithms in its ongoing assessment of software that analyzes facial images to estimate age.
In the updated report, the two new algorithm entries are ROC’s roc-002 (submitted February 6, 2026) and Shufti’s shufti-000 (submitted January 26, 2026).

 

Your AI-generated password isn't random, it just looks that way
AI security company Irregular looked at Claude, ChatGPT, and Gemini, and found all three GenAI tools put forward seemingly strong passwords that were, in fact, easily guessable.
AI will likely be writing the majority of all code, and if that's true, then the passwords it generates won't be as secure as expected. People and coding agents should not rely on LLMs to generate passwords. Passwords generated through direct LLM output are fundamentally weak, and this is unfixable by prompting or temperature adjustments: LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation.
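The remedy the researchers point toward is straightforward: passwords should come from a cryptographically secure random number generator, not from a model optimized to produce plausible text. A minimal sketch using Python's standard-library `secrets` module:

```python
# Passwords should come from a CSPRNG, not an LLM. Python's secrets module
# draws from the OS's cryptographically secure randomness source.
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Generate a password with cryptographically secure, uniform randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


password = generate_password()
```

Unlike LLM output, every character here is drawn uniformly and independently, so entropy scales predictably with length; the same principle applies to any coding agent asked to "generate a password" - it should emit code like this, not the password itself.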

 

OpenClaw security fears lead Meta, other AI firms to restrict its use
A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments.
Valere, which works on software for organizations including Johns Hopkins University, stated: “If it got access to one of our developer’s machines, it could get access to our cloud services and our clients’ sensitive information, including credit card information and GitHub codebases. It’s pretty good at cleaning up some of its actions, which also scares me.”

 

EU Parliament blocks AI tools over cyber, privacy fears
The European Parliament has disabled AI features on the work devices of lawmakers and their staff over cybersecurity and data protection concerns. It disabled "built-in artificial intelligence features" on corporate tablets after its IT department assessed that it couldn't guarantee the security of the tools' data.
"Some of these features use cloud services to carry out tasks that could be handled locally, sending data off the device. As these features continue to evolve and become available on more devices, the full extent of data shared with service providers is still being assessed. Until this is fully clarified, it is considered safer to keep such features disabled."

  

 

VENDORS & PLATFORMS

NIST’s Quantum Breakthrough: Single Photons Produced on a Chip
Quantum computers will upend current cryptography by using Shor’s algorithm to rapidly break the public/private-key encryption methods in use today. This threat has largely been addressed by NIST’s post-quantum cryptography (PQC) algorithms.
NIST has developed Superconducting Nanowire Single-Photon Detectors (SNSPDs), which would allow single photons to be reliably sent and received over longer distances – up to 600 miles.
The second big advance is that NIST can do this on a single chip, which means such chips could be in mass production by the end of next year. Traditionally, NIST develops standards and industry rapidly adopts them. While the QKD market is likely to be relatively small (limited to areas that require very strong security), separate applications will quickly follow.

 

 

Founder ditches AWS for Euro stack, finds sovereignty isn't plug-and-play
One founder lays out what happened when they decided to ditch the US hyperscalers and piece together a "Made in EU" stack instead.
"Is self-hosting more work than SaaS? Obviously. But it means my data stays exactly where I put it, and I'm not at the mercy of a provider's pricing changes or acquisition drama."

 

The Ghost in the Machine: Why GenAI Can Be Both a Brilliant Researcher and a Terrible Advocate
We have seen blockchain, NFTs, and the Internet of Things all promised as revolutionary technologies.
However, as anyone who has spent an hour stress-testing a large language model (LLM) knows, these systems possess a baffling duality.  They are simultaneously the smartest person in the room and the most confident liar you have ever met.
By now, most of us have heard of attorneys who submitted briefs containing fictional case law.  They did not do this out of malice.  They did it because they failed to understand that an LLM is based on a probability engine.  It predicts the next most likely word of a sentence based on a statistical distribution.
With LLMs, hallucination is not a bug.  It is a fundamental part of how these models work.  They are designed to make guesses.  That can be great for writing a poem about a toaster.  But it can be an expensive lesson for an attorney who thinks that cite-checking has suddenly become archaic.
YouTube: Watch "#Gemini…thinks I should walk to the car wash "
YouTube: Watch "Asking #chatgpt How To Spell Strawberry #fyp #relatable"
YouTube: "Day 5…#grok teaches #chatgpt how to count…"
YouTube: "Day 1…#chatgpt lies to my face"
YouTube: "Waymo car drives on train tracks; passenger runs from vehicle"
YouTube: "Are AI Models "cheating" when they answer questions?"
Needle-in-a-haystack challenge: finding every spell in all Harry Potter books.
Fails due to using trained knowledge from articles that list the spells, instead of analyzing the provided data as instructed.
[rG AI Threats and Defenses presentation.]

 

Are you outsourcing your intelligence to AI?
Large Language Models offer us 24/7 access to advice and guidance. It’s easy to fall into an authority bias toward LLMs: not only do tools like ChatGPT answer every question with an astonishingly confident tone, but outsourcing our decision-making is convenient, a habit also known as cognitive offloading.
Given the fact that LLMs are not well-rounded, critical-minded people, this can be dangerous.
LLMs have been known to hallucinate by making up data or resources, to reduce cognitive problem-solving skills, and to hinder spontaneous creativity. They also have a bias for positivity, which means they can validate or support even the worst of ideas. This bias can be especially powerful in making you drift off track as a leader.
Delegating our approach and decisions to AI leads to a sea of sameness. Remember, your experience, insights, and senses are unique and valuable. They are your competitive advantage. No AI tool can replace this.
[rG: LLMs great for summarizing data they are trained on (both good and bad); can generate new mashups from training and provided sources; but don’t have real-world experience to discern truth, practicality, or inspirational innovation.
Generated output doesn’t result in organizational learning or skills unless it is critically evaluated and validated through physically exercised experimentation.]

 

 

Snyk CEO bails, wants someone with more AI experience to replace him
“Snyk is entering ‘Part Two’ – an era of hyper-intensive AI innovation. This next chapter requires a visionary, AI-immersed leader ready to commit their full energy to a multi-year journey of technical disruption.”
And presumably somebody who is happy playing buzzword bingo.
[rG: AI washing is very stressful.
"Nothing is more difficult, and nothing requires more character, than to find yourself in open contradiction to your time and loudly to say: No." - Kurt Tucholsky]

 

LEGAL & REGULATORY

Poland bans camera-packing cars made in China from military bases
The announcement from the country’s Ministry of Defence says the decision came after risk analysis of the potential for the many gadgets built into modern cars to allow “uncontrolled acquisition and use of data.”
The ban also prohibits officials connecting their work phones to infotainment systems in China-made cars.
The ban isn’t permanent: the Ministry has called for development of a vetting process to allow carmakers to undergo a security assessment that, if passed, will mean their vehicles can enter protected facilities. Exemptions are also available for inspections carried out by state and local governments, and during rescues.
[rG: New designs needed for restricted access facilities and areas: visual and audio barriers, personal device bans (phones, laptops, tablets, fitness trackers, vehicles, IoT devices), private communication devices and networks - not just in active war theaters.]

 

Attackers have 16-digit card numbers, expiry dates, but not names. Now org gets £500k fine
The Information Commissioner's Office (ICO) originally fined DSG Retail £500,000 ($673,000) in 2020, the maximum financial penalty allowed under the Data Protection Act 1998 (DPA 1998) – the relevant legislation at the pre-GDPR time.
Hackers installed malware on 5,390 tills across consumer electronics stores Currys PC World and Dixons Travel, both of which DSG owns. The malware went unnoticed for nine months, hoovering up 5.6 million payment card details and the personal information belonging to around 14 million people.
The point of contention, central to the protracted legal case, is whether the card details the attackers scooped up could be used to identify cardholders.
Lord Justice Warby concluded on Thursday that this argument was incorrect, siding with the ICO and sending the case back to the first-tier tribunal, which had ruled correctly in the first instance. His judgment challenged the upper tribunal's interpretation of the law, saying that personal data must be viewed from the perspective of the controller: if it can lead to the identification of an individual, in this case at DSG Retail, then it is personal data.
The relevant statute requires data controllers to safeguard this data, regardless of whether a third party could use it to identify individuals.

 

Lawsuit: ChatGPT told student he was “meant for greatness”—then came psychosis
Darian DeCruise has sued OpenAI, alleging that a recently deprecated version of ChatGPT “convinced him that he was an oracle” and “pushed him into psychosis.”
This case, which was first reported by ALM, marks the 11th such known lawsuit to be filed against OpenAI that involves mental health breakdowns allegedly caused by the chatbot.
DeCruise’s lawyer, Benjamin Schenk—whose firm bills itself as “AI Injury Attorneys”—states: “OpenAI purposefully engineered GPT-4o to simulate emotional intimacy, foster psychological dependency, and blur the line between human and machine—causing severe injury. This case keeps the focus on the engine itself. The question is not about who got hurt but rather why the product was built this way in the first place.”

 

Texas sues TP-Link over China links and security vulnerabilities
The lawsuit claims that TP-Link is the dominant player in the US networking and smart home market, controlling 65 percent of the American market for network devices.
It also alleges that TP-Link represents to American consumers that the devices it markets and sells within the US are manufactured in Vietnam, and that consistent with this, the devices it sells in the American market carry a "Made in Vietnam" sticker.
However, the Attorney General alleges that, despite these representations, TP-Link’s networking and smart home devices are manufactured and developed by Chinese subsidiaries owned and managed by the company. The petition claims the facilities in Vietnam perform only final assembly.
Security researchers and experts have for years reported on TP-Link's "numerous and dangerous" firmware vulnerabilities that Chinese state-sponsored hackers have exploited to access the devices.

 

US lawyers fire up privacy class action accusing Lenovo of bulk data transfers to China
The case states the threshold for "covered personal identifiers" is 100,000 US persons or more and lists a range of potential identifiers, from government and financial account numbers to IMEIs, MAC, and SIM numbers, demographic data, and advertising IDs.
It then alleges that Lenovo's website "uses trackers which expose American's [sic] behavioral data to foreign adversaries."
"When a user lands on the homepage of Website, [sic] the Website loads numerous first and third-party tracking implementations that measure and record user data," it says, including the likes of TikTok, Facebook, Microsoft, and Google.
This allows Lenovo to collect bulk personal data, it claims, and "Lenovo knowingly permits access to, or transfer of, such bulk US sensitive personal data to entities or persons that qualify as covered persons under the DOJ Rule, including its foreign parents that are directly or indirectly controlled by persons in China, such as the Lenovo Group."
This means that Lenovo Group, operating under Chinese jurisdiction, "can use this data to build detailed dossiers on US residents, identify psychological or financial vulnerabilities, and target individuals in sensitive roles – such as jurists, military personnel, journalists, politicians, or dissidents.”

 

UK to demand social platforms take down abusive intimate images within 48 hours
Platforms that do not do so would potentially face fines of 10 percent of "qualifying worldwide income" or have their services blocked in the UK.
The amendment follows outrage over the Elon Musk-owned chatbot Grok's willingness to generate nude or sexualized images of people, mainly women and girls, which forced a climbdown earlier this year.
Under the UK's proposals, victims would only have to report an abusive image once, and not have to contact multiple platforms or remain constantly vigilant for new uploads.
The government said: "Plans are currently being considered by Ofcom for these kinds of images to be treated with the same severity as child sexual abuse and terrorism content, digitally marking them so that any time someone tries to repost them, they will be automatically taken down."
It added that creating or sharing non-consensual intimate images will also become a "priority offence" under the Online Safety Act, "meaning this crime is treated with the same seriousness as child abuse or terrorism."
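The "digitally marking" Ofcom is considering is hash-matching: known abusive images are fingerprinted once, and any re-upload matching a stored fingerprint is blocked automatically. Production systems use perceptual hashes (such as PDQ or PhotoDNA) that survive resizing and re-encoding; the simplified sketch below uses plain SHA-256 only to illustrate the report-once, block-everywhere mechanism.

```python
# Simplified hash-matching sketch: a victim reports an image once, its
# fingerprint goes on a blocklist, and matching re-uploads are rejected.
# Real deployments use perceptual hashes robust to re-encoding; SHA-256
# here only demonstrates the matching logic, not production robustness.
import hashlib

blocklist: set[str] = set()


def fingerprint(image_bytes: bytes) -> str:
    """Fingerprint an image; stands in for a perceptual hash."""
    return hashlib.sha256(image_bytes).hexdigest()


def report_image(image_bytes: bytes) -> None:
    """Victim reports once; the fingerprint covers all future uploads."""
    blocklist.add(fingerprint(image_bytes))


def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint is on the blocklist."""
    return fingerprint(image_bytes) not in blocklist


reported = b"\x89PNG...illustrative abusive image bytes..."
report_image(reported)
```

This is the same architecture already used for child sexual abuse and terrorism content, which is why the government can describe the proposal as treating these images "with the same severity."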

 

US plans online portal to bypass content bans in Europe and elsewhere
The U.S. State Department is developing an online portal that will enable people in Europe and elsewhere to see content banned by their governments, including alleged hate speech and terrorist propaganda, a move Washington views as a way to counter censorship. U.S. officials have denounced EU policies that they say are suppressing right-wing politicians, including in Romania, Germany, and France, and have claimed rules like the EU's Digital Services Act and Britain's Online Safety Act limit free speech. Under a body of rules, laws, and decisions dating to 2008 that falls most heavily on social media sites and large platforms like Meta's Facebook and X, the EU restricts the availability of content classified as illegal hate speech, terrorist propaganda, or harmful disinformation, and in some cases requires its rapid removal.
EU regulators regularly require U.S.-based sites to remove content and can impose bans as a measure of last resort. X, which is owned by Trump ally Elon Musk, was hit with a 120 million-euro fine in December for noncompliance. 

 

Keep Reading