EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Attacker gets into France's database listing all bank accounts, makes off with 1.2 million records
France’s Ministry of Economics, Finance and Industrial and Digital Sovereignty last week revealed the incident took place in January, after unknown attackers used stolen credentials to access the database.
The Ministry said the attacker's access was restricted immediately upon discovery of the attack, but that the miscreant still managed to access personal information about 1.2 million accounts, including account numbers, account holder’s addresses, and tax identification numbers.
Android mental health apps with 14.7M installs filled with security flaws
Mental health therapy records sell for $1,000 or more per record, far more than credit card numbers. Oversecured scanned ten mobile apps advertised as tools that can help with various mental health problems, and uncovered a total of 1,575 security vulnerabilities (54 rated high-severity, 538 medium-severity, and 983 low-severity).
In one of the apps, security researchers discovered more than 85 medium- and high-severity vulnerabilities that could be exploited to compromise users’ therapy data and privacy. Although none of the discovered issues are critical, many can be leveraged to intercept login credentials, spoof notifications, HTML injection, or to locate the user.
Some of the products are AI companions designed to help people suffering from clinical depression, multiple forms of anxiety, panic attacks, stress, and bipolar disorder.
At least six of the ten analyzed apps state that user conversations or chats remain private, or are encrypted securely on the vendor’s servers.
Some of the verified apps “parse user-supplied URIs without adequate validation.
One therapy app with more than one million downloads uses Intent.parseUri() on an externally controlled string and launches the resulting messaging object (intent) without validating the target component.
Another issue is storing data locally in a way that gives read access to any app on the device.
They also discovered plaintext configuration data, including backend API endpoints and a hardcoded Firebase database URL, within the APK resources.
Furthermore, some of the vulnerable apps use the cryptographically insecure java.util.Random class for generating session tokens or encryption keys.
Most of the 10 apps lack any form of root detection.
Only four received an update as recently as this month. For the rest, the date of the latest update was as recent as November 2025 or even September 2024.
Lovable-hosted app littered with basic flaws exposed 18K users
Vibe-coding platform Lovable has been accused of hosting apps riddled with vulnerabilities after saying users are responsible for addressing security issues flagged before publishing.
16 vulnerabilities, six of which were critical, were found in a single Lovable-hosted app that leaked more than 18,000 people's data.
All apps that are vibe-coded on Lovable's platform are shipped with their backends powered by Supabase, which handles authentication, file storage, and real-time updates through a PostgreSQL database connection. However, when the developer – in this case AI – or the human project owner fails to explicitly implement crucial security features like Supabase's row-level security and role-based access, code will be generated that looks functional but in reality is flawed.
The AI that vibe-coded the Supabase backend, which uses remote procedure calls, implemented it with flawed access control logic, essentially blocking authenticated users and allowing access to unauthenticated users. The intent was to block non-admins from accessing parts of the app, but the faulty implementation blocked all logged-in users.
This is backwards. The guard blocks the people it should allow and allows the people it should block. A classic logic inversion that a human security reviewer would catch in seconds – but an AI code generator, optimizing for 'code that works,' produced and deployed to production.
Marquis sues SonicWall over backup breach that led to ransomware attack
Marquis Software Solutions has filed a lawsuit against SonicWall, accusing the cybersecurity company of gross negligence and misrepresentation that allegedly led to a ransomware attack disrupting operations at 74 U.S. banks. Marquis notes that it is now defending more than 36 consumer class action lawsuits stemming from the ransomware attack it suffered.
Marquis officially accused SonicWall of security failures after determining that the hackers had not exploited an unpatched flaw in its firewall, as previously assumed. Instead, it was discovered that the attacker leveraged configuration data extracted from the vendor’s cloud backup infrastructure.
The cause of the breach was a security gap that SonicWall introduced in its MySonicWall cloud backup service via an API code change in February 2025.
The vulnerability allowed unauthorized access to firewall configuration backup files stored in SonicWall’s cloud, which contain AES-256 encrypted credentials, configuration data, and MFA scratch codes.
The cybersecurity vendor disclosed the incident only three weeks later and initially estimated it impacted 5% of its customer base, but later confirmed that all clients were impacted.
Anthropic Claude collaboration tools left the door wide open to remote code execution
The ability to execute arbitrary commands through repository-controlled configuration files created severe supply chain risks, where a single malicious commit could compromise any developer working with the affected repository.
The AI coding tool enables this by embedding project-level configuration files (.claude/settings.json file) directly within repositories, so that when a developer clones a project, they automatically apply the same settings used by their teammates.
The first of the three flaws involved abusing Claude's Hooks feature to achieve remote code execution (RCE). Because Hooks are defined in .claude/settings.json, the repository-controlled configuration file, anyone with commit access can define hooks that will execute shell commands on every other collaborator's machine when they work on the project. Plus, Claude doesn't require any explicit approval before executing these commands – so the researchers abused this mechanism to open a calculator app when someone opened the project.
The second RCE vulnerability abuses external tools using Model Context Protocol (MCP) and MCP servers which be configured in the same repository via .mcp.json configuration file.
Attackers can exploit the third flaw for API key theft. One variable, ANTHROPIC_BASE_URL, controlled the endpoint for all Claude API communications, and while it's supposed to point to Anthropic's servers, it can be overridden in the project's configuration files to instead point to attacker-controlled servers. a miscreant using a stolen API key could gain complete read and write access to all workspace files: deleting or changing sensitive files or even uploading malicious files to poison the workspace or exceed the 100 GB storage space quota.
Previously harmless Google API keys now expose Gemini AI data
Google API keys for services like Maps embedded in accessible client-side code could be used to authenticate to the Gemini AI assistant and access private data.
Researchers found nearly 3,000 such keys while scanning internet pages from organizations in various sectors, and even from Google.
The problem occurred when Google introduced its Gemini assistant, and developers started enabling the LLM API in projects. Before this, Google Cloud API keys were not considered sensitive data and could be exposed online without risk.
These API keys have been sitting exposed in public JavaScript code for years, and now they have suddenly gained more dangerous privileges without anyone noticing.
Attackers could copy the API key from a website's page source and access private data available through the Gemini API service. Depending on the model and context window, a threat actor maxing out API calls could generate thousands of dollars in charges per day on a single victim account.
Developers should check whether Gemini (Generative Language API) is enabled on their projects and audit all API keys in their environment to determine if any are publicly exposed, and rotate them immediately.
Go library maintainer brands GitHub's Dependabot a 'noise machine'
A Go library maintainer has urged developers to turn off GitHub's Dependabot, arguing that false positives from the dependency-scanning tool "reduce security by causing alert fatigue."
Last week, he published a security fix for one of the libraries he maintains. As a result, Dependabot opened thousands of PRs [pull requests] against unaffected repositories.
The automated process also generated a nonsensical made up CVSS [Common Vulnerability Scoring System] v4 score and warned developers of a 73% compatibility score, implying a 27% chance of breaking code, even though the fix was one line in the method no one uses.
Despite its noise, Dependabot is also insufficient, because a real vulnerability should be assessed for its impact: production might need to be updated, secrets rotated, users notified. If developers rely on Dependabot to manage dependency vulnerabilities, they are not doing enough.
He also dislikes another feature of Dependabot: keeping dependencies up to date with the latest versions. Dependencies should be updated according to the project's development cycle, not whenever a new version of a package appears. Updating quickly also carries some risk if malicious code has been added to a package. He recommends testing updated packages in a sandboxed continuous integration process to discover any problems without updating production code.
Microsoft Added AI to Notepad and It Created a Security Failure Because the AI Was Stupidly Easy for Hackers to Trick
Microsoft executives have promised to turn the platform into an “agentic OS” to the dismay of many users, with CEO Satya Nadella boasting that much of the company’s code is now being written by AI — while condemning those who use the newly-minted pejorative “Microslop.”
According to Microsoft documentation of the bug, “improper neutralization of special elements used in a command (‘command injection’) in Windows Notepad App allows an unauthorized attacker to execute code over a network. An attacker could trick a user into clicking a malicious link inside a Markdown file opened in Notepad, causing the application to launch unverified protocols that load and execute remote files.”
While the bug was patched in Microsoft’s monthly security updates, it’s yet another instance of a tech company pushing AI features on its customers against their will — with potentially disastrous results.
What’s Weak This Week:
CVE-2026-20127 Cisco Catalyst SD-WAN Controller and Manager Authentication Bypass Vulnerability:
Could allow an unauthenticated, remote attacker to bypass authentication and obtain administrative privileges on an affected system. This vulnerability exists because the peering authentication mechanism in an affected system is not working properly. An attacker could exploit this vulnerability by sending crafted requests to an affected system. A successful exploit could allow the attacker to log in to an affected Cisco Catalyst SD-WAN Controller as an internal, high-privileged, non-root user account. Using this account, the attacker could access NETCONF, which would then allow the attacker to manipulate network configuration for the SD-WAN fabric.
Related CWE: CWE-287CVE-2022-20775 Cisco SD-WAN Path Traversal Vulnerability:
Could allow an authenticated local attacker to gain elevated privileges via improper access controls on commands within the application CLI. A successful exploit could allow the attacker to execute arbitrary commands as the root user.
Related CWEs: CWE-25| CWE-282CVE-2026-25108 Soliton Systems K.K FileZen OS Command Injection Vulnerability:
OS command injection vulnerability when an user logs-in to the affected product and sends a specially crafted HTTP request.
Related CWE: CWE-78
HACKING
Ransomware payments cratered in 2025, but attacks surged to record highs
Ransomware gangs pulled in about $820 million in 2025, 8% less than the year before, as the share of victims paying dropped to an all-time low of 28%.
The median ransom demand jumped from $12,738 in 2024 to $59,556 in 2025.
Attacks surged across multiple vectors in 2025, with 50% YoY increase in claimed ransomware victims, marking the most active year on record
More than 600 FortiGate firewalls hit in AI-augmented campaign
Cybercriminals armed with off-the-shelf generative AI tools compromised more than 600 internet-exposed FortiGate firewalls across 55 countries in just over a month.
The campaign, which ran from mid-January to mid-February, relied less on clever zero-days and more on the equivalent of trying every digital door handle – just at machine speed, with AI lending a hand behind the scenes.
Once the firewall was cracked, the attackers pulled configuration files containing administrator and VPN credentials, network topology details, and firewall rules. From there, they moved deeper into environments, going after Active Directory, dumping credentials, and probing for ways to move laterally. Backup systems, including Veeam servers, were also on the shopping list.
Basic hygiene – keeping management interfaces off the public internet, enforcing multi-factor authentication, and not recycling passwords – would have shut down much of the activity before it got going.
Wynn Resorts confirms employee data breach after extortion threat
Wynn Resorts did not answer questions about whether a ransom was paid or how many people were affected.
The threat actors previously claim to have stolen the data from the company's Oracle PeopleSoft environment.
"The unauthorized third party has stated that the stolen data has been deleted. We are monitoring and to date have not seen any evidence that the data has been published or otherwise misused." The company added that the incident did not impact guest operations or its physical properties, which remain fully operational, and that it is offering complimentary credit monitoring and identity protection services to employees.
1Campaign platform helps malicious Google ads evade detection
1Campaign is a cloaking service that passes Google’s screening process and shows malicious content only to real potential victims. Security researchers and automated scanners are served benign white pages. The operation has been active for at least three years.
AI Claude didn't just plan an attack on Mexico's government. It executed one for a month — across four domains your security stack can't see.
Attackers jailbroke Anthropic’s Claude and ran it against multiple Mexican government agencies for approximately a month. They stole 150 GB of data from Mexico’s federal tax authority, the national electoral institute, four state governments, Mexico City’s civil registry, and Monterrey’s water utility. The haul included documents related to 195 million taxpayer records, voter records, government employee credentials, and civil registry files.
The attackers created a series of prompts telling Claude to act as an elite penetration tester running a bug bounty. Claude initially pushed back and refused. When they added rules about deleting logs and command history, Claude pushed back harder.
The hacker quit negotiating with Claude and took a different approach: handing Claude a detailed playbook instead. That got past the guardrails. In total, it produced thousands of detailed reports that included ready-to-execute plans, telling the human operator exactly which internal targets to attack next and what credentials to use. When Claude hit a wall, the attackers pivoted to OpenAI’s ChatGPT for advice on achieving lateral movement and streamlining credential mapping. Predictable in any breach that’s getting this far, the attackers kept asking Claude where else to find government identities, what other systems to target, and where else the data might live.
New AirSnitch attack bypasses Wi-Fi encryption in homes, offices, and enterprises
Unlike previous Wi-Fi attacks, AirSnitch exploits core features in Layers 1 and 2 and the failure to bind and synchronize a client across these and higher layers, other nodes, and other network names such as SSIDs (Service Set Identifiers). This cross-layer identity desynchronization is the key driver of AirSnitch attacks.
Even if an attacker doesn’t have access to a specific SSID, they may still use AirSnitch if they have access to other SSIDs or BSSIDs that use the same AP or other connecting infrastructure.
Various forms of AirSnitch work across a broad range of routers, including those from Netgear, D-Link, Ubiquiti, Cisco, and those running DD-WRT and OpenWrt.
It will be interesting to see if the wireless vendors care enough to resolve these issues completely and if attackers care enough to put all of this together when there might be easier things to do (like run a fake AP instead). At the least it should make pentesters’ lives more interesting since it re-opens a lot of exposure that many folks may not have any experience with.
Who is the Kimwolf Botmaster “Dort”?
In early January 2026, KrebsOnSecurity revealed how a security researcher disclosed a vulnerability that was used to build Kimwolf, the world’s largest and most disruptive botnet. Since then, the person in control of Kimwolf — who goes by the handle “Dort” — has coordinated a barrage of distributed denial-of-service (DDoS), doxing and email flooding attacks against the researcher and this author, and more recently caused a SWAT team to be sent to the researcher’s home. This post examines what is knowable about Dort based on public information.
APPSEC, DEVSECOPS, DEV
The NIST OSCAL Framework for State and Local Governments
Developed by the National Institute of Standards and Technology (NIST), OSCAL provides a standardized, machine-readable approach to security documentation. Instead of static files that grow outdated the moment they’re saved, OSCAL turns compliance artifacts into structured data that can be reused, validated and automated.
Open Security Controls Assessment Language (OSCAL) is not a new security framework or set of controls. Rather, it is a common language for describing security controls, implementations and assessment results in a machine-readable format. OSCAL uses structured data formats such as JSON, XML and YAML so that software tools — not just human reviewers — can process compliance information.
NIST Celebrating Two Years of CSF 2.0!
Published in 2024, the CSF 2.0 included the addition of a Govern Function, increased emphasis on cybersecurity supply chain risk management, updated categories and subcategories to address current threat and technology shifts, and expansion into a suite of resources designed to make the CSF 2.0 easier to consume and put into practice—enabling organizations to better manage and reduce their cybersecurity risk.
We expanded the focus on cybersecurity governance to highlight the importance of ensuring cybersecurity capabilities support the broader mission through Enterprise Risk Management (ERM). The NIST IR 8286 series, which was updated in 2025 to align more closely with the CSF 2.0 and other updated NIST guidance, helps practitioners better understand the close relationship between cybersecurity and ERM.
OWASP Smart Contract Top 10 2026
Web3 smart contracts are self-executing, immutable computer programs stored on a blockchain (typically EVM-compatible) that automatically enforce agreements without intermediaries. Acting as the backend logic for decentralized applications (dApps), they manage tokens, NFTs, and DAO governance.
The OWASP Smart Contract Top 10 : 2026 is a standard awareness document that aims to provide Web3 developers and security teams with insights into the top 10 vulnerabilities found in smart contracts. It is a sub‑project of the broader OWASP Smart Contract Security (OWASP SCS) initiative.
The Smart Contract Top 10 can be used alongside other OWASP SCS projects to ensure comprehensive risk coverage:
OWASP SC Weakness Enumeration (SCWE)
OWASP SCS Checklist
OWASP Top 15: Web3 Attack Vectors (Beyond Smart Contracts)
OWASP SC Top 10 Live Site (2026)
Google quantum-proofs HTTPS by squeezing 2.5kB of data into 64-byte space
Merkle Tree Certificate support is already in Chrome. Soon, it will be everywhere.
Today’s X.509 certificates are about 64 bytes in size, and comprise six elliptic curve signatures and two EC public keys. This material can be cracked through the quantum-enabled Shor’s algorithm. Certificates containing the equivalent quantum-resistant cryptographic material are roughly 2.5 kilobytes. The bigger you make the certificate, the slower the handshake.
To bypass the bottleneck, companies are turning to Merkle Trees, a data structure that uses cryptographic hashes and other math to verify the contents of large amounts of information using a small fraction of material used in more traditional verification processes in public key infrastructure.
To rule out use of Shor’s algorithm to forge signatures and break public keys, Google is adding cryptographic material from quantum-resistant algorithms such as ML-DSA. This addition would allow forgeries only if an attacker were to break both classical and post-quantum encryption. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022.
The new system has already been implemented in Chrome. For the time being, Cloudflare is enrolling roughly 1,000 TLS certificates to test how well the MTCs work. For now, Cloudflare is generating the distributed ledger. The plan is for CAs to eventually fill that role.
Rapid AI-driven development makes security unattainable
Security debt is "known vulnerabilities left unresolved for more than a year" and now affects 82% of companies, up from 74% a year ago. High-risk vulnerabilities, meaning flaws that are both severe and likely to be exploited, have risen from 8.3% to 11.3%.
The accelerating pace of software releases causing new code to be added more quickly than existing vulnerabilities are addressed. Growing technical complexity, attributed to more AI-generated code, makes remediation more difficult.
AI has gotten good at finding bugs, not so good at swatting them
Anthropic last week talked up Claude Code's improved ability to find software vulnerabilities and propose patches. To highlight Claude Code Security's bug hunting potential, the company pointed to how its red team had used Claude Opus 4.6 to find "over 500 vulnerabilities in production open-source codebases."
Out of the 500 vulnerabilities that they reported, only two to three vulnerabilities were fixed.
The absence of Common Vulnerabilities and Exposures (CVE) assignments is evidence that the security process remains incomplete. Finding vulnerabilities was never the issue. The harder part is everything that happens after.
AIs can generate near-verbatim copies of novels from training data
A study published last month showed that researchers at Stanford and Yale Universities were able to strategically prompt LLMs from OpenAI, Google, Anthropic, and xAI to generate thousands of words from 13 books, including A Game of Thrones, The Hunger Games, and The Hobbit.
By asking models to complete sentences from a book, Gemini 2.5 regurgitated 76.8 percent of Harry Potter and the Philosopher’s Stone with high levels of accuracy, while Grok 3 generated 70.3 percent.
They were also able to extract almost the entirety of the novel “near-verbatim” from Anthropic’s Claude 3.7.
[rG: Most organizations wouldn’t be concerned about this for RAG implementations; however, are legal and competitive exposure risks from AI agents processing sensitive information.]
The OpenClaw Hype: Analysis of Chatter from Open-Source Deep and Dark Web
OpenClaw sits at the intersection of three major trends:
Agentic automation platforms
Plugin marketplace trust models
AI-assisted workflow execution
The security community is talking about OpenClaw more than threat actors are currently exploiting it. Having said that, this is not a reason to ignore it. Historically, this phase often precedes real weaponization by weeks or months.
Automation platforms with plugin ecosystems are becoming high-value targets long before organizations realize they have deployed them at scale.
Securing AI in mobile networks: 10 key considerations for telcos
The evolution of Agentic AI introduces an expanded threat surface with a shift from static to dynamic risks. AI Agents are the new “Insiders”, with a defined persona that decides and acts autonomously. AI Agents inherit the risks of PredAI and GenAI, including Evasion, Poisoning, Inference, Privacy, and Supply Chain attacks, discussed in the next section, and introduce new threats inherent to Multi-Agent systems that can propagate faults and compound exploits undetected by humans due to their speed and scale. The continuous communication between AI Agents and external tools can be exploited for confidentiality, integrity, and availability attacks. AI Agents also introduce new supply chain risks due to the dynamically sourced components introduced at run time.
Data Center Security Standards for AI: A Gap Analysis
Data centers used in the pursuit of training next-generation frontier AI models are a distinct class of infrastructure, one that is characterized by an unprecedented concentration of value. This value is represented in the investment in infrastructure and the data assets they contain: model weights, proprietary algorithms, and training data.
Why application security must start at the load balancer
Most breaches don’t outsmart your stack; they walk through a permissive load balancer you tuned for speed instead of trust.
What makes the OWASP Automated Threats guide particularly valuable is its focus on scale rather than sophistication. Most automated attacks do not rely on novel exploits. They succeed because they generate high volumes of traffic that look superficially legitimate.
Today, when I design systems, the first question I ask isn’t “How fast is it?” but “How much do I trust what enters here?”
I treat the load balancer as a policy enforcement point for encryption, identity, protocol correctness, and abuse prevention. It becomes the first checkpoint in a zero trust path, not just a distributor of packets.
VENDORS & PLATFORMS
Microsoft introduces new security tool for IT admins managing AI infrastructure
Microsoft hopes that this dashboard will reduce fragmentation in the cybersecurity landscape. Customers who have enterprise licenses for Defender, Entra, or Purview do not need to pay extra in order to gain access to the public preview of Security Dashboard for AI.
GitHub Code Quality: Organization-level dashboard in public preview
GitHub Code Quality now includes an organization-level dashboard in public preview. It gives organization owners, administrators, and developers a view of code health across repositories where code quality is enabled.
xAI spent $7M building wall that barely muffles annoying power plant noise
For miles around xAI’s makeshift power plant in Southaven, Mississippi, neighbors have endured months of constant roaring, erupting pops, and bursts of high-pitched whining from 27 temporary gas turbines operating day and night.
Eventually, 41 permanent gas turbines—that supposedly won’t be as noisy—will be installed, if xAI can secure the permitting. In the meantime, xAI has erected a $7 million “sound barrier” that’s supposed to mitigate some of the noise.
Neighbors jokingly call it the “Temu sound wall.” The wall has not helped to calm local dogs, which have been unsettled by sudden booms and squeals that videos show can frequently be heard amid the turbines’ continual jet engine-like hum.
One noise analysis the coalition shared found that the daily sound of the turbines was higher on an “annoyance scale” than when entire neighborhoods set off New Year’s Eve fireworks.
LEGAL & REGULATORY
Ex-L3Harris exec jailed 7 years for selling exploits to Russia
Peter Williams, 39, was sentenced to 87 months in prison for selling cyber tools while in a senior position at Trenchant, a subdivision of L3Harris and a major US defense contractor.
The Australian admitted to stealing eight exploits over a three-year period that should have been provided exclusively to the US and selling them to a Russian bidder.
He said he sold the exploits for up to $4 million "via encrypted means" in exchange for cryptocurrency, which he then used to buy luxury items such as jewelry, properties, and vacations.
His actions led to a $35 million loss to the US and its geopolitical allies, and harmed the intelligence communities of the US and Australia.
US blacklists Anthropic as AI firm refuses Pentagon demands
Anthropic, which signed a $200 million contract with the Pentagon in July, wanted assurances that its AI models would not be used for fully autonomous weapons or mass domestic surveillance of Americans.
The Pentagon, which strongly resisted that request, set a deadline of 5:01 p.m. ET Friday for Anthropic to agree to its demands that the U.S. military be allowed to use the technology for all lawful purposes.
When the deadline passed, president Donald Trump posted, “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.”
Trump said Friday that he was ordering every U.S. government agency to “immediately cease” using technology from the artificial intelligence company Anthropic.
Defense Secretary Pete Hegseth, soon after Trump’s order, said on X that he was ordering the Pentagon to “designate Anthropic a Supply-Chain Risk to National Security” after the AI startup refused to comply with demands about the use of its technology.
On Friday, another major AI company, OpenAI, said it has the same “red lines” as Anthropic regarding the use of its technology by the Pentagon and other customers, but reached an agreement with the Defense Department.
OpenAI’s contract is for AI models in non-classified use cases, which include everyday office tasks.
Anthropic’s contract with the Defense Department included classified work.
[rG: This could affect Anthropic suppliers and customers who do business with the Pentagon. So DoD contractors will need to evaluate their use of Anthropic products.]
UK data watchdog fines Reddit £14.47M ($20M) for letting kids slip past the gate
The regulator claims that before January 2025, Reddit had not carried out a data protection impact assessment (DPIA) on the risks of using children's data, despite having users between the ages of 13 and 18 on the site. DPIA is a mandatory process that must be completed in order for any organization to comply with European data protection laws (including UK GDPR).
A spokesperson said: "Reddit doesn't require users to share information about their identities, regardless of age, because we are deeply committed to their privacy and safety. The ICO's insistence that we collect more private information on every UK user is counterintuitive and at odds with our strong belief in our users' online privacy and safety. We intend to appeal the ICO's decision."
[rG FAFO: The argument is that they are a sovereign entity??]
