- Robert Grupe's AppSecNewsBits
- Posts
- Robert Grupe's AppSecNewsBits & AI 2024-10-05
Robert Grupe's AppSecNewsBits & AI 2024-10-05
This week’s news roundup newsletter: Apple VoiceOver fail, AI LLM fails, email cc attacks; largest DDoS attack, Linux vulns, certificate management, etc.
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Apple fixes bug that let VoiceOver shout your passwords
Apple just fixed a duo of security bugs in iOS 18.0.1 and iPadOS 18.0.1, one of which might cause users' saved passwords to be read aloud. It's hardly an ideal situation for the visually impaired.
In typical Apple fashion, the company hasn't released much in the way of details about the first security issue, tracked as CVE-2024-44204, which makes it tougher to understand the conditions under which this vulnerability could be triggered, or how to avoid it until the update is applied. What we do know is that it was characterized as a logic issue, which Apple rectified by improving validation.
Also included in the 18.0.1 update is a fix for another audio-based bug. CVE-2024-44207 only affects iPhone 16 which may in some cases capture a few seconds of audio before that the orange indicator is displayed.
Systems used by courts and governments across the US riddled with vulnerabilities
One flaw uncovered in the voter registration cancellation portal for the state of Georgia, for instance, allowed anyone visiting it to cancel the registration of any voter in that state when the visitor knew the name, birthdate, and county of residence of the voter. In another case, document management systems used in local courthouses across the country contained multiple flaws that allowed unauthorized people to access sensitive filings such as psychiatric evaluations that were under seal. And in one case, unauthorized people could assign themselves privileges that are supposed to be available only to clerks of the court and, from there, create, delete, or modify filings.
The number of vulnerabilities—mostly stemming from weak permission controls, poor validation of user inputs, and faulty authentication processes—demonstrate a lack of due care in ensuring the trustworthiness of the systems.
Regular security audits and penetration testing should be standard practice, not an afterthought, and following the principles of Secure by Design should be an integral part of any Software Development Lifecycle.
Man tricks OpenAI’s voice bot into duet of The Beatles’ “Eleanor Rigby”
Normally, when you ask AVM (Advanced Voice Mode ) to sing, it will reply something like, "My guidelines won’t let me talk about that."
OpenAI possibly added this restriction because AVM may otherwise reproduce copyrighted content, such as songs that were found in the training data used to create the AI model itself. That's what is happening here to a limited extent, so in a sense, Smith has discovered a form of what researchers call a "prompt injection," which is a way of convincing an AI model to produce outputs that go against its system instructions.
"I just said we’d play a game. I’d play the four pop chords and it would shout out songs for me to sing along with those chords," Smith told us. "Which did work pretty well! But after a couple songs it started to sing along. Already it was such a unique experience, but that really took it to the next level."
AVM's voice is a little quavery and not pitch-perfect, but it appears to know something about "Eleanor Rigby's" melody when it first sings, "Ah, look at all the lonely people." After that, it seems to be guessing at the melody and rhythm as it recites song lyrics.
AI code helpers just can't stop inventing package names
Large language models (LLMs), when generating sample source code, will sometimes invent names of software package dependencies that don't exist. That's scary, because criminals could easily create a package that uses a name produced by common AI services and cram it full of malware. Then they just have to wait for a hapless developer to accept an AI's suggestion to use a poisoned package that incorporates a co-opted, corrupted dependency.
In a preprint paper titled "We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs," the authors explain that hallucinations are one of the unresolved shortcomings of LLMs.
They used 16 popular LLMs, both commercial and open source, to generate 576,000 code samples in JavaScript and Python, which rely respectively on the npm and PyPI package repositories.
Findings reveal that the average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open source models, including a staggering 205,474 unique examples of hallucinated package names. The 30 tests run from the set of research prompts resulted in 2.23 million packages being created – about 20% of which (440,445) were determined to be hallucinations. Of those, 205,474 were unique non-existent packages that could not be found in PyPI or npm.
While the larger models – those shaped with fine-tuning and more parameters – are more accurate in their answers, they are less reliable. That's because the smaller models will avoid responding to some prompts they can't answer, whereas the larger models are more likely to provide a plausible but wrong answer.
Attackers exploit critical Zimbra vulnerability using cc’d email addresses
When an admin manually changes default settings to enable the postjournal service, attackers can execute commands by sending maliciously formed emails to an address hosted on the server.
In the span of about an hour, a honey pot server received roughly 500 requests. The payload isn’t delivered through emails directly, but rather through a direct connection to the malicious server through SMTP.
[rG: Illustrating the importance of performing SSDLC Design Threat Assessment evaluating all application communication pathways along with backend processing capabilities misuse.]
HACKING
A Single Cloud Compromise Can Feed an Army of AI Sex Bots
Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: Cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. These illicit chat bots, which use custom jailbreaks to bypass content filtering. Investigating the abuse of AWS accounts for several organizations, researchers found attackers had seized on stolen AWS credentials to interact with the large language models (LLMs) available on Bedrock. But they also soon discovered none of these AWS users had enabled full logging of LLM activity (by default, logs don’t include model prompts and outputs), and thus they lacked any visibility into what attackers were doing with that access.
Researchers decided to leak their own test AWS key on GitHub, while turning on logging so that they could see exactly what an attacker might ask for, and what the responses might be. Within minutes, their bait key was scooped up and used in a service that offers AI-powered sex chats online.
FIN7 hackers launch deepfake nude “generator” sites to spread malware
FIN7 directly operated sites like "aiNude[.]ai", "easynude[.]website", and nude-ai[.]pro," which offered "free trials" or "free downloads," but in reality just spread malware.
The fake websites allow users to upload photos that they would like to create deepfake nudes. However, after the alleged "deepnude" is made, it is not displayed on the screen. Instead, the user is prompted to click a link to download the generated image. Doing so will bring the user to another site that displays a password and a link for a password-protected archive hosted on Dropbox. However, instead of a deepnude image, the archive archive contains the Lumma Stealer information-stealing malware. When executed, the malware will steal credentials and cookies saved in web browsers, cryptocurrency wallets, and other data from the computer.
Crook made millions by breaking into execs’ Office365 inboxes
Federal prosecutors have charged a man for an alleged “hack-to-trade” scheme that earned him millions of dollars by breaking into the Office365 accounts of executives at publicly traded companies and obtaining quarterly financial reports before they were released publicly.
The attacker pulled off the breaches by abusing the password reset mechanism Microsoft offered for Office365 accounts. In some cases, he allegedly went on to create forwarding rules that automatically sent all incoming emails to an email address he controlled. Once a person gains unauthorized access to an email account, it’s possible to conceal the breach by disabling or deleting password reset alerts and burying password reset rules deep inside account settings.
Prosecutors didn’t say how the defendant managed to abuse the reset feature. Typically such mechanisms require control of a cell phone or registered email account belonging to the account holder. In 2019 and 2020 many online services would also allow users to reset passwords by answering security questions. The practice is still in use today but has been slowly falling out of favor as the risks have come to be more widely understood.
Cloudflare blocks largest recorded DDoS attack peaking at 3.8Tbps
The assault consisted of a “month-long” barrage of more than 100 hyper-volumetric DDoS attacks flooding the network infrastructure with garbage data. In a volumetric DDoS attack, the target is overwhelmed with large amounts of data to the point that they consume the bandwidth or exhaust the resources of applications and devices, leaving legitimate users with no access.
The threat actor behind the campaign leveraged multiple types of compromised devices, which included a large number of Asus home routers, Mikrotik systems, DVRs, and web servers.
Researchers say that the network of malicious devices used mainly the User Datagram Protocol (UDP) on a fixed port, a protocol with fast data transfers but which does not require establishing a formal connection.
The recently disclosed CUPS vulnerabilities in Linux could be a viable vector for DDoS attacks. After scanning the public internet for systems vulnerable to CUPS, Akamai found that more than 58,000 were exposed to DDoS attacks from exploiting the Linux security issue. More testing revealed that hundreds of vulnerable “CUPS servers will beacon back repeatedly after receiving the initial requests, with some of them appearing to do it endlessly in response to HTTP/404 responses.”
Perfctl: Thousands of Linux systems infected by stealthy malware since 2021
The malware has been circulating since at least 2021. It gets installed by exploiting more than 20,000 common misconfigurations, a capability that may make millions of machines connected to the Internet potential targets, researchers from Aqua Security said. It can also exploit CVE-2023-33246, a vulnerability with a severity rating of 10 out of 10 that was patched last year in Apache RocketMQ, a messaging and streaming platform that’s found on many Linux machines.
Perfctl, the name of a malicious component that surreptitiously mines cryptocurrency. The unknown developers of the malware gave the process a name that combines the perf Linux monitoring tool and ctl, an abbreviation commonly used with command line tools. A signature characteristic of Perfctl is its use of process and file names that are identical or similar to those commonly found in Linux environments. The naming convention is one of the many ways the malware attempts to escape notice of infected users.
The feds still can’t get into Eric Adams’ phone
New York City Mayor Eric Adams, who was indicted last week on charges including fraud, bribery, and soliciting donations from foreign nationals, told federal investigators he forgot his phone password before handing it over. The indictment does not mention what type of device Adams uses.
Several courts have ruled that, even in instances where police have a warrant to search someone’s phone, the Fifth Amendment right against self-incrimination means investigators can’t compel a suspect to tell them their phone password. Phone passcodes are often considered a form of “testimonial” evidence because they require a person to reveal their thoughts. But if Face or Touch ID had been enabled on Adams’ device, the FBI potentially could have unlocked his phone with biometrics — which aren’t typically considered a form of testimonial evidence.
The FBI may be able to get into Adams’ phone without his passcode or thumbprint — they just need the right tools. After investigators at the FBI’s Pittsburgh field office failed to break into the Trump rally shooter’s phone, they sent the device over to the FBI lab in Quantico, Virginia, where agents cracked it in less than an hour. The investigators at Quantico reportedly used an unreleased tool from the Israeli mobile forensics company Cellebrite to unlock the shooter’s phone.
APPSEC, DEVSECOPS, DEV
CIOs listen up: either plan to manage fast-changing certificates, or fade away
The combination of identity sprawl and poor identity hygiene creates vulnerabilities. Certificate issues already routinely cause outages and security problems inside organizations, often because expiry notifications go unnoticed, even inside technology providers as large as Microsoft. These issues also give attackers access to cloud infrastructure, enterprise workloads, and the software supply chain when credentials are poorly managed and secured.
PKI and cryptography have always been very low-level, in the weeds but foundational for security even though CIOs probably haven’t paid much attention to it. Now it’s much more in the spotlight as you’ve got machine identity management, non-human identity management, and post quantum cryptography all becoming hot button items that are going to impact security and compliance across the organization.
Earlier this year Google and Yahoo, soon to be joined by Microsoft, began requiring any organization who’s ever sent out 5,000 messages in a single day to use SPF, DKIM, and DMARC email authentication, as this will affect password resets, shipping notifications, and purchase receipt emails going to consumer email addresses, as well as newsletter and marketing material sent directly or via email providers like Mailchimp. And while consumer email providers are starting with enforcement against bulk senders that still allow them to continue with lax DMARC policies, strong email authentication protections are relevant to every organization, and may well be enforced more stringently and more broadly in the future.
But certificates for email are only a small part of the problem. Thanks to the adoption of complex infrastructure like IIoT, JSON Web Tokens, and Kubernetes, among others, organizations already have in use hundreds of thousands of machine identities secured by SSL/TLS certificates, with lifespans ranging from years to minutes.
Cyber resilience becoming extremely difficult amid Gen AI upgrades
2% of enterprises said they have effective cyber resilience actions across the organization, while the rest cited AI complexity as major challenge.
13% of the organizations pointed out a gap in confidence between their CISOs, CSOs, and CEOs regarding compliance with AI and resilience regulations, painting a rather grim picture of the likelihood of their collective response to it.
49% CISOs are involved to a large extent in key business activities, which means fewer cybersecurity-driven business decisions.
15% were able to measure the financial impact of cybersecurity risks to a significant extent. Nevertheless, organizations are quickly waking up to this potential catastrophe and bringing in steps to prevent oversight.
72% have increased their risk management investment in AI governance.
96% said their cybersecurity regulations have spurred them to increase their cyber investment in the last 12 months, and
78% believe doing so has improved their cyber security posture.
JailbreakBench: An Open Sourced Benchmark for Jailbreaking Large Language Models (LLMs)
Large Language Models (LLMs) are vulnerable to jailbreak attacks, which can generate offensive, immoral, or otherwise improper information. By taking advantage of LLM flaws, these attacks go beyond the safety precautions meant to prevent offensive or hazardous outputs from being generated.
A team of researchers from the University of Pennsylvania, ETH Zurich, EPFL, and Sony AI has developed an open-source benchmark called JailbreakBench to standardize the assessment of jailbreak attempts and defenses. The goal of JailbreakBench is to offer a thorough, approachable, and repeatable paradigm for assessing the security of LLMs. There are four main parts to it, which are as follows.
1. Collection of Adversarial Prompts
2. Dataset for Jailbreaking
3. Standardized Assessment Framework: consists of scoring functions, system prompts, chat templates, and a thoroughly described threat model. By standardizing these components, JailbreakBench facilitates consistent and comparable evaluation across many models, attacks, and defenses.
4. Leaderboard: to promote competitiveness and increase transparency within the research community.
OMB Issues Guidance to Advance the Responsible Acquisition of AI in Government
Office of Management and Budget (OMB) released the Advancing the Responsible Acquisition of Artificial Intelligence in Government memorandum (M-24-18). Successful use of commercially-provided AI requires responsible procurement of AI. This new memo ensures that when Federal agencies acquire AI, they appropriately manage risks and performance; promote a competitive marketplace; and implement structures to govern and manage their business processes related to acquiring AI.
Remote ID verification tech is often biased, bungling, and no good on its own
Remote identity verification technology involves submitting a photo ID, a selfie and/or other forms of identification to verify that a person signing up for a new account, or trying to access an existing one, actually is who they claim to be. One of the products barely merits the term "functional," as it had a false negative rate of around 50 percent, and even the best performer still failed 10 percent of the time.
Some customers are enamored with the IAL2 workflow - which I think is great, but we've gotta go further.
AI tools for software engineers, but without the hype – with Simon Willison (co-creator of Django)
Ways to use LLMs efficiently, as a software engineer, common misconceptions about them, and tips/hacks to better interact with GenAI tools.
If you are not using LLMs for your software engineering workflow, you are falling behind. So use them!
It takes a ton of effort to learn how to use these tools efficiently.
Use local models to learn more about LLMs
LEGAL & REGULATORY
Police arrest four suspects linked to LockBit ransomware gang
This joint action also led to seizures of LockBit infrastructure servers and involved police officers in Operation Cronos, a task force led by the U.K. National Crime Agency (NCA) behind a global LockBit crackdown and an investigation that began in April 2022.
A suspected LockBit ransomware developer was arrested in August 2024 at the request of French authorities while on holiday outside of Russia.
The same month, the U.K.'s National Crime Agency (NCA) arrested two more individuals linked to LockBit activity: one believed to be associated with a LockBit affiliate, while the second was apprehended on suspicion of money laundering.
In a separate action, at Madrid airport, Spain's Guardia Civil arrested the administrator of a bulletproof hosting service used to shield LockBit's infrastructure.
Russia arrests US-sanctioned Cryptex founder, 95 other linked suspects
Following 148 raids, 96 individuals were arrested and charged with organizing and participating in a criminal organization, unlawful access to computer information, illegal payment processing, and illegal banking activities. The accomplices carried out illegal activities in exchanging currencies and cryptocurrencies, delivering and accepting cash, and selling bank cards and personal accounts. In 2023 alone, the criminal network's services processed over 112 billion rubles (just over $1.1 billion), generating 3.7 billion rubles (around $38.7 million) in illicit income
California Passes New Generative Artificial Intelligence Law Requiring Disclosure of Training Data
Companies that integrate their AI offerings with a foundation model should consider the impact of this new law because it could apply to developers that fine-tune or retrain AI systems or services.
On September 28, 2024, Governor Gavin Newsom signed into law AB 2013, which is a generative artificial intelligence (“AI”) law that requires developers to post information on their websites regarding the data used to train their AI systems. The law applies to generative AI released on or after January 1, 2022, and developers must comply with its provisions by January 1, 2026.
The law applies to AI developers, which is defined broadly to mean any person, government agency, or entity that either develops an AI system or service or “substantially modifies it,” which means creating “a new version, new release, or other update to a generative artificial intelligence system or service that materially changes its functionality or performance, including the results of retraining or fine tuning.”
The law also adopts a common definition for AI that we have seen under other laws, such as the EU AI Act, Colorado’s AI law, and the recently passed California AI Transparency Act. AI is “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”
Gov. Newsom vetoes California’s controversial AI bill, SB 1047
The bill would have made companies that develop AI models liable for implementing safety protocols to prevent “critical harms.” The rules would only have applied to models that cost at least $100 million and use 10^26 FLOPS (floating point operations, a measure of computation) during training.
In the same announcement, Newsom’s office noted that he’s signed 17 bills (18, by our count) around the regulation and deployment of AI technology in the last 30 days.
Laws, Guidance, and Recommendations: What to Consider When Using AI to Hire Employees in Italy
Directive (EU) 2019/1152,9 also known as the Directive on Transparent and Predictable Working Conditions requires employers to provide their employees, job applicants, any trade union representatives within the company, and, if no such trade union representatives exist, the territorial trade unions’ bodies with information about:
What aspects of the employment relationship may be affected by AI.
The purpose and operation of the AI tools in use.
What data and parameters are used to train the AI.
What control measures, corrective measures, quality control systems, and cybersecurity tools are in use.
Also Article 22 of the General Data Protection Regulation (GDPR),11 including by performing a risk analysis and an impact assessment of the processing activities carried out.
Why JPMorgan Chase is prepared to sue the U.S. government over Zelle scams
The lender disclosed that the Consumer Financial Protection Bureau could punish JPMorgan for its role in Zelle, the giant peer-to-peer digital payments network. The bank is accused of failing to kick criminal accounts off its platform and failing to compensate some scam victims.
Of the $806 billion that flowed across the network last year, only $166 million in transactions was disputed as fraud by customers of JPMorgan, Bank of America and Wells Fargo, the three biggest players on the platform. But the three banks collectively reimbursed just 38% of those claims. Banks are typically on the hook to reimburse fraudulent Zelle payments that the customer didn’t give permission for, but usually don’t refund losses if the customer is duped into authorizing the payment by a scammer. Amid the scrutiny, the bank began warning Zelle users on the Chase app to “Stay safe from scams” and added disclosures that customers won’t likely be refunded for bogus transactions.
VENDORS & PLATFORMS
Windows 11 Dev build 26120.1930 is out with Copilot key remapping, Windows Sandbox, and more
Microsoft to Release Windows Keyboards with Copilot Key
It's the first major change to Windows keyboards in 30 years, signaling Microsoft's intention to double down on Copilot
Microsoft overhauls security for publishing Edge extensions
With the new Publish API, secrets are now dynamically generated API keys for each developer, reducing the risk of static credentials being exposed in code or other breaches.
These API keys will now be stored in Microsoft's databases as hashes rather than the keys themselves, further preventing possible leaking of the API keys.
To further increase security, access token URLs are generated internally and do not need to be sent by the dev when updating their extensions. This further improves security by limiting additional risks of exposing URLs that could be used to push malicious extension updates.
Finally, the new Publish API will expire API keys every 72 days, compared to its previous two years. Rotating secrets more frequently prevents continued misuse in the event that a secret is exposed.
China makes AI breakthrough, reportedly trains generative AI model across multiple data centers and GPU architectures
Chinese researchers have been working on melding GPUs from different brands into one training cluster. By doing so, the institutions could combine their limited stocks of sanctioned high-end, high-performance chips, like the Nvidia A100, with less powerful but readily available GPUs, like Huawei’s Ascend 910B or the Nvidia H20. This technique could help them combat the high-end GPU shortage within China, although it has historically come with large drops in efficiency.
The End Of The SaaS Era: Rethinking Software’s Role In Business
One of the primary criticisms leveled against SaaS is the misconception of infinite high customer lifetime values (LTVs) [rG: AKA customer lock-in]. The idea that once a customer is acquired, they will continue to pay indefinitely, has proven to be overly optimistic. In reality, businesses face continuous pricing pressures and competitive threats, making customer retention an ongoing challenge rather than a given. With the advent of artificial intelligence and more accessible development tools, creating sophisticated software solutions has become easier than ever before. This democratization of software development has led to a proliferation of options for businesses, driving down prices and reducing the perceived value of individual SaaS offerings.
[rG: This ignores SaaS incorporating AI functionality as part of their platform offerings.]
ChatGPT's 4o-mini model just got a big upgrade – here are 4 of the best new features
Image creation: can now generate images based on text prompts
Browsing: Now, you can use the model through ChatGPT to conduct online research faster than with GPT-4o. That means you can get up-to-date information instead of relying solely on its pre-trained knowledge base.
Uploading: Now, ChatGPT users can employ the model to analyze, summarize, and discuss uploaded documents and pictures.
Memory: allows it to remember previous conversations with specific users and tailor interactions accordingly
18 Generative AI Tools Transforming Customer Service
Cognigy is a generative AI platform designed to help businesses automate customer service voice and chat channels.
IBM WatsonX Assistant is a framework for building AI personal assistants that can help out with just about any business task, including delivering intelligent customer support.
Zendesk is an established leader in the field of customer support software, and it has added generative AI capabilities to its roster of services.
Ada is designed to simplify the creation of custom bots, augmented with domain or enterprise-specific data, and quickly deploy them across omnichannel customer support scenarios, improving both support center efficiency and customer experience.
Aivo automates customer service interaction across chat and social channels.
Certainly is a natural language-based generative customer support and ticketing tailored for e-commerce.
Directly is an on-demand customer support combining automated machine learning and human experts.
Forethought is a AI agent platform capable of handling complex and nuanced customer interactions.
Freshworks’ AI assistant Freddy automates customer support, generates workflows and provides natural language and personalized query resolution within the Freshdesk platform.
Gladly automate troubleshooting and answering both simple and complex queries, as well as routing to human agents when needed.
Intercom automates resolution of common customer inquiries.
LivePerson leverages AI chatbots and real-time messaging with in-depth analytics to understand how customers are using your channels better.
Netomi utilizes “sanctioned” AI to ensure generative language functions remain within brand guidelines and regulatory limitations.
Ultimate automates the resolution of repetitive support requests powered by ChatGPT.
Zoom Virtual Agent offers Customer service AI agents from the popular remote meeting and messaging app.
Google removes Kaspersky's antivirus software from Play Store
Over the weekend, Google removed Kaspersky's Android security apps from the Google Play store and disabled the Russian company's developer accounts. Users have been reporting over the last week that Kaspersky's products (including Kaspersky Endpoint Security and VPN & Antivirus by Kaspersky) are no longer available on Google Play in the United States and other world regions.
And Now For Something Completely Different …
People are using Google study software to make AI podcasts—and they’re weird and amazing
NotebookLM is a surprise hit. Here are some of the ways people are using it.
“All right, so today we are going to dive deep into some cutting-edge tech,” a chatty American male voice says. But this voice does not belong to a human. It belongs to Google’s new AI podcasting tool, called Audio Overview. The podcasting feature was launched in mid-September as part of NotebookLM, a year-old AI-powered research assistant. NotebookLM, which is powered by Google’s Gemini 1.5 model, allows people to upload content such as links, videos, PDFs, and text. They can then ask the system questions about the content, and it offers short summaries. The tool generates a podcast called Deep Dive, which features a male and a female voice discussing whatever you uploaded. The voices are breathtakingly realistic—the episodes are laced with little human-sounding phrases like “Man” and “Wow” and “Oh right” and “Hold on, let me get this right.” The “hosts” even interrupt each other.