- Robert Grupe's AppSecNewsBits
- Posts
- Robert Grupe's AppSecNewsBits 2024-03-02
Robert Grupe's AppSecNewsBits 2024-03-02
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
US prescription market hamstrung for 9 days (so far) by ransomware attack
The cyberattack against Change Healthcare that began on Feb. 21 is the most serious incident of its kind leveled against a US health care organization. UnitedHealth Group accused a notorious ransomware gang known both as AlphV and Black Cat of hacking its subsidiary Optum. Optum provides a nationwide network called Change Healthcare, which allows health care providers to manage customer payments and insurance claims. The service processes 15 billion transactions involving eligibility verifications, pharmacy operations, and claims transmittals and payments. “All of these have been disrupted to varying degrees over the past several days and the full impact is still not known.” Optum estimated that as of 2/26, more than 90 percent of roughly 70,000 pharmacies in the US had changed how they processed electronic claims as a result of the outage.
AlphV is one of many syndicates that operates under a ransomware-as-a-service model, meaning affiliates do the actual hacking of victims and then use the AlphV ransomware and infrastructure to encrypt files and negotiate a ransom. The parties then share the proceeds. In December, the FBI and its equivalent in partner countries announced they had seized much of the AlphV infrastructure in a move that was intended to disrupt the group. AlphV promptly asserted it had unseized its site, leading to a tug-of-war between law enforcement and the group. The crippling of Change Healthcare is a clear sign that AlphV continues to pose a threat to critical parts of the US infrastructure.
It’s a situation forcing many of their customers to pay out of pocket for medication. "For Medicare, a lot of them have not been super expensive. However, I do have a claim that I’m not able to process through Medicare. It’s well over $4,000." Fitzgerald is keeping track of all the claims from this week that she needs to submit, but said there’s no guarantee that all of them will go through once systems are back online. The attack even disabled coupons, making paying out of pocket that much more expensive.
UnitedHealth Group Chief Operating Officer Dirk McMahon said the company is setting up a loan program to help providers who can’t submit insurance claims while Change is offline. He said that program will last “for the next couple of weeks as this continues to go on.”
In an update Friday, Change Healthcare said it has successfully tested a new version of its “Rx ePrescribing service” with vendors and retail pharmacy partners. The service was enabled for all customers starting at 2 p.m. ET on Friday, though the company added that its existing Clinical Exchange ePrescribing providers’ tools are still not working.
UnitedHealth also launched a website on Friday with information about Change Healthcare’s response to the attack. On the site, UnitedHealth said it’s establishing a temporary funding assistance program to help providers whose payment distributions have been interrupted.
The company said the program will have no fees, interest or other associated costs, and the funds will need to be repaid when standard operations resume. The program is not meant for providers that are experiencing disruptions to their claims submissions. UnitedHealth recommends using manual workarounds for claims, and said it’s working to address the 15% of claims that workarounds cannot address.
[rG: Rebuilding and restoring data is fundamentally quite straight forward and can be done within a matter of hours by the diligent and prepared. Resetting all access credentials and performing verification test is much more complicated where there aren't existing automated processes to reset credentials and run functional, regression, and integration tests.
S-SDLC best practice is that all applications should have updated security incident response plans, which should be independently tested annually to verify credential secrets rotation, software change deployment speed, and full tests.]
A U.S. government watchdog stole more than 1GB of seemingly sensitive personal data from the cloud systems of the U.S. Department of the Interior. The good news: The data was fake and part of a series of tests to check whether the Department’s cloud infrastructure was secure. The goal of the report was to test the security of the Department of the Interior’s cloud infrastructure, as well as its “data loss prevention solution,” software that is supposed to protect the department’s most sensitive data from malicious hackers.
The OIG team used a virtual machine inside the Department’s cloud environment to imitate “a sophisticated threat actor” inside of its network, and subsequently used “well-known and widely documented techniques to exfiltrate data.”
The OIG said it conducted more than 100 tests in a week, monitoring the government department’s “computer logs and incident tracking systems in real time,” and none of its tests were detected nor prevented by the department’s cybersecurity defenses.
YX International manufactures cellular networking equipment and provides SMS text message routing services, has secured an exposed database that was spilling one-time security codes that may have granted users’ access to their Facebook, Google and TikTok accounts.
Two-factor authentication (2FA) offers greater protection against online account hijacks that rely on password theft by sending an additional code to a trusted device, such as someone’s phone. Two-factor codes and password resets, like the ones found in the exposed database, typically expire after a few minutes or once they are used. But codes sent over SMS text messages are not as secure as stronger forms of 2FA — an app-based code generator, for example — since SMS text messages are prone to interception or exposure, or in this case, leaking from a database onto the open web.
The technology company left one of its internal databases exposed to the internet without a password, allowing anyone to access the sensitive data inside using only a web browser, just with knowledge of the database’s public IP address.
Eken and Tuck, which are largely the same hardware produced by the Eken Group in China, according to CR. The cameras are further resold under at least 10 more brands. The cameras are set up through a common mobile app, Aiwit. And the cameras share something else, CR claims: "troubling security vulnerabilities."
Among the camera's vulnerabilities cited by CR:
Sending public IP addresses and Wi-Fi SSIDs (names) over the Internet without encryption
Takeover of the cameras by putting them into pairing mode (which you can do from a front-facing button on some models) and connecting through the Aiwit app
Access to still images from the video feed and other information by knowing the camera's serial number.
Hugging Face is a popular collaboration platform that helps users host pre-trained machine learning models and datasets, as well as build, deploy, and train them. Safetensors is a format devised by the company to store tensors keeping security in mind, as opposed to pickles, which has been likely weaponized by threat actors to execute arbitrary code and deploy Cobalt Strike, Mythic, and Metasploit stagers.
It's possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform, as well as hijack any models that are submitted through the conversion service. This, in turn, can be accomplished using a hijacked model that's meant to be converted by the service, thereby allowing malicious actors to request changes to any repository on the platform by masquerading as the conversion bot.
NTT West president Masaaki Moribayashi announced his resignation on Thursday, effective at the end of March, in atonement for the leak of data pertaining to 9.28 million customers that came to light last October. "Our social responsibility is extremely important. I'm stepping down to take the blame."
The theft is believed to have occurred across a decade – long before Moribayashi took the helm. However, the org had been previously tipped off to a potential leak in 2022 by a client. Internal investigations around that time not only didn't find the leak, but also gave incorrect details about security measures – such as encryption software that was never used.
HACKING
On February 19, LockBit was severely disrupted by law enforcement in North America, Europe, and Asia, which seized 34 servers, took over the group’s Tor-based leak sites, froze cryptocurrency accounts, and harvested technical information on the RaaS.
Last weekend, an individual involved with the RaaS, who uses the moniker of “LockBitSupp”, launched a new leak site that lists hundreds of victim organizations and which contains a long message providing his view on the takedown. According to LockBitSupp, a PHP flaw led to the seizure of the vulnerable sites, but not of those not running the scripting language. In fact, some of the group’s known mirror sites are now linking to the new portal.
He also says that law enforcement obtained 20,000 decryption tools, including 1,000 unprotected builds of the locker (out of 40,000 issued during LockBit’s five-year run), and that the takedown was a reaction to the January hack of Georgia’s Fulton County.
GitHub is struggling to contain an ongoing attack that’s flooding the site with millions of code repositories. The malicious repositories are clones of legitimate ones, making them hard to distinguish to the casual eye. An unknown party has automated a process that forks legitimate repositories, meaning the source code is copied so developers can use it in an independent project that builds on the original one. The result is millions of forks with names identical to the original one that add a payload that’s wrapped under seven layers of obfuscation. To make matters worse, some people, unaware of the malice of these imitators, are forking the forks, which adds to the flood.
[rG: Important for software developers and maintainers to use SCA test scanners daily for dependency security monitoring.]
The AI worm can attack a generative AI email assistant to steal data from emails and send spam messages—breaking some security protections in ChatGPT and Gemini in the process. To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks. To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.
Malicious submissions have been a fact of life for code repositories. AI is no different.
JFrog researchers said, they found roughly 100 submissions that performed hidden and unwanted actions when they were downloaded and loaded onto an end-user device. Most of the flagged machine learning models—all of which went undetected by Hugging Face—appeared to be benign proofs of concept uploaded by researchers or curious users. 10 of them were “truly malicious” in that they performed actions that actually compromised the users’ security when loaded.
The Ubiquiti EdgeRouters make an ideal hideout for hackers. The inexpensive gear, used in homes and small offices, runs a version of Linux that can host malware that surreptitiously runs behind the scenes. The hackers then use the routers to conduct their malicious activities. The actions against APT28 and its use of Ubiquity EdgeRouters comes one month after authorities conducted a similar operation against a China state group’s commandeering of small office and home office routers, which were mainly Cisco and Netgear devices that had reached their end of life.
Attackers are impersonate investors and ask to schedule a video conference call. But clicking the Calendly (a popular application for scheduling appointments and meetings) meeting link provided by the scammers prompts the user to run a script that quietly installs malware on macOS systems. Doug clicked the new link, but instead of opening up a videoconference app, a message appeared on his Mac saying the video service was experiencing technical difficulties. “Some of our users are facing issues with our service,” the message read. “We are actively working on fixing these problems. Please refer to this script as a temporary solution.”
Computer scientists have developed an efficient way to craft prompts that elicit harmful responses from large language models (LLMs).BEAST can attack a model as long as the model's token probability scores from the final network layer can be accessed. OpenAI is planning on making this available. Therefore, we can technically attack publicly available models if their token probability scores are available. In "just one minute per prompt, we get an attack success rate of 89 percent on jailbreaking Vicuna-7B- v1.5, while the best baseline method achieves 46 percent" with the help of GPU hardware and a technique called beam search.
Our exploration takes us through the digital alleyways of the internet’s underbelly, where forums like BreachForums and XSS offer a glimpse into the sophisticated world of cyber threats and data breaches. These platforms shrouded in anonymity, are where the boundaries of legality blur, challenging cybersecurity efforts worldwide. As we unveil these forums, we aim to provide insights into their operations, memberships, and the overarching impact they have on both the cyber and physical realms.
APPSEC, DEVSECOPS, DEV
There’s no such thing as getting scalability or availability right in just one place, and then you’re done. You have to make sure your system scales and survives failure at every layer and in every component. It takes only one bottleneck or single point of failure to defeat the system.
91% of enterprises have fallen victim to software supply chain incidents in just a year, underscoring the need for better safeguards for continuous integration/continuous deployment (CI/CD) pipelines.
40% of enterprises say misconfigured cloud services, stolen secrets from source code repositories, insecure use of APIs and compromised user credentials are becoming common. The most common impacts of these attacks are the malicious introduction of crypto-jacking malware (43%) and the needed remediation steps impacting SLAs (service level agreements) (41%).
Cloud-Native Application Protection Platforms (CNAPPs) rely on AI to automate hybrid and multicloud security while shifting security left in the SDLC.
AI continues to harden endpoint security down to the identity level while also defining the future by training LLMs.
Adaptive Automated Threat Detection:
AI is streamlining and simplifying analytics and reporting across CI/CD pipelines, identifying potential risks or roadblocks early and predicting attack patterns.
Using AI and ML to automate patch management.
Lack of AI model visibility and security puts the software supply chain security problem on steroids
AI and machine learning (ML)-enabled tools are software just the same as any other kind of application — and their code is just as likely to suffer from supply chain insecurities. However, they add another asset variable to the mix that greatly increases the attack surface of the AI software supply chain: AI/ML models.
The pace of adoption was so fast that shadow IT entities were cropping up around AI development and business use that escaped the kind of governance that would oversee any other kind of development in the enterprise.
The majority of tools that were being used — whether commercial or open source — were built by data scientists and up-and-coming ML engineers who had never been trained in security concepts. Many AI/ML systems and shared tools lack the basics in authentication and authorization and often grant too much read and write access in file systems. Coupled with insecure network configurations and then those inherent problems in the models, organizations start getting bogged down cascading security issues in these highly complex, difficult-to-understand systems.
AI breaches have already arrived. Organizations need to build out capabilities to scan their models, looking for flaws that can impact not only the hardening of the system but the integrity of its output. This includes issues like AI bias and malfunction that could cause real-world physical harm from, say, an autonomous car crashing into a pedestrian.
As with a solid DevSecOps ecosystem, this means that MLSecOps will need strong involvement from business stakeholders all the way up the executive ladder.
Best practices for automating security are needed throughout the entire AI/ML life cycle, from design to testing, deployment, and ongoing monitoring. The solution is Machine Learning Security Operations or MLSecOps.
While various MLOps frameworks exist, this paper uses a generalized MLOps architecture to represent processes and security procedures. The architecture, illustrated in Figure 1, incorporates an automated continuous integration/continuous delivery (CI/CD) system. It supports the efficient exploration of new techniques in ML model crafting and pipeline preparation and simplifies the processes of building, testing, and deploying new ML components. Figure 1 provides an integrated view of the MLSecOps framework, highlighting security controls and illustrating the flow of artifacts through the pipelines. Artifacts—including datasets, ML code, models, and deployment packages—must be protected.
Securing the AI: All AI deployments – including data, pipelines, and model output – cannot be secured in isolation. Security programs need to account for the context in which AI systems are used and their impact on sensitive data exposure, effective access, and regulatory compliance. Securing the AI model itself means identifying model risks, over-permissive access, and data flow violations throughout the AI pipeline.
Securing from AI: Attackers are currently leveraging generative AI to create malicious software, draft convincing phishing emails, and spread disinformation online via deep fakes. There’s also the possibility that attackers could compromise generative AI tools and large language models themselves.
Securing with AI: AI offers a streamlined way to sift through threats and prioritize which ones are most critical, saving security analysts countless hours. AI is also particularly effective at pattern recognition, meaning threats that follow repetitive attack chains (such as ransomware) could be stopped earlier.
The CSF 2.0 supports implementation of the National Cybersecurity Strategy and it’s organized around six key areas: identify, protect, detect, respond, recover, and the newly introduced ‘govern’.
Back in 2017, GitLab experienced a painful 18-hour outage. That story, and GitLab's subsequent honesty and transparency, has significantly impacted how organizations handle data security today.
One morning in the summer of 2023, Tarsnap backup service went completely offline.
Around Halloween 2021, Roblox a game played by millions every day on an infrastructure of 18,000 servers and 170,000 containers experienced a full-blown outage.
A few days before Thanksgiving Day 2023, an attacker used stolen credentials to access Cloudflare's on-premises Atlassian server, which ran Confluence and Jira. Not long after, they used those credentials to create a persistent connection to this piece of Cloudflare's global infrastructure.
Understand yourself and the cybersecurity playing field
Learn the fundamentals of the role you’re interested in:
Get certified to show employers that you understand the fundamentals
Volunteer
Bounty hunt BugsShow off your work and rise to the top of the resume pile
[rG: "No experience" is misleading; conceptual understanding isn't sufficient and still requires "showing off your work." Cybersecurity is an extension of quality assurance; that, as a baseline, needs a few years IT practical administration, testing, or project management experience first. Going directly from school into cybersecurity rarely produces highly competent professionals. Get really good/proficient at something in IT first, then pivot into cybersecurity specialization.]
VENDORS & PLATFORMS
GitHub has enabled push protection by default for all public repositories to prevent accidental exposure of secrets such as access tokens and API keys when pushing new code. Push protection proactively prevents leaks by scanning for secrets before 'git push' operations are accepted and blocking the commits when a secret is detected.
[rG: SSDF DevSecOps For enterprise managed software deployment pipelines, orchestration rule can be implemented to block deployments if existing internal secrets scanning solution has detected vulnerability.]
GitHub Copilot Enterprise serves as a companion that lets developers ask questions about public and private code, get up to speed with new codebases, build consistencies across engineering teams, and ensure users have access to the same standards and prior work, GitHub said. Chat conversations can be tailored to an organization’s repositories, with answers based on the organizational knowledge base. Pull request diff analysis is offered as well. Bing-powered web search is a feature now in beta. GitHub Copilot Enterprise requires the use of GitHub Enterprise Cloud, and is priced at $39 per user per month.
90% of enterprises are currently experiencing limitations integrating AI into their tech stack. Almost three quarters of companies (73%) report that more than half of the apps in their tech stack have AI capabilities or AI-augmented features.
Almost half of enterprise executives (48%) indicate that their organization’s AI implementation strategy for the next year is focused on building strong integrations between their internal SaaS apps and AI, while close to 20% of practitioners state their organization doesn’t have an AI strategy as it relates to their tech stack and internal business processes.
55% of companies reporting that they have more than 50 SaaS apps in their tech stack and 37% stating they have more than 100. Seventy-three percent of respondents state that more than half of the apps in their tech stack have AI capabilities or AI-augmented features, and 40% of respondents plan to use the built-in AI features for more than half of their SaaS apps.
Road-test of Jupyter Notebooks, using Microsoft's .NET implementation in Visual Studio Code, Polyglot Notebooks.
LEGAL
A woman alleging her personal information was stolen in a cyberattack on a dental practice management chain filed a proposed class-action lawsuit against the company, saying it failed to exercise reasonable care in safeguarding patient data.
Consumer groups are filing legal complaints in the EU in a coordinated attempt to use data protection law to stop Meta from giving local users a "fake choice" between paying up and consenting to being profiled and tracked via data collection.
Lawsuit alleges it misled investors with claims new AI products were 'facilitating greater platformization' and more
And Now For Something Completely Different …
Consider this scenario: Your wall-to-wall neighbor loves to blast Reggaeton music at full volume through a Bluetooth speaker every morning at 9 am. You have two options:
A. Knock on their door and politely ask them to lower the volume.
B. Build an AI device that can handle the situation more creatively.
Reggaeton Be Gone (the name is a homage to Tv-B-Gone device) will monitor room audio, it will identify Reggaeton genre with Machine Learning and trigger comm requests and packets to the Bluetooth speaker with the high goal of disabling it or at least disturbing the sound so much that the neighbor won't have other option that turn it off.