Robert Grupe's AppSecNewsBits 2024-02-24

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response 

Pharmacies across the U.S. are reporting that they are unable to fulfill prescriptions through patients’ insurance due to the ongoing outage at Change Healthcare, which handles much of the billing process. The company processes billions of healthcare transactions annually and claims it handles around one in three U.S. patient records, amounting to around 100 million Americans.
UHG blamed the ongoing cybersecurity incident affecting Change Healthcare on suspected nation-state hackers but said it had no timeframe for when its systems would be back online. UHG did not attribute the cyberattack to a specific nation or government, or cite what evidence it had to support its claim.
UHG said in its filing that it has “retained leading security experts, is working with law enforcement and notified customers, clients and certain government agencies.”
[rG: Restoring service by resetting all credentials and restoring from last good backups should have been possible within hours. It will be interesting to learn from forensic analyis what contributed to the compromise: unsecured platforms, software supply chain component vulnerabilies, exposed credential secrets, pre-production security test coverage (SSDLC & scanners), insecure apps design, unremediated known vulnerabilities tech debt, detection logging response, etc.]

 

A cyberattack has left hundreds of executive accounts compromised and caused a major user data leak as Microsoft Azure was attacked. Although critical user data was compromised, the main target of the attack was mid-level and senior executives. Especially people like financial directors, operations vice presidents, presidents, sales directors, account managers, and CEOs.The reason why so many people fell for this attack was because it was carried out through malicious links embedded in documents. These links led to phishing websites but the anchor text of these links was “View Document”.
The attackers have been identified as a group originating from Russia and Nigeria. However, this assumption is only based on their use of the local fixed-line ISPs in these countries. The rest of the details are still unknown.

 

Thursday morning, more than 74,000 AT&T customers reported outages on digital-service tracking site DownDetector. Roughly 11 hours after reports of the outage first emerged, the company said that it had restored service to all impacted customers.
“Based on our initial review, we believe that today’s outage was caused by the application and execution of an incorrect process used as we were expanding our network, not a cyber attack."
The Federal Communications Commission confirmed Thursday afternoon that it is investigating the outage. The FCC cares a lot more about the inability to connect with 911 [than other types of calls]. The White House says federal agencies are in touch with AT&T about network outages but that it doesn’t have all the answers yet on what exactly led to the interruptions.
AT&T’s stock fell more than 2% Thursday, a an outlier on a day when the market was rocketing higher.
[rG: Fundamentally, any operational change should have been validated by incremental deployments and closely monitored performance, so that a quick rollback restoration could have taken place as soon anomolies were detected. However, with micro-service architectures of complex systems and asynchronous deployments, determining root cause dependencies and service restoration can be a nightmare if there aren't fully updated solution design diagrams and robust automated integration tests.]

 

These customers' records contained personal information, including names, dates of birth, and driver license numbers.
While the U-Haul spokesperson declined to comment on how the criminals obtained the compromised credentials — eg, from an earlier data dump, or a social-engineering campaign — the incident illustrates how these types of identity-related attacks have skyrocketed over the past year.

 

Some Wyze security camera owners reported that they were unexpectedly able to see webcam feeds that weren’t theirs, meaning that they were unintentionally able to see inside of other people’s houses. A Wyze spokesperson tells The Verge that this was due to a web caching issue.

 

If an attacker is able to launch the setup wizard, they only need to partially complete the process – the part that registers the initial admin user to get things in motion. By registering the initial admin user and skipping the rest, the internal user database will be overwritten, deleting all local users except the one specified by the attacker. Once you have administrative access to a compromised instance, it is trivial to create and upload a malicious ScreenConnect extension to gain RCE. This is not a vulnerability, but a feature of ScreenConnect, which allows an administrator to create extensions that execute .Net code as SYSTEM on the ScreenConnect server.

 

Academic researchers show that a new set of attacks called ‘VoltSchemer’ can inject voice commands to manipulate a smartphone's voice assistant through the magnetic field emitted by an off-the-shelf wireless charger. VoltSchemer can also be used to cause physical damage to the mobile device and to heat items close to the charger to a temperature above 536F (280C). Voltage manipulation can be introduced by an interposing device, requiring no physical modification of the charging station or software infection of the smartphone device. The researchers say that this noise signal can interfere with the regular data exchange between the charging station and the smartphone, both of which use microcontrollers that manage the charging process, to distort the power signal and corrupt the transmitted data with high precision.

 

We all know that OpenAI's ChatGPT can make mistakes. They're called hallucinations, although I prefer to call them lies or blunders. But in a peculiar turn of events this Tuesday, ChatGPT began to really lose it. Users started to report bizarre and erratic responses from everyone's favorite AI assistant.
OpenAI acknowledged that users were getting "Unexpected responses" and swiftly fixed the problem by Wednesday afternoon. The company explained: "An optimization to the user experience introduced a bug with how the model processes language." Specifically, large language models (LLMs) generate responses by randomly sampling words and mapping their derived numbers to tokens. Things can go badly wrong if the model doesn't pick the right numbers. The bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense. More technically, inference kernels produced incorrect results when used in certain GPU configurations.
OpenAI then rolled out a fix and confirmed that the incident was resolved. Well, it said it rolled out a fix. I suspect it rolled back to an earlier, stable LLM release.

 

 

HACKING

Threat hunters found a 71 percent year-over-year increase in the volume of attacks using valid credentials in 2023.
Compromised valid accounts represented 30 percent of all incidents – pushing that attack vector to the top of the list of cyber criminals' most common initial access points for the first time ever. Cloud account credentials make up 90 percent of for-sale cloud assets on the dark web.
In addition to using stolen credentials, attackers are targeting API keys and secrets, session cookies and tokens, one-time passwords, and Kerberos tickets. Threat actors have really focused on identity – taking a legitimate identity, logging in as a legitimate user, and then laying low, staying under the radar by living off the land and using legitimate tools. Once they have that identity, they're able to enroll or bypass multi-factor authentication, and then move laterally. In some cases last year – ahem, Microsoft – MFA wasn't even deployed.
[rG: This is were reliance on just standard security vulnerability scanners aren't able to provide protection.]

 

Barracuda's recent Threat Spotlight shows that most attacks on web applications targeted security misconfigurations - such as coding and implementation errors (30%), while 21% involved code injection, where an attacker injects a code that is then interpreted executed by an application.

 

APT stands for Advanced Persistent Threat, a term that generally refers to state-sponsored hacking groups.
A large cache of more than 500 documents published to GitHub last week indicate the records come from i-SOON, a technology company headquartered in Shanghai that is perhaps best known for providing cybersecurity training courses throughout China. But the leaked documents, which include candid employee chat conversations and images, show a less public side of i-SOON, one that frequently initiates and sustains cyberespionage campaigns commissioned by various Chinese government agencies.
The overall tone of the discussions indicates employee morale was quite low and that the workplace environment was fairly toxic. In several of the conversations, i-SOON employees openly discuss with their bosses how much money they just lost gambling online with their mobile phones while at work. The i-SOON data was probably leaked by one of those disgruntled employees.

 

LockBit is a prolific and destructive ransomware group that has claimed more than 2,000 victims worldwide and extorted over $120 million in payments. Instead of listing data stolen from ransomware victims who didn’t pay, LockBit’s victim shaming website now offers free recovery tools, as well as news about arrests and criminal charges involving LockBit affiliates.

It is clear that LockBit has a large reach that spans tooling, various affiliate groups, and offshoots that have not been completely erased even with the major takedown by law enforcement. Besides installing the LockBit-associated ransomware, the attackers are installing several other malicious apps, including a backdoor known as Cobalt Strike, cryptocurrency miners, and SSH tunnels for remotely connecting to compromised infrastructure.

 

SSH-Snake was designed to automate the process of searching for and using SSH private keys to move from system to system, as well as visually map the SSH connections throughout a network.
The SSH-Snake bash script automates discovery of SSH private keys and hosts, and is unique in its ability to self-modify, essentially shrinking itself upon deployment. All unnecessary functions, whitespace and comments are removed from the code after its initial execution, allowing it to remain completely fileless as it stealthily traverses the network, despite its initial large size of more than 1,250 lines. The tool also acts as a worm, self-replicating when it accesses a new destination to repeat the key searching process. The script can also be customized to enable and disable specific commands, and is designed to work on any device.
Threat actors were discovered using the network traversal tool for offensive operations. The attackers exploited known vulnerabilities, including multiple Confluence flaws, for initial access into systems in order to deploy SSH-Snake. The SSH-Snake tool was used to retrieve outputs of victim IP addresses, SSH credentials and bash histories, and this intel could potentially be used for future cyberattacks.

 

Researchers assert that they can successfully attack “up to 27.9% of partial fingerprints and 9.3% of complete fingerprints within five attempts at the highest security FAR [False Acceptance Rate] setting of 0.01%.” This is claimed to be the first work that leverages swiping sounds to infer fingerprint information.

 

 

APPSEC, DEVSECOPS, DEV

 

BSIMM is a data-driven model developed through the analysis of real-world software security initiatives (also known as application security, product security, or DevSecOps programs). The BSIMM14 report, published in December 2023, represents the latest evolution of this detailed measuring stick for software security.

 

Organizations are quickly realizing there's no longer perimeter security versus application security—all security is application security. The application is the layer responsible for executing the security of the perimeter. If you want taller walls, deeper moats and fiercer sharks, you’re going to need to tell the development team to code them.
93% of organization leaders believe the transition to platform engineering to focus on efficiency and processes is the right next step to better capture the benefits of the DevOps’ culture of integration and delivery.
Spending on application security is predicted to reach $13.2 billion in 2025, up from $6.2 billion in 2020, as new software bill of materials (SBOM) and other new reporting regulations increase the pressure on organizations to have comprehensive visibility into their code. Additionally, organizations have accepted that scanning regularly enables them to fix more flaws faster, with 90% of apps now scanned at least once a week. And just as organizations specialized their dev tech stack, they're doing so with their AppSec tools, with a 31% increase in multiple scan types.
The cloud-native application protection platform (CNAPP) market has grown to provide all things cloud security. CNAPP has done a great job of generating, capturing and aggregating the security “things”—the data—in our infrastructure. However, with up to 76 security tools in an organization, this is a lot of data being delivered without context.
Let’s face it: It’s not about blocking and tackling but rather preparation and process. And with nine out of 10 breaches due to defects in design or code, application security is the foundation of any security program. AppSec extends beyond finding and fixing vulnerabilities, shifting left and governance, risk and compliance (GRC). 88% of development teams say it's "highly challenging to gain access to accurate, relevant information regarding application security and compliance."
This is where application security posture management (ASPM) has stepped in to bridge the infrastructure and application security worlds. ASPM isn't just a tool but a platform to align all security processes and data. Last year's ASPM acquisitions were the canary in the coal mine, as we saw Snyk acquire Enso to link their application and infrastructure tools, while CrowdStrike acquired Bionic to expand from infrastructure to application.

 

AI assistants such as GitHub Copilot, which offer code completion suggestions, often amplify existing bugs and security issues in a user's codebase. If developers are inputting code into GitHub that has security issues or technical problems, then the AI model can succumb to the “broken windows” theory by taking inspiration from its problematic surroundings. In essence, if GitHub Copilot is fed with prompts containing vulnerable material, then it will learn to regurgitate that material in response to user interactions.

If humans can't write secure code, AI cannot either because they have been trained using the same ideas, the same code bases. There's also the "hallucination" problem in AI, where the output might appear accurate but may not be factually or contextually correct, particularly in terms of security, because generative AI is often giving you exactly the answer you want.
As AI continues to reshape how code is written and managed, the emphasis on vigilant, security-conscious development practices becomes increasingly crucial. SAST stands as a critical tool in ensuring that the efficiencies gained through AI do not come at the cost of security and reliability.

 

This 32-page document is designed to help organizations create a strategy for implementing large language models (LLMs) and mitigate the risks associated with the use of these AI tools.

 

The monetary authority warned that the security of financial transactions and sensitive data financial institutions process could be at risk, thanks to quantum computers that can "break some of the commonly used encryption and digital signature algorithms."
Cryptographically relevant quantum computers (CRQCs) would break commonly used asymmetric cryptography, while symmetric cryptography could require larger key sizes to remain secure.
Typical users seldom change their passwords, leaving captured encrypted sessions vulnerable to decryption when quantum computers become available in the future. This vulnerability underscores the need for proactive measures, given numerous instances where sensitive information retains its importance over extended durations.

 

 

VENDORS & PLATFORMS

PyRIT is battle-tested by the Microsoft AI Red Team. It started off as a set of one-off scripts as we began red teaming generative AI systems in 2022. As we red teamed different varieties of generative AI systems and probed for different risks, we added features that we found useful.
Over the past year, we have proactively red teamed several high-value generative AI systems and models before they were released to customers. Through this journey, we found that red teaming generative AI systems is markedly different from red teaming classical AI systems or traditional software in three prominent ways.
1. Probing both security and responsible AI risks simultaneously
2. Generative AI is more probabilistic than traditional red teaming
3. Generative AI systems architecture varies widely
The biggest advantage we have found so far using PyRIT is our efficiency gain. For instance, in one of our red teaming exercises on a Copilot system, we were able to pick a harm category, generate several thousand malicious prompts, and use PyRIT’s scoring engine to evaluate the output from the Copilot system all in the matter of hours instead of weeks.

 

Messages sent through iMessage will now be protected by two forms of end-to-end encryption (E2EE), whereas before, it had only one. The encryption being added, known as PQ3, is an implementation of a new algorithm called Kyber that, unlike the algorithms iMessage has used until now, can’t be broken with quantum computing. Apple isn’t replacing the older quantum-vulnerable algorithm with PQ3—it's augmenting it. That means, for the encryption to be broken, an attacker will have to crack both.

 

  • Completing and releasing Rust Infrastructure and Crates Ecosystem Threat Models.

  • Further developing Rust Foundation open source security project Painter and releasing new security project, Typomania.

  • Utilizing new tools and best practices to identify and address malicious crates.

  • Helping reduce technical debt within the Rust Project, producing/contributing to securityfocused documentation, and elevating security priorities for discussion within the Rust Project.

 

With existing tags, a counterfeiter could peel the tag off a genuine item and reattach it to a fake, and the authentication system would be none the wiser. Researchers leveraged terahertz waves to develop an antitampering ID tag that still offers the benefits of being tiny, cheap, and secure. They mix microscopic metal particles into the glue that sticks the tag to an object, and then use terahertz waves to detect the unique pattern those particles form on the item’s surface. Akin to a fingerprint, this random glue pattern is used to authenticate the item.

 

 

 

LEGAL

As required by the “Security standards: General rules” section of the HIPAA Security Rule, each regulated entity must:

  • Ensure the confidentiality, integrity, and availability of all ePHI that it creates, receives, maintains, or transmits;

  • Protect against any reasonably anticipated threats and hazards to the security or integrity of ePHI;

  • Protect against reasonably anticipated uses or disclosures of such information that are not permitted by the Privacy Rule; and

  • Ensure compliance with the Security Rule by its workforce.

 

The U.S. Federal Trade Commission (FTC) will order Avast (Czech Republic) to pay $16.5 million and ban the company from selling the users' web browsing data or licensing it for advertising purposes. The complaint says Avast violated millions of consumers' rights by collecting, storing, and selling their browsing data without their knowledge and consent while misleading them that the products used to harvest their data would block online tracking. The FTC says UK-based company Avast Limited harvested consumers' web browsing information without their knowledge or consent using Avast browser extensions and antivirus software since at least 2014.

 

The European Union's Digital Services Act (DSA), which applies to all online platforms since Feb. 17, requires in particular very large online platforms and search engines to do more to tackle illegal online content and risks to public security.
The European Commission said the investigation will focus on the design of TikTok's system, including algorithmic systems which may stimulate behavioural addictions and/or create so-called 'rabbit hole effects'.
It will also probe whether TikTok has put in place appropriate and proportionate measures to ensure a high level of privacy, safety and security for minors. As well as the issue of protecting minors, the Commission is looking at whether TikTok provides a reliable database on advertisements on its platform so that researchers can scrutinise potential online risks.
[rG: Interesting to see if this gets extended to Redit, SnapChat, Discord, gaming, and other media platforms regarding content access and engagement stickiness.]  

 

 

And Now For Something Completely Different …

DNA exceeds by many times the storage density of magnetic tape or solid-state media. It has been calculated that all the information on the Internet—which one estimate puts at about 120 zettabytes—could be stored in a volume of DNA about the size of a sugar cube, or approximately a cubic centimeter. Achieving that density is theoretically possible, but we could get by with a much lower storage density. An effective storage density of “one Internet per 1,000 cubic meters” would still result in something considerably smaller than a single data center housing tape today.