Robert Grupe's AppSecNewsBits 2024-05-04

What's Weak This Week: Change Healthcare intrusion missed MFA, Kaiser Marketing Leakage, GitLab account takeovers, Microsoft reemphasizing SSDLC Fundamentals

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Health care giant comes clean about recent hack and paid ransom

Change Healthcare, the health care services provider that recently experienced a ransomware attack that hamstrung the US prescription market for two weeks, was hacked through a compromised account that failed to use multifactor authentication, UnitedHealth Group CEO Andrew Witty told members of Congress.

The breach started on February 12 when hackers somehow obtained an account password for a portal allowing remote access to employee desktop devices. The account, Witty admitted, failed to use multifactor authentication (MFA), a standard defense against password compromises that requires additional authentication in the form of a one-time password or physical security key.

Once the threat actor gained access, they moved laterally within the systems in more sophisticated ways and exfiltrated data.

After burrowing into the Change Healthcare network undetected for 9 days, the attackers deployed ransomware that prevented the company from accessing its IT environment. In response, the company severed its connection to its data centers. The company spent the next two weeks rebuilding its entire IT infrastructure “from the ground up.” In the process, it replaced thousands of laptops, rotated credentials, and added new server capacity.

Change Healthcare paid $22 million to ALPHV. Principal members of the group then pocketed the funds rather than sharing them with the affiliate group that did the actual hacking, as spelled out in a pre-existing agreement. The affiliate group published some of the stolen data, largely validating a chief criticism of ransomware payments.

The ransomware attack resulted in an $872 million cost in the first quarter. That amount included $593 million in direct response costs and $279 million in disruptions. Witty’s written testimony added that as of last Friday, his company had advanced more than $6.5 billion in accelerated payments and no-interest, no-fee loans to thousands of providers that were left financially struggling during the prolonged outage.

Payment processing by Change Healthcare is currently about 86 percent of its pre-incident levels.  

Witty said he was reluctant to give a more precise answer because the company is still investigating the breach and trying to figure out exactly how many people were affected. Witty said that it will likely take “several months,” before the company can begin notifying victims of the data breach.

Had that portal had multi-factor authentication enabled, the breach might not have happened. Several Senators grilled Witty on that failure, asking him whether UnitedHealth and Change Healthcare systems are now protected with multi-factor authentication. During the Senate hearing, Witty said: “We have an enforced policy across the organization to have multi-factor authentication on all of our external systems, which is in place.”

[rG: MFA is a basic first line of defense, but, by itself, is no guarantee against sophisticated, determined attacks - see the HACKING section below, "9 ways MFA can be breached." Multi-level defenses against lateral movement, process execution permissions, and data encryption are needed to prevent sensitive information disclosure, exfiltration, and operational disruptions.]

 

The threat actor had accessed data related to all users of Dropbox Sign, such as emails and user names, in addition to general account settings. The threat actor also accessed phone numbers, hashed passwords, and certain authentication information such as API keys, OAuth tokens, and multi-factor authentication data.

 

Kaiser's systems inadvertently shared patient data with third-party advertisers, including Google, Microsoft, and social platform X, thanks to the presence of improperly implemented tracking code that Kaiser used to see how its members navigated through its Web and mobile sites. The shared data included names, IP addresses, what pages people visited, whether they were actively signed in, and even the search terms they used when visiting the company's online health encyclopedia.

Once shared, advertisers have used this information to target ads at users for complementary products (based on health data); this has happened multiple times in the past few years, including at GoodRx. Although this does not fit the traditional definition of a data breach, it essentially produces the same outcome: the data becomes accessible to entities and use cases it was never intended for. There is usually no monitoring or auditing process to identify and prevent the issue.
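To make the leak mechanism concrete, here is a minimal Python sketch of how a page-analytics beacon ends up transmitting sensitive context to a third party. The endpoint and parameter names are hypothetical illustrations, not Kaiser's actual tracking code:

```python
from urllib.parse import urlencode

# Hypothetical third-party analytics endpoint (placeholder, for illustration).
TRACKER_ENDPOINT = "https://analytics.example.com/collect"

def beacon_url(page: str, signed_in: bool, search_term: str) -> str:
    # Everything placed in these parameters is transmitted to the third
    # party -- including health-related search terms.
    params = {
        "dl": page,                          # document location (page visited)
        "auth": "1" if signed_in else "0",   # signed-in state
        "q": search_term,                    # user's search query
    }
    return TRACKER_ENDPOINT + "?" + urlencode(params)

url = beacon_url("/health-encyclopedia", True, "chest pain")
```

Because the request looks like ordinary analytics traffic, nothing in a conventional breach-detection pipeline flags it, which is why such leaks tend to run unnoticed for years.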

 

A change GitLab implemented in May 2023 made it possible for users to initiate password changes through links sent to secondary email addresses. In January, GitLab disclosed that the feature allowed attackers to send reset emails to addresses they controlled and, from there, click the embedded link and take over the account. While exploits require no user interaction, hijackings work only against accounts that aren’t configured to use multifactor authentication. Accounts with MFA enabled could still have their passwords reset, but the attackers were ultimately unable to access them, allowing the rightful owner to change the reset password.

The vulnerability, classified as an improper access control flaw, could pose a grave threat. GitLab software typically has access to multiple development environments belonging to users. With the ability to access them and surreptitiously introduce changes, attackers could sabotage projects or plant backdoors that could infect anyone using software built in the compromised environment. An example of a similar supply chain attack is the one that hit SolarWinds in 2020 and pushed malware to more than 18,000 of its customers, 100 of whom received follow-on hacks. 

Federal civilian executive branch (FCEB) agencies typically have a maximum of 21 days to fix such flaws once CISA adds them to its Known Exploited Vulnerabilities catalog, to prevent harmful attacks on the government.
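The underlying flaw class is easy to sketch. The following Python fragment is a hypothetical model of the improper-access-control pattern, not GitLab's actual implementation: it contrasts a reset flow that honors any address linked to an account with one restricted to the verified primary address.

```python
# Hypothetical account record (illustrative field names).
account = {
    "primary_email": "owner@example.com",
    "verified_emails": {"owner@example.com"},
    "linked_emails": {"owner@example.com", "attacker@evil.example"},
}

def may_receive_reset_vulnerable(acct, requested_email):
    # Flawed: any linked address, even an unverified one, receives the
    # reset link, so an attacker who attached their own address to the
    # account can complete a takeover.
    return requested_email in acct["linked_emails"]

def may_receive_reset_fixed(acct, requested_email):
    # Safer: only the verified primary address may receive the link.
    return (requested_email == acct["primary_email"]
            and requested_email in acct["verified_emails"])
```

The fix is a single authorization check at the point the reset email is dispatched, which is what makes this class of bug so easy to introduce during a feature change and so hard to spot in review.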

 

I created a single S3 bucket in the eu-west-1 region and uploaded some files there for testing. Two days later, I checked my AWS billing page, primarily to make sure that what I was doing was well within the free-tier limits. Apparently, it wasn’t. My bill was over $1,300, with the billing console showing nearly 100,000,000 S3 PUT requests executed within just one day!

By default, AWS doesn’t log requests executed against your S3 buckets. However, such logs can be enabled using AWS CloudTrail or S3 Server Access Logging. After enabling CloudTrail logs, I immediately observed thousands of write requests originating from multiple accounts or entirely outside of AWS.

One of the popular open-source tools had a default configuration to store their backups in S3. And, as a placeholder for a bucket name, they used… the same name that I used for my bucket. This meant that every deployment of this tool with default configuration values attempted to store its backups in my S3 bucket!

If all those misconfigured systems were attempting to back up their data into my S3 bucket, why not just let them do so? I opened my bucket for public writes and collected over 10GB of data within less than 30 seconds. Of course, I can’t disclose whose data it was. But it left me amazed at how an innocent configuration oversight could lead to a dangerous data leak!

What did I learn from all this?

Lesson 1: Anyone who knows the name of any of your S3 buckets can ramp up your AWS bill as they like.

Lesson 2: Adding a random suffix to your bucket names can enhance security.

Lesson 3: When executing a lot of requests to S3, make sure to explicitly specify the AWS region.
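Lesson 2 is cheap to implement. Here is a minimal sketch, using only Python's standard library, of appending a cryptographically random suffix so a bucket name cannot collide with a tool's placeholder default or be guessed by a stranger; the prefix shown is illustrative:

```python
import secrets

def unique_bucket_name(prefix: str) -> str:
    # S3 bucket names must be 3-63 characters of lowercase letters,
    # digits, and hyphens; a 16-hex-character random suffix makes the
    # name unguessable and avoids collisions with placeholder defaults.
    suffix = secrets.token_hex(8)  # 16 lowercase hex characters
    name = f"{prefix}-{suffix}"
    assert 3 <= len(name) <= 63, "bucket name length out of range"
    return name

name = unique_bucket_name("myapp-backups")
```

Since bucket names are globally unique and appear in request URLs, treating them as semi-secret identifiers, rather than predictable labels, closes off both the surprise-billing and accidental-data-delivery problems described above.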

 

Microsoft now has three security principles that form a big part of its expanded Secure Future Initiative goals:

1. secure by design;

2. secure by default;

3. secure operations.

The broader goals are underlined by “six prioritized security pillars,” which is corporate speak for stuff Microsoft needs to greatly improve:

1. Protect identities and secrets

2. Protect tenants and isolate production systems

3. Protect networks

4. Protect engineering systems

5. Monitor and detect threats

6. Accelerate response and remediation, and increase transparency around these issues by adopting the Common Weakness Enumeration (CWE) and Common Platform Enumeration (CPE) industry standards.

 

1. OWASP Acknowledges Data Leak from Old Wiki

2. PandaBuy Breach Exposes 1.3 Million Users’ Data

3. Prudential Confirms Data Breach Affecting 36,000 Customers

4. Fortinet Flaw Targeted in New Cyber Attack Campaign

5. Hackers Target WordPress Sites with Crypto-Stealing Malware

6. Millions of Discord Users’ Messages Sold on Spy.pet

 

HACKING

Cybercriminals and Advanced Persistent Threat (APT) actors share a common interest in proxy anonymization layers and Virtual Private Network (VPN) nodes to hide traces of their presence and make detection of malicious activities more difficult. Cybercriminals and spies working for nation-states are surreptitiously coexisting inside compromised name-brand routers as they use the devices to disguise attacks motivated both by financial gain and strategic espionage. Financially motivated hackers provide spies with access to already compromised routers in exchange for a fee.

Trend Micro and Fortinet reports follow a separate FBI operation in January that removed malware being used by an espionage group backed by the Chinese government to proxy traffic through home and office routers from Cisco, Netgear, and others. The high quality of the C++ code and its “firmware agnostic” approach to establishing covert communications channels on routers from virtually any manufacturer illustrate the importance these botnets have to the Chinese government. Other advanced router malware seen in recent years includes VPNFilter and its successor, Cyclops Blink—both attributed to the Russian government—as well as HiatusRAT and ZuoRAT. With so many competing actors and malware packages targeting routers, it’s easy to see how they can coexist on the same devices.

The best way to keep routers free of this sort of malware is to ensure that their administrative access is protected by a strong password. Remote access should be turned off unless the capability is truly needed and is configured by someone experienced. Firmware updates should be installed promptly. It’s also a good idea to regularly restart routers since most malware for the devices can’t survive a reboot. Once a device is no longer supported by the manufacturer, people who can afford to should replace it with a new one.

 

In recent months, the so-called “dead internet theory” has gained new popularity. It suggests that much of the content online is in fact automatically generated, and that the number of humans on the web is dwindling in comparison with bot accounts.

In some countries, the picture is worse. In Ireland, 71 percent of internet traffic is automated.

Some of that rise is the result of the adoption of generative artificial intelligence and large language models. Companies that build those systems use bots to scrape the internet and gather data that can then be used to train them. Some of those bots are becoming increasingly sophisticated. More and more of them come from residential internet connections, which makes them look more legitimate.

The widespread use of bots has already caused problems for online services such as X. Popular posts on the site are now hit by a huge number of comments from accounts advertising pornography, and the company appears to be struggling to limit them. But X is far from the only site to be hit by automated content that is posing as real. Many similar posts are spreading across Facebook and TikTok.

 

9 ways MFA can be breached:

1. MFA prompt bombing

2. Service desk social engineering

3. Adversary-in-the-middle (AITM) attacks

4. Session hijacking

5. SIM swaps

6. Exporting generated tokens

7. Endpoint compromise

8. Exploiting SSO

9. Finding technical deficiencies

 

The vulnerability exploits the way R handles serialization ('saveRDS') and deserialization ('readRDS'), particularly through promise objects and "lazy evaluation."

Attackers can embed promise objects with arbitrary code in the RDS file metadata in the form of expressions, which are evaluated during deserialization, resulting in the code's execution.
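The same unsafe-deserialization pattern exists in other languages. Here is an analogous, deliberately benign Python sketch using `pickle` (a hypothetical illustration, not the R exploit itself): merely deserializing attacker-controlled bytes invokes an attacker-chosen callable, conceptually similar to R evaluating a promise embedded in RDS metadata.

```python
import pickle

executed = []

def attacker_callable(msg):
    # Stands in for arbitrary code an attacker wants to run; here it
    # just records that it executed.
    executed.append(msg)
    return msg

class Payload:
    # __reduce__ controls how pickle reconstructs the object: it returns
    # a callable and arguments, which the deserializer invokes blindly.
    def __reduce__(self):
        return (attacker_callable, ("attacker code ran",))

blob = pickle.dumps(Payload())   # craft the "malicious" serialized object
result = pickle.loads(blob)      # deserialization alone runs the callable
```

The general defense is the same in both ecosystems: never deserialize untrusted input with a format whose decoder can execute code, and prefer data-only formats such as JSON at trust boundaries.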

 

APPSEC, DEVSECOPS, DEV

72% of global CISOs have experienced an application security incident in the past two years, causing lost revenue and market share.

App security incidents in many cases led to:

  • lost revenue (47%),

  • regulatory fines (36%) and

  • lost market share (28%).

87% of CISOs claim application security is a blind spot at the CEO and board level.

  • 70% of C-suite executives polled said security teams talk too much in technical terms without providing business context, while

  • 75% of CISOs said security tools can’t generate insights the CEO and board can use to understand business risk.

52% of those polled said they are concerned about the potential for AI to empower cybercriminals – enabling them to create new exploits faster and execute them on a broader scale.

45% complained that AI could enable developers to accelerate delivery of software without proper oversight, increasing the likelihood of buggy code making it into production.

 

The numbers are going to go from 80 to 90 percent to maybe 95, 98, 99 percent of your code in an enterprise environment would be written from basically untrusted, unvetted sources. The software supply chain is going to be the next frontier of cybersecurity and cybersecurity attacks.

Getting around those sorts of problems is going to require good documentation, which includes reliable software bills of material and better vetting of open-source libraries.

 

The newly released guidelines categorize AI risks into three significant types: the utilization of AI in attacks on infrastructure, targeted assaults on AI systems themselves, and failures within AI design and implementation that could jeopardize infrastructure operations.

The guidelines call on management to act decisively on identified AI risks to enhance safety and security, ensuring that risk management controls are implemented and maintained to optimize the benefits of AI systems while minimizing adverse effects.

 

Jailbreaking a language model refers to the set of techniques used to manipulate or deceive an AI model to perform tasks outside its predefined restrictions. This can include responding to questions or generating content that would normally be restricted due to ethics, privacy, security, or data use policies.

Prevention and Mitigation Measures

  1. Enhancing model training and oversight: Refining learning processes and algorithms to detect and counter manipulation attempts.

  2. Implementing additional security layers: Using monitoring and anomaly detection techniques to identify and respond to suspicious activities.

  3. Educating and raising user awareness: Informing users about the risks associated with jailbreaking and promoting ethical AI use.

 

  1. You Don't Become More Secure Just by Going to the Cloud

  2. Native Security Controls Are Hard to Manage in a Hybrid World

  3. Identity Won't Save Your Cloud

  4. Too Many Firms Don't Know What They're Trying to Protect

  5. Cloud Native Development Incentives Are Out of Whack

 

I recommend that you start with Hack The Box. This platform offers challenges such as obtaining source code, finding vulnerabilities, and then exploiting them on a real machine. This is a great way to learn how to use SAST tools in practice.

Or, if you are a developer, you can use SAST tools in your CI/CD pipeline. This way, you can get used to the tools and their output. Simultaneously, you will improve the security of your application.

 

Whether you're a beginner aiming to grasp the essentials or an experienced engineer seeking to refine your skills, these questions will not only prepare you for interviews but also improve your knowledge about system design and software architecture.

By the way, if you are preparing for system design interviews and want to learn system design in depth, you can also check out sites like ByteByteGo, Design Guru, Exponent, Educative, and Udemy, which have many great system design courses.

 

VENDORS & PLATFORMS

 

Some ​Google Chrome users report having issues connecting to websites, servers, and firewalls after Chrome 124 was released last week with the new quantum-resistant X25519Kyber768 encapsulation mechanism enabled by default. Google started testing the post-quantum secure TLS key encapsulation mechanism in August and has now enabled it in the latest Chrome version for all users.

Affected Google Chrome users can mitigate the issue by going to chrome://flags/#enable-tls13-kyber and disabling the TLS 1.3 hybridized Kyber support in Chrome.

Administrators can also disable it by toggling off the PostQuantumKeyAgreementEnabled enterprise policy under Software > Policies > Google > Chrome or contacting the vendors to get an update for servers or middleboxes on their networks that aren't post-quantum-ready.

Microsoft has also released information on how to control this feature via the Edge group policies.

However, it's important to note that, in the long term, post-quantum-secure ciphers will be required in TLS, and the Chrome enterprise policy allowing the feature to be disabled will be removed in the future.

 

Microsoft on Friday provided a peek at a comprehensive framework that aims to sort out the Domain Name System (DNS) mess so that it’s better locked down inside Windows networks. It’s called ZTDNS (zero trust DNS). Its two main features are (1) encrypted and cryptographically authenticated connections between end-user clients and DNS servers and (2) the ability for administrators to tightly restrict the domains these servers will resolve.

 

With this new systemd component, some attacks on Linux system security via sudo, like this bug, will no longer be possible, which should enhance overall security. run0 will also gain more features that are not just related to the security backend.

 

Google says it stopped 2.28 million Android apps from being published in its official Play Store last year because they violated security rules. These policy updates included refreshed rules tackling AI apps, bothersome notifications, and privacy. Google put particular emphasis on its new requirement for devs to allow the deletion of account data without needing to reinstall an app. Plus, app devs had to provide more info about themselves and meet the latest testing requirements.

 

 

LEGAL & REGULATORY

On October 21, 2020, the Vastaamo Psychotherapy Center in Finland became the target of blackmail when a tormentor identified as “ransom_man” demanded payment of 40 bitcoins (~450,000 euros at the time) in return for a promise not to publish highly sensitive therapy session notes Vastaamo had exposed online.

Ransom_man announced on the dark web that he would start publishing 100 patient profiles every 24 hours. When Vastaamo declined to pay, ransom_man shifted to extorting individual patients. According to Finnish police, some 22,000 victims reported extortion attempts targeting them personally: targeted emails that threatened to publish their therapy notes online unless they paid a 500 euro ransom.

 

The FCC’s findings against AT&T, for example, show that AT&T sold customer location data directly or indirectly to at least 88 third-party entities. The FCC found Verizon sold access to customer location data (indirectly or directly) to 67 third-party entities. Location data for Sprint customers found its way to 86 third-party entities, and to 75 third-parties in the case of T-Mobile customers.

AT&T was fined more than $57 million, while Verizon received a $47 million penalty. Still, these fines represent a tiny fraction of each carrier’s annual revenues. For example, $47 million is less than one percent of Verizon’s total wireless service revenue in 2023, which was nearly $77 billion.

The fine amounts vary because they were calculated based in part on the number of days that the carriers continued sharing customer location data after being notified that doing so was illegal (the agency also considered the number of active third-party location data sharing agreements). The FCC notes that AT&T and Verizon each took more than 320 days from the publication of the Times story to wind down their data sharing agreements; T-Mobile took 275 days; Sprint kept sharing customer location data for 386 days.

 

The head of counterintelligence for a division of the Russian Federal Security Service (FSB) was sentenced last week to nine years in a penal colony for accepting a USD $1.7 million bribe to ignore the activities of a prolific Russian cybercrime group that hacked thousands of e-commerce websites.

In February 2022, Russian authorities arrested six men in the Perm region accused of selling stolen payment card data. They also seized multiple carding shops run by the gang, including Ferum Shop, Sky-Fraud, and Trump’s Dumps, a popular fraud store that invoked the 45th president’s likeness and promised to “make credit card fraud great again.”

 

Utah's Artificial Intelligence Policy Act

Massachusetts Attorney General Provides Artificial Intelligence Guidance on State Consumer Protection Laws

Connecticut Proposed Private Sector Artificial Intelligence Bill

California's Draft Automated Decision-making Technology Regulations

 

Under the UK's Product Security and Telecommunications Infrastructure (PSTI) Act, weak or easily guessable default passwords such as “admin” or “12345” are explicitly banned, and manufacturers are also required to publish contact details so users can report bugs. Products that fail to comply with the rules could face being recalled, and the companies responsible could face a maximum fine of £10 million ($12.53 million) or 4% of their global revenue, whichever is higher.

 

Malaysia, Singapore, and Ghana are among the first countries to pass laws that require cybersecurity firms — and in some cases, individual consultants — to obtain licenses to do business, but concerns remain. 

 

And Now For Something Completely Different …

Wanted: Bold minds, not lazy prompt writers.

An industry pundit delivered a scathing post against a leading IT consulting firm, arguing that instead of retaining the company's services for millions of dollars, clients should just use ChatGPT for free. The reasoning: the consultants will simply get their answers or advice from ChatGPT anyway, so skip the third party and go straight to ChatGPT.

"We realize having fantastic brains and results isn't necessarily as good as someone that is willing to have critical thinking and give their own perspectives on what AI and generative AI gives you back in terms of recommendations. We want people that have the emotional and self-awareness to go, 'hmm, this doesn't feel quite right, I'm brave enough to have a conversation with someone, to make sure there's a human in the loop.'"

 

At the time, Austin was planning on becoming a rival to Silicon Valley. However, after only a few years, tech companies began leaving Austin. High costs, a lack of support for start-ups, the climate, political differences, and numerous other factors led companies to begin fleeing the city. By 2023, the majority of tech companies had either scaled back their presence in the city, such as Meta, which relocated offices back to California, or left completely. Many moved to Houston, Tulsa, Nashville, Miami, or, perhaps most embarrassing, back to Silicon Valley. While Oracle managed to hold out in 2023, it finally announced its move from Austin to Nashville this week.

 

For right now, we’re very much still in the “congratulate baby for successfully getting most of its food into its mouth” phase of self-driving racers.