Robert Grupe's AppSecNewsBits 2024-02-03

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response 

In eight of security company TrueSec's most recent incident response engagements that involved Akira and Cisco's AnyConnect SSL VPN as the entry point, at least six of the devices were running versions vulnerable to CVE-2020-3259, which was patched in May 2020.

 

Scans from internet security data company Shadowserver indicate roughly 45,000 instances of the hugely popular CI/CD automation server are vulnerable to CVE-2024-23897, the critical flaw disclosed on January 24.
The revelation of the vast attack surface comes days after multiple exploits were made public on January 26 – themselves released just two days after the coordinated disclosure from Jenkins and Yaniv Nizry, the researcher at Sonar who first discovered the vulnerability.

 

The incident was discovered on November 23, nine days after the threat actor used credentials compromised in the October 2023 Okta hack to access Cloudflare’s internal wiki and bug database. The stolen login information, an access token and three service account credentials, were not rotated following the Okta incident, allowing the attackers to probe and perform reconnaissance of Cloudflare systems starting November 14. The attackers managed to access an AWS environment, as well as Atlassian Jira and Confluence, but network segmentation prevented them from accessing its Okta instance and the Cloudflare dashboard.
With access to the Atlassian suite, the threat actor started looking for information on the Cloudflare network, searching the wiki for “things like remote access, secret, client-secret, openconnect, cloudflared, and token”. The attackers viewed 120 code repositories and downloaded 76 of them to the Atlassian server, but did not exfiltrate them. The 76 source code repositories were almost all related to how backups work, how the global network is configured and managed, how identity works at Cloudflare, remote access, and Cloudflare’s use of Terraform and Kubernetes. A small number of the repositories contained encrypted secrets, which were rotated immediately even though they were strongly encrypted themselves.
According to Cloudflare, more than 5,000 individual production credentials were rotated following the incident, close to 5,000 systems were triaged, test and staging systems were physically segmented, and every machine within the Cloudflare global network was reimaged and rebooted. The equipment at the São Paulo data center, although not accessed, was sent back to the manufacturers for inspection and replaced.

 

Federal civilian agencies have until midnight Saturday (Feb 03) morning to sever all network connections to Ivanti VPN software, which is currently under mass exploitation by multiple threat groups. The US Cybersecurity and Infrastructure Security Agency mandated the move on Wednesday after disclosing three critical vulnerabilities in recent weeks.
Three weeks ago, Ivanti disclosed two critical vulnerabilities that it said threat actors were already actively exploiting. The attacks targeted the company’s Connect Secure and Policy Secure VPN products. The vulnerabilities had been under exploitation since early December. Ivanti didn’t have a patch available and instead advised customers to follow several steps to protect themselves against attacks. Among the steps was running an integrity checker the company released to detect any compromises.
Two weeks later, researchers said the zero-days were under mass exploitation in attacks that were backdooring customer networks around the globe. A day later, Ivanti failed to make good on an earlier pledge to begin rolling out a proper patch by January 24. The company didn’t start the process until Wednesday, two weeks after the deadline it set for itself.
Then on Wednesday, Ivanti disclosed two new critical vulnerabilities. German government officials said they had already seen successful exploits of the newest one. The officials also warned that exploits of the new vulnerabilities neutralized the mitigations Ivanti advised customers to implement.

 

RedHunt Labs discovered a Mercedes employee’s authentication token in a public GitHub repository during a routine internet scan in January. This token — an alternative to using a password for authenticating to GitHub — could grant anyone full access to Mercedes’s GitHub Enterprise Server, thus allowing the download of the company’s private source code repositories. The repositories include a large amount of intellectual property… connection strings, cloud access keys, blueprints, design documents, [single sign-on] passwords, API Keys, and other critical internal information.
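Leaks like this are usually catchable before code ever reaches a public repository. A minimal sketch of a token-pattern scan, assuming the common GitHub token prefixes (`ghp_` for classic personal access tokens, `github_pat_` for fine-grained ones); real scanners such as gitleaks or trufflehog cover far more cases:

```python
import re

# Common GitHub token prefixes: classic PAT (ghp_), OAuth (gho_), user-to-server
# (ghu_), server-to-server (ghs_), refresh (ghr_), and fine-grained PATs.
TOKEN_PATTERN = re.compile(
    r"\b(?:ghp|gho|ghu|ghs|ghr)_[A-Za-z0-9]{36,}\b"
    r"|\bgithub_pat_[A-Za-z0-9_]{22,}\b"
)

def find_tokens(text: str) -> list[str]:
    """Return any substrings that look like GitHub tokens."""
    return TOKEN_PATTERN.findall(text)

if __name__ == "__main__":
    sample = 'token = "ghp_' + "A" * 36 + '"'  # synthetic example token
    print(find_tokens(sample))
```

Run as a pre-commit hook, a check like this would have flagged the credential before it ever became public.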

 

ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users. Some examples contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal. Besides the credentials, the leaked conversation includes the name of the app the employee is troubleshooting and the store number where the problem occurred.
The results appeared Monday morning shortly after the user had used ChatGPT for an unrelated query. “I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” Whiteside wrote in an email. “They weren't there when I used ChatGPT just last night (I'm a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren't from me (and I don't think they're from the same user either).”
OpenAI officials say that the ChatGPT histories a user reported result from his ChatGPT account being compromised, but the user doubts his account was compromised. He said he used a nine-character password with upper- and lower-case letters and special characters. He said he didn’t use it anywhere other than for a Microsoft account. He said the chat histories belonging to other people appeared all at once on Monday morning during a brief break from using his account.

 

 

 

HACKING

FTX’s staff had already endured one of the worst days in the company’s short life. What had recently been one of the world’s top cryptocurrency exchanges, valued at $32 billion only 10 months earlier, had just declared bankruptcy. Then, on that Friday evening, exhausted FTX staffers began to see mysterious outflows of the company’s cryptocurrency, publicly captured on the Etherscan website that tracks the Ethereum blockchain, representing hundreds of millions of dollars worth of crypto being stolen in real time. The perpetrators stole the $400 million in cryptocurrencies on Nov. 11, 2022, after they SIM-swapped an AT&T customer by impersonating them at a retail store using a fake ID; the document refers to the victim in this case only by the name “Victim 1.” The SIM-swappers allegedly responsible for the theft are all U.S. residents. Of the stolen assets that can be traced through ChipMixer, significant amounts are combined with funds from Russia-linked criminal groups, including ransomware gangs and darknet markets, before being sent to exchanges.

 

Ars Technica was recently used to serve second-stage malware in a campaign that used a never-before-seen attack chain to cleverly cover its tracks. A benign image of a pizza was uploaded to a third-party website and was then linked with a URL pasted into the “about” page of a registered Ars user. Buried in that URL was a string of characters that appeared to be random—but were actually a payload. “This is a different and novel way we’re seeing abuse that can be pretty hard to detect,” Mandiant researcher Yash Gupta said. There were no consequences for people who may have viewed the image, either as displayed on the Ars page or on the website that hosted it; however, devices that were infected by the first stage automatically accessed the malicious string at the end of the URL. From there, they were infected with a second stage.

 

The FBI surreptitiously sent commands to hundreds of infected small office and home office routers to remove malware China state-sponsored hackers were using to wage attacks on critical infrastructure. The routers—mainly Cisco and Netgear devices that had reached their end of life—were infected with what’s known as KV Botnet malware. To prevent the devices from being reinfected, the takedown operators issued additional commands that would "interfere with the hackers’ control over the instrumentalities of their crimes (the Target Devices), including by preventing the hackers from easily re-infecting the Target Devices."

 

In a scenario that elicits strong memories of that nail-biting flight scene from Die Hard 2, researchers investigating electronic flight bags (EFBs) found the app used by Airbus pilots was vulnerable to remote data manipulation, given the right conditions.
The vulnerability was found in Flysmart+ Manager, one of many apps in the Flysmart+ suite used by Airbus pilots to synchronize data to the other Flysmart+ apps that provide data informing safe takeoffs and landings. Flysmart+ Manager was found to have disabled App Transport Security (ATS) by setting the NSAllowsArbitraryLoads property list key to "true." ATS is a key security mechanism that forces an iOS application to use HTTPS, preventing unencrypted communications between the app and its update server. An attacker could use this weakness to intercept and decrypt potentially sensitive information in transit. A feasible attack would have to involve the interception of data flowing to the app, and a number of very specific conditions would need to be met, which would make a real-world attack extremely difficult.
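This kind of misconfiguration is detectable statically from the app bundle. A sketch, assuming access to the app's Info.plist and using Python's standard plistlib (the sample plist here is synthetic, mirroring the finding described above):

```python
import plistlib

def ats_disabled(plist_bytes: bytes) -> bool:
    """Return True if an Info.plist opts out of App Transport Security
    via the NSAllowsArbitraryLoads key."""
    info = plistlib.loads(plist_bytes)
    ats = info.get("NSAppTransportSecurity", {})
    return bool(ats.get("NSAllowsArbitraryLoads", False))

# A minimal Info.plist fragment with ATS disabled, as in the reported finding.
sample = plistlib.dumps({"NSAppTransportSecurity": {"NSAllowsArbitraryLoads": True}})
print(ats_disabled(sample))  # True
```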

 

Q4 was rife with examples of how data assurances can fail, even when interacting with well-known 'brand established' ransomware groups. Threat actors cannot be trusted to prevent ongoing misuse/publication of stolen data, and … payments to them for these imaginary assurances have zero if not sub-zero value.

 

 

 

APPSEC, DEVSECOPS, DEV

Applicable public companies, known as "registrants," are now subject to cyber incident disclosure and cybersecurity readiness requirements for data stored in SaaS systems, along with the 3rd and 4th party apps connected to them. The new cybersecurity mandates make no distinction between data exposed in a breach that was stored on-premise, in the cloud, or in SaaS environments.
71% of organizations rated their SaaS cybersecurity maturity as mid to high,
yet 79% suffered a SaaS cybersecurity incident in the past 12 months.
SaaS-to-SaaS apps introduce many hidden risks. The breach of CircleCI, for example, meant countless enterprises with SaaS-to-SaaS connections to the industry-leading CI/CD tool were put at risk. The same holds true for organizations connected to Qlik Sense, Okta, LastPass, and similar SaaS tools that have recently suffered cyber incidents.
"Follow The Data" Is The New "Follow The Money"
SaaS breaches and incidents occur at a regular clip across public companies, and AppOmni has tracked a 25% increase in attacks from 2022 to 2023. IBM calculates that the cost of a data breach averaged an all-time high of $4.45 million in 2023.
The burden of manually evaluating SaaS security risk and posture can be alleviated with a SaaS security posture management (SSPM) tool.

 

Under the new rules, publicly traded companies will be required to report cyber incidents within four business days of determining that the incident is “material,” meaning it would potentially impact a shareholder’s investment decisions. While many existing government regulations and industry standards have required organizations to establish business continuity and incident response (IR) plans in the past, the new SEC rules put more pressure on security practitioners than ever before. As time is of the essence, a well-practiced IR program will be critical. It’s no longer about having a plan in place; it’s about how well it can be executed, which will require many organizations to depart from their current practices.
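The four-business-day clock described above can be sketched as a simple business-day counter. This is only an illustration of the counting rule, not legal guidance; it skips weekends but ignores US federal holidays:

```python
from datetime import date, timedelta

def disclosure_deadline(materiality_date: date, business_days: int = 4) -> date:
    """Count forward the given number of business days (Mon-Fri),
    skipping weekends. Federal holidays are ignored in this sketch."""
    d = materiality_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # 0-4 = Monday through Friday
            remaining -= 1
    return d

# Materiality determined on a Thursday -> disclosure due the following Wednesday.
print(disclosure_deadline(date(2024, 2, 1)))  # 2024-02-07
```

The weekend gap is why a Thursday or Friday materiality determination effectively buys an organization two extra calendar days, which a well-practiced IR program should not count on.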
According to the IBM Cost of the Data Breach Report 2023 (PDF), it’s not just about having an IR plan in place but regularly testing it, which can lower the cost of a breach by as much as $1.49 million on average. In turn, organizations must ensure they run regular training and IR simulation exercises and have strong collaboration within their organization.
An average tabletop exercise can cost an organization anywhere from $30,000 to $50,000. The overall spending is determined by the cost of training in preparation for the simulation, pre-tabletop exercise planning, incident scenario design, logistics and preparation, exercise delivery, and post-exercise analysis. That’s why it is not surprising that over a third of organizations say they space their IR tabletop exercises a year or two apart.
The New Paradigm: Automated IR Simulation, leveraging AI for continuous, universal engagement [rG: e.g. not just management, include dev teams]

 

AI models pose several risks to enterprises, starting with compliance risks — some industries, including health care and finance, have strict guidelines about how AI models can be used. Since AI models also require data, both for training and for real-time analysis and embeddings, there are also privacy and data loss risks. Finally, AI models might be inaccurate, biased, prone to hallucinations, or change in unpredictable ways over time.
To get a handle on all this, conduct a comprehensive survey of how AI is being used. Business leaders need to identify the processes that use AI and walk through them in sessions with the AI governance team. And when leaders don’t know what AI is being used, that’s when the bottom-up approach comes in: tracking all the endpoints in an organization to find the systems and users accessing AI applications. Most are cloud-based applications, and when people use them, IT can track every query sent to ChatGPT.
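The bottom-up endpoint-tracking approach could start as simply as filtering egress or proxy logs for known AI-service domains. A sketch, where the log format (`user domain path`) and the domain list are hypothetical and would need adapting to your proxy:

```python
# Flag which users/hosts are reaching known AI-service domains.
AI_DOMAINS = {"api.openai.com", "chat.openai.com"}  # extend with your own list

def flag_ai_traffic(log_lines):
    """Yield (user, domain) pairs for proxy log lines of the
    hypothetical form 'user domain path'."""
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_DOMAINS:
            yield parts[0], parts[1]

logs = [
    "alice api.openai.com /v1/chat/completions",
    "bob internal.example.com /wiki",
]
print(list(flag_ai_traffic(logs)))  # [('alice', 'api.openai.com')]
```

The output feeds the governance team a concrete list of who is using which AI service, which is the starting point for the risk-framework discussion that follows.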
Companies should also set up a framework for how AI models can be deployed, based on a company’s compliance environment and tolerance for risk. There are several dimensions of risks that are relevant for generative AI: How many people is it affecting? How big is the impact? AArete/EY/etc. work with companies to help them measure these risks, with some preliminary generative AI risk benchmarks available for some industries.
One of the positive aspects of Europe’s General Data Protection Regulation (GDPR) is that vendors are required to disclose when they use subprocessors. If a vendor develops new AI functionality in-house, one indication can be a change in their privacy policy. You have to be on top of it.
 

Regular assessments are one of the primary ways you can evaluate your organization's security posture and gain the visibility you need to understand where risks are.
Security Maturity and Your Testing Frequency

  • Immature or No Risk Strategy: Assessments are not conducted with any ongoing frequency, or are conducted only on an ad-hoc basis.

  • Emerging or Ad-Hoc Risk Strategy: Assessments are conducted with some frequency, typically quarterly or monthly.

  • Mature or Set Strategy: Assessments are conducted on an ongoing basis, usually monthly.

  • Advanced Strategy: Regular assessments are ingrained in the overall risk program and take place on a monthly or weekly basis depending on the type of test.

Suggested Testing Frequency by Common Framework

  • NIST CSF: The National Institute of Standards and Technology (NIST) guidelines vary from quarterly to monthly scans, based on the specific guidelines of the governing framework.

  • PCI DSS: The Payment Card Industry Data Security Standard (PCI DSS) mandates quarterly scans.

  • HIPAA: The Health Insurance Portability and Accountability Act (HIPAA) does not require specific scanning intervals but emphasizes the importance of a well-defined assessment strategy.

Types of Regular Assessments

  • Vulnerability Scans

  • Penetration Tests

  • Breach and Ransomware Simulations

  • Security Reputation Scans

  • Business Impact Analyses

  • Security Posture Assessment

The Top 6 Vulnerabilities

  • Lack of Policies and Procedures

  • Inadequate Testing Practices

  • Training and Cyber Awareness

  • Framework Adoption and Implementation (e.g., NIST Cybersecurity Framework 2.0)

  • Risk Appetite and Understanding

 

Predictive security

  • Input validation and sanitization:

  • Adversarial testing for NLP models:

  • Continuous monitoring and anomaly detection:
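A minimal sketch of the "input validation and sanitization" item above, assuming text prompts bound for an NLP model; the length cap and patterns are illustrative only, and real deployments would layer this with policy checks and output filtering:

```python
import re

MAX_LEN = 4000  # hypothetical limit for a single prompt

def sanitize_prompt(text: str) -> str:
    """A minimal input-sanitization pass for text headed to an NLP model:
    strip control characters, collapse whitespace, and cap the length."""
    text = re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", "", text)  # drop control chars
    text = re.sub(r"\s+", " ", text).strip()              # normalize whitespace
    return text[:MAX_LEN]

print(sanitize_prompt("Hello\x00   world\n"))  # 'Hello world'
```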

Proactive security

  • Interpreter and Simulator AIs

  • Red Team AI

  • Blue Team AI

 

Kubernetes commands 92% of the container orchestration platform market
The Cloud Native Computing Foundation’s recent Kubernetes report found that 28% of organizations have more than 90% of workloads running in insecure Kubernetes configurations. The majority of workloads, more than 71%, are running with root access, increasing the probability of system compromises and sensitive data being exposed. Many DevOps organizations overlook setting readOnlyRootFilesystem to true, which leaves their containers vulnerable to attack and unauthorized executables being written.
A good place to start is with NIST’s Application Container Security Guide (NIST SP 800-190).

  1. Get container-specific security tools in place first.

  2. Enforce strict access controls.

  3. Regularly update container images.

  4. Automate security in CI/CD pipelines.

  5. Conduct thorough vulnerability scanning.

  6. Manage secrets effectively.

  7. Isolate sensitive workloads.

  8. Use immutable infrastructure.

  9. Implement network policies and segmentation.

  10. Implement advanced container network security.
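The root-access and readOnlyRootFilesystem findings cited above map directly onto a pod's securityContext. A minimal hardening sketch using standard Kubernetes fields (the image name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  containers:
    - name: app
      image: example/app:1.0   # placeholder image
      securityContext:
        runAsNonRoot: true            # refuse to run as UID 0
        readOnlyRootFilesystem: true  # block unauthorized executables being written
        allowPrivilegeEscalation: false
```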

 

[rG: Security corollary would be TestSecOps]

 

 

 

AI

Researchers found that industry standard safety training techniques did not curb bad behaviour from the language models, which were trained to be secretly malicious, and in one case even had worse results: with the AI learning to recognise what triggers the safety software was looking for, and 'hide' its behaviour. Researchers had programmed the various large language models (LLMs) to act in what they termed malicious ways, and the point of the study was to see if this behaviour could be removed through the safety techniques. The researchers claim the results show that "once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety."
"I think our results indicate that we don't currently have a good defense against deception in AI systems—either via model poisoning or emergent deception—other than hoping it won't happen. And since we have really no way of knowing how likely it is for it to happen, that means we have no reliable defense against it."

 

Nightshade is a new, free downloadable tool created by computer science researchers at the University of Chicago, designed to be used by artists to disrupt AI models that scrape and train on their artworks without consent.
Nightshade seeks to “poison” generative AI image models by altering artworks posted to the web, or “shading” them on a pixel level, so that they appear to a machine learning (ML) algorithm to contain entirely different content — a purse instead of a cow, let’s say. Trained on a few “shaded” images scraped from the web, an AI algorithm can begin to generate erroneous imagery from what a user prompts or asks.
Meanwhile, the team’s earlier tool — Glaze, which works to prevent AI models from learning an artist’s signature “style” by subtly altering pixels so they appear to be something else to machine learning algorithms — has received 2.2 million downloads since it was released in April 2023.
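For illustration only: Nightshade's actual poisoning method is far more sophisticated and model-aware, but the underlying idea of a small, bounded per-pixel change that is hard for a human to see yet measurable to an algorithm can be sketched as:

```python
def perturb(pixels, delta, bound=4):
    """Add a perturbation to each pixel value, clamped to +/-bound
    and kept within the valid 0-255 range. Illustrative only."""
    out = []
    for p, d in zip(pixels, delta):
        d = max(-bound, min(bound, d))        # keep the change imperceptible
        out.append(max(0, min(255, p + d)))   # stay in valid pixel range
    return out

print(perturb([0, 128, 255], [10, -3, 10]))  # [4, 125, 255]
```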

 

A Microsoft AI engineering leader says he discovered vulnerabilities in OpenAI’s DALL-E 3 image generator in early December allowing users to bypass safety guardrails to create violent and explicit images, and that Microsoft impeded his previous attempt to bring public attention to the issue. After attempting to report these issues within Microsoft and to OpenAI, Shane Jones, Microsoft principal software engineering lead, posted publicly on LinkedIn urging OpenAI’s non-profit board to withdraw DALL-E 3 from the market.
He informed his Microsoft leadership team of the post, according to the letter, and was quickly contacted by his manager, who said that Microsoft’s legal department was demanding he delete the post immediately and would follow up with an explanation or justification. He agreed to delete the post on that basis but never heard from Microsoft legal.
“Over the following month, I repeatedly requested an explanation for why I was told to delete my letter,” he writes. “I also offered to share information that could assist with fixing the specific vulnerability I had discovered and provide ideas for making AI image generation technology safer. Microsoft’s legal department has still not responded or communicated directly with me.”
“Artificial intelligence is advancing at an unprecedented pace. I understand it will take time for legislation to be enacted to ensure AI public safety. At the same time, we need to hold companies accountable for the safety of their products and their responsibility to disclose known risks to the public. Concerned employees, like myself, should not be intimidated into staying silent.”
His letter calls on the government to create a system for reporting and tracking AI risks and issues, with assurances to employees of companies developing AI that they can use the system without fear of retaliation.

 

Microsoft, Alphabet and Advanced Micro Devices delivered quarterly results that failed to impress investors who had sent their stocks soaring. Alphabet also said its spending on data centers to support its AI plans would jump this year, highlighting the costs of its fierce competition against AI rival Microsoft. While Google Cloud revenue growth slightly topped Wall Street targets, boosted by interest in AI, Microsoft's Azure grew faster. Optimism about AI pushed Microsoft's stock market value above $3 trillion this month, eclipsing Apple.

 

 

VENDORS & PLATFORMS

Threat Score prioritizes and understands threats better than existing security information and event management (SIEM) and endpoint detection and response (EDR) services due to the volume and variety of data it draws on. Threat Score assesses each alert and assigns a value ranging from 0 to 10, with 10 indicating a greater likelihood that the activity poses a real threat to the organization.

 

 

LEGAL

The lawsuit argues that New York customers lost millions of dollars — in some cases their entire life savings — to scammers and hackers because of Citi’s weak security and anti-fraud measures. The NY AG alleges that Citi gets customers to sign “coerced” affidavits that allow the bank to treat claims of fraud under narrow commercial laws on wire transfers instead of the more substantial protections of the Electronic Fund Transfer Act, a landmark consumer protection law. “Citi then summarily rejects claims for reimbursement and instead blames consumers.”
For example, in October 2021 a New Yorker had $40,000 stolen from her retirement savings account after being tricked by a text message purporting to be from Citi but was actually from a scammer who changed her password and transferred money. For weeks, the customer continued to contact the bank and submit affidavits, but in the end, she was told that her claim for fraud was denied.
In another example, a customer lost $35,000 to a scammer who changed her online passwords and tried to transfer the money. While Citi initially tried to verify the wire transfer by calling the customer, who was working and did not see the call, the bank approved a second attempted wire transfer without getting the customer on the phone. She lost nearly everything she had saved, and Citi refused to reimburse her.

 

The FTC's complaint alleges that the company "failed to monitor attempts by hackers to breach its networks, segment data to prevent hackers from easily accessing its networks and databases, ensure data that is no longer needed is deleted, adequately implement multifactor authentication, and test, review and assess its security controls" and "allowed employees to use default, weak, or identical passwords for their accounts."
​The FTC says that Blackbaud paid the ransomware gang that stole the personal data belonging to millions of people from its systems a ransom of 24 Bitcoin (worth around $250,000 at the time) after the attackers threatened to leak the stolen data online. The company never verified, however, that the hacker actually deleted the stolen data, according to the complaint.

 

Aditya Verma admitted he told friends in July 2022: "On my way to blow up the plane. I'm a member of the Taliban." But he said he had made the joke in a private Snapchat group and never intended to "cause public distress". The message he sent to friends, before boarding the plane, went on to be picked up by UK security services. They then flagged it to Spanish authorities while the easyJet plane was still in the air. Two Spanish F-18 fighter jets were sent to flank the aircraft. One followed the plane until it landed at Menorca, where the plane was searched. Mr Verma, who was 18 at the time, was arrested and held in a Spanish police cell for two days. He was later released on bail. If he had been found guilty, the university student faced a fine of up to €22,500 (£19,300) and a further €95,000 in expenses to cover the cost of the jets being scrambled.
A key question in the case was how the message got out, considering Snapchat is an encrypted app. The message was captured by UK security mechanisms while the plane was flying over French airspace. It was sent in a strictly private environment, through a private group to which only the accused and the friends he flew with had access, so the accused could not even remotely assume that the joke he played on his friends could be intercepted or detected by the British services, or by third parties. It was not immediately clear how UK authorities were alerted to the message. A spokesperson for Snapchat said the social media platform would not "comment on what's happened in this individual case".

 

The government is seeking to update the Investigatory Powers Act (IPA) 2016. Under the proposed amendments to existing laws, if the UK Home Office declined an update, it then could not be released in any other country, and the public would not be informed.
A government spokesperson said: "We have always been clear that we support technological innovation and private and secure communications technologies, including end-to-end encryption, but this cannot come at a cost to public safety."

 

 

And Now For Something Completely Different …

[rG: If you like this, be on the lookout for the continuation part 3 - hopefully coming soon.]

 

Everyday environments, such as industrial sites and automobiles, are abundant in sources like heat, vibration, light, and electromagnetic waves. Harnessing these energies presents a viable alternative to traditional power sources, especially in areas where battery replacement is impractical. A polymer-type piezoelectric device was also attached to the cantilever, generating additional power through tensile and compressive deformation as the cantilever vibrates. This hybrid system demonstrated its potential by successfully powering a commercial IoT sensor (a 3-volt, 20-milliwatt GPS positioning sensor), opening the door to continuous operation without relying on battery power.

 

When you have a bunch of 230 kV transmission lines running over your property, why not use them for some scientific experiments? This is where the [Double M Innovations] YouTube channel comes into play, including a recent video where the idea of harvesting electricity from HV transmission lines using regular fences is put to an initial test. Roughly 36.2 joules would have been collected, giving some idea of the power one could harvest from a few kilometers of fencing wire underneath such HV lines. Whether power inductively coupled onto fence wire can legally be stored and used is probably something best discussed with your local energy company.