Robert Grupe's AppSecNewsBits 2024-02-17
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
US military notifies 20,000 of data breach after cloud email leak
The cloud email server, hosted on Microsoft’s cloud for government customers, was accessible from the internet without a password, likely due to a misconfiguration.
TechCrunch reported the spill to SOCOM on February 19, 2023. The cloud email server was secured on February 20, after TechCrunch, having heard nothing back, escalated the incident to senior U.S. government officials.
It's not clear why the DOD took a year to investigate the incident or to notify those affected.
My WinStar, an app through which guests can access self-service options during their hotel stay, their rewards points and loyalty benefits, and casino winnings, was developed by a Nevada software startup called Dexiga.
The startup left one of its logging databases on the internet without a password, allowing anyone with knowledge of its public IP address to access the WinStar customer data stored within using only their web browser.
None of the data was encrypted, though some sensitive data — such as a person’s date of birth — was redacted and replaced with asterisks. A review of the exposed data by TechCrunch found an internal user account and password associated with Dexiga founder Rajini Jayaseelan.
Dexiga said the incident resulted from a log migration in January but did not provide a specific date for when the database became exposed. The exposed database contained rolling daily logs.
U.S. Internet Corp. has a business unit called Securence, which specializes in providing filtered, secure email services to businesses, educational institutions and government agencies worldwide. But until it was notified last week, U.S. Internet was publishing more than a decade’s worth of its internal email — and that of thousands of Securence clients — in plain text out on the Internet and just a click away for anyone with a Web browser. Researchers unearthed a public link to a U.S. Internet email server listing more than 6,500 domain names, each with its own clickable link.
So far, an issue with the Ansible playbook that controls the Nginx configuration for the IMAP servers is the suspected cause.
On December 26, 2023, the organization confirmed it suffered a cyberattack after patients started receiving extortion emails informing them that their sensitive personal information had been stolen. Unless Integris Health met the attacker's demands, the stolen data would be sold to other cybercriminals on January 5, 2024.
The emails the patients received from the threat actor contained accurate information and linked to a website in the Tor network hosting the stolen details, but access was not free. Visitors could pay $50 and trust the attacker's word on removing the details, or pay $3 to view information belonging to any other impacted individual.
The threat actor said that they are selling on a dark web marketplace data for 2.3 million Integris patients (based on the number of social security numbers in the database).
Dutch health insurers are reportedly forcing breast cancer patients to submit photos of their breasts prior to reconstructive surgery despite a government ban on precisely that. That sounds pretty bad but it gets worse: These insurers keep losing their copies of these highly intimate pictures, one way or another. Some insurers don't use secure websites and/or other means of electronic communications to transfer these very sensitive photos.
Cancer patients' photos have been stolen by ransomware crews in the past, and then used to extort victims. Some of these images ended up published online in data dumps, and now patients are suing the healthcare provider for allowing the "preventable" and "seriously damaging" leak.
After a media outcry about the situation in 2021 the Dutch Health Minister required that these photos be taken in a hospital, with the rules coming into effect on January 1, 2023. Some hospitals have since refused to do this, citing the sensitive nature of the images and potential privacy nightmares.
Meanwhile, health insurance orgs aren't necessarily following this rule, and are still asking patients directly for photos. One patient was asked to send the nudes via email before the insurer would reimburse a second procedure after a botched first operation.
Security debt arises from unresolved flaws in both code developed in-house (63% of applications affected) and third-party libraries (70% affected). However, flaws in third-party code take 50% longer to address.
71% of organizations have security debt, with 46% having critical, high-severity flaws persisting more than one year.
It takes 9 months on average to fix half of all flaws, 50% longer for third-party flaws.
42% of applications have flaws persisting over one year that qualify as security debt.
Among the most surprising findings in the report is the revelation that for the most part, developers are not prioritizing critical flaws when they fix bugs. The most severe stuff is not being worked on the fastest or first.
Recommendations:
Prioritize remediation of critical, high-severity flaws over 1 year old, as these represent 3% of all flaws but are the greatest risk
Integrate scanning and testing across the entire software development life cycle
Move toward continuous remediation to fix flaws faster
Improve developer security competency through hands-on education
Develop strategies to secure the open-source software supply chain
By design, all the personal information (email and IP address, phone number, gender, bcrypt-hashed password, 2FA secret and backup code, and the code that can be immediately used to reset the password) is returned to every single person who uses the pods feature.
Strong hashing algorithms like bcrypt are weakened when poor password choices are allowed and strong ones (such as passwords of more than 20 characters) are not required. Exposed password reset tokens meant that anyone could immediately take over anyone else's account. And there's no 2FA challenge on password reset either.
To be as charitable as possible to Spoutible, you could argue that this is largely just one vulnerability: the inadvertent exposure of internal data via a public API. This is data that has a legitimate purpose in their system, and it may simply be a case of a framework automatically picking up all entity attributes from the data tier and returning them via the UI. But it's the circumstances that allowed this to happen, and then exacerbated the problem when it did, that concern me more; clearly there was no security review around this feature, because it was so easily discoverable (at least there certainly wasn't review whilst it was live), nor has any thought been put into notifying people of potential account takeovers or providing them with the means to invalidate other sessions. Then there are peripheral issues such as very weak password rules that make cracking bcrypt so much easier, weak 2FA backup codes, and pointless bcrypting of them. Not major issues in and of themselves, but they amplify the problems the exposed data presents.
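A minimal sketch (hypothetical, not Spoutible's actual code) of the over-exposure pattern described above: a handler that serializes the full user entity leaks every stored attribute, whereas an explicit response model returns only the fields meant to be public.

```python
# Sketch of the "return the whole entity" anti-pattern vs. an explicit allow-list.
from dataclasses import dataclass, asdict

@dataclass
class User:
    handle: str
    email: str
    phone: str
    password_hash: str
    totp_secret: str
    reset_token: str

def user_endpoint_unsafe(user: User) -> dict:
    # Framework-style automatic serialization: every column goes out the door.
    return asdict(user)

def user_endpoint_safe(user: User) -> dict:
    # Explicit response model: only fields intended to be public.
    return {"handle": user.handle}

u = User("alice", "a@example.com", "555-0100", "$2b$12$...", "JBSWY3DP...", "tok_123")
print(user_endpoint_unsafe(u))  # exposes hashes, 2FA secrets, reset tokens
print(user_endpoint_safe(u))    # {'handle': 'alice'}
```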
Clearly this required disclosure before publication; unfortunately, Spoutible does not publish a security.txt file.
HACKING
A Chinese-speaking cybercrime group dubbed GoldFactory started distributing trojanized smartphone apps in June 2023; however, the latest GoldPickaxe version has been around since October. GoldPickaxe and GoldPickaxe.iOS target Android and iOS respectively, tricking users into performing biometric verification checks that are ultimately used to bypass the same checks employed by legitimate banking apps in Vietnam and Thailand. GoldPickaxe.iOS is the first iOS Trojan that combines collecting victims' biometric data and ID documents, intercepting SMS, and proxying traffic through the victims' devices.
In all cases, the initial contact with victims was made by the attackers impersonating government authorities on the LINE messaging app, one of the region's most popular. For example, in some cases back in November, criminals impersonated officials from the Thai Ministry of Finance, and offered pension benefits to victims' elderly relatives.
From there, victims were socially engineered into downloading GoldPickaxe through various means.
Once the biometrics scans were captured, attackers then used these scans, along with deepfake software, to generate models of the victim's face.
Attackers would download the target banking app onto their own devices and use the deepfake models, along with the stolen identity documents and intercepted SMS messages, to remotely break into victims' banks.
It is also easy to find reports of burglars using Wi-Fi jamming technology over 2021, 2022, and 2023 – with reports becoming more frequent over time.
Jammers simply confuse wireless devices rather than blocking signals; they usually work by overloading wireless traffic so that real traffic cannot get through. With prices ranging from $40 to $1,000, jammers are not legal to use in the U.S., but they are very easy to buy online.
There are a few suggestions given to those now wondering about the efficacy of their home security systems with wireless components. Firstly, physically connect some of the devices which allow for a wired connection and local storage of footage. Secondly, utilize smart home technology that makes it appear that someone is at home. Your device may also have the ability to send alerts when the signal / connection is interrupted, and playing with those settings might be worthwhile.
The North Korean cyberespionage group known as Kimsuky has used the models to research foreign think tanks that study the country, and to generate content likely to be used in spear-phishing hacking campaigns.
Iran’s Revolutionary Guard has used large-language models to assist in social engineering, in troubleshooting software errors, and even in studying how intruders might evade detection in a compromised network. That includes generating phishing emails “including one pretending to come from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism.” The AI helps accelerate and boost the email production.
The Russian GRU military intelligence unit known as Fancy Bear has used the models to research satellite and radar technologies that may relate to the war in Ukraine.
The Chinese cyberespionage group known as Aquatic Panda — which targets a broad range of industries, higher education and governments from France to Malaysia — has interacted with the models “in ways that suggest a limited exploration of how LLMs can augment their technical operations.”
The Chinese group Maverick Panda, which has targeted U.S. defense contractors among other sectors for more than a decade, had interactions with large-language models suggesting it was evaluating their effectiveness as a source of information “on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs.”
Researchers showed that LLM-powered agents – LLMs provisioned with tools for accessing APIs, automated web browsing, and feedback-based planning – can wander the web on their own and break into buggy web apps without oversight. The OpenAI Assistants API was used to maintain context, do the function calling, and handle many of the other things, like document retrieval, that are really important for high performance; LangChain was used to wrap it all up, and the Playwright web browser testing framework was used to actually interact with websites. These agents can perform complex SQL union attacks, which involve a multi-step process (38 actions) of extracting a database schema, extracting information from the database based on this schema, and performing the final hack.
OpenAI's GPT-4 had an overall success rate of 73.3 percent with five passes and 42.7 percent with one pass. The second place contender, OpenAI's GPT-3.5, eked out a success rate of only 6.7 percent with five passes and 2.7 percent with one pass. Every open source model failed, and GPT-3.5 is only marginally better than the open source models.
To estimate the cost of GPT-4, we performed five runs using the most capable agent (document reading and detailed prompt) and measured the total cost of the input and output tokens. Across these 5 runs, the average cost was $4.189. With an overall success rate of 42.7 percent, this would total $9.81 per website.
Assuming that a human security analyst paid $100,000 annually, or $50 an hour, would take about 20 minutes to check a website manually, the researchers say a live pen tester would cost about $80 or eight times the cost of an LLM agent. Kang said that while these numbers are highly speculative, he expects LLMs will be incorporated into penetration testing regimes in the coming years.
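As a quick back-of-the-envelope check on those figures (all inputs taken from the study as reported above; the per-site number is simply the average run cost divided by the success rate):

```python
# Reconstructing the reported cost arithmetic from the figures above.
avg_cost_per_run = 4.189     # average API cost per attempt, USD (reported)
success_rate = 0.427         # overall GPT-4 success rate (reported)

cost_per_successful_hack = avg_cost_per_run / success_rate
print(round(cost_per_successful_hack, 2))   # ~9.81 USD per website

human_cost_estimate = 80.0   # researchers' estimate for a human pen tester per site
print(round(human_cost_estimate / cost_per_successful_hack, 1))  # roughly 8x the agent
```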
The DarkGate PDF malware campaign, for example, relies on ad tools. Dating back to 2018, DarkGate provides backdoor access to victims' computers for the purpose of data theft and ransomware. The campaign involves sending email messages to victims with malicious PDF attachments. Those duped into opening one see a social engineering message – often in the form of a Microsoft OneDrive error message that prompts the victim to click a link to download the document.
The report explains that this often works because the attackers know that office workers rely on cloud-based applications with user interfaces that often change. This makes it more difficult to spot fake interface elements or bogus error messages. Clicking on the fake OneDrive error message does not immediately download the malware payload. Rather, it routes the victim's click – containing identifiers and the domain hosting the file – through an advertising network and then it fetches the malicious URL, which is not evident in the PDF.
Rhysida ransomware uses LibTomCrypt's ChaCha20-based cryptographically secure pseudo-random number generator (CSPRNG) to create encryption keys for each file. The random number output by the CSPRNG is based on the ransomware's time of execution – a method the researchers realized limits the possible combinations for each encryption key. Specifically, the malware uses the current time of execution as a 32-bit seed for the generator. That means the keys can be derived from the time of execution and used to decrypt and recover scrambled files.
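A minimal sketch (illustrative only, not Rhysida's actual implementation, and using Python's built-in generator as a stand-in for the ChaCha20-based CSPRNG) of why a time-of-execution seed shrinks the keyspace enough for recovery:

```python
# If the key depends only on a 32-bit execution timestamp, a defender who knows
# the approximate infection time only has to try a small window of seeds.
import random
import time

def derive_key(seed: int, nbytes: int = 32) -> bytes:
    # Stand-in for the time-seeded CSPRNG: same seed -> same "random" key.
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(nbytes))

encryption_time = int(time.time())        # malware side: seed = time of execution
key = derive_key(encryption_time)

# Recovery side: brute-force a plausible window around the known infection time.
window = range(encryption_time - 3600, encryption_time + 3600)
candidate_keys = (derive_key(t) for t in window)   # only a few thousand candidates to test
```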
I saw the Cf-Cache-Status: HIT header. This was pretty interesting to me, as this was not a static file. I checked out the URL, and saw that the path did not have a static extension as expected: https://chat.openai.com/share/CHAT-UUID.
This meant that there was likely a cache rule that did not rely on the extension of the file, but on its location in the URL’s path.
I checked https://chat.openai.com/share/random-path-that-does-not-exist and, as expected, it was also cached. It quickly became evident that the cache rule looked something like /share/*, which means that pretty much anything under the /share/ path gets cached.
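A quick way to confirm that behavior (a hypothetical reproduction of the probe described above, not the researcher's actual script) is to request a nonexistent /share/ path twice and read Cloudflare's cache status header:

```python
# Check whether an arbitrary path under /share/ is served from Cloudflare's cache.
import requests

url = "https://chat.openai.com/share/random-path-that-does-not-exist"
first = requests.get(url)
second = requests.get(url)
print(first.headers.get("Cf-Cache-Status"), second.headers.get("Cf-Cache-Status"))
# A "HIT" on the repeat request indicates the /share/* rule is caching this path.
```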
When the victim goes to https://chat.openai.com/share/%2F..%2Fapi/auth/session?cachebuster=123, their auth token will be cached. When the attacker later visits https://chat.openai.com/share/%2F..%2Fapi/auth/session?cachebuster=123, they will see the victim's cached auth token. This is game over: once the attacker has the auth token, they can take over the account and view chats, billing information, and more.
I was able to use a URL encoded path traversal to cache sensitive API endpoints, thanks to a path normalization inconsistency between the CDN and web server. Surprisingly, this was probably my quickest find in bug bounty, as well as one of my more interesting ones, and my biggest bounty thus far of $6500.
APPSEC
The profile of CISOs has been growing since the early 2000s, set against a non-stop carousel of compliance mandates, data breaches, and emerging cybersecurity threats. While data breaches may have forced businesses to pay attention to security, it was compliance mandates that funded it. From HIPAA and PCI DSS to GDPR, SOC 2, and more, compliance has been a double-edged sword for CISOs.
At the same time, the visibility and importance of digital security and compliance at the board level has forced CIOs, typically the main voice of all things digital, to get increasingly involved in understanding all things cybersecurity. This serves to further blur the roles.
As companies continue to embrace the cloud, software-as-a-service (SaaS), and remote work, the million-dollar question is: how will things shake out? How these roles intersect and come together — or if they even should — depends on many factors, such as company size, industry, existing org charts, culture, existing IT setup, and future digital transformation plans, to name a few. Some security leaders propose breaking it into two distinct functions: a business-oriented executive focused on risk management and compliance, and a more technical executive focused on threat prevention, detection, and response.
According to a recent study, only about half a percent of the world's top one million websites publish a security.txt file. The lack of this simple file leads to multiple emails and phone calls to the organization, delaying the notification process and the organization's awareness of the critical need to mitigate its ransomware risk.
For those that don’t already know, the security.txt is a proposed Internet standard, RFC 9116, which concisely advertises an entity’s vulnerability disclosure process. Like robots.txt, this machine-readable file resides on a public-facing webserver, either in the root or “well-known” directory, where security professionals and researchers can quickly identify the entity’s preferences for reporting vulnerabilities. Each domain and subdomain within an entity’s network should have its own security.txt file.
Logic flaws and business rule bypasses: Automated scanners cannot detect these issues without a comprehensive understanding of the application's intended business logic.
Incomplete coverage and inaccurate risk assessment: Manual security testing considers the specific context of the application, its data sensitivity, and its business impact, providing a more nuanced assessment of vulnerabilities.
Detection of advanced attack techniques: Manual security testers, with their creativity and deep understanding of application security, can simulate real-world attack scenarios, mimicking the techniques and methodologies employed by actual attackers.
Upon completion of the 12-course certification program, participants will receive an IBM Digital Skills Badge, which shows proficiency to potential employers. Participants are encouraged to further develop their skills by registering as ISC2 Candidates and taking ISC2’s Certified in Cybersecurity test.
Microsoft and OpenAI also work with MITRE to integrate LLM-themed tactics, techniques, and procedures (TTPs) into the MITRE ATT&CK framework or the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) knowledge base.
VENDORS & PLATFORMS
25% of organizations surveyed in the United Kingdom have already moved half or more of their cloud-based workloads back to on-premises infrastructures. Security issues and high project expectations were reported as the top motivators (33%) for relocating some cloud-based workloads back to on-premises infrastructures such as enterprise data centers, colocation providers, and managed service providers (MSPs).
Another significant driver was the failure to meet internal expectations, at 24%.
The most common motivator for repatriation I’ve been seeing is cost. In the survey, more than 43% of IT leaders found that moving applications and data from on-premises to the cloud was more expensive than expected.
None of this should be surprising. The cloud had no way of delivering on the hype of 2010 to 2015 that gushed about lower costs, better agility, and better innovation. Well, two out of three is not bad, right?
The cloud is a good fit for modern applications that leverage a group of services, such as serverless, containers, or clustering. However, that doesn’t describe most enterprise applications.
If someone uploads a .JPG to your online service, you want to be sure it's a JPEG image and not some script masquerading as one, which could later bite you in the ass. Enter Magika, which uses a trained model to rapidly identify file types from file data, and it's an approach the Big G thinks works well enough to use in production. Magika is, we're told, used by Gmail, Google Drive, Chrome's Safe Browsing, and VirusTotal to properly identify and route data for further processing.
Google claims Magika is 50 percent more accurate at identifying file types than the biz's previous system of handcrafted rules, takes milliseconds to identify a file type, and is said to have at least 99 percent accuracy in tests. It isn't perfect, however, and fails to classify file types about three percent of the time. It's licensed under Apache 2.0, the code is here, and its model weighs in at 1MB.
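A short usage sketch of Magika's published Python bindings; the attribute names below follow the project's README at the time of writing and should be treated as assumptions rather than a stable API:

```python
# Identify a file's content type from raw bytes with Magika's deep-learning model.
from magika import Magika

m = Magika()
result = m.identify_bytes(b"\x89PNG\r\n\x1a\n...")   # raw bytes of an uploaded file
print(result.output.ct_label)                        # e.g. "png", regardless of extension
```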
Nginx is now used on roughly one-third of the world's web servers, ahead of Apache. Nginx Inc. was acquired by Seattle-based networking firm F5 in 2019.
Maxim Dounin, a core developer of Nginx, currently the world's most popular web server, has quit the project, stating that he no longer sees it as "a free and open source project… for the public good." His fork, freenginx, is "going to be run by developers."
Dounin writes in his announcement that "new non-technical management" at F5 "recently decided that they know better how to run open source projects. In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers' position." While it was "quite understandable," given their ownership, Dounin wrote that it means he was "no longer able to control which changes are made in nginx," hence his departure and fork.
Comfort, headache, and eye strain are among the top reasons
LEGAL
Staff at Air Canada told customer Jake Moffatt that bereavement fare rates can't be claimed back after flights have already been purchased, a policy at odds with what the airline's support chatbot had told him. It's understood the virtual assistant was automated, and not a person sat at a keyboard miles away.
Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.
The tribunal found that Air Canada didn't take "reasonable care to ensure its chatbot was accurate," and that customers like Moffatt had no way of knowing why information on Air Canada's webpage should be any more accurate than its chatbot. Air Canada was ordered to pay Moffatt a total of CA$812.02, including CA$650.88 in damages.
Along with requiring that attacks are reported to the FCC within seven days of a telco discovering them, the same deadline now exists to report any data leaks to the FBI and US Secret Service as well. As the FCC planned, the new rule also eliminates the mandatory seven-day waiting period for reporting break-ins to consumers. The FCC now "requires carriers to notify customers of breaches of covered data without unreasonable delay … and in no case more than 30 days following reasonable determination of a breach."
A Ukrainian cybercrime kingpin who ran some of the most pervasive malware operations faces 40 years in prison after spending nearly a decade on the FBI's Cyber Most Wanted List. Vyacheslav Igorevich Penchukov, 37, pleaded guilty this week in the US to two charges related to his leadership role in both the Zeus and IcedID malware operations, which netted millions of dollars in the process.
Zeus' primary goals were to recruit machines into its botnet and to act as a banking trojan, stealing various information used for financial fraud, such as bank account information, passwords, and PINs. The FBI et al dismantled Zeus in 2014 after previously claiming that one of its variants, Gameover Zeus, had infected up to 1 million PCs globally, causing in excess of $100 million in losses.
IcedID was first spotted in 2017 and continues to be disseminated by various operations today, including Emotet, Raspberry Robin, and Bumblebee.
This sum won't hurt the corporation, one of the largest clinical medical lab networks in the US, at all. In all, Quest is being charged slightly less than two days of its $994 million annual profit in 2023 – hardly a serious disincentive.
During inspections, authorities dug through Quest's compactors and dumpsters and said they found hundreds of containers of chemicals, as well as bleach, reagents, batteries, electronic waste; medical waste such as used specimen containers for blood and urine; hazardous waste such as used batteries, solvents, and flammable liquids; and unredacted medical information.
The National Crime Agency (NCA) issued a call on Thursday for parents and teachers to take a proactive role in educating young people about the dangers of engaging in cybercrime. It previously pinpointed 12- to 15-year-old boys as the primary target of education efforts, and noted that the average age of suspects in cybercrime investigations was 17. We educate children in schools about sharing indecent images when they're underage; we should be educating them about the dark web as well. It's a really complicated problem with no single tech solution.
One in five (20 percent) of children in the UK aged between ten and 16 have demonstrated behavior that would violate the Computer Misuse Act 1990 (CMA). The figure is slightly higher for those who are active gamers, it added, with one in four exhibiting illegal behaviors.
For gamers, even buying a skin for their in-game character using their parents' saved credit card details without consent would be a violation of the CMA. Using off-the-shelf tools to perform DDoS attacks, for example, or accessing a protected server are other common examples of these low-level offenses. Being found guilty under the CMA can have serious consequences for young offenders that could impact their employability in later life by having a criminal record or being expelled from school, or both.