Robert Grupe's AppSecNewsBits 2024-04-06
Epic Fails: AI PrivEsc and Cross-Tenant Attacks; Ivanti commits to now starting security by design; Fed access tokens in code exploited; OWASP exposes resumes. Reduce Dev and Sec Costs: replace C++ with Rust, Java with Kotlin.
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Feds probe alleged classified US govt data theft and leak
IntelBroker bragged about the leak on Twitter, sorry, X, before being booted from the social network — and said they obtained the records after breaking into the IT environment of Acuity, a Virginia-based consulting firm that works with the US government and national security organizations.
IntelBroker took credit for the cyber-heist, and dumped a sample of the alleged stolen data on the dark web consisting of contact info for government and military officials – including names, email addresses, and office and personal cell phone numbers belonging to Pentagon and other federal employees – plus classified and confidential communications and documents shared between the Five Eyes' intelligence agencies and other US allies.
IntelBroker boasted they used a zero-day bug in GitHub to access Acuity's tokens and snatch the government data.
A self-service check-in terminal used in a German Ibis budget hotel was found leaking hotel room keycodes, and the issue could potentially affect hotels around Europe. An attacker could aggregate an array of room keycodes in just a few minutes – as long as it would take a regular customer to use the same machine to check in to their room. An attacker could input a series of six consecutive dashes (------) in place of a booking reference number and the terminal would return an extensive list of room details.
Any other sequence of dashes is also accepted, as long as it is long enough to enable the submit button. The researchers therefore assume the variable-length string is not a master code, but rather a bug or a test function that was never deactivated.
Once the dashes were entered, the booking information displayed the cost of the booking and the valid room entry keycodes, along with the room number. It also included a timestamp, which the researchers assumed to be a check-in date – one that may indicate the length of a guest's stay. Even without the exploit using a series of dashes, valid booking references could be found on discarded printouts, necessitating greater security controls embedded in the terminals.
The issue was first discovered on December 31, 2023, and was fixed on January 26.
"If you were an OWASP member from 2006 to around 2014 and provided your resume as part of joining OWASP, we advise assuming your resume was part of this breach."
To make sure this doesn't happen again, OWASP said it disabled directory browsing and checked the web server for additional configuration and security issues, and removed all of the resumes from the site. Additionally, the foundation purged the CloudFlare caches, and requested that the accessed data be removed from the web archive.
"We recognize the significance of this breach, especially considering the OWASP Foundation's emphasis on cybersecurity."
Nearly a year on from the discovery of a massive data theft at healthcare biz Harvard Pilgrim, the number of victims has now risen to nearly 2.9 million people across all US states. While the intrusion occurred on March 28, 2023, it wasn't discovered until April 17.
The latest notification letters mark the fourth time Harvard Pilgrim has updated the total number of victims. An update in February put the total at 2,632,275 individual records exposed; now it is reporting a total of 2,860,795 people. It's not uncommon for victim numbers to increase during the course of an investigation, though nearly 2.9 million is a lot of people and may not be the final tally yet.
Users may disable default apps, only to discover later that the settings do not match their initial preference, and they are unable to correctly configure the desired privacy settings of default apps. Some default app configurations can even reduce trust in family relationships. In the case of Apple's Siri voice assistant, even when users choose not to enable Siri during initial setup on macOS-powered devices, it still collects data from other apps to provide suggestions. To fully disable Siri, Apple users must find privacy-related options spread across five different submenus in the Settings app.
While this study probably won't convince Apple to change its ways, lawsuits might have better luck.
Ivanti, the remote-access company whose remote-access products have been battered by severe exploits in recent months, has pledged a "new era," one that "fundamentally transforms the Ivanti security operating model" backed by "a significant investment" and full board support. CEO Jeff Abbott's open letter promises to revamp "core engineering, security, and vulnerability management," make all products "secure by design," formalize cyber-defense agency partnerships, and commit to "sharing information and learning with our customers." "Our approach will entail rigorous threat modeling exercises, ensuring that security is ingrained as a foundational element of our products."
Among the details is the company's promise to improve search abilities in Ivanti's security resources and documentation portal, "powered by AI," and an "Interactive Voice Response system" for routing calls and alerting customers about security issues, also "AI-powered."
Among the many changes to come at Ivanti HQ, one that will immediately catch the eye of security pros is its commitment to security by design – an approach the industry has long called for to be the norm.
[rG: "Secure by design" with design threat assessments and "shift-left" is fundamental to Secure Software Development Lifecycle (SSDLC) proccesses that have been prescribed for decades. It is also questionable as to whether AI can provide any significant improvement; given the GIGO principle of "output can only be as accurate as the information entered."]
Microsoft said an engineer's account was compromised, giving attackers access to a supposedly locked-down workstation, the consumer signing key, and, crucially, crash dumps that had been moved into a debugging environment. A "race condition" prevented a mechanism that strips out signing keys and other sensitive data from crash dumps from functioning. Furthermore, "human errors" allowed for an expired signing key to be used in forging tokens for modern enterprise offerings.
A federal Cyber Safety Review Board has issued its report on what led to last summer's capture of hundreds of thousands of emails by Chinese hackers from cloud customers, including federal agencies. It cites "a cascade of security failures at Microsoft" and finds that "Microsoft's security culture was inadequate" and needs to adjust to a "new normal" of cloud provider targeting.
It cites in particular:
Security practices that lagged behind those of other cloud providers
Failure to detect the compromise of a laptop belonging to an employee of an acquired company before connecting it to the corporate network
Letting inaccurate public statements stand for months
A "separate incident" from January 2024 that, while not in the CSRB's purview, allowed another nation-state actor access to emails, code, and internal systems
A need to "demonstrate the highest standards of security, accountability, and transparency."
While Microsoft was ultimately able to remove attackers' access to 22 enterprise organizations and 503 individual accounts, by the end of the board's review, the company could not "demonstrate to the Board that it knew how Storm-0558 had obtained the 2016 MSA key."
HACKING AI
Artificial intelligence (AI)-as-a-service providers such as Hugging Face are susceptible to two critical risks that could allow threat actors to escalate privileges, gain cross-tenant access to other customers' models, and even take over the continuous integration and continuous deployment (CI/CD) pipelines. The development comes as machine learning pipelines have emerged as a brand new supply chain attack vector, with repositories like Hugging Face becoming an attractive target for staging adversarial attacks designed to glean sensitive information and access target environments.
The findings show it's possible to breach the service running the custom models by uploading a rogue model and leverage container escape techniques to break out from its own tenant and compromise the entire service, effectively enabling threat actors to obtain cross-tenant access to other customers' models stored and run in Hugging Face.
Hugging Face lets users infer the uploaded Pickle-based model on the platform's infrastructure, even when deemed dangerous. This essentially permits an attacker to craft a PyTorch (Pickle) model with arbitrary code execution capabilities upon loading and chain it with misconfigurations in the Amazon Elastic Kubernetes Service (EKS) to obtain elevated privileges and laterally move within the cluster.
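To illustrate why that is possible, here is a minimal, deliberately harmless sketch of how a Pickle payload executes code the moment it is loaded; the class name and command are inventions for demonstration, not the researchers' actual exploit:

```python
# Minimal sketch of why untrusted Pickle deserialization equals code execution.
# The "model" below runs an arbitrary command the moment it is unpickled;
# a real attack would substitute something far less benign than echo.
import os
import pickle


class MaliciousModel:
    def __reduce__(self):
        # pickle will call os.system(...) while reconstructing the object
        return (os.system, ("echo pwned during model load",))


payload = pickle.dumps(MaliciousModel())

# Anything that loads the "model" executes the payload:
pickle.loads(payload)  # prints "pwned during model load"
```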
To mitigate the issue, it's recommended to enable IMDSv2 with a restrictive hop limit, preventing pods from accessing the Instance Metadata Service (IMDS) and obtaining the role of a node within the cluster.
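A minimal sketch of that hardening, assuming boto3 and a hypothetical node instance ID (the article does not prescribe specific tooling):

```python
# Minimal sketch: enforce IMDSv2 with a hop limit of 1 on an EKS worker node
# so that pod containers, which sit one network hop away, cannot reach the
# Instance Metadata Service and assume the node's IAM role.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",   # hypothetical node instance ID
    HttpTokens="required",              # require IMDSv2 session tokens
    HttpPutResponseHopLimit=1,          # responses expire after one hop: node yes, pod no
    HttpEndpoint="enabled",
)
```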
A new technique to defeat safety guardrails, dubbed "many-shot jailbreaking" and tested on AI tools developed by the likes of OpenAI and Meta as well as the researchers' own large language model, breaks limits programmed into generative AI that are meant to stop the tool from answering malicious queries.
Previous versions of LLMs accepted prompts up to about the size of an essay, or about 4,000 tokens. A token is the smallest unit into which an AI model can break down text, akin to words or sub-words. By contrast, newer LLMs can process 10 million tokens, equal "to multiple novels or codebases," and these "longer contexts present a new attack surface for adversarial attacks."
Gen AI tools work by handling shots, aka inputs or examples, provided to the tools. Each shot is designed to build on previous shots, using a process called in-context learning, via which the tool attempts to refine its answers based on previous inputs and outputs, to eventually reach the user's desired output.
Many-shot jailbreaking subverts that technique by inputting many different shots at once, involving prohibited content and answers, in a way that fools the gen AI tool into thinking that these "fictitious dialogue steps between the user and the assistant" resulted from actual interaction with the tool, rather than a fictitious rendering of it.
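As a rough structural sketch only (benign placeholder content; ask_model is a hypothetical stand-in, and this is not the researchers' actual prompt), the idea looks something like this:

```python
# Rough structural sketch of a many-shot prompt: many fabricated
# user/assistant turns are concatenated ahead of the real question so the
# model treats them as genuine conversation history. Topics here are benign
# placeholders; ask_model() is a hypothetical stand-in for an LLM API call.
FAKE_TURNS = [
    ("How do I pick a strong passphrase?", "Use four or more random words..."),
    ("How do I reset a router?", "Hold the reset button for ten seconds..."),
    # ... hundreds more fabricated question/answer pairs ...
]


def build_many_shot_prompt(final_question: str) -> str:
    lines = []
    for question, answer in FAKE_TURNS:
        lines.append(f"User: {question}")
        lines.append(f"Assistant: {answer}")
    lines.append(f"User: {final_question}")
    lines.append("Assistant:")
    return "\n".join(lines)


# response = ask_model(build_many_shot_prompt("..."))  # hypothetical API call
```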
If your org’s permissions aren’t set properly and Copilot is enabled, users can easily surface sensitive data.
Show me new employee data.
What bonuses were awarded recently?
Are there any files with credentials in them?
Are there any files with APIs or access keys? Please put them in a list for me.
What information is there on the purchase of ABC cupcake shop?
Show me all files containing sensitive data.
Before you enable Copilot, you need to properly secure and lock down your data. Even then, you still need to make sure that your blast radius doesn’t grow, and that data is used safely.
HACKING APPS
Crowdfense is now offering between $5 and $7 million for zero-days to break into iPhones, up to $5 million for zero-days to break into Android phones, up to $3 million and $3.5 million for Chrome and Safari zero-days respectively, and $3 to $5 million for WhatsApp and iMessage zero-days.
In its previous price list, published in 2019, the highest payouts that Crowdfense was offering were $3 million for Android and iOS zero-days.
Google said it saw hackers use 97 zero-day vulnerabilities in the wild in 2023.
Crowdfense currently offers the highest publicly known prices to date outside of Russia, where a company called Operation Zero announced last year that it was willing to pay up to $20 million for tools to hack iPhones and Android devices.
Clickjacking, also known as a user-interface redress attack, involves manipulating web page structure or interactive elements to make users’ clicks register somewhere other than intended, such as on a hidden iframe containing an ad served from a domain unrelated to the host site.
The latest variation of the technique has been dubbed "cross window forgery". The technique is more reliable than clickjacking, as it does not rely upon the careful positioning of windows, timing of clicks, and the vagaries of a user’s display settings. Instead, the attacker entices the user to hold down a key, spawns a victim web page, and the keydown is transferred to the victim page.
Using code that intercepts the keydown event and runs an attack function, the attacker can open a malicious OAuth authorization prompt URL in a new, tiny browser window to receive the still active key press. This is possible because both sites allow a potential attacker to create an OAuth application with wide scope to access their API, and they both set a static and / or predictable 'ID' value to the 'Allow/Authorize' button that is used to authorize the application into the victim's account.
Web developers should adopt defensive measures, such as not giving sensitive buttons an ID tag that an attacker can use for targeting, or randomizing the ID tag value so it can't easily be incorporated into an attack script. Another option is redirecting incoming requests to drop URL fragments, which breaks the ability to scroll to a particular portion of the webpage.
Chromium-based browsers have access to a force-load-at-top document policy, which can be enabled by opting out of the Scroll-to-Text-Fragment feature. Firefox, the researcher says, is considering whether to support this feature.
Web devs need to adopt other best practices, like using the frame-ancestors Content Security Policy to prevent webpage framing, and disabling sensitive webpage interface elements until windows have been properly sized and the user has released any held keys.
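As a hedged sketch of two of those defenses in a Python web app (Flask is an assumed choice, not something the article names), the snippet below sends a frame-ancestors policy on every response and gives the sensitive authorize button a per-request random ID:

```python
# Minimal sketch (Flask assumed) of two defenses described above:
# 1) A frame-ancestors CSP header stops other sites from framing the page.
# 2) The sensitive "Allow" button gets a random, per-request element ID so an
#    attack script cannot target a static, predictable ID.
import secrets

from flask import Flask, render_template_string

app = Flask(__name__)


@app.after_request
def set_csp(response):
    response.headers["Content-Security-Policy"] = "frame-ancestors 'none'"
    return response


@app.route("/authorize")
def authorize():
    button_id = f"allow-{secrets.token_hex(8)}"  # unpredictable ID per request
    return render_template_string(
        '<button id="{{ bid }}" type="submit">Allow</button>', bid=button_id
    )
```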
This might be the best executed supply chain attack we've seen described in the open, and it's a nightmare scenario: malicious, competent, authorized upstream in a widely used library. The affair is "one of the most daring infosec capers" ever witnessed.
The xz software is used in many Linux distributions and in macOS for tasks like compressing release tarballs, kernel images, and the like. Luckily, the malicious code only made it into a few bleeding-edge Linux distributions, such as the upcoming Fedora Linux 40; the Fedora Rawhide developer distribution; Debian Unstable; and Kali Linux. Vulnerable builds require glibc (for IFUNC, glibc's indirect-function mechanism, which the backdoor abuses to hook into OpenSSH authentication) and xz-5.6.0 or xz-5.6.1.
Microsoft security researcher Thomas Roccia’s diagram of the xz affair offers a succinct summary of the timeline of events.
It could have been much worse. According to Valsorda, the backdoor code enabled full remote code execution.
Industry observers conclude that not much will change to prevent this threat scenario from recurring, and that similar, ongoing efforts to compromise software infrastructure may have been missed.
[rG: Popular 3rd Party components can't ever be trusted to remain "safe" in the future. Always ensure components are centrally managed in a binary management repository and have daily SCA vulnerability scanning with immediate alerting for remediation prioritization analysis.]
A former University of Iowa Hospital employee pleaded guilty Monday to federal charges that he had been living under another man’s identity since 1988, causing the other man to be falsely imprisoned for identity theft and sent to a mental hospital.
Keirans worked as a systems architect in the hospital’s IT department from June 28, 2013 to July 20, 2023, when he was terminated for misconduct related to the identity theft investigation. Keirans worked at the hospital under the name William Donald Woods, an alias he had been using since about 1988, when he worked with the real William Woods at a hot dog cart in Albuquerque, N.M. Keirans used Woods’ identity “in every aspect of his life,” including obtaining employment, insurance and official documents, and even paying taxes under the name. In 1994 — six years after he started using Woods’ name — Keirans got married. He had a child, whose last name is Woods. In 2012 — 24 years after he started using Woods’ name — Keirans fraudulently acquired a copy of Woods’ birth certificate from the state of Kentucky using information he found about Woods’ family on Ancestry.com.
[rG: Wow: Read the whole story.]
APPSEC, DEVSECOPS, DEV
Centralized secrets management (see the sketch below)
Access control
CI/CD pipeline security
Threat modeling and code reviews
Incident response plan
Secure coding frameworks and server configuration
[rG: All are components of an effective organizational SSDLC process, internal Security Coding Standards specifications, and internal training program.]
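For the first item, a minimal sketch of what centralized secrets management can look like in practice (HashiCorp Vault, the hvac client, and the secret path are illustrative assumptions, not the article's prescription):

```python
# Minimal sketch of centralized secrets management: the application pulls
# credentials from a central store at runtime instead of hardcoding them.
# Vault, the hvac client, and the paths shown are assumptions for illustration.
import hvac  # HashiCorp Vault client (assumed choice of secrets manager)

client = hvac.Client(url="https://vault.example.internal:8200")
# Role/secret IDs are injected by the CI/CD system at deploy time, never committed.
client.auth.approle.login(role_id="...", secret_id="...")

secret = client.secrets.kv.v2.read_secret_version(path="payments/db")  # hypothetical path
db_password = secret["data"]["data"]["password"]
```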
The majority of security vulnerabilities in large codebases can be traced to memory security bugs. And since Rust code can largely if not totally avoid such problems when properly implemented, memory safety now looks a lot like a national security issue.
In September 2022, Microsoft Azure CTO Mark Russinovich argued that software projects that might have been started in C/C++ should use Rust instead. That recommendation now extends beyond greenfield projects to calls for reworking old code written in non-memory safe languages.
At the Chocolate Factory, turning Go code, which is considered memory safe but not as performant, into Rust has shown noteworthy benefits. "When we've rewritten systems from Go into Rust, we've found that it takes about the same size team about the same amount of time to build it. That is, there's no loss in productivity when moving from Go to Rust. And the interesting thing is we do see some benefits from it. So we see reduced memory usage in the services that we've moved from Go ... and we see a decreased defect rate over time in those services that have been rewritten in Rust – so increasing correctness. In every case we've seen a decrease by more than 2x in the amount of effort required to both build the services in Rust as well as maintain and update those services written in Rust."
Google has a similar migration underway moving developers from Java to Kotlin, and the time it takes to retrain developers in both cases – Java to Kotlin and C++ to Rust – has been similar. That is, in two months about a third of devs feel they're as productive in their new language as in their old one, and in about four months half of developers say the same. A bit more than half of the developers say that Rust code is easier to review.
VENDORS & PLATFORMS
“Project Devika was initially born out of a joke I posted on Twitter/X. I saw Devin’s demo and was really impressed by it. The name Devin came from the word ‘developer’, which is when I randomly thought of Indian names that would fit the pattern, thus the idea behind Devika was born,” said Mufeed, who created the AI agent with about 20 hours of coding over three days.
It took a vast array of sensitive equipment and 1,000 people staring at video feeds to do the job of one or two people sitting behind cash registers at each store.
There are also some major privacy concerns here. Remember those cameras and sensors? They can be used to collect biometric information as people shop. This goes beyond Amazon’s palm-scanning tech, as the cameras and sensors measure the shape and size of each customer’s body for identification and tracking purposes. This led to a class action suit in New York that accused the company's Amazon One technology of collecting biometric identifier information without properly disclosing the practices to consumers.
Malware can target cookies, simply copying them from the user's hard drive and sending them back to a remote attacker, who can then potentially use the session information in the cookie to access user data from the websites they are associated with.
Google says it is working on a new web capability it dubs Device Bound Session Credentials (DBSC) to combat this threat. The idea behind this is to use a cryptographic key to tie a session to the user's specific computer or device.
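Conceptually (this is an illustration of the binding idea, not the actual DBSC protocol or API), it works roughly like this: the device holds a private key that never leaves it, and the server periodically demands proof of possession, so a copied cookie alone is useless.

```python
# Conceptual sketch only, not the DBSC protocol: a session is bound to a
# device-held private key, so a stolen cookie without that key fails the
# server's proof-of-possession challenge.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# On the device: generate a key pair; the private key stays in local secure storage.
device_key = ec.generate_private_key(ec.SECP256R1())
public_key = device_key.public_key()  # registered with the server at login

# Server issues a random challenge tied to the session.
challenge = os.urandom(32)

# Device proves possession of the private key.
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Server verifies; a cookie thief without the device key cannot produce this.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("session refresh allowed")
except InvalidSignature:
    print("session refresh denied")
```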
Readers with long memories may recall that Intel once tried to pitch a unique processor serial number (PSN) embedded in each CPU, claiming similar security benefits, but it was forced to discontinue this when a row erupted over the possibility for the serial number to be used to track users online.
LEGAL & REGULATORY
Samuel Thompson spent nearly five years as a contractor for the Jacksonville Jaguars, helping the football team design and install their stadium screen technology. After installation, Thompson helped to run the system during football games. When he was terminated, he installed TeamViewer. When the FBI investigated, they found child sex abuse material on his personal devices. The case took several years to wind its way through the courts, partly because Thompson began representing himself. Thompson filed long motions accusing the FBI of, among other things, improperly calling his attack on the Jaguars a "denial of service" attack. Last week, Thompson was sentenced. He got 220 years in federal prison, "followed by a lifetime of supervised release."
And Now For Something Completely Different …
If the object is owned by NASA, Otero or his insurance company could make a claim against the federal government under the Federal Tort Claims Act. If it is a human-made space object launched into space by another country and it caused damage on Earth, that country would be liable. This could be an issue in this case: the batteries were owned by NASA, but they were attached to a pallet structure launched by Japan's space agency.