Robert Grupe's AppSecNewsBits 2023-12-09

rG NOTE: New newsletter management platform! If you have any difficulties, please let me know at [email protected].

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
23andMe confirms hackers stole ancestry data on 6.9 million users
The data breach is now known to affect roughly half of 23andMe’s total reported 14 million customers. Because of the way that the DNA Relatives feature matches users with their relatives, by hacking into one individual account, the hackers were able to see the personal data of both the account holder as well as their relatives.
Genetic testing company 23andMe initially announced that hackers accessed the personal data of 0.1% of customers, or about 14,000 individuals. The company also said that by accessing those accounts, hackers were able to access “a significant number of files containing profile information about other users’ ancestry,” but it would not say how many “other users” were impacted by the breach it first disclosed in early October. As it turns out, there were a lot of “other users”: 6.9 million affected individuals in total. Hackers accessed the personal information of about 5.5 million people who opted in to 23andMe’s DNA Relatives feature, which allows customers to automatically share some of their data with others. The stolen data included the person’s name, birth year, relationship labels, the percentage of DNA shared with relatives, ancestry reports, and self-reported location.
23andMe also confirmed that another group of about 1.4 million people who opted in to DNA Relatives “had their Family Tree profile information accessed,” which includes display names, relationship labels, birth year, and self-reported location.
As proof of the breach, the hacker published the alleged data of one million users of Jewish Ashkenazi descent and 100,000 Chinese users, asking would-be buyers for $1 to $10 per individual account. Two weeks later, the same hacker advertised the alleged records of another four million people on the same hacking forum.
23andMe updates user agreement to prevent data breach lawsuits
"To the fullest extent allowed by applicable law, you and we agree that each party may bring disputes against the other party only in an individual capacity, and not as a class action or collective action or class arbitration."

 

Just about every Windows and Linux device vulnerable to new LogoFAIL firmware attack
LogoFAIL is a constellation of two dozen newly discovered vulnerabilities that have lurked for years, if not decades.
In many cases, LogoFAIL can be remotely executed in post-exploit situations using techniques that can’t be spotted by traditional endpoint security products. And because the exploits run during the earliest stages of the boot process, they are able to bypass a host of defenses, including the industry-wide UEFI Secure Boot, Intel’s Boot Guard, and similar verified-boot protections from other companies that are designed to prevent so-called bootkit infections.
Once arbitrary code execution is achieved during the DXE phase, it’s game over for platform security. From this stage, hackers have full control over the memory and disk of the target device, including the operating system that will be started.
The affected parties are releasing advisories that disclose which of their products are vulnerable and where to obtain security patches. 
[rG: Security risk should never be assessed from single vulnerability severities in isolation, because hackers know that successful attacks chain multiple neglected vulnerabilities. Fixing and patching urgency needs to be prioritized by potential product impact, not generic severity ratings. SSDLC: From a design perspective, this highlights the importance of system-solution threat modeling, with continuing periodic analysis updates based on newly discovered attack-chain methods.]

 

Your mobile password manager might be exposing your credentials
Researchers found that when an Android app loads a login page in WebView, password managers can get “disoriented” about where to target the user’s login information and instead expose credentials to the underlying app’s native fields. This happens because WebView, the preinstalled engine from Google, lets developers display web content in-app without launching a web browser, and the autofill request it generates does not clearly distinguish the web page from the host app.
Researchers tested the AutoSpill vulnerability using some of the most popular password managers, including 1Password, LastPass, Keeper and Enpass, on new and up-to-date Android devices. They found that most apps were vulnerable to credential leakage even with JavaScript injection disabled; with JavaScript injection enabled, all of the password managers were susceptible to AutoSpill. The researchers are now exploring whether an attacker could extract credentials in the opposite direction, from the app to WebView, and whether the vulnerability can be replicated on iOS.

 

Google Search results are showing Reddit URLs altered to include a slur
While doing a Google search, Reddit results that came up had a URL that looked like this: https://2goback-[slur].reddit.com/r/[the rest of the URL]. Reddit URLs can be manipulated to bring users to specific content. For example, if you were to go to Reddit's homepage and then proceed to the r/Science subreddit, you'd see this URL in your browser: https://www.reddit.com/r/science/. But you can also access the subreddit by typing "science.reddit.com," which redirects to https://www.reddit.com/r/science/. It seems that this functionality is being manipulated.
Google will index working URLs that it finds, both on and off the platform, so if someone shares a link to one of those arbitrary URLs on another platform, Google will crawl and index it, even if “we don’t 'officially' support it.”
Reddit is trying to fix the problem with its "appropriate partners." 
The issue is especially concerning since appending "Reddit" to queries is one of the most helpful tricks to ensure that Google Search results point to helpful, human voices rather than shopping links and websites with fantastic SEO but questionable content. While not everyone searching for Reddit content on Google will see these altered URLs, they are still coming up as legitimate Search results.
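The legacy subdomain-to-path mapping the article describes can be sketched as a simple rewrite. This is a minimal illustration only; `canonicalize` is a hypothetical helper, not Reddit's actual implementation, and the point is that nothing validates the subdomain string:

```python
from urllib.parse import urlparse

def canonicalize(url: str) -> str:
    """Rewrite a legacy <name>.reddit.com subdomain URL to its
    canonical https://www.reddit.com/r/<name>/ form."""
    host = urlparse(url).hostname or ""
    suffix = ".reddit.com"
    if host.endswith(suffix) and host != "www.reddit.com":
        # Anything before ".reddit.com" is treated as a subreddit name,
        # which is why arbitrary (including offensive) strings resolve.
        name = host[: -len(suffix)]
        return f"https://www.reddit.com/r/{name}/"
    return url

print(canonicalize("https://science.reddit.com"))
# → https://www.reddit.com/r/science/
```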
[rG: SSDLC Design Threat Assessments need to consider mis-use cases, not just in terms of security controls, but also reputation damage protection compliance considerations.] 

 

Polish train maker denies claims its software bricked rolling stock maintained by competitor
Maintenance company SPS encountered difficulties servicing the rolling stock following a software lockout: the trains locked up for no apparent reason after being serviced in third-party workshops.
The rolling stock and engineering business Newag insists its software is correct and that it did not design the trains' programming logic to fail under specific conditions, as has been claimed. "This is a slander from our competition, which is conducting an illegal black PR campaign against us," it protested. Newag argued that these third-party repair shops were deficient and that the manufacturer should be servicing its own trains.
Security researchers reverse engineered the train's electronics and, in August 2022, found the train-stopping faults appeared to be not a flaw but a feature. “We found that the PLC [programmable logic controller] code actually contained logic that would lock up the train with bogus error codes after some date, or if the train wasn't running for a given time. One version of the controller actually contained GPS coordinates to contain the behavior to third-party workshops.” They also claimed to have found an undocumented key combination in the cabin controls that would unlock the trains.
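Based on the researchers' description, the alleged lockout amounts to a few date, idle-time, and geofence checks. A minimal Python sketch of that described behavior (all names, dates, thresholds, and coordinates here are hypothetical illustrations, not the actual PLC code):

```python
from datetime import date

# Hypothetical geofence around a third-party workshop: (lat, lon, radius in degrees)
WORKSHOP_FENCES = [(52.40, 16.92, 0.01)]
LOCKOUT_DATE = date(2022, 11, 21)  # illustrative cutoff, not the real one

def should_lock(today: date, idle_days: int, lat: float, lon: float) -> bool:
    """Reproduce the *described* behavior: lock the train with bogus errors
    after a hard-coded date, after a long idle period, or inside a
    geofenced third-party workshop."""
    if today >= LOCKOUT_DATE:
        return True
    if idle_days > 10:  # "if the train wasn't running for a given time"
        return True
    for flat, flon, r in WORKSHOP_FENCES:
        if abs(lat - flat) <= r and abs(lon - flon) <= r:
            return True
    return False
```

Logic like this is invisible to normal functional testing, which is why it took a firmware reverse-engineering effort to surface it.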
Janusz Cieszyński, Poland’s former minister of digital affairs, has since explained on social media that the president of Newag contacted him to say that the firm had been victimized by cyber criminals. Cieszyński added that the analysis he saw suggested otherwise.

 

HACKING
ChatGPT says that asking it to repeat words forever is a violation of its terms
Researchers found the chatbot could reveal personal information when asked to repeat words.
A team of researchers published a paper showing that it was able to get ChatGPT to inadvertently reveal bits of data including people’s phone numbers, email addresses and dates of birth that it had been trained on by asking it to repeat words “forever”. Doing this now is a violation of ChatGPT’s terms of service. 

 

Privilege elevation exploits used in over 50% of insider attacks
Insider threats are on the rise:
55% of insider threats logged by CrowdStrike rely on privilege escalation exploits.
45% of incidents involve insiders who unwittingly introduce risk by downloading or misusing offensive tools.
Rogue insiders typically turn against their employer for financial incentives, out of spite, or due to differences with their supervisors.
CrowdStrike also categorizes incidents as insider threats when they are not malicious attacks against a company, such as using exploits to install software or perform security testing.
In these cases, though the exploits are not used to attack the company, they are commonly utilized in a risky manner that can introduce threats or malware to the network for threat actors to abuse.
CrowdStrike found that attacks launched from within targeted organizations cost an average of $648,000 for malicious incidents and $485,000 for non-malicious ones; these figures may be even higher in 2023.
Besides the significant financial cost of insider threats, CrowdStrike highlights the indirect repercussions of brand and reputation damage.

 

Apple report finds steep increase in data breaches, ransomware
Some 2.6 billion personal records have been exposed in data breaches over the past two years and that number continues to grow.
Escalating intrusions, combined with increases in ransomware, mean the tech industry needs to move toward greater use of encryption.

  • Data breaches in the US through the first nine months of the year are already 20% higher than for all of 2022.

  • Nearly 70% more ransomware attacks were reported through September 2023 than in the first three quarters of 2022.

  • Americans and those in the UK topped the list of those most targeted in ransomware attacks in 2023, followed by Canada and Australia. Those four countries accounted for nearly 70% of reported ransomware attacks.

  • One in four people in the US had their health data exposed in a data breach during the first nine months of 2023.

 

New Bluetooth Flaw Let Hackers Take Over Android, Linux, macOS, and iOS Devices
The attack deceives the target device into thinking that it's connected to a Bluetooth keyboard by taking advantage of an "unauthenticated pairing mechanism" that's defined in the Bluetooth specification.
Successful exploitation of the flaw could permit an adversary in close physical proximity to connect to a vulnerable device and transmit keystrokes to install apps and run arbitrary commands.
[rG: Protections include always locking the device when not actively looking at the display, reviewing “last successful login timestamp” notices, and being suspicious of unexpected screen activity.]

 

 

APPSEC, DEVSECOPS, DEV
API Flaws Put AI Models at Risk of Data Poisoning
Security researchers could access and modify an artificial intelligence code generation model developed by Facebook after scanning for API access tokens on AI developer platform Hugging Face and code repository GitHub. Lasso Security said that it had found hundreds of publicly exposed API authentication tokens for accounts held by tech giants including Meta, Microsoft, Google and VMware on the two platforms. Researchers said they had gained "full control over the repositories of several prominent companies." Tampering with training data to introduce vulnerabilities or biases is among the top 10 threats to large language models recognized by OWASP.
Hugging Face is a $4.5 billion company funded by Google and Amazon that offers a platform for developers to build, deploy and train machine learning models. It provides infrastructure to demo, run and deploy AI in live applications, and developers can also store and manage their code there. The exposed tokens that allowed write permissions could not only put the compromised companies' projects at risk but also affect millions of their models' users without their knowledge.
[rG: Organizations need to be vigilant about scanning code repositories for exposed credentials, and quickly remediate by changing secrets.] 
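A minimal sketch of the kind of credential scanning the note recommends, using regexes for two well-known token formats (Hugging Face user tokens start with `hf_`; AWS long-term access key IDs start with `AKIA`). The patterns are deliberately simplified; production scanners such as gitleaks or trufflehog cover far more formats:

```python
import re

# Simplified patterns for illustration; real scanners use many more
TOKEN_PATTERNS = {
    "huggingface": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (token_type, match) pairs for any exposed secrets found."""
    hits = []
    for name, pat in TOKEN_PATTERNS.items():
        for m in pat.finditer(text):
            hits.append((name, m.group()))
    return hits

# AWS's documented example key ID, the kind of string that gets committed by mistake
sample = 'api_key = "AKIAIOSFODNN7EXAMPLE"'
print(scan_text(sample))
```

Running a scanner like this in CI, before code reaches a public repository, is cheaper than rotating secrets after exposure.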

 

Cloud account hacks likely with AWS Security Token Service exploitation
After using AWS STS to spoof cloud user identities, attackers could leverage API calls to identify the roles and privileges associated with long-term IAM tokens (AKIAs) exfiltrated through malware and phishing attacks. Depending on a token's permission level, adversaries may also be able to use it to create additional IAM users with long-term AKIA tokens, ensuring persistence in the event that the initial AKIA token and all of the short-term ASIA tokens it generated are discovered and revoked. Organizations have been urged to prevent such AWS token exploitation by monitoring CloudTrail event data for MFA abuse and role-chaining incidents, and by rotating long-term IAM user access keys.
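The AKIA/ASIA distinction above is just the documented key-ID prefix: long-term IAM user access keys begin with `AKIA`, while short-term STS credentials begin with `ASIA`. A small triage helper for log review (the function and its messages are a hypothetical sketch):

```python
def classify_access_key(key_id: str) -> str:
    """Classify an AWS access key ID by its documented prefix."""
    if key_id.startswith("AKIA"):
        return "long-term IAM user key (rotate/revoke on exposure)"
    if key_id.startswith("ASIA"):
        return "temporary STS key (expires, but check how it was minted)"
    return "unknown key type"

print(classify_access_key("AKIAIOSFODNN7EXAMPLE"))
```

The prefix alone tells an incident responder whether revocation is a one-time rotation (AKIA) or requires hunting down the long-term credential that minted the session (ASIA).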

 

VENDORS & PLATFORMS
15,000 Go Module Repositories on GitHub Vulnerable to Repojacking Attack
Repojacking, a portmanteau of "repository" and "hijacking," is an attack technique that allows a bad actor to take advantage of account username changes and deletions to create a repository with the same name under the pre-existing username and stage open-source software supply chain attacks. More than 9,000 repositories are vulnerable to repojacking due to GitHub username changes, and more than 6,000 due to account deletions.
Modules written in the Go programming language are particularly susceptible to repojacking because, unlike package managers such as npm or PyPI, they are decentralized: modules are published directly to version control platforms like GitHub or Bitbucket. Anyone can then instruct the Go module mirror and pkg.go.dev to cache a module's details. An attacker can register the newly unused username, duplicate the module repository, and publish a new module to proxy.golang.org and pkg.go.dev.
It's important for Go developers to be aware of the modules they use, and the state of the repository that the modules originated from.
[rG: For efficient and effective vulnerability management, large organizations need to ensure that there are identified centralized owners of all 3rd party components, who are responsible for monitoring vulnerabilities and implementing/coordinating protective responses.]

 

Microsoft inches closer to glass storage breakthrough that could finally make ransomware attacks impossible in the data center and hyperscalers
Silica is the first cloud storage system for archival data underpinned by quartz glass, an extremely resilient medium that allows data to be left in situ indefinitely. Data is written into a square glass platter with ultrafast femtosecond lasers as voxels: permanent modifications to the physical structure of the glass that allow multiple bits of data to be written in layers across its surface. These layers are then stacked vertically in their hundreds. To read data back, polarization microscopy is used to image the platter while the read drive scans sectors in a Z-pattern; the images are then sent to be processed and decoded. 

 

AI
Due to AI, “We are about to enter the era of mass spying,” says Bruce Schneier
Schneier says that current spying methods, like phone tapping or physical surveillance, are labor-intensive, but the advent of AI significantly reduces this constraint. Generative AI systems are increasingly adept at summarizing lengthy conversations and sifting through massive datasets to organize and extract relevant information. This capability, he argues, will not only make spying more accessible but also more comprehensive.
This spying is not limited to conversations on our phones or computers. Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and 'Hey, Google' are already always listening; the conversations just aren’t being saved yet. 

 

Europe agrees landmark AI regulation deal
Europe on Friday reached a provisional deal on landmark European Union rules governing the use of artificial intelligence including governments' use of AI in biometric surveillance and how to regulate AI systems such as ChatGPT.
High-impact foundation models with systemic risk will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the European Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.
Governments can only use real-time biometric surveillance in public spaces in cases of victims of certain crimes, prevention of genuine, present, or foreseeable threats, such as terrorist attacks, and searches for people suspected of the most serious crimes.
The agreement bans cognitive behavioural manipulation, the untargeted scraping of facial images from the internet or CCTV footage, social scoring, and biometric categorisation systems that infer political, religious or philosophical beliefs, sexual orientation, and race.
Consumers would have the right to launch complaints and receive meaningful explanations while fines for violations would range from 7.5 million euros ($8.1 million) or 1.5% of turnover to 35 million euros or 7% of global turnover. 

 

Healthcare: Generative AI expected to take off in 2024, but concerns linger over cost, reliability and security
Only 25% of healthcare organizations have deployed generative AI solutions, but that is expected to more than double next year as executives see opportunities to automate clinical documentation and improve patient communication.
According to a new KLAS report, 58% of healthcare executives say their organization is likely to implement or purchase a solution within the next year. 
Epic tapped Microsoft to integrate large language model tools and AI into its electronic health record software. And Oracle also is adding gen AI capabilities to its Cerner EHR. The EHR giant also integrated Nuance's Dragon Ambient eXperience (DAX) Express into its EHR workflows. Nuance is Microsoft's speech recognition subsidiary.
Telehealth company Teladoc also tapped Microsoft to integrate AI and ambient clinical documentation tech into its virtual care platform.

 

Generative AI Security: Preventing Microsoft Copilot Data Exposure
[rG: Refer to article for how Copilot works, along with threat risks.]
Varonis for Microsoft 365 can:

  • Automatically discover and classify all sensitive AI-generated content.

  • Automatically ensure that MPIP labels are correctly applied.

  • Automatically enforce least privilege permissions.

  • Continuously monitor sensitive data in M365 and alert and respond to abnormal behavior. 

 

Gmail’s AI-powered spam detection is its biggest security upgrade in years
The upgrade comes in the form of a new text classification system called RETVec (Resilient & Efficient Text Vectorizer). Google says it has been testing RETVec internally "for the past year," and it has already rolled out to your Gmail account.
Google says this can help it understand "adversarial text manipulations"—emails full of special characters, emojis, typos, and other junk characters that were previously legible to humans but not easily understood by machines. Previously, spam emails full of special characters made it through Gmail's defenses easily.
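RETVec itself is a trained character encoder, but the class of "adversarial text manipulations" it targets can be illustrated with plain Unicode normalization, which folds many look-alike characters (fullwidth forms, mathematical alphanumerics, and other compatibility variants that spammers use) back to ASCII. This catches only a subset of the tricks; RETVec is far more robust:

```python
import unicodedata

def fold_confusables(text: str) -> str:
    """NFKC-normalize text so compatibility look-alikes collapse to
    their plain equivalents before spam classification."""
    return unicodedata.normalize("NFKC", text)

# "free" written in fullwidth Latin letters normalizes to plain ASCII
disguised = "\uFF46\uFF52\uFF45\uFF45 money"
print(fold_confusables(disguised))  # → free money
```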
[rG: Surprising that this functionality is only now being added by Google, given this spam technique has been known for a long time.]

 

LEGAL
OCR settles first-ever phishing cyberattack investigation
Lafourche Medical Group failed to conduct a risk analysis to identify potential threats or vulnerabilities, and will pay $480K to OCR. 

 

News Article Results in $80,000 HIPAA Settlement by New York State Hospital
The settlement was the result of OCR’s investigation of SJMC after the Associated Press published, in 2020, an article profiling SJMC’s response to the COVID-19 pandemic that included photos and information about SJMC patients. OCR concluded that SJMC disclosed three (3) patients’ protected health information (“PHI”) without first obtaining a HIPAA written authorization from these patients. OCR’s press release announcing the settlement noted that “These images were distributed nationally, exposing protected health information including patients’ COVID-19 diagnoses, current medical statuses and medical prognoses, vital signs, and treatment plans.”

 

Illinois Supreme Court unanimously held in Mosby et al. v. The Ingalls Memorial Hospital et al. that when biometrics of healthcare employees are collected in the course of providing medical services, that biometric collection is exempt from the Illinois Biometric Information Privacy Act (BIPA).

 

Federal Government Uses Push Notifications to Track Apple, Google User Contacts
The technique takes advantage of the common alerts many people receive when friends contact them via email or text.
Apps use push notifications to buzz users' phones or tablets with updates on new messages or alerts. When a user enables push notifications, Apple and Google create a small bit of data, known as a token, that links their device to the account information they've given the companies, such as name and email address. The tokens could reveal details about who a person is communicating with over a messaging or gaming app, what times they talk and, in some cases, the text of any message displayed in the notification.
Senator Ron Wyden said the federal government had started demanding records on those tokens from Apple and Google because those companies operate as a "digital post office" for relaying the notifications. 

 

And Now For Something Completely Different …
PlayStation is erasing 1,318 seasons of Discovery shows from customer libraries
All Discovery content purchased on the PlayStation Store will be erased before 2024. But there were users who had already purchased stuff from the PlayStation Store and, believe it or not, expect to be able to watch it when they want, since they paid money to buy (rather than rent) it. 
[rG: This has happened with digital content on Amazon and others as well, underscoring that “purchased” digital content is usually subject to licensing and hosting small print based on the preferential whims of the providers.]

 

 Apple Vision (YouTube)
[rG: iPhone+ strapped to your face. Fab augmented reality headset def on my greed list, but still rather bulky and expensive. Looking forward to poser nerd spotting in 2024.]

 

***

Robert Grupe, CISSP CSSLP PMP
http://rgrupe.com