Robert Grupe's AppSecNewsBits 2024-02-10

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response 

Viamedis and Almerys provide healthcare and insurance payment services in France, supplying technological and administrative solutions to facilitate transactions. Viamedis first disclosed the cybersecurity incident one week ago on LinkedIn (the company's website remains down), saying that it suffered a data breach impacting beneficiaries and healthcare professionals. The company said the exposure includes names, dates of birth, insurer details, social security numbers, marital status, civil status, and guarantees open to third-party payment. Although contact data was not affected by the breach, it is possible that the data involved in the breach could be combined with other information from previous data leaks. The French data protection authority announced the launch of an investigation into this incident to determine what security measures were in place at the two companies and whether GDPR obligations were met.

 

A 17-year-old intern working for an organization that uses Juniper products discovered that after logging in with a regular customer account, Juniper’s support website allowed him to list detailed information about virtually any Juniper device purchased by other customers. The back-end for Juniper’s support website appears to be powered by Salesforce, and Juniper likely did not have the proper user permissions established on its Salesforce assets. Searching for Amazon.com in the Juniper portal, for example, returned tens of thousands of records. Each record included the device’s model and serial number, the approximate location where it is installed, as well as the device’s status and associated support contract information. “Using serial numbers, I could see which products aren’t under support contracts. And then I could narrow down where each device was sent through their serial number tracking system, and potentially see all of what was sent to the same location. A lot of companies don’t update their switches very often, and knowing what they use allows someone to know what attack vectors are possible.”
[rG: Insecure database design could have been prevented by proper SSDLC security design review (access management) and security UAT testing. Vulnerability scanners would not have detected these issues.]
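The missing control here can be made concrete. Below is a minimal, hypothetical sketch of the record-level ownership check whose absence lets any authenticated customer enumerate other customers' device records; none of these names come from Juniper's or Salesforce's actual systems.

```python
# Hypothetical sketch of a record-level (object-level) authorization
# check. Without it, any logged-in customer can fetch any record by
# guessing identifiers -- the "broken object level authorization"
# flaw at the top of the OWASP API Security Top 10.

from dataclasses import dataclass


@dataclass
class DeviceRecord:
    serial: str
    owner_account_id: str
    model: str


class AuthorizationError(Exception):
    pass


def get_device(record: DeviceRecord, requesting_account_id: str) -> DeviceRecord:
    """Return the record only if the requesting account owns it."""
    if record.owner_account_id != requesting_account_id:
        # Deny cross-tenant reads instead of relying on the UI to hide them.
        raise AuthorizationError("record belongs to another account")
    return record
```

The point of the sketch is that authorization must be enforced per record on the server side; authenticating the user is not enough when every record is reachable by identifier.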

 

BitLocker is one of the most easily accessible encryption solutions available today, being a built-in feature of Windows 10 Pro and Windows 11 Pro that's designed to secure your data from prying eyes. However, YouTuber stacksmashing demonstrated a colossal security flaw that allowed him to bypass BitLocker in less than a minute with a cheap, sub-$10 Raspberry Pi Pico, thus gaining access to the encryption keys that can unlock protected data.
Stacksmashing found that the communication lanes (the LPC bus) between the CPU and the external TPM are completely unencrypted on boot-up, enabling an attacker to sniff critical data as it moves between the two units and steal the encryption keys.
[rG: SSDLC Design Threat Assessment AppSec Best Practices - encrypt and authenticate ALL sensitive information communication channels.]

 

At the heart of the bug, Munro found that anyone using Livall’s apps for group audio chat and location sharing must be part of the same friends group, which could be accessed using only that group’s six-digit numeric code. That 6-digit group code simply isn’t random enough. Anyone could brute force all group IDs in a matter of minutes. It is therefore trivial to silently join any group, giving an attacker access to any user’s location and the ability to listen in to any group audio communication. The only way a rogue group user could be detected was if the legitimate user went to check on the members of that group.
Munro sent details of the flaw to Livall on January 7 but received no acknowledgement from the company. Munro then alerted TechCrunch to the flaw, and TechCrunch contacted Livall for comment.
When reached by email, Livall founder Bryan Zheng committed to fixing the app within two weeks of our email but declined to take down the Livall apps in the interim. TechCrunch held this report until Livall confirmed it had fixed the flaw in app updates that were released this week.
Livall’s R&D director Richard Yi explained that the company improved the randomness of group codes by also adding letters, and including alerts for new members joining groups. Yi also said the app now allows the shared location to be turned off at the user level.
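The keyspace arithmetic behind "brute force all group IDs in a matter of minutes" is easy to sketch. The request rate below is an assumption for illustration, not a figure from Munro's research:

```python
# Why a 6-digit numeric group code is brute-forceable, and how much
# adding letters helps. The requests-per-second rate is an assumed
# value for illustration only.

def brute_force_hours(alphabet_size: int, code_length: int,
                      requests_per_second: float) -> float:
    """Worst-case time to try every possible code, in hours."""
    keyspace = alphabet_size ** code_length
    return keyspace / requests_per_second / 3600

# 6-digit numeric code: only 10**6 = 1,000,000 possibilities.
numeric = brute_force_hours(10, 6, requests_per_second=1000)      # ~0.28 h (~17 min)

# Same length with digits + letters (36 symbols): 36**6 ≈ 2.2 billion.
alphanumeric = brute_force_hours(36, 6, requests_per_second=1000)  # ~605 h (~25 days)

print(f"numeric: {numeric:.2f} h, alphanumeric: {alphanumeric:.0f} h")
```

At this assumed rate the numeric keyspace falls in under 20 minutes, while widening the alphabet stretches the same attack to weeks; rate limiting and join alerts, as Livall added, shrink the practical attack window further.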

 

You may have heard about the terrifying botnet consisting of 3 million electric toothbrushes that were infected with malware.
It apparently started with a January 30 story by the Swiss German-language daily newspaper Aargauer Zeitung. Tom's Hardware helped spread the tale in English on Tuesday this week in an article titled, "Three million malware-infected smart toothbrushes used in Swiss DDoS attacks."
Security experts poked holes in the story, saying that the botnet description appeared to be a hypothetical and didn't really make sense anyway. Security researcher Matthew Remacle called it nonsense, pointing out that smart toothbrushes just pair with phones via Bluetooth instead of connecting to the Internet directly. "Supply chain compromise/backdoor in the toothbrush app would be like... the only way this story is even remotely true, because the phones have Internet and the toothbrushes don't. But then it's not a toothbrush botnet, it's a run-of-the-mill phone botnet."
Security expert Robert Graham said there is "no evidence 3 million toothbrushes performed a DDoS," and that the hypothetical offered by a security company was "misinterpreted by a journalist." "What the f*** is wrong with you people???? There are no details, like who is the target of the DDoS? what was the brand of toothbrushes? how are they connected to the Internet (hint: they aren't, they are Bluetooth)?"

 

DHS's inventory of AI systems for cybersecurity is not accurate. Specifically, the inventory identified 2 AI cybersecurity use cases, but officials told us 1 of these 2 was incorrectly characterized as AI. Although DHS has a process to review use cases before they are added to the AI inventory, the agency acknowledges that it does not confirm whether uses are correctly characterized as AI. Until it expands its process to include such determinations, DHS will be unable to ensure accurate use case reporting.
DHS has implemented some but not all of the key practices from GAO's AI Accountability Framework for managing and overseeing its use of AI for cybersecurity. GAO assessed the one remaining cybersecurity use case, known as Automated Personally Identifiable Information (PII) Detection, against 11 AI practices selected from the Framework.
GAO found that DHS fully implemented 4 of the 11 key practices and implemented 5 others to varying degrees in the areas of governance, performance, and monitoring. It did not implement 2 practices: documenting the sources and origins of data used to develop the PII detection capabilities, and assessing the reliability of data, according to officials. GAO's AI Framework calls for management to provide reasonable assurance of the quality, reliability, and representativeness of the data used in the application, from its development through operation and maintenance. Addressing data sources and reliability is essential to model accuracy. Fully implementing the key practices can help DHS ensure accountable and responsible use of AI.

 

 

 

HACKING

As the CFO was based in the United Kingdom, and there was no way for the Hong Kong based employee to ask the CFO in person to validate the request, the employee asked for the video call to ensure that the payment request was legitimate.
The criminals, however, were extremely sophisticated – and well prepared. They orchestrated a video conference call with the employee during which the employee saw, and heard, what he thought were multiple colleagues – but, what were in fact, AI-generated deepfakes. After seeing and hearing “people” whose faces and voices he recognized, and verifying with “them” that the payment request was legitimate, the employee was satisfied that all was kosher, and issued the payment as had been requested from him.
The fraudulent nature of the request was discovered only after the employee later mentioned the payment to operations personnel at the company’s headquarters.
[rG: Time for organizations to review their authorization procedures to include further separation of duties and validation communication channels.]

 

In the experiment, researchers instructed the AI to process audio from two sources in a live communication — such as a phone conversation. Upon hearing a specific keyword or phrase, the AI is further instructed to intercept the related audio and manipulate it before sending it on to the intended recipient. According to a blog post from IBM Security, the experiment ended with the AI successfully intercepting a speaker’s audio when they were prompted by the other human speaker to give their bank account information. The AI then replaced the authentic voice with deepfake audio, giving a different account number. The attack was undetected by the “victims” in the experiment.
Traditionally, building a system to autonomously intercept specific audio strings and replace them with audio files generated on the fly would have required a multi-disciplinary computer science effort. But modern generative AI does the heavy lifting itself. “We only need three seconds of an individual’s voice to clone it,” reads the blog, adding that, nowadays, these kinds of deepfakes are done via API.

 

Very few knew about CVE-2023-36802 until Microsoft addressed it as part of its September 2023 Patch Tuesday updates. However, Cyfirma spotted an exploit for it being sold on the dark web as early as February of that year, seven months before the security advisories began popping up. The earliest signs of Raspberry Robin abusing CVE-2023-36802 came in October, just weeks after Patch Tuesday and the same month that public exploit code was made available. Researchers believe this points to the team having access to an exploit developer, given how quickly it started making use of the vulnerability, especially compared to a year earlier when it was using year-old vulns.

 

Reported fraud losses increased 14% compared to the previous year.
Ransomware gangs also had a record year, with ransomware payments having reached over $1.1 billion in 2023.
Consumers reported losing more money to investment scams—more than $4.6 billion—than any other category in 2023. That amount represents a 21% increase over 2022.
The second highest reported loss amount came from imposter scams, with losses of nearly $2.7 billion reported. In 2023, consumers reported losing more money to bank transfers and cryptocurrency than all other methods combined.
However, the data reflects just a fraction of the actual harm inflicted by scammers since most frauds are never reported.
Those who fell victim to fraud can report incidents at ReportFraud.ftc.gov or file identity theft reports at IdentityTheft.gov.

 

"LassPass" mimicked the name and logo of real LastPass password manager.
As Apple has stepped up promotion of its App Store as a safer and more trustworthy source of apps, the company scrambled Thursday to correct a major threat to that narrative: a listing that password manager maker LastPass said was a “fraudulent app impersonating” its brand.
Apple had removed the app—titled LassPass and bearing a logo strikingly similar to the one used by LastPass—from its App Store. At the same time, Apple allowed a separate app submitted by the same developer to remain.

 

Targets of the threat actor include communications, energy, transportation, and water and wastewater systems sectors in the U.S. and Guam. Volt Typhoon actors conduct extensive pre-exploitation reconnaissance to learn about the target organization and its environment; tailor their tactics, techniques, and procedures (TTPs) to the victim's environment; and dedicate ongoing resources to maintaining persistence and understanding the target environment over time, even after initial compromise.

 

  1. Hacking is not a precise art form

  2. Pop up windows are rarely a part of hacking

  3. Simple hacks are often more effective than intricate ones

  4. Hackers don't type so fast when they're working

  5. Not all hacking is clandestine or illegal

 

 

APPSEC, DEVSECOPS, DEV

  1. Spoutible: need for platforms to enforce strong password policies and secure their APIs against such exploits.

  2. Trello: risks associated with public-facing APIs and the importance of robust privacy settings.

  3. Hathway: importance of securing web applications against known vulnerabilities and the extent of damage that can result from a single breach, both in terms of the volume of data compromised and the variety of personal information exposed.

  4. Delta Dental of California: timely patch management and the risks associated with zero-day vulnerabilities.

 

Security teams need to make sure that AI awareness is baked into every single security decision, especially in an environment where zero trust is being considered.
That urgency inside many firms developing these models is encouraging all manner of security shortcuts in coding. “This urgency is pushing aside many security validations, allowing engineers and data scientists to build their GenAI apps sometimes without any limitations. To deliver impressive features as quickly as possible, we see more and more occasions when these LLMs are integrated into internal cloud services like databases, computing resources and more.”
“Traditional EDR, XDR, and MDR tools are primarily designed to detect and respond to security threats on conventional IT infrastructure and endpoints,” says Chedzhemov. This makes them ill-equipped to handle the security challenges posed by cloud-based or on-premises AI applications, including LLMs.
No matter what CISOs and CIOs do with LLMs, they will be accepting LLM cloud risks. Whether an enterprise hosts its LLM in the cloud or on-device or on-premises will likely have a negligible impact on their threat landscape. Even if an enterprise is hosting its end locally, the other end of the LLM will almost certainly be in the cloud, especially if that vendor is handling the training. In short, there is going to be extensive cloud exposure with LLMs regardless of the CISO’s decisions.
Despite enterprise policies and rules, shadow IT absolutely extends to LLMs. Employees and department heads have easy access to public models, including ChatGPT and BingChat/Co-Pilot, whenever they want. They can then use those public models to create images, do analysis, write reports, code, and even make business decisions. End users, be it an employee, contractor, or third party with privileges, leveraging shadow LLMs is a massive problem for security and one that can be difficult to control.
It is impossible to protect data that the organization has lost track of, data that has been over-permissioned, or data that the organization is not even aware exists, so data discovery should be the first step in any data risk remediation strategy, including one that attempts to address AI/LLM risks.

 

This standard, also known as SQuaRE (System and Software Quality Requirements and Evaluation), has the goal of creating a framework for the evaluation of software product quality.

 

DevOps model has delivered increased business value and responsiveness through rapid, high-quality service delivery. The disciplines of platform engineering and site reliability engineering have emerged to meet the challenge of improving the DevOps process. Dedicating staff to each of these roles may be out of reach today, but building awareness of how the roles complement each other and identifying required skill sets from among existing staff will position organizations to take advantage of the next big things in DevOps.

 

It has been a long road and the path to implementing Continuous Integration & Delivery (CI/CD) in all of its forms for Power BI is still a journey, but the capabilities are now a reality. This broad category includes a range of features and capabilities related to managing project files with version control and the ability to share and collaborate with other development team members. 

 

 

VENDORS & PLATFORMS

The categories the system tried to identify were: crowd movement, unauthorized access, safeguarding, mobility assistance, crime and antisocial behavior, person on the tracks, injured or unwell people, hazards such as litter or wet floors, unattended items, stranded customers, and fare evasion. Each has multiple subcategories.

 

After buying the devices, customers can pay a subscription to store footage on the cloud, download clips and get discounted products.
That subscription is going up 43%, from £34.99 to £49.99 per device, per year, for basic plan customers.

 

As of 2023, 80% of adults in the United States (over 200 million people) use some form of vision correction, including reading glasses, prescription glasses, prescription sunglasses, or prescription contacts. UploadVR and everyone else has been told that the only option is to buy the Zeiss-branded corrective lens inserts, purchased on top of the $3,500 headset. This is Apple’s policy, and all of their documentation echoes this requirement.
The Apple Vision Pro can be used while wearing my own glasses, though I didn't use the headset long enough to see the impact of my glasses on Apple's eye-tracked distortion correction.
The gamble of scratching the lenses on such an expensive piece of hardware increases tenfold when you cram your own glasses into it. Apple's warranty notes that it does not apply "to damage caused by operating the Apple Product outside Apple’s published guidelines" and AppleCare+ warns it will not repair any damage caused by "abnormal or improper use". AppleCare+ is available for the Vision Pro for $499 for two years as a supplement to the one-year limited warranty. Even if AppleCare+ covered it, a repair would still cost you $299. The Zeiss corrective inserts start at $100 and are a logical choice for your own personal device. You have to ask yourself if the costly risk of wearing your own glasses is a smarter choice than spending $150 for a pair of removable prescription inserts.

 

New API discovery and protection service is aimed at giving customers a simple way to discover API endpoints, monitor traffic for vulnerabilities, provide testing, and protect applications.
Later this year, F5 plans to introduce a natural language-based AI assistant to help IT security teams more easily identify anomalies, query and generate policy configurations, and apply remediation steps.

 

  1. Flipper Zero

  2. O.MG cables

  3. USBKill

  4. USB Nugget

  5. Wi-Fi Pineapple

  6. USB Rubber Ducky

  7. LAN Turtle

Bonus: O.MG Unblocker

 

 

LEGAL

  • Contractors would have just eight hours to report a detected incident to the Cybersecurity and Infrastructure Security Agency (CISA), with updates required every 72 hours thereafter;

  • A software bill of materials (SBOM) would need to be maintained;

  • After an incident, contractors would provide "full access" to IT systems and personnel for CISA and federal law enforcement agencies.

The above ideas – developed by the Department of Defense (DoD), General Services Administration (GSA), and NASA – have been suggested in light of the many infosec threats facing the USA. The Cloud Service Providers Advisory Board (CSP-AB), which counts multiple major US cloud service firms among its members, described the new rules as "burdensome … on information technology companies who are already meeting a high security and compliance bar across the federal marketplace."
The CSP-AB took particular umbrage with the FAR update's SBOM requirements, arguing cloud service providers shouldn't be required to submit them since they're so frequently subject to change – sometimes "up to hundreds of times" per day.

 

The unanimous FCC vote extends anti-robocall rules to cover unsolicited AI deepfake calls by recognizing those voices as “artificial” under a federal law governing telemarketing and robocalling. The FCC’s move gives state attorneys general more legal tools to pursue illegal robocallers that use AI-generated voices to fool Americans.
The decision to interpret the 1991 Telephone Consumer Protection Act (TCPA) more broadly to include AI-generated voices comes weeks after a fake robocall that impersonated President Joe Biden targeted thousands of New Hampshire voters and urged them not to participate in the state’s primary.

 

The Innovation, Science and Economic Development Canada agency said it will “pursue all avenues to ban devices used to steal vehicles by copying the wireless signals for remote keyless entry, such as the Flipper Zero, which would allow for the removal of those devices from the Canadian marketplace through collaboration with law enforcement agencies.” “We are banning the importation, sale and use of consumer hacking devices, like flippers, used to commit these crimes.”
Presumably, such tools subject to the ban would include HackRF One and LimeSDR, which have become crucial for analyzing and testing the security of all kinds of electronic devices to find vulnerabilities before they’re exploited.
Alex Kulagin, COO of Flipper Devices, said in an interview that his company received no communication from the Canadian government ahead of Thursday’s statements. “We’re quite frustrated. Flipper is actually very underpowered to actually run any modern exploits for taking cars.” He said that defeating protections built into vehicles manufactured after 1990 “requires more hardware and software and quite a bit of social engineering, so we don’t see Flipper as the cause.” In the long run, you just make pentesters’ lives harder, and the systems around you are not as secure as they could be. If you hack a bank account using your laptop and nothing else, should we ban all the laptops?

 

Permissible uses of student data include providing the educational services offered by Google Workspace, enhancing the security and reliability of these services, facilitating communication, and fulfilling legal obligations. Non-permissible uses include purposes related to maintaining and improving Google Workspace for Education, ChromeOS, and the Chrome browser, such as measuring performance or developing new features and services for these platforms.

 

 

And Now For Something Completely Different …

“So long as we insist that cells are computers and genes are their code,” life might as well be “sprinkled with invisible magic”.
When the human genome was sequenced in 2001, many thought that it would prove to be an ‘instruction manual’ for life. But the genome turned out to be no blueprint. In fact, most genes don’t have a pre-set function that can be determined from their DNA sequence. Instead, genes’ activity — whether they are expressed or not, for instance, or the length of protein that they encode — depends on myriad external factors, from the diet to the environment in which the organism develops. And each trait can be influenced by many genes.