Robert Grupe's AppSecNewsBits 2024-03-30

Lame List: Thousands of AI Servers actively attacked for months, SQL Injection vulnerabilities since 2007, Cloud email filtering bypassed for 80% of domains, Open Source Malware Injection Attacks

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
AT&T confirms data for 73 million customers leaked on hacker forum

AT&T has finally confirmed it is impacted by a data breach affecting 73 million current and former customers. The confirmation comes after AT&T spent the past two weeks repeatedly denying that the massive trove of leaked customer data originated from the company or that its systems had been breached.

 

Thousands of servers storing AI workloads and network credentials have been hacked in an ongoing attack campaign targeting a reported vulnerability in Ray, a computing framework used by OpenAI, Uber, and Amazon.
The attacks, which have been active for at least seven months, have led to the tampering of AI models. They have also resulted in the compromise of network credentials, allowing access to internal networks and databases, as well as tokens for accessing accounts on platforms including OpenAI, Hugging Face, Stripe, and Azure. Besides corrupting models and stealing credentials, attackers behind the campaign have installed cryptocurrency miners on compromised infrastructure, which typically provides massive amounts of computing power. Attackers have also installed reverse shells, which are text-based interfaces for remotely controlling servers.
In the default configuration, Ray does not enforce authentication. As a result, attackers may freely submit jobs, delete existing jobs, retrieve sensitive information, and exploit the other vulnerabilities described in this advisory.
Anyscale plans to publish a script that will allow users to easily verify whether their Ray instances are exposed to the Internet.
The ongoing attacks underscore the importance of properly configuring Ray.
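Because Ray's dashboard and job-submission API answer without authentication by default, a quick way to gauge exposure is to test whether those endpoints respond to unauthenticated requests from outside your trusted network. The following is a minimal sketch (not Anyscale's forthcoming script); the host name is a placeholder, and the port is Ray's default dashboard port:

import requests

# Hypothetical host for illustration; Ray's dashboard defaults to port 8265.
RAY_HOST = "ray.example.internal"
DASHBOARD_PORT = 8265

def ray_dashboard_exposed(host: str, port: int) -> bool:
    """Return True if the Ray Jobs API answers without any authentication."""
    try:
        # The Jobs REST API is served under /api/jobs/ on the dashboard port.
        resp = requests.get(f"http://{host}:{port}/api/jobs/", timeout=5)
    except requests.RequestException:
        return False  # unreachable from this vantage point, which is what you want
    # Any 2xx response to an unauthenticated caller means jobs can be listed
    # (and, by extension, submitted) by anyone who can reach this port.
    return resp.ok

if __name__ == "__main__":
    if ray_dashboard_exposed(RAY_HOST, DASHBOARD_PORT):
        print("WARNING: Ray dashboard reachable without authentication")
    else:
        print("Dashboard not reachable unauthenticated from this vantage point")

Run the check from a network segment that should not have access to the cluster; a positive result means the instance needs to be firewalled or placed behind an authenticating proxy.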

 

UnitedHealth said last week it was beginning to clear a medical claims backlog of more than $14 billion as it brought its services back online following the cyberattack, which caused wide-ranging disruption starting in late February.
Hackers said earlier this month that UnitedHealth paid a $22 million ransom in a bid to recover its systems, but whether Blackcat honored its end of the bargain has not been made public.

 

Despite widespread knowledge and documentation of SQLi vulnerabilities over the past two decades, along with the availability of effective mitigations, software manufacturers continue to develop products with this defect, which puts many customers at risk. SQLi has been considered an ‘unforgivable’ class of vulnerability since at least 2007. Despite this, SQL injection weaknesses (such as CWE-89) remain a prevalent class of vulnerability.
For more information on recommended principles and best practices to achieve this goal, visit CISA’s Secure by Design page.

 

The authoring academic research team noted that services in wide use from vendors such as Proofpoint, Barracuda, Mimecast, and others could be bypassed in at least 80% of major domains that they examined. The filtering services can be bypassed if the email hosting provider is not configured to only accept messages that arrive from the email filtering service. "Mail administrators that don't properly configure their inbound mail to mitigate this weakness are akin to bar owners who deploy a bouncer to check IDs at the main entrance but allow patrons to enter through an unlocked, unmonitored side door as well."
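The weakness is operational: if the hosting provider's own inbound endpoint still accepts mail from anywhere, a sender can simply skip the filtering service that the MX records advertise. A rough self-test, sketched below for an Exchange Online-hosted domain, is to attempt direct delivery to that backend and see whether the RCPT command is accepted. The backend hostname pattern and test addresses here are assumptions for illustration, and you should only probe domains you administer:

import smtplib

DOMAIN = "example.com"                        # a domain you administer
# Exchange Online tenants typically expose a backend endpoint of this shape;
# adjust (or look up) the equivalent for your hosting provider.
BACKEND_MX = DOMAIN.replace(".", "-") + ".mail.protection.outlook.com"
TEST_RCPT = f"postmaster@{DOMAIN}"            # mailbox you control

try:
    with smtplib.SMTP(BACKEND_MX, 25, timeout=15) as smtp:
        smtp.ehlo("probe.example")
        smtp.mail("filter-bypass-probe@probe.example")
        code, _ = smtp.rcpt(TEST_RCPT)
except (OSError, smtplib.SMTPException) as exc:
    print(f"Could not reach backend directly: {exc}")
else:
    if code == 250:
        print("Backend accepts direct delivery: the filtering service can be bypassed")
    else:
        print(f"Backend refused direct delivery (SMTP {code})")

If the backend accepts the recipient, lock inbound delivery to the filtering service's published IP ranges (or the provider's equivalent "restrict inbound" setting) so the bouncer covers the side door too.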

 

 

HACKING

Several Apple customers recently reported being targeted in elaborate phishing attacks that involve what appears to be a bug in Apple’s password reset feature. In this scenario, a target’s Apple devices are forced to display dozens of system-level prompts that prevent the devices from being used until the recipient responds “Allow” or “Don’t Allow” to each prompt. Assuming the user manages not to fat-finger the wrong button on the umpteenth password reset request, the scammers will then call the victim while spoofing Apple support in the caller ID, saying the user’s account is under attack and that Apple support needs to “verify” a one-time code.
Parth Patel received a call on his iPhone that said it was from Apple Support (the number displayed was 1-800-275-2273, Apple’s real customer support line). “I pick up the phone and I’m super suspicious,” Patel recalled. “So I ask them if they can verify some information about me, and after hearing some aggressive typing on his end he gives me all this information about me and it’s totally accurate.”

 

The malicious code was introduced into the compression utility, known as xz Utils, in versions 5.6.0 and 5.6.1, and had been circulating for more than a month.
Because the backdoor was discovered before the malicious versions of xz Utils were added to production versions of Linux, it's not really affecting anyone in the real world. BUT that's only because it was discovered early due to bad actor sloppiness. Had it not been discovered, it would have been catastrophic to the world.
Someone using the developer's name took to a developer site for Ubuntu to ask that the backdoored version 5.6.1 be incorporated into production versions because it fixed bugs that caused a tool known as Valgrind to malfunction.
“This could break build scripts and test pipelines that expect specific output from Valgrind in order to pass,” the person warned.
“We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added),” the Ubuntu maintainer said. "He has been part of the xz project for two years, adding all sorts of binary test files, and with this level of sophistication, we would be suspicious of even older versions of xz until proven otherwise."
Anyone using Linux should check with their distributor immediately to determine if their system is affected. Freund provided a script for detecting if an SSH system is vulnerable.
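Pending guidance from your distribution, a quick local triage is simply to check which xz version is installed and whether it falls in the known-bad range. This is a minimal sketch, not Freund's detection script (which inspects the patched function in liblzma), and it assumes xz is on the PATH:

import re
import subprocess

BACKDOORED = {"5.6.0", "5.6.1"}  # the two releases carrying the backdoor

def installed_xz_version() -> str | None:
    """Return the xz version string, or None if xz isn't installed."""
    try:
        out = subprocess.run(["xz", "--version"], capture_output=True,
                             text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    m = re.search(r"xz \(XZ Utils\) (\d+\.\d+\.\d+)", out)
    return m.group(1) if m else None

version = installed_xz_version()
if version in BACKDOORED:
    print(f"xz {version} installed: known backdoored release, downgrade immediately")
elif version:
    print(f"xz {version} installed: not one of the known-bad versions")
else:
    print("xz not found on PATH")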

 

PyPI, a vital repository for open source developers, temporarily halted new project creation and new user registration following an onslaught of package uploads that executed malicious code on any device that installed them. Short for the Python Package Index, PyPI is the go-to source for apps and code libraries written in the Python programming language. Fortune 500 corporations and independent developers alike rely on the repository to obtain the latest versions of code needed to make their projects run.
PyPI came under attack by users who likely used automated means to upload malicious packages that, when executed, infected user devices. The attackers used a technique known as typosquatting, which capitalizes on typos users make when entering the names of popular packages into command-line interfaces. By giving the malicious packages names that are similar to popular benign packages, the attackers count on their malicious packages being installed when someone mistakenly enters the wrong name.
Similar attacks are a fact of life for virtually all open source repositories, including npm and RubyGems.
Ten hours later, it lifted the suspension.
[rG: Always ensure 3rd party components are centrally managed within an organization, and use a robust SCA vulnerability scanner to check daily for vulnerabilities.]
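A lightweight complement to an SCA scanner is a heuristic check for names that sit one typo away from popular packages. The sketch below uses Python's standard library only; the POPULAR set is a tiny illustrative sample, not a real allow-list:

from difflib import SequenceMatcher
from importlib import metadata

# Tiny illustrative sample; a real SCA tool would use the full set of
# top PyPI packages and their known-good maintainers.
POPULAR = {"requests", "numpy", "pandas", "colorama", "urllib3", "cryptography"}

def near_miss(name: str, known: str, threshold: float = 0.85) -> bool:
    """True if `name` is suspiciously similar to, but not equal to, `known`."""
    if name == known:
        return False
    return SequenceMatcher(None, name, known).ratio() >= threshold

installed = {dist.metadata["Name"].lower() for dist in metadata.distributions()}
for pkg in sorted(installed):
    for known in POPULAR:
        if near_miss(pkg, known):
            print(f"possible typosquat: '{pkg}' resembles '{known}'")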

 

The software supply chain attack is said to have led to the theft of sensitive information, including passwords, credentials, and other valuable data. It chiefly entailed setting up a clever typosquat of the official PyPI domain known as "files.pythonhosted[.]org," giving it the name "files.pypihosted[.]org" and using it to host trojanized versions of well-known packages like colorama.
The threat actors took Colorama (a highly popular tool with 150+ million monthly downloads), copied it, and inserted malicious code. They then concealed the harmful payload within Colorama using space padding and hosted this modified version on their typosquatted domain, which acted as a fake mirror.
The threat actors behind the campaign are said to have pushed multiple changes to the rogue repositories in one single commit, altering as many as 52 files in one instance in an effort to conceal the changes.
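One cheap defense against this particular trick is to scan dependency manifests for download URLs that point anywhere other than the official PyPI hosts. A minimal sketch for plain requirements files follows; lockfile formats such as poetry.lock or Pipfile.lock would need their own parsers:

import re
from pathlib import Path

# The only hosts pip should be pulling public PyPI release files from.
TRUSTED_HOSTS = {"files.pythonhosted.org", "pypi.org"}
URL_RE = re.compile(r"https?://([^/\s\"']+)")

def suspicious_hosts(manifest: Path) -> set[str]:
    """Return any download hosts in a requirements file that aren't trusted."""
    hosts = set(URL_RE.findall(manifest.read_text(errors="ignore")))
    return {h for h in hosts if h not in TRUSTED_HOSTS}

for path in Path(".").rglob("requirements*.txt"):
    bad = suspicious_hosts(path)
    if bad:
        print(f"{path}: references untrusted hosts {sorted(bad)}")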

 

The "wall" command is used to write a message to the terminals of all users that are currently logged in to a server, essentially allowing users with elevated permissions to broadcast key information to all local users (e.g., a system shutdown).
The util-linux wall command does not filter escape sequences from command line arguments. This allows unprivileged users to put arbitrary text on other users' terminals, if mesg is set to "y" and wall is setgid.
CVE-2024-28085, codenamed WallEscape, exploits the improperly filtered escape sequences passed via command-line arguments to render a fake sudo (superuser do) password prompt on other users' terminals and trick them into entering their passwords.
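The general mitigation named here, stripping escape sequences from untrusted text before it reaches a terminal, is easy to illustrate. This is a minimal sketch of that class of filter, not the util-linux patch itself:

import re

# Matches ANSI/VT100 escape sequences (CSI and OSC) plus stray control bytes,
# the building blocks WallEscape-style tricks use to repaint terminals.
ESCAPE_RE = re.compile(
    r"\x1b\[[0-9;?]*[ -/]*[@-~]"          # CSI sequences: cursor moves, colors, erase
    r"|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"  # OSC sequences: terminal titles, etc.
    r"|[\x00-\x08\x0b-\x1f\x7f]"          # remaining C0 controls except \t and \n
)

def sanitize_for_terminal(untrusted: str) -> str:
    """Strip escape sequences so broadcast text can't fake prompts or hide output."""
    return ESCAPE_RE.sub("", untrusted)

message = "maintenance at 02:00\x1b[3A\x1b[2K[sudo] password for user: "
print(repr(sanitize_for_terminal(message)))
# -> 'maintenance at 02:00[sudo] password for user: '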

 

GEOBOX is sold on Telegram channels for a subscription of $80 per month or $700 for a lifetime license, payable in cryptocurrency.

  • GPS spoofing even on devices without a receiver, allowing users to fake their geographic location and bypass location-based security or engage in location-specific fraud.

  • Emulation of specific network settings and Wi-Fi access points to disguise illicit activities as legitimate network traffic.

  • Anti-fraud circumvention to support activities like financial fraud and identity theft.

  • Routing traffic through anonymizing proxies to obfuscate the threat actor's location.

  • WebRTC IP masking and Wi-Fi MAC Address masquerading to hide the user's real IP address and mimic Wi-Fi network identifiers, complicating digital footprint tracking.

  • Extensive support for VPN protocols, including DNS configurations for specific locations to prevent data leaks.

  • Support for LTE modems for mobile internet connectivity, adding another layer of anonymity.

The most enticing part is that the above tools are packaged in a user-friendly environment that is easy to use even by low-skilled threat actors, who are given clear and detailed instructions in the accompanying user manual.

 

Last year's count reached 97 zero-days exploited in attacks, representing a surge of over 50 percent compared to the previous year's 62 vulnerabilities. Despite this rise, the figure remains below the peak of 106 zero-day bugs exploited in 2021.

 

 

AISEC

Several big businesses have published source code that incorporates a software package previously hallucinated by generative AI.
Not only that, but someone, having spotted this recurring hallucination, turned the made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI's bad advice. If such a package is laced with malware, the results can be disastrous.
Lanyado published research detailing how one might pose a coding question to an AI model like ChatGPT and receive an answer that recommends the use of a software library, package, or framework that doesn't exist. "When an attacker runs such a campaign, he will ask the model for packages that solve a coding problem, then he will receive some packages that don’t exist," Lanyado explained to The Register. "He will upload malicious packages with the same names to the appropriate registries, and from that point on, all he has to do is wait for people to download the packages."
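A cheap guard against installing a hallucinated (or freshly squatted) name is to look the package up on PyPI's public JSON API before trusting the suggestion. A minimal sketch; the thresholds and what counts as "suspicious" are judgment calls, not part of Lanyado's research:

import sys
import requests

def pypi_summary(name: str) -> None:
    """Look up an AI-suggested package on PyPI before trusting it."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"'{name}' does not exist on PyPI: likely a hallucinated name")
        return
    resp.raise_for_status()
    data = resp.json()
    releases = data["releases"]
    info = data["info"]
    print(f"'{name}': {len(releases)} release(s), "
          f"author={info.get('author') or 'unknown'}, "
          f"homepage={info.get('home_page') or 'none'}")
    # A package that appeared recently, with a single release and no history,
    # deserves manual review before it goes anywhere near a build.

if __name__ == "__main__":
    pypi_summary(sys.argv[1] if len(sys.argv) > 1 else "requests")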

 

MyCity's Microsoft Azure-powered chatbot uses a complex process of statistical associations across millions of tokens to essentially guess at the most likely next word in any given sequence, without any real understanding of the underlying information being conveyed. That can cause problems when a single factual answer to a question might not be reflected precisely in the training data.
NYC's "MyCity" ChatBot launched as a "pilot" program last October. The announcement touted the ChatBot as a way for business owners to "save ... time and money by instantly providing them with actionable and trusted information from more than 2,000 NYC Business webpages and articles on topics such as compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines." But a new report found the MyCity chatbot giving dangerously wrong information about some pretty basic city policies. To cite just one example, the bot said that NYC buildings "are not required to accept Section 8 vouchers," when an NYC government info page says clearly that Section 8 housing subsidies are one of many lawful sources of income that landlords are required to accept without discrimination. The Markup also received incorrect information in response to chatbot queries regarding worker pay and work hour regulations, as well as industry-specific information like funeral home pricing.

 

The AI Act is expected to be published and go into effect in late spring or early summer of 2024. In the meantime, employers can expect other countries to quickly follow suit with legislation modeled on the AI Act.
Using prohibited AI practices can result in hefty penalties, with fines of up to €35 million, or 7 percent of worldwide annual turnover for the preceding financial year—whichever is higher. Similarly, failure to comply with the AI Act’s data governance and transparency requirements can lead to fines up to €15 million, or 3 percent of worldwide turnover for the preceding financial year. Violation of the AI Act’s other requirements can result in fines of up to €7.5 million or 1 percent of worldwide turnover for the preceding financial year.

  1. Unacceptable Risk applications are banned.
    The scraping of faces from the internet or security footage to create facial recognition databases; emotion recognition in the workplace and educational institutions; cognitive behavioral manipulation; biometric categorization to infer sensitive data, such as sexual orientation or religious beliefs; and certain cases of predictive policing for individuals.

  2. High Risk applications, including the use of AI in employment applications and other aspects of the workplace, are subject to a variety of requirements.

  3. Limited Risk applications, such as chatbots, must adhere to transparency obligations.

  4. Minimal Risk applications, such as games and spam filters, can be developed and used without restriction.

 

AI applications require a different approach to the way in which traditional cyber threats are detected. Traditionally, security has mostly relied on pattern-based approaches for detecting both vulnerabilities and attacks. That approach does not work for generative AI applications - the intent of the user matters more than the exact input of the user.
SydeLabs has developed two products that give enterprises and other AI users an opportunity to fight back. Its Sydebox solution, now being used by around 15 early-adopting customers, enables organisations to scan their AI applications to identify vulnerabilities that an attacker might exploit so these can be addressed. Kumari says the organisations using this software have already found more than 15,000 potential weak points in 50 different applications they have deployed.

 

As chief AI officers, appointees will serve as senior advisers on AI initiatives, monitoring and inventorying all agency uses of AI. They must conduct risk assessments to consider whether any AI uses are impacting "safety, security, civil rights, civil liberties, privacy, democratic values, human rights, equal opportunities, worker well-being, access to critical resources and services, agency trust and credibility, and market competition." Most urgently, by December 1, the officers must correct all non-compliant AI uses in government, unless an extension of up to one year is granted.
Ideal candidates, the White House recommended, might include chief information officers, chief data officers, or chief technology officers.

 

The immaturity of the AI measurement and evaluation ecosystem is a significant roadblock to the implementation of the Biden administration’s AI procurement priorities.
A case in point: Between 2013 and 2015, one ill-conceived automated system called MiDAS wrongfully accused over 34,000 individuals of unemployment fraud in Michigan. The damage caused by this algorithmic flaw was immense: People had their credit destroyed, went bankrupt, and lost their homes. Cases like this show why high ethical and safety standards matter when it comes to government AI systems.
But there’s a hitch: The science of evaluating whether a given AI system is up to scratch is still in its infancy. Tools to evaluate AI systems’ reliability, fairness, and security are currently lacking—not just within the federal government, but everywhere. Efforts like the National Institute of Standards and Technology’s (NIST’s) newly established AI Safety Institute are a step in the right direction, but the institute will need significant funding to achieve the vision laid out for it.

 

The new AI Assurance and Discovery Lab will evaluate AI-enabled systems intended for use in consequential applications including national security, healthcare, and transportation. Government agencies can also use the lab to inform requirements development for new AI-enabled systems, create and evaluate proposed risk mitigation plans, and develop long-term AI assurance strategies for their organizations.

 

The Ensuring Likeness Voice and Image Security Act, or ELVIS Act, is an updated version of the state's old right of publicity law. While the old law protected an artist's name, photograph or likeness, the new legislation includes AI-specific protections. Once the law takes effect on July 1, people will be prohibited from using AI to mimic an artist's voice without permission.

 

The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services.
It's the latest example of the federal government trying to navigate its internal use of AI while simultaneously attempting to craft regulations for the burgeoning technology.
Microsoft hopes the suite of government-oriented tools it plans to roll out this summer will address Congress' concerns.

 

We provide evidence that alleged emergent abilities evaporate with different metrics or with better statistics, and may not be a fundamental property of scaling AI models.

 

 

 

APPSEC, DEVSECOPS, DEV

Screw the RFC, let's boil it down to the absolute basics and define a list of characters that can appear anywhere in the alias.
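In that spirit, a deliberately conservative allow-list for the local part ("alias") of an address is short enough to read in one line. The character set and length cap below are one pragmatic choice for illustration, not the article's definitive list, and they are stricter than what RFC 5321/5322 permit:

import re

# Letters, digits, and a few separators; no leading/trailing/double dots; max 64 chars.
ALIAS_RE = re.compile(r"^(?!\.)(?!.*\.\.)[A-Za-z0-9._+-]{1,64}(?<!\.)$")

def alias_ok(local_part: str) -> bool:
    """Validate the part of the address before the @ against the allow-list."""
    return bool(ALIAS_RE.match(local_part))

for candidate in ["robert.grupe", "first+tag", ".leading-dot", "double..dot", "ok-name_1"]:
    print(f"{candidate!r}: {alias_ok(candidate)}")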

 

The criminal gang responsible for the attack copied and exfiltrated (illegally removed) some 600GB of files, including personal data of Library users and staff. As well as the exfiltration of data for ransom, the attackers’ methods included the encryption of data and systems, and the destruction of some servers to inhibit system recovery and to cover their tracks. Our major software systems cannot be brought back in their pre-attack form, either because they are no longer supported by the vendor or because they will not function on the new secure infrastructure that is currently being rolled out.

 

 

VENDORS & PLATFORMS

This feature is in public beta and automatically enabled on all private repositories for GitHub Advanced Security (GHAS) customers.
Known as Code Scanning Autofix and powered by GitHub Copilot and CodeQL, it covers more than 90% of alert types in JavaScript, TypeScript, Java, and Python. Once toggled on, it suggests fixes that GitHub claims will address more than two-thirds of the vulnerabilities found, with little or no editing required.

 

96% of all codebases contain open-source software. Lately, though, there's been a very disturbing trend. A company will make its program using open source, make millions from it, and then, only then, switch licenses, leaving its contributors, customers, and partners in the lurch as it tries to grab billions.
The latest IT melodrama baddie is Redis. Its program, which goes by the same name, is an extremely popular in-memory database. Before this latest round of license changes, MongoDB and Elastic made similar shifts.

 

 

LEGAL & REGULATORY

  • Artificial Intelligence:
    Amazon Alexa violated the Children’s Online Privacy Protection Act (COPPA) by indefinitely retaining children’s voice recordings, which it used to improve its speech recognition algorithm. The FTC also charged Rite Aid with failing to take reasonable steps to ensure that the AI facial recognition technology it deployed in its retail stores did not erroneously flag people as shoplifters or other wrongdoers.

  • Health Privacy:
    The FTC banned BetterHelp, an online counseling service, from sharing sensitive health data for advertising with Facebook and other third parties, and required it to pay $7.8 million to provide partial refunds to consumers. Also in 2023, the FTC banned GoodRx from sharing sensitive health data with applicable third parties for advertising and required the company to pay a civil penalty for violating the Health Breach Notification Rule, the agency’s first action under the rule.

  • Children’s Privacy:
    The FTC obtained a record $275 million penalty against Fortnite maker Epic Games, which was also required to adopt strong privacy default settings for children and teens along with other protections, and brought an action against ed tech provider Edmodo for using children’s personal information for advertising in violation of COPPA and for outsourcing its COPPA responsibilities to schools. In late 2023, the FTC also proposed key changes to strengthen and update the COPPA Rule that would further limit the ability of companies to condition access to services on monetizing children’s data.

  • Geolocation Data:
    Geolocation data can reveal highly sensitive information about people by tracking their visits to places such as reproductive health clinics, houses of worship, and domestic violence shelters. In 2022, the FTC sued data broker Kochava Inc. for selling geolocation data from hundreds of millions of mobile devices that can be used to trace the movements of individuals to and from sensitive locations.

 

A British court has ruled that Julian Assange, an Australian citizen, can't be extradited to the United States on espionage charges unless U.S. authorities guarantee he won't get the death penalty, giving the WikiLeaks founder a partial victory in his long legal battle over the site's publication of classified American documents.
The ruling means the legal saga, which has dragged on for more than a decade, will continue -- and Assange will remain inside London's high-security Belmarsh Prison, where he has spent the last five years.

 

 

And Now For Something Completely Different …

Epigenetics is the study of cellular variations that are caused by external, environmental factors that switch genes “on” or “off.” Before you were conceived, the egg that became you was formed while your mother was still in utero in your grandmother.
As a result, emotional and physical effects can manifest two generations later.