Robert Grupe's AppSecNewsBits 2023-12-30
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
U.S. water utilities were hacked after leaving their default passwords set to ‘1111’
Since the start of the Israel-Hamas war, an Iranian hacking group known as CyberAv3ngers has been targeting U.S. water utilities that use Israel-manufactured Unitronics programmable logic controllers—common multipurpose industrial devices used for monitoring and regulating water systems. Such infrastructure is often forgotten about, neglected, or both and presents an attractive target for nation-states.
The attacks hit at least 11 different entities using Unitronics devices across the United States, which included six local water facilities, a pharmacy, an aquatics center, and a brewery. After taking control of the devices, hackers replaced their screens with the message “You have been hacked, down with Israel. Every equipment ‘made in Israel’ is CyberAv3ngers legal target.”
Some of the compromised devices had been connected to the open internet with a default password of “1111,” making it easy for hackers to find them and gain access. Fixing that doesn’t cost any money, and those are the kinds of basic things we urgently want companies to do.
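The remediation here is basic credential hygiene, and even a rough inventory sweep can surface it. The sketch below is a hypothetical illustration only; the record format and the default-password list are assumptions, not taken from any vendor documentation:

```python
# Minimal sketch (not tied to any specific vendor or product): sweep an asset
# inventory for devices still using known factory-default passwords.
KNOWN_DEFAULT_PASSWORDS = {"1111", "0000", "admin", "password"}

def audit_default_credentials(devices):
    """devices: iterable of dicts like {"name": ..., "password": ..., "internet_facing": bool}."""
    findings = []
    for device in devices:
        if device.get("password") in KNOWN_DEFAULT_PASSWORDS:
            severity = "critical" if device.get("internet_facing") else "high"
            findings.append((severity, device["name"]))
    return findings

if __name__ == "__main__":
    inventory = [
        {"name": "plc-water-01", "password": "1111", "internet_facing": True},
        {"name": "hmi-plant-02", "password": "s0m3-l0ng-un1qu3-pw", "internet_facing": False},
    ]
    for severity, name in audit_default_credentials(inventory):
        print(f"[{severity}] {name}: factory-default password still in use")
```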
CBS, Paramount owner National Amusements says it was hacked
The private media conglomerate said in a legally required filing with Maine’s attorney general that hackers stole personal information on 82,128 people during a December 2022 data breach.
The company discovered the breach months later, in August 2023. The data breach notice filed with Maine said that hackers also stole financial information, such as bank account numbers or credit card numbers in combination with associated security codes, passwords or secrets.
On January 11, the FAA’s NOTAM (Notice to Air Missions) system went down, causing a nationwide “ground stop” that halted all takeoffs, though planes already in the air were allowed to continue to their destinations. The outage was traced to a damaged database file: a contractor working to correct a problem with the synchronization between the live and backup databases ended up corrupting both. The engineer “replaced one file with another” in “an honest mistake that cost the country millions,” and the incident holds some obvious lessons about ensuring critical data is backed up redundantly, especially if you’re going to be mucking around with the backup system.
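One mechanical takeaway: before overwriting either copy during that kind of sync repair, verify which file is which and keep a recoverable copy. A minimal sketch of that discipline, with hypothetical file names:

```python
# Minimal sketch: compare checksums and keep a timestamped safety copy before
# replacing one database file with another. File names are hypothetical.
import hashlib
import shutil
import time
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def safe_replace(live: Path, candidate: Path) -> None:
    if sha256(live) == sha256(candidate):
        print("Files are already identical; nothing to do.")
        return
    backup = live.with_name(live.name + f".bak.{int(time.time())}")
    shutil.copy2(live, backup)      # keep a recoverable copy of the live file first
    shutil.copy2(candidate, live)   # only then overwrite it
    print(f"Replaced {live}; previous version saved as {backup}")

if __name__ == "__main__":
    safe_replace(Path("notam_live.db"), Path("notam_sync.db"))
```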
On January 24, a New York Stock Exchange employee failed to turn the backup server off at the appropriate time. As a result, when trading began in New York at 9:30 a.m., the NYSE computers thought they were continuing the previous day’s trading session and ignored the day’s opening auctions, which are supposed to set initial prices for many stocks.
A report this year from NASA’s Office of Inspector General (OIG) focused on numerous licenses NASA purchased for Oracle products to support the Space Shuttle program, which wrapped up more than a decade ago. Not only is the agency locked into Oracle tech as a result, but poor documentation processes mean that NASA isn’t really sure how many of those Oracle systems it is actually using. As a result, the agency spent $15 million over the past three years on software it may not be using, but it didn’t want to risk a software audit from Oracle that might end in a fine that’s even more costly.
Nutanix used software from two different vendors for the purposes of “interoperability testing, validation and customer proofs of concept, training and customer support.” Unfortunately, it did all that using versions of the software that were marked for evaluation purposes only, an “evaluation” process that lasted for years. The issue was discovered by an internal review, and because the vendors needed to be paid for the noncompliant use, Nutanix was unable to file its quarterly earnings report with the SEC on time because it was trying to get a handle on what it owed.
Minnechaug Regional High School in Massachusetts had been happily running a “green lighting” system installed by 5th Light that automatically adjusted the lights inside and outside the school as needed. It turns out the system had been hit by malware, and had gone into a fallback mode in which the lights never turned off. The high-tech lighting system had no manual switches that could simply be turned on and off, and the software was integrated into other school systems and could not be easily replaced. The original vendor no longer existed, and the IP had been bought and sold multiple times. Finally, after nearly 18 months of leaving the lights burning continuously (and occasionally screwing bulbs in and out by hand as needed), the system was updated this year.
The MRH-90 Taipan is a military helicopter used in Australia. In 2010, a “catastrophic” engine failure occurred when a pilot tried a so-called “hot start” — powering down and then restarting the engine mid-mission. This mechanical problem was fixed in software. Unfortunately, the first rule of software patches is that they work only if you actually roll them out, and despite the fact that this patch had been available for the better part of a decade, it wasn’t installed on all of Australia’s Taipans, resulting in a hot start that led to a helicopter crash during a training mission this past April.
Optus, Australia’s second-largest telecom provider, went down for 12 hours, leaving half of Australians without phone or Internet connectivity. The fault was ultimately traced to routing changes; the resulting wave of routing information was apparently so large that it overwhelmed Optus’s routers, which then had to be physically restarted, something that took quite a long time.
AI Failures: lawyers at Levidow, Levidow & Oberman turned to ChatGPT to help them draft legal briefs related to a client of theirs suing an airline over a personal injury. Unfortunately for them and their client, ChatGPT did what it’s becoming increasingly well known for: producing an extremely plausible document that included a number of factual errors, including citations of multiple court cases that did not exist (a “hallucination,” in AI lingo). AI failures also hit the tech journalism world, with CNET being forced to correct more than 35 stories.
.
HACKING
Cybercriminals Launched “Leaksmas” Event In The Dark Web Exposing Massive Volumes Of Leaked PII And Compromised Data
On Christmas Eve, Resecurity, which protects Fortune 100 companies and government agencies globally, observed multiple actors on the Dark Web releasing substantial data leaks. Over 50 million records containing PII of consumers from around the world have been leaked. Numerous leaks disseminated in the underground cyber world were tagged with ‘Free Leaksmas,’ indicating that these significant leaks were shared freely among various cybercriminals as a form of mutual gratitude. Ironically, this display of generosity among cybercriminals is far from a cause for celebration for victims globally. It will inevitably result in them facing a host of adverse effects, such as account takeovers (ATO), business email compromise (BEC), identity theft, and financial fraud. Significantly, the data breaches weren’t confined to the United States; they extended globally.
The mass backdooring campaign, which according to Russian government officials also infected the iPhones of thousands of people working inside diplomatic missions and embassies in Russia, came to light in June. Over a span of at least four years, the infections were delivered in iMessage texts that installed malware through a complex exploit chain without requiring the receiver to take any action.
With that, the devices were infected with full-featured spyware that, among other things, transmitted microphone recordings, photos, geolocation, and other sensitive data to attacker-controlled servers. Although infections didn’t survive a reboot, the unknown attackers kept their campaign alive simply by sending devices a new malicious iMessage text shortly after devices were restarted.
The attack began by exploiting CVE-2023-41990, a vulnerability in Apple’s implementation of the TrueType font format. This initial link in the chain, which used techniques including return-oriented programming and jump-oriented programming to bypass modern exploit defenses, allowed the attackers to remotely execute code, albeit with minimal system privileges.
The next link in the exploit chain targeted the iOS kernel, the core of the OS reserved for the most sensitive device functions and data, by exploiting CVE-2023-32434, a memory-corruption vulnerability in XNU, the kernel at the heart of iOS and macOS. This link went on to exploit CVE-2023-38606, the vulnerability residing in the secret MMIO registers, which allowed the bypassing of the Page Protection Layer, a defense that prevents malicious code injection and kernel modification even after a kernel has been compromised. The chain then exploited a Safari vulnerability tracked as CVE-2023-32435 to execute shellcode. The resulting shellcode, in turn, went on to once again exploit CVE-2023-32434 and CVE-2023-38606 to finally achieve the root access required to install the final spyware payload.
Besides affecting iPhones, these critical zero-days and the secret hardware function resided in Macs, iPods, iPads, Apple TVs, and Apple Watches. What’s more, the exploits Kaspersky recovered were intentionally developed to work on those devices as well. Apple has since patched those platforms.
Amnesty confirms Apple warning: Indian journalists’ iPhones infected with Pegasus spyware
Apple issued notifications warning over half a dozen Indian lawmakers that their iPhones had been targets of state-sponsored attacks. The Modi government responded by criticizing Apple’s security and demanding explanations in an effort to mitigate the political impact.
Officials from the ruling Bharatiya Janata Party (BJP) publicly questioned whether the Silicon Valley company's internal threat algorithms were faulty and announced an investigation into the security of Apple devices. In private, senior Modi administration officials called Apple's India representatives to demand that the company help soften the political impact of the warnings. They also summoned an Apple security expert from outside the country to a meeting in New Delhi, where government representatives pressed the Apple official to come up with alternative explanations for the warnings to users. "They were really angry."
The Modi government has never confirmed or denied using spyware, and it has refused to cooperate with a committee appointed by India's Supreme Court to investigate whether it had. But two years ago, the Forbidden Stories journalism consortium found that phones belonging to Indian journalists and political figures were infected with Pegasus, which grants attackers access to a device's encrypted messages, camera and microphone.
To stem North Korea’s missiles program, White House looks to its hackers
Last year, Pyongyang-linked hackers stole roughly $1.7 billion worth of digital money. About half of North Korea’s missile program is funded by cyberattacks and cryptocurrency theft.
They famously burst into public consciousness in 2014, when Pyongyang’s operatives hacked into Sony Pictures Entertainment and threatened the movie studio against releasing “The Interview,” a raunchy comedy that portrayed the assassination of Kim Jong Un. Years later, in 2017, they unleashed a self-spreading computer virus that is estimated to have caused billions of dollars in damage in a matter of hours.
By some metrics, North Korea has launched more than a dozen supply-chain attacks in the last year — a sophisticated tactic in which hackers compromise the software delivery pipeline to get nearly unfettered access to a wide range of companies. In April, researchers at cybersecurity firm Mandiant uncovered that North Korean hackers had pulled off the first publicly known instance of a “double” software supply-chain hack — jumping from one software maker into a second and from there to the company’s customers. Mandiant assessed the hackers were after cryptocurrency. Had they wanted to, however, the North Koreans could have used tactics like that to inflict “a massive level of damage.” What North Korea “is able to do on a global scale, no one has replicated."
Ransomware gangs are increasingly turning to remote encryption, and that's a huge problem
Remote encryption is a form of ransomware in which threat actors leverage a single compromised, unprotected endpoint to encrypt data on other devices connected to the same network. All it takes is one underprotected device to compromise the entire network. Attackers know this, so they hunt for that one ‘weak spot’, and most companies have at least one.
Remote encryption is a super destructive method of ransomware attack, and its deliberate use is growing more popular by the day, with a 62% increase, year-on-year, in intentional remote encryption attacks.
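Because the encrypting process runs on a machine the file server never sees, detection has to focus on the data being written rather than on endpoint processes. As a crude, hypothetical illustration of that idea (real products use far more robust content analysis), a script can sample files on a monitored share and flag ones whose bytes have become uniformly random, a common side effect of encryption:

```python
# Crude sketch: flag files on a monitored share whose contents look uniformly
# random, a rough proxy for "recently encrypted". Threshold and path are illustrative.
import math
from collections import Counter
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def suspicious_files(share_root: str, threshold: float = 7.5):
    """Yield files whose first 64 KiB exceed the entropy threshold (maximum is 8 bits/byte)."""
    for path in Path(share_root).rglob("*"):
        if path.is_file():
            sample = path.read_bytes()[:65536]
            if shannon_entropy(sample) > threshold:
                yield path

if __name__ == "__main__":
    for f in suspicious_files("/mnt/shared"):
        print(f"High-entropy content (possible encryption): {f}")
```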
.
APPSEC, DEVSECOPS
Robert Grupe: New Year Resolution: Software Development Application Security Strategic Prioritization, SSDF Top 10
Why 2024 will be the year of the CISO
In May, former Uber CISO Joe Sullivan was sentenced to serve three years' probation and pay a $50,000 fine. Sullivan failed to disclose a data breach and paid off hackers to remain silent. Sullivan has appealed the conviction.
In October, Tim Brown, CISO at SolarWinds, was charged by the US Securities and Exchange Commission (SEC). Brown is accused of fraud and internal control failures relating to allegedly known cybersecurity risks and vulnerabilities.
New SEC cybersecurity rules call for mandatory cyber-incident reporting for all US-listed companies. Domestic issuers must disclose material cybersecurity incidents in Form 8-K filings within four business days of determining an incident is material. Foreign private issuers must submit Form 6-K filings to disclose material cyber-incidents. Organizations must also have cybersecurity expertise on their boards, a documented risk management program, and specific cybersecurity leadership.
Financial services firms also face changes to the New York State Department of Financial Services' 23 NYCRR 500, including new requirements for larger companies, expanded governance requirements for boards, expanded cyber-incident notification requirements, new requirements for incident response and business continuity planning, and additional multifactor authentication requirements.
In Europe, NIS2 takes effect in October 2024. While NIS1 covered critical industries such as healthcare, energy, transport, digital infrastructure, and financial market infrastructure, NIS2 expands the industries affected to include the food sector (production, processing, and distribution), social networking platforms, cloud computing services, and data centers. NIS2 focuses on four primary areas: risk management, corporate accountability, reporting obligations, and business continuity. At a more granular level, NIS2 affects policies and procedures for the use of cryptography, vulnerability management programs, employee access to sensitive data, multi-factor authentication, evaluating the efficacy of security technology, employee training, and supply chain security.
Why would CISOs move on from cybersecurity? Sixty-five percent say they have considered an exit due to the high stress associated with the job, and 43% say they are frustrated because their organization doesn't take cybersecurity seriously.
Understanding the NSA’s latest guidance on managing OSS and SBOMs
The latest publication, "Securing the Software Supply Chain: Recommended Practices for Managing Open-Source Software and Software Bills of Material," comes from the US National Security Agency (NSA). It builds on previous publications such as the White House Cybersecurity Executive Order (EO) and on forthcoming requirements for Federal agencies, such as the Office of Management and Budget's (OMB) memos M-22-18 and M-23-16, which require software suppliers selling to the US federal government to self-attest that they align with publications such as the National Institute of Standards and Technology's (NIST) Secure Software Development Framework (SSDF), and in some cases to provide SBOMs.
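For teams that have not produced one before, an SBOM is simply a structured inventory of a build's components. The sketch below hand-rolls a minimal document loosely modeled on the CycloneDX JSON layout; the component list is a hypothetical example, and in practice an SBOM generator tool would produce this from the actual build:

```python
# Minimal sketch of an SBOM-style component inventory, loosely modeled on the
# CycloneDX JSON layout. The components listed are hypothetical examples.
import json
import uuid
from datetime import datetime, timezone

def build_sbom(app_name: str, components: list[dict]) -> dict:
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "metadata": {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "component": {"type": "application", "name": app_name},
        },
        "components": [
            {"type": "library", "name": c["name"], "version": c["version"],
             "purl": f"pkg:pypi/{c['name']}@{c['version']}"}
            for c in components
        ],
    }

if __name__ == "__main__":
    sbom = build_sbom("payments-api", [
        {"name": "requests", "version": "2.31.0"},
        {"name": "cryptography", "version": "41.0.7"},
    ])
    print(json.dumps(sbom, indent=2))
```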
A Proposed Rule by the US Defense Department on 12/26/2023
In 2019, DoD announced the development of CMMC in order to move away from a “self-attestation” model of security.
DoD is proposing to establish requirements for a comprehensive and scalable assessment mechanism to ensure that defense contractors and subcontractors have, as part of the Cybersecurity Maturity Model Certification (CMMC) Program, implemented the required security measures. The rule would expand the application of existing security requirements for Federal Contract Information (FCI) and add new Controlled Unclassified Information (CUI) security requirements for certain priority programs. DoD currently requires covered defense contractors and subcontractors to implement the security protections set forth in National Institute of Standards and Technology (NIST) Special Publication (SP) 800–171 Rev 2 to provide adequate security for sensitive unclassified DoD information that is processed, stored, or transmitted on contractor information systems, and to document their implementation status, including any plans of action for any NIST SP 800–171 Rev 2 requirement not yet implemented, in a System Security Plan (SSP). The CMMC Program gives the Department the mechanism needed to verify that a defense contractor or subcontractor has implemented the security requirements at each CMMC level and is maintaining that status across the contract period of performance, as required.
Getting Identity and Authz Right in Kubernetes
The best solutions start by capturing business requirements, then continue by producing a solid architectural design that puts security in the hands of engineers with a strong understanding of the domain logic. The main requirement is to protect data, such as the organization’s intellectual property; personal data belonging to citizens, end users or employees; or sensitive data from business partners. User-facing applications access this data over the internet using APIs. Therefore, securing APIs is the primary focus when protecting data. It requires three main building blocks of authorization, user identity and workload identity.
The OAuth 2.0 authorization framework provides a security architecture for digital services, with the API message credential at the center. This article does not cover any OAuth technical details and instead only explains the behaviors the framework enables.
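In practice, the API message credential is usually an OAuth 2.0 access token that every API validates on each request before applying its own authorization rules. A minimal sketch using the PyJWT library, with hypothetical issuer, audience, scope, and key values (a real deployment would fetch signing keys from the authorization server's JWKS endpoint):

```python
# Minimal sketch: an API validating a JWT-formatted OAuth 2.0 access token and
# checking a required scope before serving a request. Issuer, audience, scope,
# and key material are hypothetical placeholders.
import jwt  # PyJWT

EXPECTED_ISSUER = "https://login.example.com"   # hypothetical authorization server
EXPECTED_AUDIENCE = "orders-api"                # hypothetical API identifier

def authorize_request(token: str, public_key_pem: str, required_scope: str) -> dict:
    # Verifies the signature, expiry, issuer, and audience in one call.
    claims = jwt.decode(
        token,
        public_key_pem,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
        issuer=EXPECTED_ISSUER,
    )
    granted = claims.get("scope", "").split()
    if required_scope not in granted:
        raise PermissionError(f"token lacks required scope: {required_scope}")
    return claims  # downstream code can use these claims for fine-grained decisions

# Usage (hypothetical values):
# claims = authorize_request(bearer_token, signing_key_pem, required_scope="orders:read")
```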
Saving Schrödinger’s Cat: Getting serious about post-quantum encryption in 2024
A White House National Security Memorandum issued last year gave federal agencies until 2035 to complete their migration to post-quantum encryption. But that deadline assumed it would take many years for today’s experimental quantum computers to evolve into “cryptographically relevant” machines able to break RSA, an assumption challenged by a recent breakthrough by a DARPA-funded, Harvard-led research team. That advance — a quantum leap in quantum computing — could bring the end of RSA and other long-used encryption years closer for everyone.
Late last month, NIST formally closed the public comment period for three PQC algorithms it plans to finalize for widespread use next year. But NIST finalizing algorithms doesn’t solve the problem: that requires everyone actually implementing them.
Government agencies and private companies need to begin combing through countless lines of software code to find every instance of RSA and other long-standard protocols, so they can ultimately replace them with Post-Quantum Cryptography (PQC), a new set of algorithms designed to resist rapidly advancing quantum computers which could, in theory, crack any existing encryption.
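That inventory work can begin with something as simple as a sweep of source trees for quantum-vulnerable primitives. A rough sketch, with an illustrative (and deliberately non-exhaustive) pattern list:

```python
# Rough sketch: sweep a source tree for references to quantum-vulnerable
# algorithms to seed a cryptographic inventory. Patterns are illustrative only.
import re
from pathlib import Path

LEGACY_CRYPTO = re.compile(r"\b(RSA|ECDSA|ECDH|DSA|Diffie[- ]?Hellman)\b", re.IGNORECASE)
SOURCE_SUFFIXES = {".py", ".java", ".go", ".c", ".cpp", ".ts", ".cs"}

def crypto_inventory(root: str):
    """Yield (file, line number, line text) for every legacy-crypto reference found."""
    for path in Path(root).rglob("*"):
        if path.suffix in SOURCE_SUFFIXES:
            try:
                lines = path.read_text(errors="ignore").splitlines()
            except OSError:
                continue
            for lineno, line in enumerate(lines, start=1):
                if LEGACY_CRYPTO.search(line):
                    yield path, lineno, line.strip()

if __name__ == "__main__":
    for path, lineno, line in crypto_inventory("."):
        print(f"{path}:{lineno}: {line}")
```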
AI Cybersecurity in Healthcare: Key Risks and Security Measures
The advent of AI requires us to build new approaches to cybersecurity policies, strategies, and tactics on top of our already well-established foundation. The status quo is important, but not enough: traditional security measures alone are not well positioned to manage AI-related threats from cyber-criminals.
Multi-Point Defense
Data Encryption and Access Control
Third-Party Vendor Assessment
Incident Response Plans
Ongoing Security Audits and Updates
Employee Training and Awareness
.
DEV
The hardest part of building software is not coding, it's requirements
Why replacing programmers with AI won’t be so easy.
In order to produce a functional piece of software from AI, you need to know what you want and be able to clearly and precisely define it. There are times when I'm writing software just for myself where I don't realize some of the difficulties and challenges until I actually start writing code.
An OpenAI employee says prompt engineering is not the skill of the future — but knowing how to talk to humans will be
The reality is that prompting AI systems is no different than being an effective communicator with other humans.
While prompt engineering is an increasingly hot area of expertise, the three underlying skills that truly matter in 2024, the OpenAI employee said, are reading, writing, and speaking. Honing these will give humans a competitive edge over highly intelligent robots in the future as AI technology continues to advance.
Generative AI Learned Nothing From Web 2.0
Companies like Meta never did get the upper hand over mis- and disinformation, sketchy labor practices, and nonconsensual pornography, to name just a few of their unintended consequences. Now those issues are gaining a challenging new life, with an AI twist.
AI companies talk about putting “safeguards” and “acceptable use” policies in place on certain generative AI models, just as platforms have their terms of service around what content is and is not allowed. As with the rules of social networks, AI policies and protections have proven relatively easy to circumvent.
The unintended consequences of social platforms came to be associated with a slogan once popular with Facebook CEO Mark Zuckerberg: “Move fast and break things.” As AI companies jockey for supremacy with generative algorithms, we are now seeing the same reckless approach: release after release, without much consideration.
Although regulators around the globe seem determined to react to generative AI less sluggishly than they did to social media, regulation is lagging far behind AI development. That means there’s no incentive for the new crop of generative-AI-focused companies to slow down out of concern for penalties.
.
VENDORS & PLATFORMS
EFF: Think Twice Before Giving Surveillance for the Holidays
One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories. This way, instead of just buying any old smart-device at random because it's on sale, you at least have the context of what sort of data it might collect, how the company has behaved in the past, and what sorts of potential dangers to consider. U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches.
Giving the gift of electronics shouldn’t come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.
Employers Increase Access to Mental Health-Related Chatbots or Apps
About two-thirds of large employers have added access to mental health-related chatbots or apps over the past three years. These chatbots use artificial intelligence (AI) to hold therapist-like conversations and provide mental health support to employees, according to the report. This is becoming more important as the demand for mental health counselors continues to rise while the supply of providers decreases. However, some researchers caution that there isn’t sufficient evidence to prove the effectiveness of these programs, the report said. Additionally, concerns about data security and privacy have been raised.
ChatGPT Exploit Finds 24 Email Addresses, Amid Warnings of 'AI Silo'
Researchers used the API for accessing ChatGPT where "requests that would typically be denied in the ChatGPT interface were accepted..." The vulnerability is particularly concerning because no one — apart from a limited number of OpenAI employees — really knows what lurks in ChatGPT's training-data memory.
The success of the researchers' experiment should ring alarm bells because it reveals the potential for ChatGPT, and generative A.I. tools like it, to reveal much more sensitive personal information with just a bit of tweaking. When you ask ChatGPT a question, it does not simply search the web to find the answer. Instead, it draws on what it has "learned" from reams of information — training data that was used to feed and develop the model — to generate one. L.L.M.s train on vast amounts of text, which may include personal information pulled from the Internet and other sources. That training data informs how the A.I. tool works, but it is not supposed to be recalled verbatim... In the example output they provided for Times employees, many of the personal email addresses were either off by a few characters or entirely wrong. But 80 percent of the work addresses the model returned were correct.
Top 7 DAST Tools in 2024: Analysis of 400+ Reviews
PortSwigger Burp Suite 4.8 based on 136 reviews
Invicti 4.6 based on 72 reviews
NowSecure 4.6 based on 27 reviews
Indusface WAS 4.5 based on 50 reviews
Contrast Assess 4.5 based on 49 reviews
Checkmarx DAST 4.2 based on 33 reviews
HCL AppScan 4.1 based on 49 reviews
GitLab Launches Browser-Based Dynamic Application Security Testing (DAST) Scan
GitLab has recently introduced a browser-based Dynamic Application Security Testing (DAST) feature in version 16.4 (or DAST 4.0.9). This development is part of GitLab's ongoing efforts to enhance browser-based DAST by integrating passive checks.
These are the top 10 most popular AI tools of 2023, and how to use them to make more money
1. ChatGPT: AI Chatbot
2. Character.ai: AI Chatbot
3. Quillbot: AI Writing
4. Midjourney: Image Generator
5. Hugging Face: Data Science
6. Bard: AI Chatbot
7. NovelAI: AI Writing
8. Capcut: Video Generator
9. Janitor AI: AI Chatbot
10. Civitai: Image Generator
.
LEGAL
OpenAI and Microsoft Face New York Times Copyright Lawsuit
The Times, which counts over 10 million subscribers to its print and digital editions, alleged in a complaint filed in the Southern District of New York that OpenAI had used without permission "millions" of the daily newspaper's copyrighted articles to train the large language models that power generative artificial intelligence chatbots such as ChatGPT.
What comes after open source? Bruce Perens is working on it
Bruce Perens, one of the founders of the Open Source movement, is ready for what comes next: the Post-Open Source movement. "I've written papers about it, and I've tried to put together a prototype license." Perens believes that the General Public License (GPL) is insufficient for today's needs and advocates for enforceable contract terms. He also criticizes non-Open Source licenses, particularly the Commons Clause, for misrepresenting and abusing the open-source brand. As for AI, Perens views it as inherently plagiaristic and raises ethical concerns about compensating original content creators. He also weighs in on U.S.-China relations, calling for a more civil and cooperative approach to sharing technology.
.
And Now For Something Completely Different …
War of the workstations: How the lowest bidders shaped today's tech landscape
The winners of the battles got to write the histories, as they always do, and that means that what is now received wisdom, shared and understood by almost everyone, contains and conceals propaganda and dogma. Things that were once just marketing BS are now holy writ, and when you discover how the other side saw it, dogma is uncovered as just big fat lies.
If you want to become a billionaire from software, you don't want rockstar geniuses; you need fungible cogs, interchangeable and inexpensive. Software built with tweezers and glue is not robust, no matter how many layers you add. It's terribly fragile, and needs armies of workers constantly fixing its millions of cracks and holes. There was once a better way, but it lost out to cold hard cash, and now, only a few historians even remember it existed.
.