Robert Grupe's AppSecNewsBits 2024-07-28
Software Development Security What’s Weak This Week: CrowdStrike, Google, Secure Boot vulnerability exposing hundreds of device models, North Korea KnowBe4 attack
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
97% of CrowdStrike systems are back online; Microsoft suggests Windows changes
CrowdStrike CEO George Kurtz said Thursday that 97 percent of all Windows systems running its Falcon sensor software were back online. The update, which caused Windows PCs to throw the dreaded Blue Screen of Death and reboot, affected about 8.5 million systems by Microsoft's count, leaving roughly 250,000 that still need to be brought back online. Microsoft VP John Cable said in a blog post that the company has "engaged over 5,000 support engineers working 24x7" to help clean up the mess created by CrowdStrike's update and hinted at Windows changes that could help—if they don't run afoul of regulators.
Cable pointed to VBS enclaves and Azure Attestation as examples of products that could keep Windows secure without requiring kernel-level access, as most Windows-based security products (including CrowdStrike's Falcon sensor) do now. But past efforts by Microsoft to lock third-party security companies out of the Windows kernel—most recently in the Windows Vista era—have been met with pushback from European Commission regulators. That level of skepticism is warranted, given Microsoft's past (and continuing) record of using Windows' market position to push its own products and services. Any present-day attempt to restrict third-party vendors' access to the Windows kernel would be likely to draw similar scrutiny. Microsoft has also had plenty of its own security problems to deal with recently, to the point that it has promised to restructure the company to make security more of a focus.
CrowdStrike blames a test software bug for that giant global mess it made
Whatever the validator does or is supposed to do, it did not prevent the release to customers of the dodgy July 19 template instance despite it being a dud. This test software should have detected that the content update was broken, but approved it anyway because of a bug in the validator itself. CrowdStrike thus assumed the July 19 channel file release would be OK; after all, the tests had passed the IPC template type delivered in March, and subsequent related IPC template instances, without a hitch on Windows.
There have been calls for CrowdStrike to scrutinize its releases for errors prior to distribution; well, it tried and failed. The incident report includes promises to test future rapid response content more rigorously – we recommend sandboxing releases if it's not already doing that – plus staggered releases, more user control over when updates are deployed, and release notes. You read that right: Release notes.
Administrators have update lessons to learn from the CrowdStrike outage
One of the reasons the CrowdStrike update caused so many problems was that administrators assumed the faulty update would have been pulled and fixed long before it troubled their systems. Many were cheerfully running on N-2 or N-1, meaning they were set to use a release two or one version behind the latest.
The problem for many users was understanding that the version policy only applied to part of the CrowdStrike system. One posted: "We learned the N-1 policy we had in place only applies to agent updates, and not signature files." "As far as we can tell there is not a good way to delay what signature files get pushed, hence everybody receiving the 7/18 23:09 (central time) signature file that blew up the world over the next hour."
Another user, having noted that they had CrowdStrike set to be one version behind on non-critical infrastructure and two versions behind on critical infrastructure, glumly said: "We got hit anyway because it was a 'content file' and so ignored our auto update restrictions."
It's a tool in the security stack that completely broke the affected systems with no warning. At the outset there was no indication of how or when recovery might be achieved. Information flowed so slowly that some administrators were unsure whether to wait for a fix or simply reimage machines. Most of the information that did flow went either to CrowdStrike's largest partners or behind its login-walled documentation portal, which is not publicly searchable. That's why some organizations didn't know whether to send their people home for the day on Friday.
How did a CrowdStrike file crash millions of Windows computers? We take a closer look at the code
The files use a naming convention that starts with "C-" followed by a unique identifying number. The errant file's name in this case started with "C-00000291-", followed by various other numbers, and ended with the .sys extension. But these are not kernel drivers, according to CrowdStrike; indeed, they are data files used by Falcon, which does run at the driver level. That is to say, the broken file was not a driver executable but it was processed by CrowdStrike's highly trusted code that is allowed to run within the operating system context, and when the bad file caused that code to go off the rails, it brought down the whole surrounding operating system.
It appears Falcon reads entries from a table in memory in a loop and uses those entries as pointers into memory for further work. When at least one of those entries was not correct or present, as a result of the channel file's contents, and instead contained a garbage value, the kernel-level code used that garbage as if it was valid, causing it to access unmapped memory.
The months and days before and after CrowdStrike's fatal Friday
CrowdStrike deployed a fix at 0527 UTC the same day, but in the time it took its engineering team to remediate the issue — 78 minutes — at least 8.5 million Windows devices were put out of action. That's more than one million machines every ten minutes on average over that span; imagine if the fix had taken hours longer to deploy.
Even if the vast majority of endpoints have been restored, full recovery may take weeks for some systems.
CrowdStrike isn't the first tech company to cause a global disaster because of a botched update. A routine McAfee antivirus update in 2010 similarly bricked massive numbers of Windows machines. CrowdStrike boss Kurtz, at the time, was McAfee's CTO.
CrowdStrike CEO to testify about massive outage that halted flights and hospitals
The US Department of Transportation (DoT) is investigating Delta Air Lines over its handling of the global IT outage caused by CrowdStrike's content update.
CrowdStrike update blunder may cost world billions – and insurance ain't covering it all
Parametrix says insurance might only pay out about $540 million to $1.1 billion of that hit for the Fortune 500, or between 10 and 20 percent. That's apparently "due to many companies' large risk retentions, and to low policy limits relative to the potential outage loss." Thankfully, CrowdStrike is working hard to make it up to the teammates and partners that sell and support its software. These folks were generously offered $10 Uber Eats gift codes, enough to cover maybe half of someone's lunch, and some of the codes were promptly declined because Uber suspected the high redemption rate was an indication of fraud.
Google Says Sorry After Passwords Vanish For 15 Million Windows Users
Google has said it is sorry after a bug prevented a significant number of Windows users from finding or saving their passwords. The issue, which Google noted started on July 24 and continued for nearly 18 hours before being fixed on July 25, was due to “a change in product behavior without proper feature guard,” an excuse that may sound familiar to anyone caught up in the CrowdStrike disruption this month.
Data breach exposes US spyware maker behind Windows, Mac, Android and Chromebook malware
Spytech is the latest spyware maker in recent years to have itself been compromised, and the fourth spyware maker known to have been hacked this year alone. Spytech’s spyware — Realtime-Spy and SpyAgent, among others — has been used to compromise more than 10,000 devices since the earliest-dated leaked records from 2013, including Android devices, Chromebooks, Macs, and Windows PCs worldwide.
A person with knowledge of the breach provided TechCrunch with a cache of files taken from the company’s servers containing detailed device activity logs from the phones, tablets, and computers that Spytech monitors, with some of the files dated as recently as early June. While the data contains reams of sensitive data and personal information obtained from the devices of individuals — some of whom will have no idea their devices are being monitored — the data does not contain enough identifiable information about each compromised device to notify victims of the breach.
Secure Boot is completely broken on 200+ models from 5 big device makers
The threat posed by PKfail is that anyone with (1) knowledge of the private portion of an affected platform key and (2) administrative system rights to an affected device can completely bypass Secure Boot protections.
In 2012, an industry-wide coalition of hardware and software makers adopted Secure Boot to protect against a long-looming security threat. The threat was the specter of malware that could infect the BIOS, the firmware that loaded the operating system each time a computer booted up. From there, it could remain immune to detection and removal and could load even before the OS and security apps did. Secure Boot used public-key cryptography to block the loading of any code that wasn’t signed with a pre-approved digital signature.
Researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a public GitHub repository committed in December of that year, someone working for multiple US-based device manufacturers published what’s known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. Experts say it effectively torpedoes the security assurances offered by Secure Boot.
The researchers soon discovered that the compromise of the key was just the beginning of a much bigger supply-chain breakdown that raises serious doubts about the integrity of Secure Boot on more than 300 additional device models from virtually all major device manufacturers. In addition to the five makers mentioned earlier, they include Aopen, Foremelife, Fujitsu, HP, Lenovo, and Supermicro. As is the case with the platform key compromised in the 2022 GitHub leak, an additional 21 platform keys contain the strings “DO NOT SHIP” or “DO NOT TRUST.”
There's little that users of an affected device can do other than install a patch if one becomes available from the manufacturer. “My takeaway is ‘yup, [manufacturers] still screw up Secure Boot, this time due to lazy key management,’ but it wasn't obviously a change in how I see the world (secure boot being a fig leaf security measure in many cases),” HD Moore, a firmware security expert and CTO and co-founder at runZero, said after reading the Binarly report. “The story is that the whole UEFI supply chain is a hot mess and hasn't improved much since 2016.”
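For the curious, here is a minimal, hedged sketch of how an administrator might spot the "DO NOT SHIP"/"DO NOT TRUST" test keys described above on a Linux box. It assumes efivarfs is mounted at the usual path and that the Platform Key lives under the standard EFI global-variable GUID; a thorough check would parse the EFI signature list and inspect the X.509 certificate subject rather than scanning raw bytes.

#!/usr/bin/env python3
"""Rough check for PKfail-style test keys on a Linux machine (illustrative only)."""
from pathlib import Path

# Platform Key variable name + EFI global-variable GUID (standard efivarfs location).
PK_VAR = Path("/sys/firmware/efi/efivars/PK-8be4df61-93ca-11d2-aa0d-00e098032b8c")
MARKERS = (b"DO NOT SHIP", b"DO NOT TRUST")

def check_platform_key() -> None:
    if not PK_VAR.exists():
        print("No PK variable found (not UEFI, Secure Boot not provisioned, or efivarfs not mounted).")
        return
    data = PK_VAR.read_bytes()  # first 4 bytes are variable attributes, the rest is the signature payload
    hits = [m.decode() for m in MARKERS if m in data]
    if hits:
        print(f"WARNING: platform key contains {hits} - likely a vendor test key; ask your manufacturer for a fixed firmware.")
    else:
        print("No known test-key markers found in the platform key (not proof of safety).")

if __name__ == "__main__":
    check_platform_key()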
Data from deleted GitHub repos may not actually be deleted
A Cross Fork Object Reference (CFOR) vulnerability occurs when one repository fork can access sensitive data from another fork (including data from private and deleted forks).
One can fork a repository, commit data to it, delete the fork, and then still access the supposedly deleted commit data via the original repository. Likewise, data in the original repository that was never synced to a fork remains accessible through that fork after the original repo is deleted.
This scenario came up last week with the submission of a critical vulnerability report to a major technology company involving a private key for an employee GitHub account that had broad access across the organization. The key had been publicly committed to a GitHub repository. Upon learning of the blunder, the tech biz nuked the repo, thinking that would take care of the leak. While deleting a branch removes the reference to a particular commit chain, the commits themselves are not deleted from the repository's object database.
There's an adjacent issue having to do with forks – copied repositories – that's more specific to GitHub. Forks are not part of the git spec, so each platform has its own implementation. For GitHub, dangling commits can be downloaded via a fork if you have the identifying hash, or some portion of it.
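To make the mechanics concrete, here is a hedged sketch of how such a dangling commit stays reachable by hash. The owner, repository, and SHA below are hypothetical placeholders; the point is simply that GitHub's commits API resolves objects from a repository network's shared object store even after the fork or branch that introduced them has been deleted.

#!/usr/bin/env python3
"""Probe whether a commit hash is still reachable through a repository (sketch only)."""
import json
import urllib.error
import urllib.request

OWNER = "example-org"      # hypothetical upstream repository owner
REPO = "example-repo"      # hypothetical repository name
SHA = "0123456789abcdef"   # full or partial commit hash obtained elsewhere (placeholder)

def probe_commit(owner: str, repo: str, sha: str) -> None:
    url = f"https://api.github.com/repos/{owner}/{repo}/commits/{sha}"
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    try:
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
            print("Commit still reachable:", data["sha"], "-", data["commit"]["message"][:60])
    except urllib.error.HTTPError as err:
        print(f"Not reachable via this repo ({err.code}).")

if __name__ == "__main__":
    probe_commit(OWNER, REPO, SHA)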
When informed of the situation through its Vulnerability Disclosure Program, GitHub responded: "This is an intentional design decision and is working as expected as noted in our [documentation]." This, evidently, has been known for years. One individual claims to have notified GitHub of the vulnerability back in 2018 and received a similar response.
[rG: Whether or not a secret is removed from an exposed location, remediation requires that the secret is immediately changed so that it cannot be used in any unauthorized way by whoever/whatever may have seen and noted it. It is never sufficient simply to remove, or restrict access to, the original exposure point.]
Telegram zero-day for Android allowed malicious files to masquerade as videos
The exploit takes advantage of Telegram’s default setting to automatically download media files. The option can be disabled manually, but in that case, the payload could still be installed on the device if a user tapped the download button in the top left corner of the shared file. If the user tried to play the “video,” Telegram displayed a message that it was unable to play it and suggested using an external player. The hackers disguised a malicious app as this external player.
In the patched version of Telegram, the malicious file in the chat is now correctly displayed to the user as an application rather than a video.
ESET discovered the exploit on an underground forum in early June. It was sold for an unspecified price by a user with the username “Ancryno.” In its post, the seller showed screenshots and a video of testing the exploit in a public Telegram channel. Threat actors had about five weeks to exploit the zero-day before it was patched, but it’s not clear if it was used in the wild.
[rG: Application input validation of externally provided files is a fundamental security defense requirement. User-supplied file types and file name extensions should never be relied on, and application behavior needs to be designed for both allowed and prohibited file types, so as to minimize the potential for exploitation such as this.]
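As a rough illustration of that principle, the sketch below validates a file by its leading bytes rather than its claimed name. The signature table covers only a few common formats and the validate_upload helper is hypothetical; a production validator would use a maintained detection library and an explicit allow-list.

"""Minimal sketch of content-based file type validation (not a complete allow-list)."""

# (offset, magic bytes) for a few well-known container formats
SIGNATURES = {
    "mp4": (4, b"ftyp"),               # ISO base media (MP4/MOV) carries 'ftyp' at offset 4
    "zip_or_apk": (0, b"PK\x03\x04"),  # APKs and many Office documents are ZIP archives
    "png": (0, b"\x89PNG\r\n\x1a\n"),
    "pdf": (0, b"%PDF-"),
}

def detect_type(path: str) -> str | None:
    with open(path, "rb") as fh:
        header = fh.read(16)
    for label, (offset, magic) in SIGNATURES.items():
        if header[offset:offset + len(magic)] == magic:
            return label
    return None

def validate_upload(path: str, claimed_extension: str) -> bool:
    """Reject the file when its actual content does not match what the sender claims."""
    detected = detect_type(path)
    if detected is None:
        return False  # unknown content: treat as prohibited
    if claimed_extension.lower() in ("mp4", "mov") and detected != "mp4":
        return False  # a "video" that is really a ZIP/APK, as in the Telegram case
    return True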
Forget security – Google's reCAPTCHA v2 is exploiting users for profit
Google promotes its reCAPTCHA service as a security mechanism for websites, but researchers affiliated with the University of California, Irvine, argue it's harvesting information while extracting human labor worth billions. “We estimate that – during over 13 years of its deployment – 819 million hours of human time has been spent on reCAPTCHA, which corresponds to at least $6.1 billion USD in wages," the authors state in their paper.
Traffic resulting from reCAPTCHA consumed 134 petabytes of bandwidth, which translates into about 7.5 million kWhs of energy, corresponding to 7.5 million pounds of CO2. In addition, Google has potentially profited $888 billion from cookies [created by reCAPTCHA sessions] and $8.75–32.3 billion per each sale of their total labeled data set.
Even back in 2016 researchers were able to defeat reCAPTCHA v2 image challenges 70 percent of the time. The reCAPTCHA v2 checkbox challenge is even more vulnerable – the researchers claim it can be defeated 100 percent of the time. reCAPTCHA v3 has fared no better. In 2019, researchers devised a reinforcement learning attack that breaks reCAPTCHA v3's behavior-based challenges 97 percent of the time.
HACKING
Cybercrooks spell trouble with typosquatting domains amid CrowdStrike crisis
One example, the now-dead fix-crowdstrike-apocalypse[.]com, offered an executable to fix the BSOD issues for €500,000 ($543,000), with its source code selling for double.
Looking at that URL, who's getting fooled by this, really? A tech-illiterate user, maybe. CrowdStrike caters to the enterprise crowd, the professionals, so it's difficult to see how successful this would be, especially with prices like that.
Every campaign is different and potentially not quite as vacuous as this one. Some of the other domains, for example, are ever so slightly trickier:
crowdstrikefix[.]com
crowdstrike-helpdesk[.]com
crowdstrikebsod[.]com
Financial extortion isn't the only play either. Some researchers were reporting as early as Saturday, the day after the outage began, that phishing campaigns were under way designed to deliver remote access trojans such as Remcos disguised as hotfixes. The incident wasn't isolated and CrowdStrike was forced to issue a public memo on the same day warning against opportunistic cybercriminals exploiting the situation.
Beware of fake CrowdStrike domains pumping out Lumma infostealing malware
How a cheap barcode scanner helped fix CrowdStrike'd Windows PCs in a flash
The firm had the BitLocker keys for all its PCs, so Woltz and colleagues wrote a script that turned them into barcodes that were displayed on a locked-down management server's desktop. The script would be given a hostname and generate the necessary barcode and LAPS password to restore the machine. Woltz went to an office supplies store and acquired an off-the-shelf barcode scanner for AU$55 ($36). At the point when rebooting PCs asked for a BitLocker key, pointing the scanner at the barcode on the server's screen made the machines treat the input exactly as if the key was being typed. That's a lot easier than typing it out every time, and the server's desktop could be accessed via a laptop for convenience. Woltz, Watson, and the team scaled the solution – which meant buying more scanners at more office supplies stores around Australia.
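A minimal sketch of the barcode trick, assuming the third-party python-barcode library (the library choice, hostnames, and keys here are placeholders, not the team's actual script): given a hostname-to-recovery-key export, render each BitLocker key as a Code 128 image that a keyboard-emulating USB scanner can "type" at the recovery prompt.

"""Render BitLocker recovery keys as Code 128 barcodes (illustrative sketch)."""
import barcode
from barcode.writer import ImageWriter

# Hypothetical export: hostname -> 48-digit recovery key (dashes included)
RECOVERY_KEYS = {
    "PC-0001": "111111-222222-333333-444444-555555-666666-777777-888888",
}

def render_barcode(hostname: str, recovery_key: str) -> str:
    # Code 128 encodes the full digits-and-dashes string; most USB scanners
    # emulate a keyboard, so scanning types the key into the recovery prompt.
    code = barcode.get("code128", recovery_key, writer=ImageWriter())
    return code.save(hostname)  # writes e.g. PC-0001.png and returns the filename

if __name__ == "__main__":
    for host, key in RECOVERY_KEYS.items():
        print("wrote", render_barcode(host, key))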
North Korean hacker got hired by US security vendor, immediately loaded malware
KnowBe4 provides security awareness training, including phishing security tests, to corporate customers. If you occasionally receive a fake phishing email from your employer, you might be working for a company that uses the KnowBe4 service to test its employees' ability to spot scams.
KnowBe4's HR team conducted four video-conference interviews on separate occasions, confirming the individual matched the photo provided on their application. Additionally, a background check and all other standard pre-hiring checks were performed and came back clear because a stolen identity was being used. This was a real person using a valid but stolen US-based identity. The picture was AI 'enhanced.'
How this works is that the fake worker asks to get their workstation sent to an address that is basically an "IT mule laptop farm." They then VPN in from where they really physically are (North Korea or over the border in China) and work the night shift so that they seem to be working in US daytime. The scam is that they are actually doing the work, getting paid well, and give a large amount to North Korea to fund their illegal programs.
US Urges Vigilance by Tech Startups, VC Firms on Foreign Funds
“Unfortunately our adversaries continue to exploit early-stage investments in US startups to take their sensitive data,” said Michael Casey, director of the NCSC. “These actions threaten US economic and national security and can directly lead to the failure of these companies.” Washington has ramped up scrutiny of investments related to countries it considers adversaries, most notably China, as advanced technologies with breakthrough commercial potential, such as artificial intelligence, can also be used to enhance military or espionage capabilities. The bulletin also warned startups that they could be denied US government contracts or funding “if foreign threat actors gain a foothold in their firms.”
[rG: Startups looking for investment are required to expose all details of their market analysis, strategy, financials, and technology; which makes them easy prey.]
How Russia-linked malware cut heat to 600 Ukrainian buildings in deep winter
FrostyGoop represents one of fewer than 10 specimens of code ever discovered in the wild that are designed to interact directly with industrial control-system software with the aim of having physical effects. It's also the first malware ever discovered that attempts to carry out those effects by sending commands via Modbus, a commonly used and relatively insecure protocol designed for communicating with industrial technology.
The malware altered temperature readings to trick control systems into cooling the hot water running through buildings' pipes, marking the first confirmed case in which hackers have directly sabotaged a heating utility.
Hackers used their breach of the heating utility's network to send FrostyGoop's Modbus commands targeting the ENCO devices, crippling the utility's service. The malware appears to have been hosted on the hackers' own computer, not on the victim's network, so simple antivirus alone, rather than network monitoring and segmentation to protect vulnerable Modbus devices, likely won't prevent future use of the tool. Because it can interact with devices remotely, it doesn't necessarily need to be deployed to a target environment: you may never see it in the environment, only its effects.
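To see why Modbus/TCP is such an easy target, consider this hedged sketch using the pymodbus library (3.x API assumed; the host and register addresses are placeholders). There is no authentication layer at all: any client that can reach TCP port 502 on a controller can read holding registers, and write requests are accepted just as freely, which is why segmentation and Modbus-aware monitoring matter.

"""Unauthenticated Modbus/TCP read, illustrating the protocol's lack of access control."""
from pymodbus.client import ModbusTcpClient

CONTROLLER_HOST = "192.0.2.10"   # placeholder (TEST-NET) address for a Modbus-speaking controller
TEMPERATURE_REGISTER = 0         # placeholder holding-register address

def read_setpoint() -> None:
    client = ModbusTcpClient(CONTROLLER_HOST, port=502)
    if not client.connect():
        print("controller unreachable")
        return
    try:
        # No credentials, no session, no signing: the request is honored as-is.
        result = client.read_holding_registers(TEMPERATURE_REGISTER, count=1)
        if result.isError():
            print("Modbus exception:", result)
        else:
            print("register value:", result.registers[0])
        # client.write_register(...) would be accepted with exactly the same (lack of) checks.
    finally:
        client.close()

if __name__ == "__main__":
    read_setpoint()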
DHS Has a DoS Robot to Disable Internet of Things ‘Booby Traps’ Inside Homes
The Department of Homeland Security bought a dog-like robot that it has modified with an “antenna array” that gives law enforcement the ability to overload people’s home networks in an attempt to disable any internet of things devices they have. The DHS has also built an “Internet of Things” house to train officers on how to raid homes that suspects may have “booby trapped” using smart home devices.
A suspect who has been searched and is under the control of officers can, with a simple voice command, start a chain of events within a house, such as turning off lights, locking doors, or activating the HVAC system to introduce chemicals into the environment and cause a fire or explosion.
US Prepares Jamming Devices Targeting Russia, China Satellites
The devices aren’t meant to protect US satellites from Chinese or Russian jamming but “to responsibly counter adversary satellite communications capabilities that enable attacks.”
The Pentagon strives — on the rare occasions when it discusses such space capabilities — to distinguish its emerging satellite-jamming technology as purely defensive and narrowly focused. That’s as opposed to a nuclear weapon the US says Russia is developing that could create high-altitude electromagnetic pulses that would take out satellites and disrupt entire communications networks.
The first 11 of 24 Remote Modular Terminal jammers will be deployed in several months, and all of them could be in place by Dec. 31 at undisclosed locations.
The new terminals augment a much larger jamming weapon called the Counter Communications System that’s already deployed and a mid-sized one called Meadowlands by providing the ability to have a proliferated, remotely controlled and relatively relocatable capability. Meadowlands system has encountered technical challenges that have delayed its delivery until at least October, about two years later than planned.
China has “hundreds and hundreds of satellites on orbit designed to find, fix, track, target and yes, potentially engage, US and allied forces across the Indo-Pacific,” General Stephen Whiting, head of US Space Command, said Wednesday at the annual Aspen Security Forum. “So we’ve got to understand that and know what it means for our forces.”
Cellebrite got into Trump shooter's Samsung device in just 40 minutes
With cooperation refused by smartphone-makers, Cellebrite relies on zero-days and undiscovered vulnerabilities in devices to break through systems without vendor permission. But according to recently-leaked internal documents from Cellebrite, Apple users might not have that much to worry about – many newer iPhones and versions of iOS remain inaccessible to the cracker’s tools. 404 Media reported it had obtained internal Cellebrite documents from April 2024 indicating that the biz was (as of April, at least) unable to access any Apple device running iOS 17.4 or later, and most devices running iOS 17.1 to 17.3.1 – with the exception of the iPhone XR and 11.
APPSEC, DEVSECOPS, DEV
Despite Bans, AI Code Tools Widespread in Organizations
15% of organizations explicitly prohibit the use of AI tools for code generation; however, 99% say that AI code-generating tools are being used regardless.
Generative AI is currently unable to follow secure coding practices or to produce truly secure code, which motivates some security teams to consider AI-driven security tools to help manage the proliferation of development teams’ AI-generated code. Many are worried about GenAI risks such as AI hallucinations, and 80% are worried about security threats stemming from developers using AI.
The First Steps of Establishing Your Cloud Security Strategy
Cloud Data Security with CIS Control 3
Cloud Application Security with CIS Control 16
Hardening Your Cloud-Based Assets with MFA, Lack of Public Access
Set up MFA for the 'Root' User Account
Block Public Access on Your S3 Buckets
Streamlining Your Use of Cloud Security Best Practices
The Linux Foundation and OpenSSF Release Report on the State of Education in Secure Software Development
Survey results indicate that the lack of security awareness is likely due to most current educational programs prioritizing functionality and efficiency while often neglecting essential security training.
Additionally, most professionals (69%) rely on on-the-job experience as a main learning resource, yet it takes at least five years of such experience to achieve a minimum level of security familiarity.
Other key findings of the survey include the following:
Lack of time (58%) and lack of awareness and training (50%) are the top two most common challenges in implementing secure software development practices within organizations.
The top reason (44%) for not taking a course on secure software development is lack of knowledge about a good course on the topic.
Emerging security concerns such as AI (57%) and supply chain (56%) are seen as critical future areas for innovation and attention.
OpenSSF will create a new course on security architecture, available later this year, to help promote a ’security by design’ approach to software developer education.
Patch management still seemingly abysmal because no one wants the job
This whole CrowdStrike outage is waking a lot of people up to how dangerous it can be to automate updates. It's a very unsexy thing to work on, nobody wants to do it, and everyone feels like it should be automated – but nobody wants to take responsibility for doing it. The net effect is that nothing gets done and people stay in this state of technical debt where they're not able to prioritize it. Hopefully this CrowdStrike thing will help.
While organizations tend to strive for a 97 to 99 percent patch rate, they typically only manage to successfully fix between 75 and 85 percent of issues in their software. Patches don't always work as expected, and may make things worse, so testing and evaluation adds time and stress to the deployment process. Coupled with an exploding ecosystem of third-party apps, endpoint management tools that aren't really designed to handle patch management, bandwidth, and architectural challenges, IT teams have an overwhelming amount of work to do. The tech industry has become overwhelmed by vulnerabilities to the point where it can't keep up - pointing to NIST's massive backlog in vulnerability processing that's been ongoing since February. Without a CVE, it's hard to tie a vulnerability to a patch that needs to be done.
People don't take jobs in IT operations to sit and update systems all day. They take those jobs to work on cool projects and cutting-edge technology. Going through and applying Windows updates isn't what a lot of people have signed up for.
Security teams and IT operations teams jostle to offload responsibility for the task. Layoffs and downsizing complicate matters further. You can open a Jira ticket and send it to [your IT ops team] or whoever, but who's the sysadmin or who's the business owner that is actually responsible for patching this? That gets even more complicated as you start to find application-level vulnerabilities.
Update responsibilities are further confused by too much siloing of data in enterprises, which leaves various teams working with different sets of data. Oftentimes you have one platform for vulnerability management and another for patch management with no common dataset underneath.
Wanted: An SBOM Standard to Rule Them All
Federal agencies and security teams now require SBOMs from third-party vendors as part of their audits and approval processes. More and more enterprises are adding SBOM generation to their security checklist for any included components — even for software-as-a-service (SaaS) providers. This is logical. The rise of nasty supply chain attacks like Log4j and xz underscores the necessity of SBOMs. However, to date, the SBOM has largely failed to deliver on its promise because of competing standards and varied implementation methods across a wide variety of tools.
Two major SBOM standards have emerged, each backed by influential industry groups. SPDX (Software Package Data Exchange), introduced in 2010 by the Linux Foundation, communicates detailed SBOM information, including components, licenses, copyrights, and security references. CycloneDX, developed by OWASP in 2017, is another SBOM standard designed for easy integration into existing development tools and processes. In theory, these standards are compatible and can coexist in the same enterprise security stack. In practice, this is rarely the case.
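For readers who have never looked inside one, the sketch below hand-rolls a minimal CycloneDX JSON document just to show what an SBOM actually records; real pipelines generate SBOMs with CycloneDX or SPDX tooling rather than by hand, and the component listed is a placeholder.

"""Emit a minimal CycloneDX 1.5 SBOM document (illustrative only)."""
import json
import uuid
from datetime import datetime, timezone

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "serialNumber": f"urn:uuid:{uuid.uuid4()}",
    "version": 1,
    "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
    "components": [
        {
            "type": "library",
            "name": "example-logging-lib",  # placeholder component, not a real dependency
            "version": "2.17.1",
            "purl": "pkg:maven/org.example/example-logging-lib@2.17.1",
        }
    ],
}

print(json.dumps(sbom, indent=2))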
Broad industry participation is crucial, but the involvement of large incumbents is essential. Cloud hyperscalers, major cybersecurity firms, and developer tooling giants (like GitHub, GitLab, Atlassian, Microsoft, and Google) must participate, because developers, security operations, DevOps, and platform teams are the primary consumers and users of any unified SBOM format. The ultimate test is whether the new SBOM format reduces toil and enhances security and transparency compared to current SBOMs. Network effects and compliance standards, such as SOC2 or ISO, promoting a new unified standard, would strongly influence adoption and potentially tip the scales toward a unified approach.
VENDORS & PLATFORMS
Phish-Friendly Domain Registry “.top” Put on Notice
The Chinese company in charge of handing out domain names ending in “.top” has been given until mid-August 2024 to show that it has put in place systems for managing phishing reports and suspending abusive domains, or else forfeit its license to sell domains. The warning comes amid the release of new findings that .top was among the most common suffixes in phishing websites over the past year, second only to domains ending in “.com.”
Chrome will now prompt some users to send passwords for suspicious files
Not all deep scans can be conducted automatically. A current trend in cookie theft malware distribution is packaging malicious software in an encrypted archive—a .zip, .7z, or .rar file, protected by a password—which hides file contents from Safe Browsing and other antivirus detection scans. With Enhanced Mode turned on, Google will prompt users to upload suspicious files that aren’t allowed or blocked by its detection engine. Under the new changes, Google will prompt these users to provide any password needed to open the file.
After 15 years, the maintainer of Homebrew plans to make a living
Originally created by Max Howell in 2009 in the Ruby programming language, Homebrew has remained consistently popular, well-maintained, and updated. It always keeps on top of major macOS and Apple updates, such as the loss of support for 32-bit applications in macOS Catalina and the switch to Apple Silicon. Whilst there’s no Windows version, Chocolatey works similarly and was likely inspired by Homebrew.
One area Homebrew has always lacked is the ability to work well with teams of users. This is where Workbrew, a company Mike McQuaid founded with two other Homebrew maintainers, steps in. Workbrew ties together various Homebrew features with custom glue to create a workflow for setting up and maintaining Mac machines. It adds new features that core Homebrew maintainers had no interest in adding, such as admin and reporting dashboards for a computing fleet, while bringing more general improvements to the core project.
Switzerland now requires all government software to be open source
Professor Dr. Matthias Stürmer, head of the Institute for Public Sector Transformation at the Bern University of Applied Sciences, led the fight for this law. He hailed it as "a great opportunity for government, the IT industry, and society." Stürmer believes everyone will benefit from this regulation, as it reduces vendor lock-in for the public sector, allows companies to expand their digital business solutions, and potentially leads to reduced IT costs and improved services for taxpayers.
LEGAL & REGULATORY
Oracle coughs up $115M to make privacy case go away
The complainants in the settlement number 220 million people. Oracle signaled last month that it is closing its ad tech business. In June, CEO Safra Catz said the company had "decided to exit the advertising business, which had declined to about $300 million in revenue in fiscal year 2024." The business reported around $2 billion in 2022.
The settlement ensures Big Red commits to address the alleged privacy violations through binding promises that it will "not capture certain complained-of electronic communications and will implement an audit program to review its customers' compliance with contractual consumer privacy obligations."
Courts Close the Loophole Letting the Feds Search Your Phone at the Border
Judge Nina Morrison ruled that cellphone searches at the border are "nonroutine" and require probable cause and a warrant, likening them to more invasive searches due to their heavy privacy impact. As reported by Reason, this decision closes the loophole in the Fourth Amendment's protection against unreasonable searches and seizures, which Customs and Border Protection (CBP) agents have exploited. Courts have previously ruled that the government has the right to conduct routine warrantless searches for contraband at the border.
And in a victory for journalists, the judge specifically acknowledged the First Amendment implications of cellphone searches too. She cited reporting by The Intercept and VICE about CBP searching journalists' cellphones "based on these journalists' ongoing coverage of politically sensitive issues" and warned that those phone searches could put confidential sources at risk. Wednesday's ruling adds to a stream of cases restricting the feds' ability to search travelers' electronics. The 4th and 9th Circuits, which cover the mid-Atlantic and Western states, have ruled that border police need at least "reasonable suspicion" of a crime to search cellphones. Last year, a judge in the Southern District of New York also ruled that the government "may not copy and search an American citizen's cell phone at the border without a warrant absent exigent circumstances."