Robert Grupe's AppSecNewsBits 2024-09-28
Epic Fails: Sonos, Meta, Kia, Linux - tech debt, passwords, authorization, CUPS, certificates, AI, ...
EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
How Sonos Botched an App and Infuriated Its Customers
The software release, plagued with missing features and bugs, has sparked widespread customer outrage and led to a $200 million revenue shortfall. Sonos shares have plummeted 25% this year.
What has happened to Sonos is at its heart a cautionary tale of company leadership ignoring the perils of "technical debt."
For two decades, Sonos had allowed its tech debt to pile high. When it undertook in earnest its effort to revamp its app in mid-2022, the company knew it was sitting on infrastructure and code written in languages that were pretty much obsolete. The Sonos app had been adapted and spliced and tinkered with so often, the vast majority of work being performed for the new app was less about introducing new functionality than sorting out the existing mess.
Meta pays the price for storing hundreds of millions of passwords in plaintext
Meta-owned social networks had logged user passwords in plaintext and stored them in a database that had been searched by roughly 2,000 company engineers, who collectively queried the stash more than 9 million times. Meta officials said at the time that the error was found during a routine security review of the company’s internal network data storage practices.
Ireland's Data Protection Commission, the lead European Union regulator for most US Internet services, has been investigating the incident since Meta disclosed it more than five years ago. This week the commission imposed a fine of $101 million (91 million euros). To date, the EU has fined Meta more than $2.23 billion (2 billion euros) for violations of the General Data Protection Regulation (GDPR), which went into effect in 2018.
Flaw in Kia’s web portal let researchers track, hack cars
A group of independent security researchers revealed that they'd found a flaw in a web portal operated by the carmaker Kia that let the researchers reassign control of the Internet-connected features of most modern Kia vehicles—dozens of models representing millions of cars on the road—from the smartphone of a car’s owner to the hackers’ own phone or computer. By exploiting that vulnerability and building their own custom app to send commands to target cars, they were able to scan virtually any Internet-connected Kia vehicle’s license plate and within seconds gain the ability to track that car’s location, unlock the car, honk its horn, or start its ignition at will.
When the researchers sent commands directly to the API of that website—the interface that allows users to interact with its underlying data—they say they found that there was nothing preventing them from accessing the privileges of a Kia dealer, such as assigning or reassigning control of the vehicles' features to any customer account they created. “It’s really simple. They weren't checking if a user is a dealer.”
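To make the bug class concrete, here is a minimal sketch in Python (Flask) of the server-side role check the researchers found missing. The endpoint, header, and field names are hypothetical; Kia's actual API is not reproduced here.

    # Hypothetical endpoint and field names; a sketch of the missing
    # authorization control, not Kia's actual code.
    from flask import Flask, abort, request

    app = Flask(__name__)

    def caller_role() -> str:
        # Placeholder for illustration: a real service must derive the role
        # from a validated session or token, never a client-supplied header.
        return request.headers.get("X-Role", "customer")

    @app.post("/dealer/reassign-vehicle")
    def reassign_vehicle():
        # The reported flaw: dealer operations were honored for any
        # authenticated user. Enforce the role server-side on every call.
        if caller_role() != "dealer":
            abort(403)
        payload = request.get_json()
        # ... reassign control of payload["vin"] to payload["account_id"] ...
        return {"status": "ok"}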
The web bug they used to hack Kias is, in fact, the second of its kind that they’ve reported to the Hyundai-owned company; they found a similar technique for hijacking Kias' digital systems last year. And those bugs are just two among a slew of similar web-based vulnerabilities they’ve discovered within the last two years that have affected cars sold by Acura, Genesis, Honda, Hyundai, Infiniti, Toyota, and more.
The extraordinary number of vulnerabilities in carmakers' websites that allow remote control of vehicles is a direct result of companies' push to appeal to consumers with smartphone-enabled features. Once you have these user features tied into the phone, this cloud-connected thing, you create all this attack surface you didn’t have to worry about before.
Critical Unauthenticated RCE Flaw Impacts all GNU/Linux systems
A critical unauthenticated Remote Code Execution (RCE) vulnerability has been discovered, impacting all GNU/Linux systems. The flaw, which has existed for over a decade, will be fully disclosed in less than two weeks. Despite the severity of the issue, no Common Vulnerabilities and Exposures (CVE) identifiers have been assigned yet, although experts suggest there should be at least three to six. Leading Linux distributors such as Canonical and Red Hat have confirmed the flaw's severity, rating it 9.9 out of 10. This indicates the potential for catastrophic damage if exploited.
The researcher who uncovered this flaw has expressed deep frustration over the handling of the disclosure process. Having dedicated three weeks of sabbatical to the effort, they report being met with resistance and patronization from developers reluctant to accept flaws in their code. Progress has been slow despite multiple proofs of concept (PoCs) systematically disproving developers' assumptions. The unfolding situation serves as a stark example of how not to handle security disclosures.
UPS supplier's password policy flip-flops from unlimited, to 32, then 64 characters
A user discovered that he could no longer authenticate into CyberPower's PowerPanel Cloud iOS app using his account's usual 35-character password. The app monitors customers' UPS data, battery backups, and other related tasks. Confused, he asked for a reason from the company's technical support team. The team said: "Due to the recent security patch updates, the length limitation of the password has been set to 32 characters." CyberPower said it was a recommendation made by a third-party security auditor. "The third party recommended a limit on character length of the password, we previously did not have one. Based on customer feedback, we will be changing the password limit to 64 characters. This will take approximately two weeks to implement but has been made a priority by our software team."
[rG: Passwords do need to have a maximum length based on solution technology chain limitations, so as to avoid potential processing resource overloading resulting in denial of service. However, it should be as large as possible to provide the maximum security: which for most common corporate architectures would be 128-256 characters.]
Hacker plants false memories in ChatGPT to steal user data in perpetuity
When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information and malicious instructions in a user’s long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue, not, technically speaking, a security concern. So Rehberger did what all good researchers do: He created a proof-of-concept exploit that used the vulnerability to exfiltrate all user input in perpetuity. OpenAI engineers took notice and issued a partial fix earlier this month.
ChatGPT's memory feature stores information from previous conversations and uses it as context in all future conversations. Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an LLM to follow instructions from untrusted content such as emails, blog posts, or documents. The researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat; the LLM would then incorporate that information to steer all future conversations. These false memories could be planted by storing files in Google Drive or Microsoft OneDrive, uploading images, or browsing a site like Bing, all of which could be controlled by a malicious attacker.
LLM users who want to prevent this form of attack should pay close attention during sessions for output that indicates a new memory has been added. They should also regularly review stored memories for anything that may have been planted by untrusted sources.
11 million devices infected with botnet malware hosted in Google Play
Five years ago, researchers made a grim discovery: a legitimate Android app in the Google Play market that was surreptitiously made malicious by a library the developers used to earn advertising revenue. With that, the app was infected with code that caused the 100 million devices it was installed on to connect to attacker-controlled servers and download secret payloads. Now, history is repeating itself. Researchers from the same security firm reported that they found two new apps, downloaded from Play 11 million times, that were infected with the same malware family.
Public Wi-Fi operator investigating cyberattack at UK's busiest train stations
Public Wi-Fi, often unencrypted and easily accessible, provides an ideal entry point for attackers. Unlike the security of home Wi-Fi, which is password-protected and encrypted, public Wi-Fi leaves users' data exposed to anyone on the network. In this attack, passengers logging in were shown disturbing messages about terror attacks in Europe, underlining the ease with which attackers can manipulate public networks.
Through investigations with Global Reach, the provider of the Wi-Fi landing page, it has been identified that an unauthorised change was made to the Network Rail landing page from a legitimate Global Reach administrator account, and the matter is now subject to criminal investigation by the British Transport Police. Industry commentators speculate the issue is far from contained and that Telent has various endpoints exposed to the web: "[Telent] have everything from Outlook Web App facing the internet to a Cisco AnyConnect box without MFA to Juniper management interfaces to documentation servers, etc."
ServiceNow root certificate blunder leaves users high and dry
The error stems from ServiceNow's management, instrumentation, and discovery (MID) Server, a Java app that sits on local client servers inside their firewalls and integrates applications into the platform, either using a Windows service or Unix daemon. According to a service advisory, the issue started at 0216 UTC on Monday after a MID Server Root G2 SSL certificate expired.
Many users went online to vent their frustrations. "It would be nice if they actually notified all impacted customers (basically everyone) before I page out several other teams while waiting on SN to even pick up my ticket. After confirming there was no network or host issues at any of our sites, I called SN and they told me about the issue, and another 90 minutes before they actually linked my CASE to the outage.”
It appears that the certificate expiration error was flagged with ServiceNow two weeks ago, according to some reports, but that the certificate replacement job was botched.
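The generic lesson is to monitor expiry before it bites. Here is a minimal Python sketch (standard library only, hypothetical host name) that reports the days remaining on the certificate a host serves; since the ServiceNow failure involved a root certificate, a production monitor should walk the full chain rather than just the leaf.

    # Reports days remaining on the leaf certificate a host presents.
    import socket
    import ssl
    import time

    def days_until_expiry(host: str, port: int = 443) -> float:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as tcp:
            with ctx.wrap_socket(tcp, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
        return (expires_at - time.time()) / 86400

    if __name__ == "__main__":
        days = days_until_expiry("example.com")  # hypothetical host
        status = "OK" if days > 30 else "WARNING"
        print(f"{status}: certificate expires in {days:.0f} days")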
HACKING
Timeshare Owner? The Mexican Drug Cartels Want You
The FBI is warning timeshare owners to be wary of a prevalent telemarketing scam involving a violent Mexican drug cartel that tries to trick people into believing someone wants to buy their property. This is the story of a couple who recently lost more than $50,000 to an ongoing timeshare scam that spans at least two dozen phony escrow, title and realty firms.
AI bots now beat 100% of those traffic-image CAPTCHAs
To craft a bot that could beat reCAPTCHA v2, the researchers used a fine-tuned version of the open source YOLO ("You Only Look Once") object-recognition model, which has also been used in video game cheat bots. The researchers say the YOLO model is "well known for its ability to detect objects in real-time" and "can be used on devices with limited computational power, allowing for large-scale attacks by malicious users."
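For a sense of how little code such a bot needs, here is a minimal object-detection sketch assuming the open source ultralytics package and a pretrained YOLO weights file. The researchers' fine-tuned model and CAPTCHA-solving pipeline are not reproduced here.

    # Minimal YOLO inference sketch (assumes: pip install ultralytics).
    # The actual attack fine-tuned the model on CAPTCHA imagery.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")           # small pretrained base model
    results = model("captcha_tile.jpg")  # hypothetical CAPTCHA grid tile

    for box in results[0].boxes:
        label = results[0].names[int(box.cls)]
        # A solver would use these detections to decide which tiles to click,
        # e.g., any tile where "traffic light" scores above a threshold.
        print(f"{label}: {float(box.conf):.2f}")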
That doomsday critical Linux bug: It's CUPS. May lead to remote hijacking of devices
The bugs can be used to form a "high-impact exploit chain." The exploit chain is not completed unless a print job is sent – so if you never print, no command execution could have happened, even if the vulnerable packages were installed and a malicious actor attempted the exploit.
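Pending patches, the widely reported mitigation is to stop and disable the cups-browsed service and to block UDP port 631 from untrusted networks. As a rough local triage, the Python sketch below tests whether something is already bound to UDP 631, on the assumption that an "address in use" failure implies a discovery listener such as cups-browsed is running.

    # Rough heuristic: if UDP 631 cannot be bound because it is in use,
    # a discovery listener (likely cups-browsed) is probably running.
    import errno
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind(("0.0.0.0", 631))
    except OSError as exc:
        if exc.errno == errno.EADDRINUSE:
            print("UDP 631 in use: cups-browsed may be listening; consider disabling it.")
        elif exc.errno == errno.EACCES:
            print("Permission denied: rerun as root to test this privileged port.")
        else:
            raise
    else:
        print("UDP 631 free: no discovery listener bound on this host.")
    finally:
        sock.close()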
APPSEC, DEVSECOPS, DEV
NIST released its second public draft of SP 800-63-4, the latest version of its Digital Identity Guidelines.
Verifiers and CSPs SHALL require passwords to be a minimum of eight characters in length and SHOULD require passwords to be a minimum of 15 characters in length.
Verifiers and CSPs SHOULD permit a maximum password length of at least 64 characters.
[rG: I think that this is an uninformed error that hopefully will be addressed in the next version update. No explanation is given for why only 64, and other best-practice sources do not provide this guidance. There is a 256-character technical limitation for Active Directory implementations, and 127 characters for the Windows GUI.]
Verifiers and CSPs SHOULD accept all printing ASCII [RFC20] characters and the space character in passwords.
Verifiers and CSPs SHOULD accept Unicode [ISO/IEC 10646] characters in passwords. Each Unicode code point SHALL be counted as a single character when evaluating password length.
Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords.
Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
Verifiers and CSPs SHALL NOT permit the subscriber to store a hint that is accessible to an unauthenticated claimant.
Verifiers and CSPs SHALL NOT prompt subscribers to use knowledge-based authentication (KBA) (e.g., “What was the name of your first pet?”) or security questions when choosing passwords.
Verifiers SHALL verify the entire submitted password (i.e., not truncate it).
[rG: NOTE these guidelines are intended to be minimum criteria, but organizations and applications can establish their own stronger security requirements.]
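As a worked illustration, here is a minimal verifier-side check in Python implementing the length rules above. The thresholds use the draft's SHOULD values (assumptions, not mandates), and the comments note what the guidance deliberately forbids.

    # Sketch of a verifier-side password check per the draft guidance.
    MIN_LENGTH = 15   # SHOULD require at least 15 (the SHALL floor is 8)
    MAX_LENGTH = 64   # SHOULD permit at least 64; larger limits are fine

    def check_password(password: str) -> list[str]:
        """Return a list of violations; an empty list means acceptable."""
        problems = []
        # len() on a Python str counts Unicode code points, matching the
        # rule that each code point counts as one character.
        if len(password) < MIN_LENGTH:
            problems.append(f"must be at least {MIN_LENGTH} characters")
        if len(password) > MAX_LENGTH:
            problems.append(f"must be at most {MAX_LENGTH} characters")
        # Deliberately absent, per the SHALL NOTs above: composition rules,
        # periodic expiry, hints, and KBA. The verifier must also hash and
        # verify the entire string rather than truncating it.
        return problems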
NIST Releases CSWP 31, Proxy Validation and Verification for Critical AI Systems: A Proxy Design Process
The goal of this work is to increase trust in critical AI systems (CAIS) by developing proxy systems to verify and validate a CAIS. This document presents a five-phase process for identifying and/or building non-critical proxy systems that have high similarity to critical artificial intelligence (AI) or machine learning (ML) systems.
CISA Strategy for Migrating to Automated Post-Quantum Cryptography Discovery and Inventory Tools
CISA is asking civilian agencies to first identify potential vulnerabilities and migrate high-impact information systems, or assets storing sensitive information on a given network. The guidance also prioritizes assets that "contain data expected to remain mission-sensitive in 2035." CISA wants agencies to launch their migration processes sooner rather than later. The guidance notes that the inventory process requires both manual data collection and the use of automated support.
Microsoft details ‘largest cybersecurity engineering effort in history’ — securing its own code
In a 25-page Secure Future Initiative (SFI) progress report, the company explained a series of technical and governance changes, following the framework set out in a critical report by the Cyber Safety Review Board (CSRB) in April 2024 that described Microsoft’s security culture as “inadequate.” “In May 2024, we expanded the initiative to focus on six key security pillars, incorporating industry feedback and our own insights. Since the initiative began, we’ve dedicated the equivalent of 34,000 full-time engineers to SFI — making it the largest cybersecurity engineering effort in history.” Microsoft also named the 13 people who now serve as deputy chief information security officers (CISOs) in its product groups. The company’s senior leadership team reviews its security progress weekly, and Microsoft’s board gets updates quarterly.
So how's Microsoft's Secure Future Initiative going?
Redmond launched the Microsoft Security Academy in July. This is a "personalized learning experience of security-specific, curated trainings for all worldwide employees," we're told.
[rG: Role-based security training per NIST 800-218, requirement PO.2.2.]
The six SFI engineering "pillars," however, are slightly easier to measure. Here's how Redmond says it's doing in those areas:
Protect identities and secrets: Microsoft Entra ID and Microsoft Account (MSA) for public and US government clouds will now generate, store, and automatically rotate access token signing keys using the Azure Managed Hardware Security Module (HSM) service. Plus, Redmond's standard identity SDKs, used to validate security tokens, now cover more than 73 percent of those issued by Microsoft Entra ID for Microsoft-owned applications. Additionally, Microsoft production environments now use so-called "phishing resistant" credentials, and 95 percent of internal users have been set up on video-based user verification in productivity environments to ensure they're not sharing passwords.
Protect tenants and isolate production systems: Microsoft killed off 730,000 unused apps and eliminated 5.75 million inactive tenants. It also claims to have "deployed over 15,000 new production-ready locked-down devices in the last three months."
Protect networks: Redmond says it has recorded more than 99 percent of physical assets on the production network in a central inventory system, and isolated virtual networks with back-end connectivity from the corporate network.
Protect engineering systems: We're told that 85 percent of Microsoft's production build pipelines for its commercial cloud now use centrally governed pipeline templates.
Monitor and detect threats: "Significant progress" has been made to adopt standard libraries for security audit logs across all production environments. This includes central management, and a two-year log retention period. More than 99 percent of network devices now have centralized security log collection and retention.
Accelerate response and remediation: Microsoft says it updated processes that have improved mitigation time for critical cloud vulnerabilities and set up a Customer Security Management Office (CSMO) for customer engagement during security incidents. Plus, "we began publishing critical cloud vulnerabilities as common vulnerabilities and exposures (CVEs), even if no customer action is required, to improve transparency," Redmond crowed, although we imagine some bug hunters might see room for improvement around CVEs and transparency.
[rG: Bravo! Excellent leadership response and transparency, but the proof will be in execution over the coming several years, not just by Microsoft, but across all industry software development practices globally. Writing meaningful objectives is foundational, and continuous, frequent leadership progress tracking is critical for successful holistic cultural change.]
Devs gaining little (if anything) from AI coding assistants
Code analysis firm sees no major benefits from AI dev tool when measuring key programming metrics, though others report incremental gains from coding copilots with emphasis on code review.
Many developers say AI coding assistants make them more productive, but a recent study that set out to measure their output found no significant gains. Use of GitHub Copilot also introduced 41% more bugs. The study measured pull request (PR) cycle time, the time to merge code into a repository, and PR throughput, the number of pull requests merged. It found no significant improvements for developers using Copilot.
Making the most of MLOps
Enterprises looking to reap the full business benefits of artificial intelligence are turning to MLOps — an emerging set of best practices and tools aimed at operationalizing AI.
The tools that data scientists use to create AI proofs of concept often don’t translate well into production systems. As a result, it can take more than nine months on average to deploy an AI or ML solution.
MLOps offers a fairly robust framework for operationalizing AI, says Zuccarelli, who’s now innovation data scientist at CVS Health. His team used the H2O MLOps platform and other tools to create a health dashboard for the model. “You don’t want the model to shift substantially,” he says. “And you don’t want to introduce bias. The health dashboard lets us understand if the system has shifted.” Using an MLOps platform also allowed for updates to production systems. “It’s very difficult to swap out a file without stopping the app from working,” Zuccarelli says. “MLOps tools can swap out a system even though it’s in production with minimal disruption to the system itself.”
The tools necessary to protect against bias, to ensure transparency, to provide explainability, and to support ethics platforms — these tools are still being built, he says. "It definitely still needs a lot of work because it's such a new field."
MLOps vendors tend to fall into three categories. First are the big cloud providers, including AWS, Azure, and Google Cloud, which offer MLOps capabilities as a service. Then there are ML platform vendors such as DataRobot, Dataiku, and Iguazio. The third category is what used to be called data management vendors, the likes of Cloudera, SAS, and Databricks: their strength was data management capabilities and data operations, and they expanded into ML capabilities and eventually into MLOps capabilities.
Thinking of building your own AI agents? Don’t do it, advisors say
Agentic AI is all the rage as companies push gen AI beyond basic tasks into more complex actions. The challenge is that these architectures are convoluted, requiring multiple models, advanced RAG [retrieval augmented generation] stacks, advanced data architectures, and specialized expertise. The power of agentic AIs is still in its infancy. It may be another two years before they have any chance of meeting inflated automation hopes.
VENDORS & PLATFORMS
Mainframe AI Is All the Rage
Pixel phones now come with Gemini. My video editing software has AI integration. My Opera browser comes with AI. Just about everything has some sort of AI integrated. So, it won’t come as a shock to find mainframers are as keen on AI as everyone else.
86% of IT leaders are adopting AI and generative AI to accelerate their mainframe modernization initiatives.
86% of respondents think that mainframes remain essential (why not 100%?).
96% of respondents are migrating a portion (on average that portion is 36%) of their applications to the cloud.
56% are running their critical workloads on a mainframe. Over half of the respondents said mainframe usage increased this year and 49% expect that trend to continue.
Security skills are in high demand because of increasing regulatory compliance requirements
92% of respondents indicated that a single dashboard is important for monitoring their operations, but 85% stated they find it difficult to do this properly.
CrowdStrike Overhauls Testing and Rollout Procedures to Avoid System Crashes
CrowdStrike software engineers have enhanced existing testing procedures to cover a broader array of scenarios, including testing all input fields under various conditions to detect potential flaws before rapidly released threat detection configuration information is sent to the sensor. CrowdStrike has also made tweaks to provide customers with additional controls over the deployment of configuration updates to their systems. The company has added additional runtime checks to ensure that the data provided matches the system's expectations before any processing occurs. This extra layer is meant to reduce the likelihood of future code mismatches causing catastrophic system failures.
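The "data matches expectations before processing" control is easy to picture. Below is a minimal sketch with a hypothetical record layout (not CrowdStrike's actual channel-file format): validate the field count a content update supplies before any code indexes into it.

    # Hypothetical content-record layout; the point is the guard, not the format.
    EXPECTED_FIELDS = 21  # the number of fields the detection logic will index

    def parse_record(raw: str) -> list[str]:
        fields = raw.split("|")
        if len(fields) != EXPECTED_FIELDS:
            # Fail safe: reject the update rather than read out of bounds.
            raise ValueError(
                f"got {len(fields)} fields, expected {EXPECTED_FIELDS}; rejecting record"
            )
        return fields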
Controversial Windows Recall AI Search Tool Returns With Proof-of-Presence Encryption, Data Isolation
Three months after pulling previews of the controversial Windows Recall feature due to public backlash, Microsoft says it has completely overhauled the security architecture with proof-of-presence encryption, anti-tampering and DLP checks, and screenshot data managed in secure enclaves outside the main operating system. The feature, which uses artificial intelligence to create a searchable digital memory of everything ever done on a Windows computer, will also be turned off by default and fitted with tools to delete it forever from the Windows operating system.
Access to Recall’s settings or user interface is controlled by Windows Hello Enhanced Sign-in Security, and actions like changing settings or accessing data require user presence verification via camera or fingerprint sensor.
WordPress.org denies service to WP Engine, potentially putting sites at risk
WordPress is an open source CMS which is extensible using plugins. Its home is WordPress[.]org, which also hosts resources such as themes and plugins for the CMS.
WP Engine, a hosting provider, doesn't want or intend to pay. WordPress argues that if WP Engine won't pay, it should not be able to benefit from resources at WordPress[.]org.
The denial of service, which follows mutual cease and desist letters, has been noted in the WP Engine incident log. Preventing WP Engine users from accessing plugin updates is therefore serious, as it could mean users can't update plugins that have security issues, or other fixes.
OpenAI Execs Mass Quit as Company Removes Control From Non-Profit Board and Hands It to Sam Altman
The restructuring could turn the already for-profit company into a more traditional startup and give CEO Sam Altman even more control — including likely equity worth billions of dollars. The move will likely make the company more attractive to investors, who have already poured many billions into it. It has raised billions of dollars in funding, including a juicy $13 billion contract with corporate tech giant Microsoft. OpenAI could lose as much as $5 billion this year as it rapidly expands the infrastructure needed to sustain increasingly electricity- and water-hungry AI models.
OpenAI hasn't been a primarily non-profit company since 2019, when it added a for-profit arm to its organizational structure. The move proved controversial enough for co-founder and multi-hyphenate billionaire Elon Musk to quit in disgust.
OpenAI's chief technology officer Mira Murati, VP of research Barret Zoph, and chief research officer Bob McGrew all announced they would be leaving, suggesting major inner turmoil at the company.
[rG: See also 2023 firing and reinstatement of Sam Altman.]
OpenAI Pitched White House on Unprecedented Data Center Buildout
OpenAI has pitched the Biden administration on the need for massive data centers that could each use as much power as entire cities, framing the unprecedented expansion as necessary to develop more advanced artificial intelligence models and compete with China.
The push comes as power projects in the US are facing delays due to long wait times to connect to grids, permitting delays, supply chain issues and labor shortages. But energy executives have said powering even a single 5-gigawatt data center would be a challenge.
Telegram will now hand over IP addresses, phone numbers of suspects to cops
Telegram allows full end-to-end encryption but only in so-called secret chat messages, not by default, and these can only be opened on specific devices.
Original terms and conditions of service: "If Telegram receives a court order that confirms you're a terror suspect, we may disclose your IP address and phone number to the relevant authorities. So far, this has never happened."
Revised now to: "If Telegram receives a valid order from the relevant judicial authorities that confirms you're a suspect in a case involving criminal activities that violate the Telegram Terms of Service, we will perform a legal analysis of the request and may disclose your IP address and phone number to the relevant authorities."
Over the "last few weeks," a team of moderators, supported by AI tools (of course), has been going through posts to find and block scumbags up to no good, CEO Durov said, and he urged people to report illegal behavior on the service.
ProtonMail, another internet service that promotes itself as highly private, updated its Ts&Cs in 2021 after it handed over a suspect's IP address and other info to the cops upon request, which led to the arrest of a French climate activist.
Uniting for Internet Freedom: Tor Project & Tails Join Forces
The Tor Project, a global non-profit developing tools for online privacy and anonymity, and Tails, a portable operating system that uses Tor to protect users from digital surveillance, have joined forces and merged operations.
Kaspersky defends force-replacing its security software without users’ explicit consent
Earlier this week, some U.S. customers of Kaspersky’s antivirus were surprised to find out that the Russian-made software disappeared from their computers and had been replaced by a new antivirus called UltraAV, owned by American company Pango. The move was the result of the U.S. government’s unprecedented ban on Kaspersky, which prohibited the sale of any Kaspersky software in the country. The ban on selling the company’s software became effective on July 20, while the ban on providing subsequent security updates to existing customers will become effective on September 29.
The migration process started at the beginning of September, and all Kaspersky customers in the U.S. eligible for the transition were informed in an email communication. Some users were unaware of the transition either because they did not have an email address registered with Kaspersky or didn't read the notification.
Checkmarx Joins Forces with ZAP to Supercharge Dynamic Application Security Testing (DAST)
Zed Attack Proxy (ZAP) project leaders Simon Bennetts, Rick Mitchell, and Ricardo Pereira will join Checkmarx as employees to build the next generation of Checkmarx's enterprise-grade DAST offering while continuing to invest in the open source project and grow the ZAP community.
Who’s watching you the closest online? Google, duh
Eight tracking systems appeared in almost all of the top-25 lists for the regions studied. Four of these belong to Google: Google Display & Video 360, which monitors advertising-related activities like clicks and ad metrics; Google Analytics, which is a more general user behavior and keyword tracker; another ad-tracking system called Google AdSense; and YouTube Analytics, which hoovers up data about video views and audience engagement.
LEGAL & REGULATORY
The Data Breach Disclosure Conundrum
On the first point, "certain" data breaches must be reported to "the relevant supervisory authority" within 72 hours of learning about it. When we talk about disclosure, often (not just under GDPR), that term refers to the responsibility to report it to the regulator, not the individuals. And even then, read down a bit, and you'll see the carveout of the incident needing to expose personal data that is likely to present a "risk to people’s rights and freedoms".
This brings me to the second point that has this massive carveout as it relates to disclosing to the individuals, namely that the breach has to present "a high risk of adversely affecting individuals’ rights and freedoms". We have a similar carveout in Australia where the obligation to report to individuals is predicated on the likelihood of causing "serious harm".
This leaves us with the fact that in many data breach cases, organisations may decide they don't need to notify individuals whose personal information they've inadvertently disclosed.
Promises of ‘passive income’ on Amazon led to death threats for negative online review, FTC says
It’s the latest sign of the FTC’s crackdown on e-commerce money-making schemes on top of some of the internet’s leading marketplaces, like Amazon and Airbnb. Since mid-2023, the agency has sued at least four automation companies, alleging deceptive marketing practices and falsely telling customers that they could generate passive income.
Jamaal Sanford received a disturbing email in May of last year. The message, whose sender claimed to be part of a "Russian shadow team," contained Sanford's home address, Social Security number, and his daughter's college. It came with a very specific threat: the sender said Sanford, who lives in Springfield, Missouri, would only be safe if he removed a negative online review.
Months earlier, Sanford had left a scathing review for an e-commerce “automation” company called Ascend Ecom on the rating site Trustpilot. Ascend’s purported business was the launching and managing of Amazon storefronts on behalf of clients, who would pay money for the service and the promise of earning thousands of dollars in “passive income.” Sanford had invested $35,000 in such a scheme. He never recouped the money and is now in debt.
US proposes ban on smart cars with Chinese and Russian tech
The software ban would take effect for vehicles for "model year" 2027 and the hardware ban for "model year" 2030. If [China] or Russia, for example, could collect data on where the driver lives or what school their kids go to, where [their] doctor is, that's data that would leave that American vulnerable. US officials are concerned that electric charging stations and other infrastructure outfitted with certain hardware or software could be exploited by hackers with ties to China, Russia or other foreign powers.
[rG: While this is part of election cycle posturing, I suspect that this is a harbinger of future international software development regulations and trade agreements; which would impact Open Source Software and application SBOMs needing to include new “country of origin” disclosures.]
Google complains to EU over Microsoft cloud practices
Google, whose biggest cloud computing rivals are Microsoft and Amazon Web Services, said Microsoft was exploiting its dominant Windows Server operating system to prevent competition. Google Cloud Vice President Amit Zavery told a briefing that Microsoft made customers pay a 400% mark-up to keep running Windows Server on rival cloud computing operators; this did not apply if they used Azure. Users of rival cloud systems would also get later and more limited security updates.
And Now For Something Completely Different …
656-Foot Megatsunami Leaves 25-Foot Waves Reverberating In Greenland Fjord For 9 Days
A megatsunami grew to over 656 feet tall and crashed over the shores of the beautiful and desolate Dickson Fjord on the eastern coast of Greenland on Sept. 16, 2023. The reason this event is newsworthy one year later is that the landslide and resultant megatsunami were only just identified as the source of a mysterious seismic signal that scientists from around the world recorded.
The sudden displacement of approximately 4 billion gallons of water, an amount that could supply every person on Earth with a 16-ounce water bottle or fill approximately 10,000 Olympic-sized swimming pools, produced the initial megatsunami wave that scoured the shoreline, stripping it of vegetation. The initial wave broke up into smaller 25-foot waves, which reverberated across the fjord for over a week. Not a single eyewitness observed the phenomenon, but experts collaborated to decipher the signal, pulling in data from a multitude of sophisticated sources.
AI Can (Mostly) Outperform Human CEOs
At first glance, the notion of AI replacing a CEO may seem as far-fetched as the successful promotion of a junior analyst to lead the boardroom. After all, AI is prone to significant errors, such as “hallucinations” — generating incorrect or misleading information — and a tendency to lose track of a task mid-process. These are not qualities typically associated with effective leadership.
Our experiment ran from February to July 2024, involving 344 participants (both undergraduate and graduate students from Central and South Asian universities and senior executives at a South Asian bank) and GPT-4o, a contemporary large language model (LLM) created by OpenAI. The LLM consistently outperformed top human participants on nearly every metric. It designed products with surgical precision, maximizing appeal while maintaining tight cost controls. It responded well to market signals, keeping its non-generative AI competitors on edge, and built momentum so strong that it surpassed the best-performing student’s market share and profitability three rounds ahead. However, there was a critical flaw: GPT-4o was fired faster by the virtual board. Why? The AI struggled with black swan events — such as market collapses during the Covid-19 pandemic.