Robert Grupe's AppSecNewsBits & AI 2024-11-02

Epic Fails: secrets in code, encryption implementation, privileged accounts management, CI/CD pipeline control, geolocation tracking, end-of-supported devices, AI

EPIC FAILS in Application Development Security practice processes, training, implementation, and incident response
Change Healthcare Breach Hits 100M Americans
There is little that victims of this breach can do about the compromise of their healthcare records. However, because the data exposed includes more than enough information for identity thieves to do their thing, it would be prudent to place a security freeze on your credit file and on that of your family members if you haven’t already.
The best mechanism for preventing identity thieves from creating new accounts in your name is to freeze your credit file with Equifax, Experian, and TransUnion. This process is now free for all Americans, and simply blocks potential creditors from viewing your credit file. Parents and guardians can now also freeze the credit files for their children or dependents.
Having a freeze in place does nothing to prevent you from using existing lines of credit you may already have, such as credit cards, mortgage and bank accounts. When and if you ever do need to allow access to your credit file — such as when applying for a loan or new credit card — you will need to lift or temporarily thaw the freeze in advance with one or more of the bureaus.
All three bureaus allow users to place a freeze electronically after creating an account, but all of them try to steer consumers away from enacting a freeze. Instead, the bureaus are hoping consumers will opt for their confusingly named “credit lock” services, which accomplish the same result but allow the bureaus to continue selling access to your file to select partners.
If you haven’t done so in a while, now would be an excellent time to review your credit file for any mischief or errors. By law, everyone is entitled to one free credit report every 12 months from each of the three credit reporting agencies. But the Federal Trade Commission notes that the big three bureaus have permanently extended a program enacted in 2020 that lets you check your credit report at each of the agencies once a week for free.
According to the HIPAA Journal, the biggest penalty imposed to date for a HIPAA violation was the paltry $16 million fine against the insurer Anthem Inc., which suffered a data breach in 2015 affecting 78.8 million individuals. Anthem reported revenues of around $80 billion in 2015.

 

Booking.com Phishers May Leave You With Reservations
A spear-phishing campaign ensued when a hotel had its booking[.]com credentials stolen; a targeted phishing message within the Booking mobile app just minutes after making a reservation. The missive bore the name of the hotel and referenced details from their reservation, claiming that booking[.]com’s anti-fraud system required additional information about the customer before the reservation could be finalized.
The phishing attacks stem from partners’ machines being compromised with malware, which has enabled attackers to also gain access to the partners’ accounts and to send messages.
In June 2024, booking[.]com told the BBC that phishing attacks targeting travelers had increased 900 percent, and that thieves taking advantage of new artificial intelligence (AI) tools were the primary driver of this trend.
The domain name in the phony booking[.]com website sent to our reader’s friend — guestssecureverification[.]com — was registered to the email address ilotirabec207@gmail[.]com. According to DomainTools[.]com, this email address was used to register more than 700 other phishing domains in the past month alone.
Phishers targeting booking[.]com partner hotels used malware to steal credentials. But today’s thieves can just as easily just visit crime bazaars online and purchase stolen credentials to cloud services that do not enforce 2FA for all accounts (to then find customer portal user accounts that don’t/haven’t required MFA).
[rG: MFA alone is not sufficient to prevent man-in-the-middle or session hijacking attacks.]

 

Location of world leaders including Putin, Trump and Macron ‘revealed by security teams’ Strava’
Some of the world’s most prominent leaders’ movements were tracked online through the fitness app Strata used by their bodyguards. In one case, Le Monde tracked the Strava activity of Macron’s bodyguards, revealing that the French president had spent a weekend in Honfleur, a Normandy seaside resort, in 2021—a private trip that was not publicly disclosed in his official schedule. The report further indicated that the locations of former first lady Melania Trump and current first lady Jill Biden could be identified by monitoring the Strava profiles of their security teams. Le Monde used an agent’s Strava profile to reveal the location of a hotel where Biden stayed in San Francisco for high-stakes talks with Chinese president Xi Jinping in 2023. A few hours before Biden’s arrival, the agent went jogging from the hotel and used Strava to trace his route. In a statement to the newspaper, the Secret Service said its staff aren’t allowed to use personal electronic devices while on duty during protective assignments but “we do not prohibit an employee’s personal use of social media off-duty.”

 

Gang gobbles 15K credentials from cloud and email providers' garbage Git configs
The unknown data thieves embarked on a "massive scanning campaign" between August and September, looking for servers with exposed Git configuration and Laravel environment files. Exposed Git directories make an especially attractive target for data thieves because they contain all sorts of valuable information – including commit history and messages, usernames, email addresses, and passwords or API keys. While spam and phishing campaigns appear to be the criminals' ultimate goal, the stolen credentials themselves can be sold for hundreds of dollars per account – $500, $600, $700.
Two of the malware strains tools used in the attack were primarily written in French, MZR V2 and Seyzo-v2. They can be bought and sold in underground marketplaces, and they enable scanning for vulnerabilities in exposed Git repositories for exploitation.

 

Colorado Agency ‘Improperly’ Posted Passwords for Its Election System Online
For months, the agency “improperly” hosted a publicly available spreadsheet on its website that included a hidden tab with partial passwords for its voting machines.
The Department of State said that there are two unique passwords for each of its voting machines, which are stored in separate places. Additionally, the passwords can only be used by a person who is physically operating the system and voting machines are stored in secure areas that require ID badges to access and are under 24/7 video surveillance. Colorado voters use paper ballots, ensuring that a physical paper trail that can be used to verify results tabulated electronically.

 

Okta vulnerability allowed accounts with long usernames to log in without a password
Okta bypassed password authentication if the account had a username that had 52 or more characters. Further, its system had to detect a "stored cache key" of a previous successful authentication, which means the account's owner had to have previous history of logging in using that browser.
Still, a 52-character username is easier to guess than a random password — it could be as simple as a person's email address that has their full name along with their organization's website domain. The company has admitted that the vulnerability was introduced as part of a standard update that went out on July 23, 2024 and that it only discovered (and fixed) the issue on October 30.
The bug stemmed from Okta's use of the Bcrypt algorithm to generate cache keys from combined user credentials. The company switched to PBKDF2 to resolve the issue and urged affected customers to audit system logs.
The Bcrypt algorithm was used to generate the cache key where we hash a combined string of userId + username + password. During specific conditions, this could allow users to authenticate by only providing the username with the stored cache key of a previous successful authentication. precondition for this vulnerability is that the username must be or exceed 52 characters any time a cache key is generated for the user.

 

Fired Employee Allegedly Hacked Disney World's Menu System to Alter Peanut Allergy Information
A disgruntled former Disney employee allegedly repeatedly hacked into a third-party menu creation software used by Walt Disney World’s restaurants and changed allergy information on menus to say that foods that had peanuts in them were safe for people with allergies, added profanity to menus, and at one point changed all fonts used on menus to Wingdings. The menus were caught by Disney after they were printed but before they were distributed to Disney restaurants.
The complaint alleges he did this soon after being fired by Disney using passwords that he still had access to on several different systems.
[rG: The Wingdings font and proofreaders noticing profanity was able to do what their IT Security didn’t prevent.]

 

LottieFiles supply chain attack exposes users to malicious crypto wallet drainer
Website animation plugin, LottiePlayer, confirmed on that a highly privileged developer had their account accessed via a stolen session token and attackers pushed malicious code to users. The cybercriminal(s) behind the incident pushed three new versions of LottiePlayer (2.0.5, 2.0.6, 2.0.7) in the space of an hour to the npmjs package manager. Those whose websites were configured to use the latest version of LottiePlayer instead of a manually selected one had the malicious versions automatically served to users. Outside security experts were drafted in, the attacker was ejected, a safe version (2.0.8) was released. The project hasn't officially confirmed this, but Web3 security platform Scam Sniffer spotted a transaction that it suggests shows one victim losing 10 Bitcoin ($722,508 at the time of writing) to the attack. The incident is just the latest in a long line of noteworthy wallet-draining attacks over the past year.

 

Hide the keyboard – it's the only way to keep this software running
Elusive software crashes traced to a worker who would toss their lunchbox onto a desk, where it smacked the keyboard. Investigation determined that the keyboard was interrupt-driven and pressing too many keys caused the interrupt buffer to overflow which then crashed the computer to which it was connected.

 

Local Privilege Escalation Vulnerability Affecting X.Org Server For 18 Years
CVE-2024-9632 security issue has been present in the codebase now for 18 years and can lead to local privilege escalation. Introduced in the X[.]Org Server 1.1.1 release back in 2006, CVE-2024-9632 affects the X[.]Org Server as well as XWayland too. By providing a modified bitmap to the X[.]Org Server, a heap-based buffer overflow privilege escalation can occur.

 

Here’s the paper no one read before declaring the demise of modern cryptography
The coverage of the September paper is especially overblown because symmetric encryption, unlike RSA and other asymmetric siblings, is widely believed to be safe from quantum computing, as long as bit sizes are sufficient. PQC experts are confident that AES-256 will resist all known quantum attacks. While quantum computing will almost undoubtedly topple many of the most widely used forms of encryption used today, that calamitous event won’t happen anytime soon. It’s important that industries and researchers move swiftly to devise quantum-resistant algorithms and implement them widely.
Three weeks ago, panic erupted again when the South China Morning Post reported that scientists in that country had discovered a “breakthrough” in quantum computing attacks that posed a “real and substantial threat” to “military-grade encryption.” The news outlet quoted paper co-author Wang Chao of Shanghai University as saying, “This is the first time that a real quantum computer has posed a real and substantial threat to multiple full-scale SPN [substitution–permutation networks] structured algorithms in use today.”
Among the many problems with the article was its failure to link to the paper—reportedly published in September in the Chinese-language academic publication Chinese Journal of ComputerWith no original paper to reference, many news outlets searched the Chinese Journal of Computers for similar research and came up with this paper. It wasn’t published in September, as the news article reported, but it was written by the same researchers and referenced the “D-Wave Advantage”—a type of quantum computer sold by Canada-based D-Wave Quantum Systems—in the title.
The last time the PQC—short for post-quantum cryptography—hype train gained this much traction was in early 2023, when scientists presented findings that claimed, at long last, to put the quantum-enabled cracking of the widely used RSA encryption scheme within reach. The claims were repeated over and over, just as claims about research released in September have for the past three weeks. A few weeks after the 2023 paper came to light, a more mundane truth emerged that had escaped the notice of all those claiming the research represented the imminent demise of RSA—the research relied on Schnorr’s algorithm (not to be confused with Shor’s algorithm). The algorithm, based on 2021 analysis of cryptographer Peter Schnorr, had been widely debunked two years earlier. Specifically, critics said, there was no evidence supporting the authors’ claims of Schnorr’s algorithm achieving polynomial time, as opposed to the glacial pace of subexponential time achieved with classical algorithms.

 

HACKING
From QR to compromise: The growing “quishing” threat
Attackers leverage QR codes in PDF email attachments to spearphish corporate credentials from mobile devices. The new type of email scam often involves criminals sending QR codes in attached PDFs. Experts said the strategy is effective because the messages frequently get through corporate cyber security filters -- software that typically flags malicious website links, but often does not scan images within attachments.

 

Android Trojan that intercepts voice calls to banks just got more stealthy
The malware, available on websites masquerading as Google Play, could also simulate incoming calls from bank employees. The intention of the novel feature was to provide reassurances to victims that nothing was amiss and to more effectively trick them into divulging account credentials by having the social-engineering come from a live human.
Android users should also be sure to enable Play Protect, a service Google provides to scan devices for malicious apps, whether those apps were obtained from Play or from third parties, as the case is with FakeCall.

 

Chinese attackers accessed Canadian government networks – for five years
Over the past four years, at least 20 networks within Canadian government agencies and departments were compromised by PRC cyber threat actors. PRC cyber threat actors have very likely stolen commercially sensitive data from Canadian firms and institutions.
The report also named Russia and Iran as significant hostile states – which isn't surprising. The inclusion of India, named for the first time as an emerging threat, may be.
In September of last year, Canadian prime Minister Justin Trudeau publicly accused the Indian government of involvement in the murder, on Canadian soil, of Sikh activist Hardeep Singh Nijjar. In the weeks that followed, Canada's military and parliament experienced cyber attacks from independent – but politically state-aligned – Indian hacktivists.

 

Inside a Firewall Vendor's 5-Year War With the Chinese Hackers Hijacking Its Devices
Sophos revealed this week that it waged a five-year battle against Chinese hackers who repeatedly targeted its firewall products to breach organizations worldwide.
Hackers Appear more than ever before to have shifted from finding new vulnerabilities in firewalls to exploiting outdated, years-old installations of its products that are no longer receiving updates. Device owners need to get rid of unsupported "end-of-life" devices, and security vendors need to be clear with customers about the end-of-life dates of those machines to avoid letting them become unpatched points of entry onto their network. Sophos says it's seen more than a thousand end-of-life devices targeted in just the past 18 months.
The problem isn't the zero-day vulnerability (a newly discovered hackable flaw in software that has no patch). The problem is the 365-day vulnerability, or the 1,500-day vulnerability, where you've got devices that are on the internet that have lapsed into a state of neglect.
[rG: Organizations’ risk management needs to include regular automated end-of-support reports for replacement planning and decommissioning validation.]

 

Thousands of hacked TP-Link routers used in years-long account takeover attacks
Attackers working on behalf of the Chinese government are using a botnet of thousands of routers, cameras, and other Internet-connected devices to perform highly evasive password spray attacks against users of Microsoft’s Azure cloud service.
Some of the characteristics that make detection difficult are:

  • The use of compromised SOHO IP addresses

  • The use of a rotating set of IP addresses at any given time. The threat actors had thousands of available IP addresses at their disposal. The average uptime for a CovertNetwork-1658 node is approximately 90 days.

  • The low-volume password spray process; for example, monitoring for multiple failed sign-in attempts from one IP address or to one account will not detect this activity.

It’s unclear precisely how the compromised botnet devices are being initially infected. Many experts in the past have noted that most such infected devices can’t survive a reboot because the malware can’t write to their storage. That means periodically rebooting can disinfect the device, although there’s likely nothing stopping reinfection at a later point.

 

Researchers Uncover Over 3 Dozen Vulnerabilities in Open-Source AI and ML Models
Some of which could lead to remote code execution and information theft.

 

Cast a hex on ChatGPT to trick the AI into writing exploit code
OpenAI's language model GPT-4o can be tricked into writing exploit code by encoding the malicious instructions in hexadecimal, which allows an attacker to jump the model's built-in security guardrails and abuse the AI for evil purposes.

 

  1. The Guide for Preparing and Responding to Deepfake Events
    addresses the growing threat of "hyper realistic digital forgeries." Stemming from The AI Cyber Threat Intelligence initiative that focuses on exploit detectability, differences in model outputs, and ethical AI usage, this new resource highlights practical and pragmatic defense strategies to ensure organizations are secure as deepfake technology continues to improve. Read the blog from the research team to learn more.

  2. The Center of Excellence Guide
    provides a business framework and set of best practices designed to help organizations establish an AI Security center of excellence or enhance their existing efforts establishing collaborative environments for managing generative AI security adoption and risk management that emphasizes cross-departmental cooperation among security, legal, data science and operational teams. As part of the Secure AI Adoption initiative, this guide enables organizations to develop and enforce security policies, educate staff on AI use and ensure that generative AI technologies are deployed securely and responsibly.

  3. The AI Security Solution Landscape Guide
    serves as a comprehensive reference, offering insights into both open source and commercial solutions for securing LLMs and generative AI applications. By categorizing existing and emerging security solutions, it provides organizations with guidance to address risks identified in the Top Ten list effectively.

 

GitGuardian gives 'voice' to AppSec on hard-coded secrets
Confidence in secrets security remains high: 75% of have respondents expressed moderate to high confidence in their organisation’s ability to detect and prevent hardcoded secrets in source code. The average time to remediate a leaked secret stands at 27 days. However, GitGuardian’s data suggests that implementing secrets detection and remediation solutions can significantly reduce this time to approximately 13 days within a year.
There are concerns regarding AI and supply chain risks are growing: 43% of respondents concerned about the potential for increased leaks in codebases highlighted the risk of AI learning and reproducing patterns that include sensitive information. Additionally, 32% identified the use of hard-coded secrets as a key risk point within their software supply chain.

 

 

VENDORS & PLATFORMS

Meta, Apple say the quiet part out loud: The genAI emperor has no clothes
There is no shortage of genAI skepticism among enterprise CIOs, but the mountains of vendor hype make pushback difficult.
The debate involves some fairly amorphous terms, at least when spoken in a computing environment context — things like reasoning and logic. When a large language model (LLM), for example, proposes a different and ostensibly better way to do something, is it because its sophisticated algorithm has figured out a better way? Or is it just wildly guessing, and sometimes it gets lucky? Or did it hallucinate something and accidentally say something helpful?
Would a CIO ever trust a human employee with such tendencies? Not likely, but IT leaders are regularly tasked with integrating genAI tools into the enterprise environment by corporate executives expecting miracles.
The Apple report, which was the more detailed research effort, is also the more damning of the two. “Our findings reveal that LLMs exhibit noticeable variance when responding to different instantiations of the same question. Specifically, the performance of all models declines when only the numerical values in the question are altered. Current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data… It may resemble sophisticated pattern matching more than true logical reasoning.”
Meta’s analysis featuring AI legend Yann LeCun, who serves as the chief AI scientist at Meta states, today’s models are really just predicting the next word in a text. But they’re so good at this that they fool us. And because of their enormous memory capacity, they can seem to be reasoning, when in fact they’re merely regurgitating information they’ve already been trained on. “We are used to the idea that people or entities that can express themselves, or manipulate language, are smart — but that’s not true.’ You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.”
One frequently cited selling point for genAI is that some models have proven quite effective at passing various state bar exams. But those bar exams are ideal environments for genAI, because the answers are all published. Memorizations and regurgitation are ideal uses for genAI, but that doesn’t mean genAI tools have the skills, understanding, and intuition to practice law.
GenAI seems to easily overcome — or be tricked by a user into overcoming — many of the safeguards organizations attempt to place around it. If you have a one-year-old, you wouldn’t give her a loaded gun and then try and explain to her why she shouldn’t shoot you,. GenAI is not sentient. Humans are sentient and they assume the system is intelligent, too. Letting genAI run on autopilot is crazy.
Executives are giving in to FOMO (fear of missing out), thinking that “their largest competitor is doing it, so we are going to do it,” he said. “But it doesn’t deliver. Even with the more objective mathematics, it starts falling apart. Try to get consistency out of it. You can’t. The words it predicts changes every time you tweak a little knob… Are you really OK with your product only working 80% of the time?”
[rG: Hear, hear! After two years of rainbow chasing, 2025 is going to be a pragmatic enlightenment point for investment underwriters as they tally ROI expenditures against demonstrable savings. Remember block-chain and NFTs?]

 

GitHub Copilot moves beyond OpenAI models to support Claude 3.5, Gemini
The large language model-based coding assistant GitHub Copilot will switch from exclusively using OpenAI's GPT models to a multi-model approach over the coming weeks, GitHub CEO Thomas Dohmke announced in a post on GitHub's blog.
First, Anthropic's Claude 3.5 Sonnet will roll out to Copilot Chat's web and VS Code interfaces over the next few weeks. Google's Gemini 1.5 Pro will come a bit later.
Additionally, GitHub will soon add support for a wider range of OpenAI models, including GPT o1-preview and o1-mini, which are intended to be stronger at advanced reasoning than GPT-4, which Copilot has used until now. Developers will be able to switch between the models (even mid-conversation) to tailor the model to fit their needs—and organizations will be able to choose which models will be usable by team members.
The new approach makes sense for users, as certain models are better at certain languages or types of tasks.

 

 
Facial Recognition That Tracks Suspicious Friendliness Is Coming to a Store Near You
Corsight AI began offering its global clients access to a new service aimed at rooting out what the retail industry calls “sweethearting,”—instances of store employees giving people they know discounts or free items. Traditional facial recognition systems, which have proliferated in the retail industry thanks to companies like Corsight, flag people entering stores who are on designated blacklists of shoplifters. The new sweethearting detection system takes the monitoring a step further by tracking how each customer interacts with different employees over long periods of time.

 

The Echo ecosystem has seen its fair share of failures while trying to popularize Amazon’s Alexa smart assistant.

 

LEGAL & REGULATORY
The story behind the Health Infrastructure Security and Accountability Act
In February 2024, Change Healthcare, a subsidiary of UnitedHealth Group (UHG), was the victim of a significant ransomware attack carried out by the ALPHV/BlackCat ransomware group. The attackers gained access to Change Healthcare's systems for over a week between February 12 and February 20, 2024, stealing around 4 terabytes of data, including protected health information (PHI) in the process. The breach had the potential to impact up to 110 million individuals, potentially exposing sensitive healthcare data on a massive scale.
HIPAA sets standards for how health information is handled and the privacy of PHI, but struggles on the security and accountability front. In fact, the cybersecurity requirements in HIPAA have often been viewed as voluntary and have been under-enforced.
Shortly after Andrew Witty’s congressional hearing, Senate Finance Committee Chair Ron Wyden sent a letter to FTC Chair Lina Khan and SEC Chair Gary Gensler stating that the incident was completely preventable and the direct result of corporate negligence." In his estimation, the lack of best practices and cyber hygiene in this case had led to wide-scale exposure. Additionally, he also addressed the issue of resilience: an organization's ability to prepare for, respond to, and recover from cyberattacks and other disruptions to its digital infrastructure.
The bill, named the Health Infrastructure Security and Accountability Act" (HISAA), is proposed as the solution to standardize practices for cybersecurity. It applies to “entities that are of systemic importance”. What makes an entity of significant importance is that a case of failure or disruption would have a debilitating impact on access to healthcare or the stability of the healthcare system". Highlights of the new standard include:

  • Performing and documenting a security risk analysis of exposure

  • Documentation of a business continuity plan (BCP)

  • Stress test of resiliency and documentation of any planned changes to the BCP

  • A signed statement by both the CEO and CISO of compliance

  • A third-party audit to certify compliance (no later than six months after enactment)

Failure to comply also bears civil costs:

  • No knowledge – Minimum of $500

  • Reasonable cause – Minimum of $5,000

  • Willful neglect (Corrected) – Minimum of $50,000

  • Willful neglect (Uncorrected) – Minimum of $250,000

 

The EU Throws a Hand Grenade on Software Liability
Under the status quo, the software industry is extensively protected from liability for defects or issues, and this results in systemic underinvestment in product security. Authorities believe that by making software companies liable for damages when they peddle crapware, those companies will be motivated to improve product security.
Introducing software liability is a big idea of the Biden administration’s 2023 National Cybersecurity Strategy. “Too many vendors ignore best practices for secure development, ship products with insecure default configurations or known vulnerabilities, and integrate third-party software of unvetted or unknown provenance. Software makers are able to leverage their market position to fully disclaim liability by contract, further reducing their incentive to follow secure-by-design principles or perform pre-release testing. Poor software security greatly increases systemic risk across the digital ecosystem and leave[s] American citizens bearing the ultimate cost.” The Biden strategy suggested that new legislation should define standards for secure development as well as prevent companies from fully absolving themselves of liability.
By contrast, the EU has chosen to set very stringent standards for product liability, apply them to people rather than companies. The EU Council issued a directive updating the EU’s product liability law to treat software in the same way as any other product. Under this law, consumers can claim compensation for damages caused by defective products without having to prove the vendor was negligent or irresponsible. In addition to personal injury or property damages, for software products, damages may be awarded for the loss or destruction of data. Rather than define a minimum software development standard, the directive sets what we regard as the highest possible bar. Software makers can avoid liability if they prove a defect was not discoverable given the “objective state of scientific and technical knowledge” at the time the product was put on the market. Although the directive is severe on software makers, its scope is narrow. It applies only to people (not companies), and damages for professional use are explicitly excluded. There is still scope for collective claims such as class actions, however.

 

Delta officially launches lawyers at $500M CrowdStrike problem
Delta Air Lines is suing CrowdStrike in a bid to recover the circa $500 million in estimated lost revenue months after the cybersecurity company "caused" an infamous global IT outage.
Delta argues that CrowdStrike failed to properly test the Falcon sensor update that led to the widespread blue screen errors on many of its customers' systems. "CrowdStrike caused a global catastrophe because it cut corners, took shortcuts, and circumvented the very testing and certification processes it advertised, for its own benefit and profit."
CloudStrike stated "Delta's claims are based on disproven misinformation, demonstrate a lack of understanding of how modern cybersecurity works, and reflect a desperate attempt to shift blame for its slow recovery away from its failure to modernize its antiquated IT infrastructure." Regarding Delta's allegedly aging IT kit, Microsoft made a similar accusation in response to Delta's threat of legal action against it in August, adding that the airline's suggestion that Windows was complicit in the outage was "false" and "misleading."